November 14, 2018

The ongoing data dilemma and the fear of “garbage-in, garbage-out” are alive and well.  Enterprises working earnestly to solve this problem realize it has become even harder with the emergence of the Digital Revolution, IoT, Predictive Analytics, Machine Learning, and Embedded Software technologies.  These technologies pour massive amounts of data into business and consumer ecosystems alike, requiring new levels of discipline and technical innovation.

Successful businesses need to harness, cleanse, and manage massive amounts of data effectively to make smarter decisions and action plans.  Inaccurate data and obsolete business rules applied to analytics engines lead to sub-optimal performance.  Making matters worse, organizations must develop rules to identify, capture, and process new data streams as they become available.  Machines can identify, learn from, and process new data, but human intervention is often required to supervise machine learning and to write additional rules or algorithms that convert data into valuable insights sooner.  The goal is to run analytics in real time against a combination of ecosystem, external, and enterprise data sources to derive new levels of insight, enabling effective cross-functional upstream and downstream communication, decisions, and process execution.

Predicting a Precise Time of Arrival (PTA) Example

Consider the objective of calculating a precise time of arrival (PTA) for purchase order goods flowing across a global, multi-node, multi-mode network.  Ingesting planned and unplanned real-time event data from Ecosystem (carrier, co-packer, supplier), External (port congestion, weather, geopolitical), and Enterprise (route schedules, demand forecasts, capacity and inventory) sources is essential to accurately calibrate and recalibrate a precise time of arrival at every node of the order’s physical and informational journey.  A platform that consumes some data but ignores the rest, or that cannot distinguish useful from non-useful data, will not provide the best answer.  Further, consuming data from multiple sources at greater frequency improves the accuracy with which an event and its impact on orders and inventory in motion are assessed, and therefore the accuracy of the predicted time of arrival at each node.  Doing this effectively requires a robust big data integration, analytics, and continuous decision intelligence platform, yielding radical improvement in operational effectiveness and business performance.  A simplified sketch of this recalibration loop follows.
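As a rough illustration of that recalibration loop, the sketch below rolls planned transit times forward from departure and shifts every downstream arrival estimate whenever a delay event is observed at a node.  The data model and function names are hypothetical and deliberately simplified; a production platform would ingest these events continuously from carrier, port, weather, and enterprise feeds rather than from a hard-coded dictionary.

```python
# Hypothetical sketch: recalibrating a precise time of arrival (PTA) at each
# node of an order's journey as new delay events arrive. Not TransVoyant's API.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Dict, List


@dataclass
class Leg:
    origin: str
    destination: str
    planned_transit: timedelta  # transit time from the route schedule


def recalibrate_pta(
    departure: datetime,
    legs: List[Leg],
    delays: Dict[str, timedelta],
) -> Dict[str, datetime]:
    """Roll planned transit times forward from the departure time, adding any
    observed or predicted delay (port congestion, weather, carrier events) at
    each node, so every downstream PTA shifts along with it."""
    pta: Dict[str, datetime] = {}
    clock = departure
    for leg in legs:
        clock += leg.planned_transit + delays.get(leg.destination, timedelta())
        pta[leg.destination] = clock
    return pta


# Example: a two-leg ocean + inland move with a congestion event at the port.
legs = [
    Leg("Shanghai", "Port of LA", timedelta(days=14)),
    Leg("Port of LA", "Chicago DC", timedelta(days=4)),
]
delays = {"Port of LA": timedelta(days=2)}  # real-time congestion event
print(recalibrate_pta(datetime(2018, 11, 1), legs, delays))
```

Each time a new event arrives, the delay table is updated and the function is simply re-run, so the PTA at every downstream node is recalibrated rather than only the final destination.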

Data Asset Management and Governance Best Practices

Data integrity and governance are critical to managing this dilemma.  Leading organizations such as Uber, LinkedIn, and Netflix deploy advanced platforms with data integrity and governance practices such as the following:

  • Separate all data into streams to make it easier to manage.
  • Encrypt all sensitive data at rest and in motion to enforce privacy and information policies.
  • Manage and enforce data security and access down to the lowest level of the data hierarchy.
  • Record every version of the data as it is received, and every change made to it along each step of every process (see the sketch after this list).
  • Support scalable, distributed logging.
  • Store a provenance copy of the data as read-only so it cannot be accidentally changed.
  • Use a flexible, modular processing architecture that provides ample options for validating, cleansing, deduplicating, normalizing, and healing the data streams.
  • Enable continuous, on-the-fly processing of the flowing data.
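To make the versioning and provenance items above concrete, here is a minimal sketch that keeps a read-only copy of each record exactly as received and appends a hashed entry to a change log for every processing step.  The class and method names are hypothetical, and the in-memory storage is for illustration only; a real platform would back this with distributed, encrypted storage and scalable logging.

```python
# Hypothetical sketch of provenance and change tracking for ingested records.
import copy
import hashlib
import json
from datetime import datetime, timezone


class ProvenanceStore:
    def __init__(self):
        self._raw = {}   # read-only provenance copy of each record as received
        self._log = []   # append-only log of every ingest and transformation

    def _hash(self, record: dict) -> str:
        return hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()

    def ingest(self, record_id: str, record: dict) -> str:
        """Store the record exactly as received and log its content hash."""
        digest = self._hash(record)
        self._raw[record_id] = copy.deepcopy(record)
        self._log.append({
            "record_id": record_id,
            "step": "ingest",
            "hash": digest,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return digest

    def record_change(self, record_id: str, step: str, changed: dict) -> None:
        """Log the result of a cleansing or normalization step without
        touching the provenance copy."""
        self._log.append({
            "record_id": record_id,
            "step": step,
            "hash": self._hash(changed),
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def provenance(self, record_id: str) -> dict:
        """Return a copy of the original record; the stored copy is never handed out."""
        return copy.deepcopy(self._raw[record_id])


store = ProvenanceStore()
store.ingest("PO-1001", {"carrier": "ACME", "eta": "2018-11-20"})
store.record_change("PO-1001", "normalize_dates",
                    {"carrier": "ACME", "eta": "2018-11-20T00:00:00Z"})
```

Keeping the raw copy immutable and logging each step separately means any downstream cleansing or deduplication error can be traced back and replayed from the original data.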


For more information on this topic, please send an email to Media@TransVoyant.com, and one of our experts will follow up with you.  Thank you!
