
Thinking beyond Data - Augmenting Data with Observability APIs to make LLMs context-aware

LLMs, backed by tools such as Code Interpreter, are becoming increasingly capable at data analysis. Function Calling (or Tools, as GPT-4 refers to them) enables applications to call APIs or execute SQL to retrieve data from databases for analysis. A growing number of Copilot-style analytics assistants tout their LLM integrations as enabling ordinary business users to interact with data and create reports and visualisations.


Screenshot from a promotional video for Cognos Analytics

However, real-world use cases don't look like the perfectly curated example above, where the data is carefully structured to give the best results. Business data is complex and messy, and extracting meaningful results generally requires an in-depth understanding of the business logic, how the data is structured, and where it is located.


Giving business users a natural-language, no-code interface to raw data doesn't solve any of these issues. In fact, over time it leads to a different version of the "Excel death spiral" most businesses already face.


ManagedAnalytics' Data Platform takes a radically different approach from conventional data warehouses: it enables true data observability and exploration, and vastly simplifies how users access data and manage complex business logic.


The Problem with SQL Wrappers

As an example, Equipment Availability is a commonly used standard metric for tracking the availability of large assets like fixed plants or large mining trucks over a given period (such as a week or a month).

At face value, the formula is simple:

Availability = Available Hours / Required Hours

Where Available Hours is the number of hours the asset was available for use, and Required Hours is the number of hours the asset was required for use.
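
As a quick worked example with made-up numbers: a truck that was required for 600 hours in a month and was available for 540 of them has an Availability of 540 / 600 = 90%.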


We could theoretically provide users with an LLM-enabled SQL query writer to extract this data from one or more databases and use it for reporting. However, there are a number of challenges with this approach.


Business logic is complex and mostly undocumented

In our formula for Availability, Available Hours and Required Hours are themselves calculated metrics, determined by the underlying metrics Engineering Delays and Standby Delays.


Engineering Delays and Standby Delays are in turn calculated from a long list of delay codes, which are typically captured using a combination of automated systems and manual inputs. A haul truck, for example, may have over 200 possible delay codes associated with it. Our simple formula thus expands into a chain of nested calculations.
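
The exact definitions vary from site to site, but the expansion looks roughly like this (an illustrative sketch rather than a universal standard):

Available Hours    = Required Hours - Engineering Delays
Required Hours     = Calendar Hours - Standby Delays
Engineering Delays = sum of hours booked against engineering delay codes
Standby Delays     = sum of hours booked against standby delay codes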


Business logic gets very complex, very quickly. And unlike (well-managed) database schemas, even large enterprises rarely have knowledge dictionaries that explicitly track and document such logic.


Messy database schemas

Building a dashboard with the Availability metric requires business users to have a deep understanding of not just this business logic, but where and how the data is stored.


Information about operating hours may be stored in one database, while delay data is typically captured in a separate system with its own database. And as we all know, in the real world, data storage is messy. Without proper documentation of the schemas, writing queries to extract data in a human-readable form is hard enough - having to do it across multiple databases that don't share common keys is tedious and requires above-average coding skills.


But isn’t the solution for this to build some form of data warehouse, where all the data is nicely structured, aggregated and stored in one place? Sure — but it still doesn’t solve our first problem, or the next.


Non-standard business rules

Remember we mentioned earlier that we wanted to retrieve this information for a month? If we ask an LLM to generate a SQL query to retrieve this data for January-24, it will quite logically retrieve data from 01-Jan-2024 at 00:00:00 (12 am) through to 31-Jan-2024 at 23:59:59 (just before midnight).


However, in the business world, business logic applies - and it may not align with standard definitions. For example, many businesses define artificial date ranges for reporting months: at one client, the reporting month runs from the 25th day of the previous month through to the 24th of the reporting month itself. Further, Production Days don't align with calendar days - the Production Day starts and finishes at 06:00 to align with shift structures.
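
As a minimal sketch of how just that one rule might be encoded (the 25th-to-24th window and the 06:00 production-day boundary are this client's conventions, not universal constants):

from datetime import datetime

def reporting_month_window(year: int, month: int) -> tuple[datetime, datetime]:
    # The reporting month runs from the 25th of the previous month to the
    # 24th of the reporting month, and each Production Day starts at 06:00.
    prev_year, prev_month = (year - 1, 12) if month == 1 else (year, month - 1)
    start = datetime(prev_year, prev_month, 25, 6, 0)  # 25th at 06:00
    end = datetime(year, month, 25, 6, 0)              # the 24th's production day ends at 06:00 on the 25th
    return start, end

# "January-24" as this business defines it: 25-Dec-2023 06:00 through 25-Jan-2024 06:00
print(reporting_month_window(2024, 1))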


And finally, just to make it even more challenging, not all sites within the business run to exactly the same calendar — they all follow slightly different business logic driven by local reporting requirements and technology constraints.


Lack of context

We would rarely report on a metric such as Availability for a single asset; we're far more likely to report on a fleet of trucks, for example, or a number of machines on the shop floor. We probably also want to segment those groupings further, such as 'all 25-ton trucks' or 'all plants in the western region'. The members of these groups are usually not static either, so an up-to-date dictionary of assets is important but challenging to maintain.


The naïve assumption that simply providing an LLM-based interface to wrap data stored in databases will automagically allow regular business users to build dashboards and reports doesn't hold true. In fact, it can lead to far larger issues (and we won't even bring up the security implications of allowing ordinary users to run untrusted SQL code!).


The ManagedAnalytics approach - an Object-Model Data Platform

At ManagedAnalytics, we set out to build a data platform with capabilities that enable a truly business-user-centric approach.


Unlike data-first approaches that require extensive data wrangling on the back end to make data somewhat usable for analytics, our approach is built around an Object Model that encapsulates both business structure and business logic.


The ManagedAnalytics Data Platform

The Object Model is a hierarchical model of the business that can represent all physical (and logical) assets in the business - from Corporate HQ down to machines in a plant or mobile equipment such as trucks and loaders.


The platform includes a Visual Designer - a drag-and-drop interface to build and customise models. Authorised users can edit the model: they can add levels to the hierarchy and create new object definitions from templates via an Object Configurator. Templates make it easy to create and customise object definitions, and to synchronise changes across the model.


The ManagedAnalytics Visual Designer. The chevrons on the top reflect levels in the organisation hierarchy.

Each object within the model can contain multiple user-defined metrics and events. Metrics can be either input metrics, which accept data from user input or an external data source, or calculated metrics, which define their own business logic to enable on-the-fly calculation of outputs.


Metrics and Events

Business Logic with the Custom Calculation Engine

This ability to define custom business logic through the user interface, rather than embedding it in the application or database layers, is enabled by a Custom Calculation Engine. The Visual Designer provides an intuitive, menu-driven interface that allows users to define custom formulas (such as the Availability formula above) and custom aggregations, so that data can be aggregated both across levels in the hierarchy and across time periods.



Context-menus allow users to build complex formulas with a point-and-click interface

It’s worth dwelling on this for a moment.

Defining custom business logic this way is incredibly powerful from a user perspective. Hand-writing aggregation logic for complex formulas quickly becomes convoluted and error-prone. With MA's context-aware calculation engine, aggregating data across multiple levels is seamless.
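
Availability itself is a good example of why this matters: a fleet's availability is not the simple average of each truck's availability. If truck A was available for 90 of 100 required hours and truck B for 400 of 800, the fleet figure is (90 + 400) / (100 + 800), roughly 54%, not the 70% a naïve average would give. Carrying that kind of rule up every level of the hierarchy is exactly what the calculation engine is designed to do for the user.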





ManagedAnalytics provides two approaches to enable automatic data aggregation:

  • Using the object hierarchy — Organisation structures can be set up as a hierarchical tree, with the model automatically handling aggregations up the tree.

  • Using custom tags — Custom tags can be created for each asset (and metric!) entirely in the front end, via a visual modeller (more on this later). Tags make it easy to create data queries that selectively include specific assets across the hierarchy's branches and levels.


The 'SUM_CHILDREN' calculation tag enables automatic aggregation of a metric from all child objects

The ability to define Aggregation Rules allows the calculation engine to automatically aggregate data across time periods, so a single query can serve different reporting granularities without any changes. In the Live Query example below, the model returns data aggregated by day or by week simply by changing one parameter (the 'Datastream') in the API call.
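
As a rough sketch of what such a request might look like (the field names here are illustrative, not the exact MA API), only the Datastream changes between a daily and a weekly result:

{ "Tags": ["25-ton-trucks"], "Metric": "AVAILABILITY", "Datastream": "DAILY",  "From": "2024-01-25", "To": "2024-02-24" }
{ "Tags": ["25-ton-trucks"], "Metric": "AVAILABILITY", "Datastream": "WEEKLY", "From": "2024-01-25", "To": "2024-02-24" }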








Configuring metrics for each object is trivial, and is done within the Visual Designer by authorised users. Depending on the data type, metrics can be set up as described above and data retrieved through a set of core APIs.


Configuring Data Pipelines


Business data typically falls into one of these three categories:

  • Entity lists — For example, lists of customers or products, with their attributes

  • Events — Transactional data such as customer orders, product shipments or operational events, such as loading a truck or a customer entering a queue

  • Time series data — Aggregated data such as the number of customers or orders processed in an hour, or the total costs and revenue for a week

MA stores data in datastreams as time series objects, easily referenced by key and timestamp. Multiple ‘indicators’ are supported, which allows related data to be stored together — for example, actual performance data along with a budget and target.
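
Conceptually (the actual storage format may differ), a single datastream entry looks something like this:

{
    "Key": "DT019.AVAILABILITY",
    "Timestamp": "2024-01-15T06:00:00",
    "Indicators": { "Actual": 0.87, "Budget": 0.90, "Target": 0.92 }
}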


When a user makes a data query, it is executed as follows:

  1. The API receives a structured query containing the ID or name of the assets for which data is being retrieved, a date range for which to retrieve the data, the names of the metrics to retrieve, a datastream (which defines the granularity at which the data is to be aggregated), and finally a list of indicators.

  2. The API queries the object model to retrieve the metric definition. From this it determines whether the metric is an input or calculated metric, and the granularity of the source data.

  3. Input metrics only need to be aggregated before being returned to the user. For example, production information collected by hour must be aggregated if the user is requesting a total for the day or month.

  4. Calculated metrics, and input metrics requiring aggregation, must be processed by the custom calculation engine using the business logic defined for that metric. This can involve multiple nested and recursive calls, aggregating data from objects lower in the tree as well as applying the business logic calculations defined by the user (a conceptual sketch follows after this list).

  5. Before calculations can be run, all of the input-metric dependencies must be retrieved from the data server. A high-performance time-series database serves data to the API server for processing.

  6. Finally, processed data is sorted and formatted as specified by the user and returned as JSON.
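
A minimal sketch of what steps 2-5 amount to conceptually - illustrative pseudologic rather than ManagedAnalytics' actual engine, with hypothetical object_model, timeseries_db and aggregate helpers:

def evaluate(asset, metric, window, datastream):
    defn = object_model.get_metric_definition(asset, metric)          # step 2
    if defn.kind == "input":
        series = timeseries_db.read(asset, metric, window)            # step 5
        return aggregate(series, datastream, defn.aggregation_rule)   # step 3
    # Calculated metric (step 4): recursively resolve dependencies and children,
    # then apply the user-defined business logic
    inputs = {dep: evaluate(asset, dep, window, datastream) for dep in defn.dependencies}
    children = [evaluate(child, metric, window, datastream) for child in asset.children]
    return defn.formula(inputs, children)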


Bringing it all together - Giving LLMs structure and business context with observability APIs

In addition to data retrieval, the ManagedAnalytics platform exposes custom APIs that enable LLMs to directly query the object model for structure and business logic, without the need for fine-tuning models.


Paired with OpenAI's tool calling (or equivalent open-source models), the LLM can understand the semantics of the user's query, call the observability APIs to retrieve information about the object in question, and use that to refine the data retrieval query.


The LLM has access to the following functions:

  1. getAssetInfo. This returns structure and metric information for the specified asset and metric name.

  2. getMetricData. This function accepts an Asset ID and Metric code, as well as a date range, and returns the data requested.

  3. displayChart. Formats the data as a chart and displays it to the user.

  4. formatChartLayout. Formats the layout of the chart per the user request.
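
A rough sketch of how these functions might be declared as tools for OpenAI's Chat Completions API - the parameter schemas here are illustrative, not the production definitions:

tools = [
    {
        "type": "function",
        "function": {
            "name": "getAssetInfo",
            "description": "Return structure and metric definitions for an asset",
            "parameters": {
                "type": "object",
                "properties": {"assetName": {"type": "string"},
                               "metricName": {"type": "string"}},
                "required": ["assetName"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "getMetricData",
            "description": "Return data for an asset, metric and date range",
            "parameters": {
                "type": "object",
                "properties": {"assetId": {"type": "string"},
                               "metricCode": {"type": "string"},
                               "from": {"type": "string"},
                               "to": {"type": "string"}},
                "required": ["assetId", "metricCode", "from", "to"],
            },
        },
    },
    # displayChart and formatChartLayout are declared the same way
]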

“Display daily availability data for the asset DT019 for the last 3 months as a bar chart”


Using OpenAI's tool_calls, the LLM first calls getAssetInfo to retrieve the asset ID and the list of metrics with their metric codes. The API responds with the information requested.

{
    "Name": "DT019",
    "Description": "Bell 45E Haul Truck",
    "Metrics": [
        {
            "Name": "Availability",
            "Tag": "AVAILABILITY",
            "Units": "%"
        }
    ]
}
  2. We then use this data to call the getMetricData function, with additional parameters to specify the date range, datastream and indicator.

  3. Finally, we call the displayChart and formatChartLayout functions to display and format the data retrieved.


Data is returned to the calling application, which receives, processes and displays it without passing it to the LLM. This has the benefit of reducing the size of the context window required and the number of tokens processed.
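
A minimal sketch of that dispatch pattern, assuming a hypothetical ma_api client and chart component (the OpenAI response fields are real; everything else is illustrative):

import json

for tool_call in response.choices[0].message.tool_calls:
    name = tool_call.function.name
    args = json.loads(tool_call.function.arguments)
    if name == "getMetricData":
        data = ma_api.get_metric_data(**args)          # hypothetical API client call
        chart.load(data)                               # the full payload stays in the application
        result = {"status": "ok", "rows": len(data)}   # only a short summary goes back to the LLM
    elif name == "getAssetInfo":
        result = ma_api.get_asset_info(**args)         # small enough to hand to the LLM directly
    else:
        result = {"status": "ok"}                      # displayChart / formatChartLayout
    messages.append({"role": "tool", "tool_call_id": tool_call.id,
                     "content": json.dumps(result)})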


Here’s a short clip of the overview and Live Query in action:



Long as it is, this post is only a brief overview of how ManagedAnalytics is building deep integrations with observability APIs to enable a better user experience. Check out our website for more insights and information.


Nikhil Datta is CTO and founder at ManagedAnalytics.ai
