Performance Analytics concepts: How do they all work together?
Performance Analytics is a powerful tool, but it can be overwhelming when you first start configuring it. This document provides insight into the basic concepts and how these concepts work together to produce snapshotted time-series data for performance indicators.
Indicator source
An 'indicator source' provides Performance Analytics with a list of rows from a table (or database view), filtered by conditions, from which we want to extract scores. Examples of indicator sources are 'open incidents', 'overdue incidents', 'closed problems', and 'new changes'. Indicator sources are like the definitions of data we use in real life when explaining a business process.
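Conceptually, an indicator source therefore boils down to a table plus a filter condition. The sketch below illustrates that idea in Python; the table names and conditions are hypothetical illustrations, not actual ServiceNow configuration records.

```python
# Conceptual sketch only: an indicator source is essentially a table plus a condition.
# Table names and conditions are illustrative, not real configuration.
indicator_sources = {
    "Open incidents": {"table": "incident",       "condition": "active = true"},
    "New changes":    {"table": "change_request", "condition": "opened today"},
}
```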
Automated indicator
An 'automated indicator' provides Performance Analytics with the definition of the measurement that we want from the indicator source. For example, when we have an indicator source such as 'open incidents', we might want to measure the number/count, or the time that these incidents have been open. In its simplest form, an indicator is the combination of an indicator source and what to measure from that indicator source. Indicators are the (key) performance indicators that we communicate in scorecards and dashboards. There are more types of indicators; for an overview of the automated indicator and the other types, watch the following video.
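Building on the previous sketch, an automated indicator can be pictured as an indicator source plus an aggregate. Again, this is only an illustration with hypothetical names, not platform code.

```python
# Conceptual sketch: an automated indicator = indicator source + what to measure.
# Names are illustrative only.
indicators = {
    "Number of open incidents": {
        "indicator_source": "Open incidents",
        "aggregate": "count",         # simply count the matching rows
    },
    "Summed age of open incidents": {
        "indicator_source": "Open incidents",
        "aggregate": "script",        # e.g. sum the time each incident has been open
    },
}
```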
Collection jobs
A 'collection job' in Performance Analytics collects the measurements (or, as we call them, 'scores') based on the indicators related to the job, in a periodic cycle, usually daily. The collection job takes the indicator source of an indicator and uses the table and condition in that indicator source to query the rows (like 'open incidents' or 'new changes'). Then it performs calculations on those rows as defined in the indicator: it counts the number of rows, or sums the reassignment count column, or calculates the time between two dates (based on a 'script'). In each periodic cycle the measurements are stored as scores, and as such it creates a periodic snapshot. The video below provides more details on collection jobs.
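The sketch below illustrates one such cycle: query the rows defined by the indicator source, apply the indicator's measurement, and store the result as a dated score. It is a conceptual illustration with hypothetical names and data, not how the platform is actually implemented.

```python
from datetime import date

# Pretend query result for the 'Open incidents' indicator source (hypothetical data).
rows = [
    {"number": "INC0001", "priority": "high", "age_days": 3},
    {"number": "INC0002", "priority": "low",  "age_days": 10},
]

def collect(indicator_name, aggregate, rows):
    """One conceptual collection cycle for a single indicator."""
    if aggregate == "count":
        value = len(rows)                              # 'Number of open incidents'
    else:  # stand-in for a scripted aggregate
        value = sum(r["age_days"] for r in rows)       # 'Summed age of open incidents'
    # Storing the value with the collection date is what creates the periodic snapshot.
    return {"indicator": indicator_name, "date": date.today(), "score": value}

print(collect("Number of open incidents", "count", rows))
print(collect("Summed age of open incidents", "script", rows))
```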
These are the three most basic concepts. However, most of the time you also want to break down measurements to get, for example, 'open incidents by assignment group', 'new problems by priority', or 'open changes by business service'. The following concepts are used to configure these broken-down measurements.
Breakdown source
A 'breakdown source' in Performance Analytics defines the items in the breakdown list. For example, it defines the list of priorities (e.g. 'high', 'medium', 'low'), or the list of assignment groups or business services. Most of the time these items are in a table, and therefore a breakdown source points to a table, and with conditions you can make the list of items from that table as specific as you would like. For example, if you have specific assignment groups working on HR tasks, then the conditions would filter out assignment groups that do not perform HR tasks. As such you create a specific breakdown source that queries only the relevant HR assignment groups. It is also possible to create breakdown sources for which the items are not in a table, but that is outside the scope of this document.
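In the same conceptual style as before, a breakdown source is again a table plus a condition, only this time the rows are the breakdown items rather than the records being measured. The table and condition names below are hypothetical.

```python
# Conceptual sketch: a breakdown source = a table (of items) + a condition.
# Names are illustrative, not actual configuration.
breakdown_sources = {
    "Priority":             {"table": "priority_choices", "condition": None},
    "HR assignment groups": {"table": "sys_user_group",   "condition": "group type is HR"},
}
```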
Breakdown
A 'breakdown' relates a breakdown source to a column in an indicator source table through breakdown mappings. This is probably the most difficult concept. The breakdown configuration basically tells Performance Analytics how the breakdown source should be interpreted on an indicator source table when it encounters that table in the collection cycle. Hopefully an example makes this clearer. Assume you have an indicator source for 'open incidents' and a breakdown source for 'assignment groups'. The 'incident' table behind the indicator source 'open incidents' may have more than one column referencing assignment groups. In the breakdown we now configure that, when the breakdown 'assignment group' is applied to an indicator, it needs to use the column (field) defined in the breakdown mapping. Performance Analytics does not detect this mapping automatically because it cannot always determine it (more than one column may reference the same table), and manual mapping provides flexibility in some edge cases.
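The sketch below shows the idea of a breakdown mapping: for each indicator source table, the breakdown records which field holds the breakdown item. The field names are hypothetical and only serve to illustrate that several columns could reference the same table.

```python
# Conceptual sketch: a breakdown = a breakdown source + a mapping per table that
# says which column (field) to read the breakdown item from. Names are illustrative.
breakdowns = {
    "Assignment group": {
        "breakdown_source": "HR assignment groups",
        "mappings": {
            "incident": "assignment_group",   # and not, say, a 'resolved by group' column
        },
    },
    "Priority": {
        "breakdown_source": "Priority",
        "mappings": {
            "incident": "priority",
        },
    },
}
```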
Assigning a breakdown to an indicator
Because not all breakdowns make sense for all indicators, Performance Analytics expects breakdowns to be explicitly linked to indicators. If a breakdown is not linked to an indicator, the collection job does not know that it needs to be applied when calculating the measurements.
The video below gives more background on breakdowns.
When breakdowns have been applied to an indicator, the collection job applies these breakdowns when calculating the measurements, and it stores the scores for the appropriate breakdown items. This works as follows. The collection job takes the indicator source of an indicator and uses the table and condition in that indicator source to query the rows (like 'open incidents' or 'new changes'). Then it performs calculations on those rows as defined in the indicator. At the same time, while it is calculating the overall measurement for the indicator, it applies the breakdowns. Basically, when it encounters a row in 'open incidents' with 'priority = high', it buckets that row under the corresponding breakdown item, based on the breakdown mapping and breakdown source. After that bucketing we have all kinds of buckets of rows, ranging from a bucket for 'priority = high' to a bucket for 'assignment group = A'. Then it simply applies the measurement to the rows in each bucket: for example, it counts the rows in the bucket 'priority = high', or it applies a script to calculate the open time for the rows in that bucket. Of course, technically it works a bit smarter, but conceptually this is how breakdowns are applied.
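Tying everything together, the sketch below shows a collection cycle that also applies a breakdown linked to the indicator: rows are bucketed per breakdown item using the mapped field, and the measurement is then applied per bucket. As before, this is a conceptual illustration with hypothetical names and data, not the platform's actual implementation.

```python
from datetime import date

# Hypothetical indicator with one linked breakdown (see the section on assigning
# breakdowns to indicators above).
indicator = {
    "name": "Number of open incidents",
    "aggregate": "count",
    "breakdowns": {"Priority": "priority"},   # breakdown name -> mapped field on 'incident'
}

rows = [                                      # pretend 'open incidents' query result
    {"number": "INC0001", "priority": "high"},
    {"number": "INC0002", "priority": "high"},
    {"number": "INC0003", "priority": "low"},
]

# Bucket each row per linked breakdown, using the field from the breakdown mapping.
buckets = {}
for row in rows:
    for breakdown, field in indicator["breakdowns"].items():
        buckets.setdefault((breakdown, row[field]), []).append(row)

# Apply the indicator's measurement to every bucket and store dated scores,
# one score per breakdown item (the overall score is stored separately).
scores = [
    {"indicator": indicator["name"], "breakdown": breakdown, "item": item,
     "date": date.today(), "score": len(bucket)}            # 'count' aggregate
    for (breakdown, item), bucket in buckets.items()
]
print(scores)
```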
Even this explanation may be overwhelming, I admit. I encourage you to take a look at our out-of-the-box content for incident management for each of the concepts mentioned above. Take, for example, an indicator source like 'open incidents' and see how it relates to multiple indicators like 'Number of open incidents' and 'Summed age of open incidents'. Notice in these indicators how the aggregate that calculates the measurement differs: one uses a simple count, and another the most complex configuration, a script. Check out a breakdown source like groups and notice how it relates to a breakdown, and how that breakdown is related to the indicators mentioned above. Also have a look at the Scoresheet, which provides insight into the scores collected per breakdown.
https://www.servicenow.com/community/platform-analytics-articles/performance-analytics-concepts-how-do-they-all-work-together/ta-p/2304311