ServiceNow Incident Management


Introduction

With the ServiceNow Incident Management app template, you can load raw input data from ServiceNow for the ServiceNow Incident Management process, extracted to either Snowflake or SQL Server using CData Sync. The raw input data is transformed through a series of dbt (data build tool) transformations that produce the required input data used in Process Mining process apps.

Incident Management process

The Incident Management process is the end-to-end process that starts with the opening of an incident and runs up to the resolution and closing of the incident. Intermediate steps include assigning the incident to the correct teams and users, and working on and resolving the incident. Cases can be in a pending status, which usually means that further information is needed to resolve the incident. Reassignment to different teams and users may occur during the process, for example, if the initial assignment was incorrect, or if the incident needs attention from multiple users and/or teams during its resolution.

The Incident Management for ServiceNow app template enables:

Tags

Tag Description
Assignment Group Change Any subsequent group assignment.
Created without Assignment Group Identify if the "Open Incident" activity has the "Team" = NULL.
Multiple assignments The incident has multiple assignments.
Priority Changes Changes the level of priority.
Reopen An activity 'Change status to "Pending"' or 'Change status to "Assigned"' happens after 'Change status to "Resolved"'.
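The Reopen tag logic above can be sketched as a simple scan over a case's ordered activities. The following Python snippet is illustrative only (the actual tag is computed in the dbt transformations), and the list-based event representation is a simplification:

```python
# Sketch of the Reopen tag: an incident counts as reopened when a
# 'Change status to "Pending"' or 'Change status to "Assigned"' activity
# occurs after a 'Change status to "Resolved"' activity.
def is_reopened(activities):
    """activities: list of activity names ordered by event time."""
    resolved_seen = False
    for activity in activities:
        if activity == 'Change status to "Resolved"':
            resolved_seen = True
        elif resolved_seen and activity in (
            'Change status to "Pending"',
            'Change status to "Assigned"',
        ):
            return True
    return False

events = [
    "Open Incident",
    'Change status to "Assigned"',
    'Change status to "Resolved"',
    'Change status to "Assigned"',  # assigned again after resolution
    "Close Incident",
]
print(is_reopened(events))  # True
```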

Due dates

ServiceNow has a complete framework for tracking and categorizing SLAs. Therefore, the due dates for this connector were created in a way that utilizes the different SLA names created in ServiceNow's own system. Each company will have its own SLA categories and priority levels shown out of the box. ServiceNow stores the maximum duration of each SLA category in the contract_sla table, and the planned end time (the time by which the 'resolved' state should be reached) for each incident number in the task_sla table. Using both pieces of information, it is possible to obtain the last SLA category assigned to an incident number and to correctly set the due dates.
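The due-date logic can be sketched as follows, using simplified in-memory rows instead of the task_sla and contract_sla tables. Field names follow the input tables described later in this document; taking the "last" SLA record as the one with the latest planned_end_time is an assumption made for illustration:

```python
# Sketch: for each incident number (task_display_value), pick the last
# completed SLA record; its sla_display_value gives the due-date type,
# its planned_end_time gives the due date, and contract_sla supplies the
# maximum duration for that SLA category. Sample data is hypothetical.
from datetime import datetime

task_sla = [
    {"task_display_value": "INC001", "sla_display_value": "Priority 1 resolution",
     "planned_end_time": datetime(2023, 1, 2, 12, 0)},
    {"task_display_value": "INC001", "sla_display_value": "Priority 2 resolution",
     "planned_end_time": datetime(2023, 1, 3, 9, 0)},
]
# Maximum duration per SLA category, in seconds (illustrative values).
contract_sla = {"Priority 1 resolution": 14400, "Priority 2 resolution": 28800}

def last_sla(incident_id):
    rows = [r for r in task_sla if r["task_display_value"] == incident_id]
    last = max(rows, key=lambda r: r["planned_end_time"])
    return (last["sla_display_value"], last["planned_end_time"],
            contract_sla[last["sla_display_value"]])

name, due, max_duration = last_sla("INC001")
print(name, due)  # Priority 2 resolution 2023-01-03 09:00:00
```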


ServiceNow configuration

Source system

ServiceNow systems (up to the "Rome" version)

Environmental settings

In order to connect via CData, the user must have appropriate read permissions on each of the objects being extracted. All fields included in the input tables must be visible to the user extracting the data.

System specific settings

The ServiceNow Incident Management app template requires the following ServiceNow objects:

Additional events are included by tracking priority changes, assignment changes, and pending statuses. However, it is possible to add activities by tracking the Mi_definition from the supporting table "Incident_metric_input".

In order to obtain the due dates, it is also necessary to have access to the following objects.

In order to obtain the Configuration Item name from the Configuration Management Database it is also necessary to have access to the following object.


Configuring CData Sync for ServiceNow

Below is a description of how to use CData Sync to set up a source connection and load data into a Process Mining ServiceNow Incident Management process app.

In general, you should follow the steps as described in Loading data using CData Sync (Snowflake) or Loading data using CData Sync (SQL Server) to set up data loading using CData Sync. Since specific settings are required when using ServiceNow, pay attention to the steps described below.

Setting up the source connection

Creating the extraction job

Define the following settings in the Replicate Options section in the Advanced tab in the Job Settings panel.

Setting up environment variables in the extraction job

Important: Make sure you edit the Pre-job Event.

CData Sync allows the use of environment variables in order to drive specific extraction logic in each query.

Variable Description Comment
start_extraction_date First date for which data will be extracted. Mandatory
end_extraction_date Last date for which data will be extracted.

Be mindful of choosing a start_extraction_date that encompasses the data that you want to capture, since the extracted incidents are the starting point of the Incident Management process.

In order to setup the environment variables:

Step Action
1 Go to the JOBS tab and open the extraction job created in 4: Creating the extraction job.
2 Go to the Events tab in the Job Settings panel.
3 Edit the Pre-Job Event section to add the code displayed below after `<!-- Code goes here -->`.
4 Set the appropriate dates in the value fields. end_extraction_date defaults to today's date; start_extraction_date must always be populated.
5 Click on Save Changes.
<!-- Modify variables here. Variable start_extraction_date must be populated.
     In case a specific end date is needed, replace now() with the required date
     in yyyy-MM-dd format -->
<api:set attr="out.env:start_extraction_date" value="1900-01-01" />
<api:set attr="out.env:end_extraction_date" value="[null | now() | todate('yyyy-MM-dd')]" />
<api:push item="out" />

Important: Do not modify the api:info details that are shown by default.

In order to modify the environment variables, change the values within the Events tab. end_extraction_date defaults to today's date; start_extraction_date must always be populated.

Table Replication

Once the job is correctly set up, click Add Custom Query under the Tables tab and paste the following queries (each query must keep the semicolon at the end). Make sure you save all changes.

REPLICATE [metric_instance_raw]
SELECT [sys_id], [id], [sys_created_on], [value], [definition]
FROM [metric_instance]
WHERE ([sys_created_on] >= '{env:start_extraction_date}') AND ([sys_created_on] <= '{env:end_extraction_date}');

REPLICATE [cmdb_ci_raw]
SELECT [name_display_value], [sys_class_name_display_value], [sys_updated_on]
FROM [cmdb_ci]
WHERE ([sys_created_on] >= '{env:start_extraction_date}') AND ([sys_created_on] <= '{env:end_extraction_date}');

REPLICATE [contract_sla_raw]
SELECT [name_display_value], [duration]
FROM [contract_sla]
WHERE ([sys_created_on] >= '{env:start_extraction_date}') AND ([sys_created_on] <= '{env:end_extraction_date}');

REPLICATE [task_sla_raw]
SELECT [sla_display_value], [task_display_value], [planned_end_time], [stage]
FROM [task_sla]
WHERE [stage] = 'completed' AND ([sys_created_on] >= '{env:start_extraction_date}') AND ([sys_created_on] <= '{env:end_extraction_date}');

REPLICATE [metric_definition_raw]
SELECT [sys_id], [name], [table]
FROM [metric_definition]
WHERE [table] = 'incident' AND ([sys_created_on] >= '{env:start_extraction_date}') AND ([sys_created_on] <= '{env:end_extraction_date}');

REPLICATE [incident_raw]
SELECT [sys_id], [sys_updated_on], [assigned_to_display_value], [assignment_group_display_value], [caller_id_display_value], [category_display_value], [close_code_display_value], [cmdb_ci_display_value], [contact_type_display_value], [priority_display_value], [number], [state_display_value], [upon_approval_display_value], [upon_reject_display_value], [urgency_display_value]
FROM [incident]
WHERE ([sys_created_on] >= '{env:start_extraction_date}') AND ([sys_created_on] <= '{env:end_extraction_date}');

Input fields

The following tables include the list of fields per input table.

Input types

Below is an overview of the different field types and their default format settings.

Field type Description
boolean true, false, 1, 0
date yyyy-mm-dd
datetime yyyy-mm-dd hh:mm:ss[.ms], where [.ms] is optional.
double Decimal separator: . (dot); thousand separator: none
integer Thousand separator: none
text N/A
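The datetime format above allows an optional fractional-seconds part. As a small illustration (not part of the app template itself), a value in the default format can be parsed by trying both variants:

```python
# Sketch of parsing the default datetime format "yyyy-mm-dd hh:mm:ss[.ms]",
# where the fractional part is optional.
from datetime import datetime

def parse_dt(value):
    for fmt in ("%Y-%m-%d %H:%M:%S.%f", "%Y-%m-%d %H:%M:%S"):
        try:
            return datetime.strptime(value, fmt)
        except ValueError:
            continue
    raise ValueError(f"Unrecognized datetime: {value}")

print(parse_dt("2023-05-01 10:30:00"))      # 2023-05-01 10:30:00
print(parse_dt("2023-05-01 10:30:00.250"))  # 2023-05-01 10:30:00.250000
```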

cmdb_ci

Table that contains base configuration item information.

Field Type Stores
name_display_value text Name of the CI instance.
sys_class_name_display_value text Category for CI.
sys_updated_on text Date the data was updated on the system.

incident

Table that contains data related to each incident number. A filter is applied on incident_input to get only the last record (Last_line = 1) for each incident number. This filter is necessary because some incident numbers may have multiple entries in this table.

Field Type Stores
number text Incident number (Case id).
assigned_to_display_value text User assigned.
priority_display_value text Level priority for the incident.
caller_id_display_value text User that created the incident.
assignment_group_display_value text Team (assignment group) assigned to the incident.
cmdb_ci_display_value text Affected configuration item.
close_code_display_value text Category for the close status.
state_display_value text Status of the ticket.
contact_type_display_value text Type of contact used to report the incident.
category_display_value text Incident category (Security, Software, Server, Hardware, Network, etc).
upon_approval_display_value text Instructions for next steps in case of approval.
upon_reject_display_value text Instructions for next steps in case of rejection.
urgency_display_value text Urgency level for the Incident.
sys_updated_on text Date the data was updated on the system.
sys_id text ID necessary to join the Metric_instance table. This ID is a primary key for the table, therefore each record is unique for the table.
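The Last_line = 1 filter described above amounts to keeping only the most recent record per incident number. The following Python sketch illustrates the idea with hypothetical rows; the real filter is implemented in the dbt transformations:

```python
# Keep only the latest record (by sys_updated_on) for each incident number.
# In the "yyyy-mm-dd hh:mm:ss" format, timestamps sort lexicographically.
rows = [
    {"number": "INC001", "state_display_value": "Assigned", "sys_updated_on": "2023-01-01 08:00:00"},
    {"number": "INC001", "state_display_value": "Closed",   "sys_updated_on": "2023-01-05 17:00:00"},
    {"number": "INC002", "state_display_value": "Resolved", "sys_updated_on": "2023-01-02 09:00:00"},
]

latest = {}
for row in rows:
    current = latest.get(row["number"])
    if current is None or row["sys_updated_on"] > current["sys_updated_on"]:
        latest[row["number"]] = row

print(sorted((r["number"], r["state_display_value"]) for r in latest.values()))
# [('INC001', 'Closed'), ('INC002', 'Resolved')]
```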

task_sla

Data related to SLAs and due dates.

Field Type Stores
task_display_value text Incident number (Case id).
sla_display_value text SLA Name.
planned_end_time datetime Expected date for the due date.

contract_sla

Contains data related to the duration for each SLA type. This table is needed in conjunction with task_sla table to obtain the last SLA record (due date type) assigned to case ids.

Field Type Stores
name_display_value text SLA Name.
duration integer Max duration of an SLA type.

metric_definition

Defined metrics can track how long an audited field holds a certain value.

Field Type Stores
sys_id text Id necessary to join the metric_instance table. This ID is a primary key for the table, therefore each record is unique for the table.
name text Definition for the type of data retrieved on "value" field in metric_instance table.

metric_instance

A metric instance is a record in the metric_instance table.

Field Type Stores
definition text Id necessary to join the Metric_definition table.
id text Id necessary to join the Incident table.
sys_created_on datetime Used for creating the Event end.
value text Contains data about status, users and teams.
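Putting the three tables together: the definition field joins metric_definition (on sys_id) to resolve the metric name, the id field joins incident (on sys_id) to resolve the incident number, and sys_created_on supplies the event end. A sketch with hypothetical sample rows:

```python
# Sketch of turning metric_instance records into events by joining
# metric_definition (definition -> sys_id) and incident (id -> sys_id).
metric_definition = {"def1": "Incident State Duration"}   # sys_id -> name
incident = {"inc_sys_1": "INC001"}                        # sys_id -> number
metric_instance = [
    {"definition": "def1", "id": "inc_sys_1",
     "sys_created_on": "2023-01-01 08:15:00", "value": "Assigned"},
]

events = [
    {
        "case_id": incident[mi["id"]],
        "metric": metric_definition[mi["definition"]],
        "event_end": mi["sys_created_on"],
        "value": mi["value"],
    }
    for mi in metric_instance
]
print(events[0]["case_id"], events[0]["metric"], events[0]["value"])
# INC001 Incident State Duration Assigned
```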

Design specifications

Entities

Entity Input data
Incidents Incident, Configuration Item, User, Urgency

Cases_base

Field Data Type Description
Case ID Text Number that identifies the case; it is unique.
Case Text A user-friendly name to identify the case.
Case status Text The status of the case in the process. For example: 'Open', 'Closed', 'Pending', etc.
Case type Text Classifies the type of case into high, low and medium.
Case owner Text Name to identify the owner of the case.
Category Text Incident category (Security, Software, Server, Hardware, Network, etc).
Class name Text Configuration Item class name.
Close code Text Code used when closing ticket.
Cmdb ci Text Configuration item.
Contact type Text How user managed to open the ticket.
Customer Text User that created the incident.
Supplier Text First team assigned to the incident.
Upon approval Text Work instructions if approved.
Upon reject Text Work instructions if rejected.
Urgency Text Urgency.

Note: The Cases_base table filters Case IDs based on whether the Incident_metric_input table has a record for that same Case ID. This guarantees that all Case IDs have at least one activity.
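This filter is essentially a semi-join, sketched below with hypothetical data; the real filter lives in the dbt models:

```python
# Keep only cases that have at least one record in Incident_metric_input,
# so every case in Cases_base has at least one activity.
cases = [{"case_id": "INC001"}, {"case_id": "INC002"}, {"case_id": "INC003"}]
incident_metric_input = [{"case_id": "INC001"}, {"case_id": "INC003"}]

case_ids_with_activity = {r["case_id"] for r in incident_metric_input}
cases_base = [c for c in cases if c["case_id"] in case_ids_with_activity]
print([c["case_id"] for c in cases_base])  # ['INC001', 'INC003']
```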

Activities

Activity Activity Created on Mi_definition Description
Open Incident Open_and_incident_state_duration_events Open Creation of an Incident number.
Change status to "Assigned" Open_and_incident_state_duration_events Incident State Duration Status for the Incident.
To be Assigned Assigned_to_duration_events Assigned to Duration Assigning NULL to the user field.
Assign User Assigned_to_duration_events Assigned to Duration Assigning a user to the incident.
Assign First Assignment Group Assignment_group_events Assignment Group First time an assignment group is set to work on an Incident.
Change Assignment Group Assignment_group_events Assignment Group Any subsequent group assignment.
Change status to "Work in Progress" Open_and_incident_state_duration_events Incident State Duration Status for the Incident.
Change status to "Resolved" Open_and_incident_state_duration_events Incident State Duration Status for the Incident.
Close Incident Open_and_incident_state_duration_events Incident State Duration Status for the Incident.
Change Priority to # (1 to 4) Priority_change_events Priority Change Changes the level of priority for the Incident.
Change status to "Pending" Open_and_incident_state_duration_events Incident State Duration Status for the Incident.
Add Pending reason "Awaiting Scheduled Appointment" Incident_pending_status_metrics_events Incident Pending Status Metrics Identify the reason for a pending status.
Add Pending reason "Awaiting User Info" Incident_pending_status_metrics_events Incident Pending Status Metrics Identify the reason for a pending status.
Add Pending reason "Pending Depot Shipping" Incident_pending_status_metrics_events Incident Pending Status Metrics Identify the reason for a pending status.
Add Pending reason "Vendor Response" Incident_pending_status_metrics_events Incident Pending Status Metrics Identify the reason for a pending status.

Customizing the transformations

Dbt variables

Below is an overview of the variables that must be configured. These variables are defined in the dbt_project.yml file.

Variable Type Description
datetime_format string / integer Specifies the date format used when converting to the DateTime type. The SQL Server format value is 20. The Snowflake format is 'YYYY-MM-DD hh24:mi:ss.ff3'.
max_datetime string Defines a maximum date that is used in the logic to recover the user and team for several tables. It must match the datetime_format format.

Limitations, known issues, common problems

Known issues

Common problems

Loading a large amount of data may cause a timeout failure in the job. If this happens, follow the instructions below:

Note: Recommended values for loading a large amount of data are Batch size = 10000 and Command Timeout = 5000. These values can be increased or decreased depending on the number of records.