The Incident Management process is the end-to-end process that starts with the opening of an incident and ends with its resolution and closure. In between, incidents are assigned to the correct teams and users, worked on, and resolved. Incidents can be put in a pending status, usually because further information is needed to resolve them. Reassignment to different teams or users may occur during the process, for example when the initial assignment was incorrect, when the wrong user was assigned, or when the incident needs attention from multiple users and/or teams during its resolution.
The Incident Management for ServiceNow app template enables:
The following tags are available by default.
Tag | Description |
---|---|
Assignment Group Change | Any subsequent group assignment. |
Created without Assignment Group | The "Open Incident" activity has "Team" = NULL, meaning the incident was created without an assignment group. |
Multiple assignments | The incident has multiple assignments. |
Priority Changes | The priority level of the incident was changed. |
Reopen | An activity 'Change status to "Pending"' or 'Change status to "Assigned"' happens after 'Change status to "Resolved"'. |
Multiple users for same activity | An incident has the same activity executed multiple times by different users. |
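For illustration, the sketch below shows one way the Reopen tag could be derived from the output event log. This is a minimal sketch, not the actual transformation code of the app template; the model name Event_log and the fields Incident_ID, Activity, and Event_end are assumptions.

with Event_log as (
    select * from {{ ref('Event_log') }}
),
-- First moment each incident reached the "Resolved" status.
Resolved_events as (
    select
        Event_log.`Incident_ID`,
        min(Event_log.`Event_end`) as `First_resolved_at`
    from Event_log
    where Event_log.`Activity` = 'Change status to "Resolved"'
    group by Event_log.`Incident_ID`
),
-- Tag incidents that move back to "Pending" or "Assigned" after being resolved.
Reopen_tags as (
    select distinct
        Event_log.`Incident_ID`,
        'Reopen' as `Tag`
    from Event_log
    inner join Resolved_events
        on Resolved_events.`Incident_ID` = Event_log.`Incident_ID`
    where Event_log.`Activity` in ('Change status to "Pending"', 'Change status to "Assigned"')
        and Event_log.`Event_end` > Resolved_events.`First_resolved_at`
)
select * from Reopen_tags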
Due dates
ServiceNow has a complete framework for tracking and categorizing SLAs. Therefore, the due dates for this app template were created to utilize the SLA names defined in ServiceNow itself. Each company has its own SLA categories and priority levels out of the box. ServiceNow stores the maximum duration for each SLA category in the contract_sla table, and the planned end time for each incident number (the time by which the 'Resolved' state should be reached) in the task_sla table. Using both pieces of information, it is possible to obtain the last SLA category assigned to an incident number and to set the due dates correctly.
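As an illustration of this logic, the following sketch combines the two tables to pick the due date per incident. It is a minimal sketch with assumed model names (Task_sla_input, Contract_sla_input) and an assumed ordering rule, not the exact due-date code shipped with the app template.

with Task_sla as (
    select * from {{ ref('Task_sla_input') }}
),
Contract_sla as (
    select * from {{ ref('Contract_sla_input') }}
),
-- Rank the SLA records per incident so that the last one can be selected.
Ranked_sla as (
    select
        Task_sla.`task_display_value` as `Incident_number`,
        Task_sla.`sla_display_value` as `SLA_name`,
        Task_sla.`planned_end_time`,
        row_number() over (
            partition by Task_sla.`task_display_value`
            order by Task_sla.`planned_end_time` desc) as `SLA_rank`
    from Task_sla
)
select
    Ranked_sla.`Incident_number`,
    Ranked_sla.`SLA_name`,
    -- The planned end time of the last SLA record becomes the due date.
    Ranked_sla.`planned_end_time` as `Due_date_time`,
    Contract_sla.`duration` as `Max_duration`
from Ranked_sla
left join Contract_sla
    on Contract_sla.`name_display_value` = Ranked_sla.`SLA_name`
where Ranked_sla.`SLA_rank` = 1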
This app template uses ServiceNow (versions up to the Rome release) as the source system.
Extraction for the system is done using CData Sync. To set up the extraction, make sure you have a valid license for CData Sync and have CData Sync installed.
To extract data from ServiceNow, the following configuration is required in ServiceNow.
Assign appropriate read permissions to each of the objects being extracted. All fields included in the input tables must be visible to the user extracting the data.
The ServiceNow Incident Management app template requires the following ServiceNow objects:
Additional events are included by tracking priority changes, assignment changes, and pending statuses. However, it is also possible to add activities by tracking the Mi_definition from the supporting table Incident_metric_input.
To obtain the due dates, it is also necessary to have access to the following objects.
To obtain the Configuration Item name from the Configuration Management Database, it is also necessary to have access to the following object.
Enter displayvalue=ALL; for the Other parameter in the Miscellaneous section.

To set up the destination connection, follow the steps as defined in Create a destination connection (Automation Cloud).

Note: If you are using Automation Suite, make sure to select AUTOMATION SUITE as the DELIVERY option and select the correct version of Automation Suite you are using.
To create the job in CData Sync, follow the steps below.
In the Advanced tab in the Job Settings panel, edit the following settings:
If you are using Automation Suite, set the Destination Schema in the Settings panel on the overview tab. Use the schema name you retrieved when you created the destination connection.
Important: Make sure you edit the Pre-job Event.
CData Sync allows the use of environment variables to drive specific extraction logic for each query.
Variable | Description | Comment |
---|---|---|
start_extraction_date | First date for which data will be extracted. | Mandatory |
end_extraction_date | Last date for which data will be extracted. | Optional; defaults to today's date. |
To set up the environment variables:
Step | Action |
---|---|
1 | Go to the Events tab in the Job Settings panel of the job you are creating. |
2 | Edit the Pre-Job Event section to add the code displayed below after <!-- Code goes here -->. Do not modify the api:info details that are shown by default. |
3 | Click on Save Changes. |
<!-- Modify environment variables here. -->
<!-- Variable start_extraction_date must be populated.-->
<api:set attr="out.env:start_extraction_date" value="2022-01-01" />
<!-- In case a specific end date is needed, replace the value string with the required date in yyyy-MM-dd format. -->
<!-- e.g. api:set attr="out.env:end_extraction_date" value="2022-02-01" -->
<api:set attr="out.env:end_extraction_date" value="[null | now() | todate('yyyy-MM-dd')]" />
<api:push item="out" />
If end_extraction_date is not set, it defaults to today's date. start_extraction_date must always be populated.
After the data has been successfully loaded, the Process Mining platform needs to start the data ingestion process. This is done by calling the End of Upload API. To set this up, follow the steps below:
Step | Action |
---|---|
1 | Go to the Events tab in the Job Settings panel of the job you are creating. |
2 | Edit the Post-Job Event section to add the code displayed below after <!-- Code goes here -->. Do not modify the api:info details that are shown by default. |
3 | Replace END_OF_UPLOAD_API in the code with the End of Upload API value provided. |
4 | In case you need a debugging log for this call, uncomment the two lines provided in the code below and provide a location for the log file. |
5 | Click on Save Changes. |
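<!-- Call the End of Upload API to start the data ingestion in Process Mining. -->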
<api:set attr="http.url" value="END_OF_UPLOAD_API"/>
<!-- <api:set attr="http.verbosity" value="5"/> -->
<!-- <api:set attr="http.logfile" value="D:/mydir/cdata_log.txt"/> -->
<api:call op="httpPost" in="http"/>
Once the job is correctly set up, go to the Task tab, click + Add Tasks, enable the Custom Query option, and paste the following queries (each query must keep the semicolon at the end). Make sure you save all changes.
REPLICATE [metric_instance] SELECT [sys_id], [id], FORMAT([sys_created_on], 'yyyy-MM-dd hh:mm:ss') as [sys_created_on], [value], [definition] FROM [metric_instance] WHERE ([sys_created_on] >= '{env:start_extraction_date}') AND ([sys_created_on] <= '{env:end_extraction_date}');
REPLICATE [cmdb_ci] SELECT [name_display_value], [sys_class_name_display_value], [sys_updated_on] FROM [cmdb_ci] WHERE ([sys_created_on] >= '{env:start_extraction_date}') AND ([sys_created_on] <= '{env:end_extraction_date}');
REPLICATE [contract_sla] SELECT [name_display_value], [duration] FROM [contract_sla] WHERE ([sys_created_on] >= '{env:start_extraction_date}') AND ([sys_created_on] <= '{env:end_extraction_date}');
REPLICATE [task_sla] SELECT [sla_display_value], [task_display_value], FORMAT([planned_end_time], 'yyyy-MM-dd hh:mm:ss') as [planned_end_time], [stage] FROM [task_sla] WHERE [stage] = 'completed' AND ([sys_created_on] >= '{env:start_extraction_date}') AND ([sys_created_on] <= '{env:end_extraction_date}');
REPLICATE [metric_definition] SELECT [sys_id], [name], [table] FROM [metric_definition] WHERE [table] = 'incident';
REPLICATE [incident] SELECT [sys_id], [sys_updated_on], [assigned_to_display_value], [assignment_group_display_value], [caller_id_display_value], [category_display_value], [close_code_display_value], [cmdb_ci_display_value], [contact_type_display_value], [priority_display_value], [number], [state_display_value], [upon_approval_display_value], [upon_reject_display_value], [urgency_display_value] FROM [incident] WHERE ([sys_created_on] >= '{env:start_extraction_date}') AND ([sys_created_on] <= '{env:end_extraction_date}');
Below is an overview of the different field types and their default format settings.
Field type | Description |
---|---|
boolean | true, false, 1, 0 |
date | yyyy-mm-dd |
datetime | yyyy-mm-dd hh:mm:ss[.ms], where [.ms] is optional. |
double | Decimal separator: . (dot); thousand separator: none |
integer | Thousand separator: none |
text | N/A |
Table that contains base configuration item information.
Field | Type | Stores |
---|---|---|
name_display_value | text | Name of the CI instance. |
sys_class_name_display_value | text | Category for CI. |
sys_updated_on | text | Date the data was updated on the system. |
Table that contains data related to each incident number. A filter is applied on incident_input to keep only the last record (Last_line = 1) for each incident number. This filter is necessary because some incident numbers may have multiple entries in this table. A sketch of this filter is included after the field table below.
Field | Type | Stores |
---|---|---|
number | text | Incident number. |
assigned_to_display_value | text | User assigned. |
priority_display_value | text | Priority level for the incident. |
caller_id_display_value | text | User that created the incident. |
assignment_group_display_value | text | First team assigned to the incident. |
cmdb_ci_display_value | text | Affected configuration item. |
close_code_display_value | text | Category for the close status. |
state_display_value | text | Status of the ticket. |
contact_type_display_value | text | Type of contact used to report the incident. |
category_display_value | text | Incident category (Security, Software, Server, Hardware, Network, etc). |
upon_approval_display_value | text | Instructions for next steps in case of approval. |
upon_reject_display_value | text | Instructions for next steps in case of rejection. |
urgency_display_value | text | Urgency level for the Incident. |
sys_updated_on | text | Date the data was updated on the system. |
sys_id | text | ID necessary to join the Metric_instance table. This ID is a primary key for the table, therefore each record is unique for the table. |
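The sketch below illustrates the Last_line filter mentioned above. It is a minimal sketch with assumed model and ordering column names (Incident_input, ordered by sys_updated_on), not the actual transformation code.

with Incident_input as (
    select * from {{ ref('Incident_input') }}
),
-- Number the records per incident, most recent first.
Incident_ranked as (
    select
        Incident_input.*,
        row_number() over (
            partition by Incident_input.`number`
            order by Incident_input.`sys_updated_on` desc) as `Last_line`
    from Incident_input
)
-- Keep only the last record for each incident number.
select * from Incident_ranked
where Incident_ranked.`Last_line` = 1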
Data related to SLAs and due dates.
Field | Type | Stores |
---|---|---|
task_display_value | text | Incident number. |
sla_display_value | text | SLA Name. |
planned_end_time | datetime | Planned end time for the SLA, used as the due date. |
stage | text | Stage used for filtering the SLAs. |
Contains data related to the duration of each SLA type. This table is needed in conjunction with the task_sla table to obtain the last SLA record (due date type) assigned to incidents.
Field | Type | Stores |
---|---|---|
name_display_value | text | SLA Name. |
duration | integer | Max duration of an SLA type. |
Defined metrics can track how long an audited field holds a certain value.
Field | Type | Stores |
---|---|---|
sys_id | text | Id necessary to join the metric_instance table. This ID is a primary key for the table, therefore each record is unique for the table. |
name | text | Definition for the type of data retrieved on "value" field in metric_instance table. |
table | text | Contains the name of the table to which the metric definition applies. |
A metric instance is a record in the metric_instance table.
Field | Type | Stores |
---|---|---|
definition | text | Id necessary to join the Metric_definition table. |
id | text | Id necessary to join the Incident table. |
sys_id | text | Id of the metric instance. |
sys_created_on | datetime | Used for creating the Event end. |
value | text | Contains data about status, users and teams. |
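The field descriptions above imply the join pattern sketched below: definition links a metric instance to its metric definition, and id links it to the incident it belongs to. This is a minimal sketch with assumed model names (Metric_instance_input, Metric_definition_input, Incident_input), not the actual transformation code.

with Metric_instance as (
    select * from {{ ref('Metric_instance_input') }}
),
Metric_definition as (
    select * from {{ ref('Metric_definition_input') }}
),
Incident as (
    select * from {{ ref('Incident_input') }}
)
select
    Incident.`number` as `Incident_number`,
    Metric_definition.`name` as `Definition`,
    Metric_instance.`value`,
    Metric_instance.`sys_created_on` as `Event_end`
from Metric_instance
-- 'definition' holds the sys_id of the related metric definition.
inner join Metric_definition
    on Metric_definition.`sys_id` = Metric_instance.`definition`
-- 'id' holds the sys_id of the related incident.
inner join Incident
    on Incident.`sys_id` = Metric_instance.`id`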
This seed file is used to add automation-related properties to each activity, used for the automation potential dashboard. For more information, see Simulating Automation Potential.
Field | Type | Description |
---|---|---|
Activity | Text | Display name for the activity |
Event_cost | Double | Cost associated with the activity |
Event_processing_time | Integer | Processing time associated with the activity (in milliseconds) |
This seed file is used to define properties for the due dates. For more information, see Due Dates.
Field | Type | Description |
---|---|---|
Due_date | Text | The name of the due date |
Due_date_type | Text | The Due date type |
Fixed_costs | Boolean | An indication whether costs are fixed or time based |
Cost | Double | Fixed costs: The amount of costs. Variable costs: The amount of costs per Time and Time_type |
Time | Integer | A number indicating the amount of time in case of time-based costs |
Time_type | Text | Type of time period for cost calculations. This can be any of the following values: day , hour , minute , second or millisecond |
Below is an overview of the variables that must be configured. These variables are defined in the dbt_project.yml file.
Variable | Type | Description |
---|---|---|
datetime_format | string / Integer | Specifies the date format which will be used when converting to DateTime type. |
max_datetime | string | Defines a maximum date that is used in the logic to recover the user and team for several tables. It needs to match the format specified by datetime_format. |
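For illustration, the sketch below shows how such a variable can be read inside a dbt model with the var() function. The model name, column, and conversion function are assumptions; the actual conversion depends on the database used by the Process Mining platform.

-- Minimal sketch: convert a raw text timestamp using the configured datetime_format.
select
    Incident_raw.*,
    to_timestamp(Incident_raw.`sys_updated_on`, '{{ var("datetime_format") }}') as `Updated_datetime`
from {{ ref('Incident_raw') }} as Incident_raw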
Object | Input data |
---|---|
Incidents | Incident, Configuration Item, User, Urgency |
Activity | Activity Created on | Definition | Description |
---|---|---|---|
Open Incident | Open_and_incident_state_duration_events | Open | Creation of an Incident number. |
Change status to "Assigned" | Open_and_incident_state_duration_events | Incident State Duration | Status for the Incident. |
To be Assigned | Assigned_to_duration_events | Assigned to Duration | Assigning NULL to the user field. |
Assign User | Assigned_to_duration_events | Assigned to Duration | Assigning a user to the incident. |
Assign First Assignment Group | Assignment_group_events | Assignment Group | First time an assignment group is set to work on an Incident. |
Change Assignment Group | Assignment_group_events | Assignment Group | Any subsequent group assignment. |
Change status to "Work in Progress" | Open_and_incident_state_duration_events | Incident State Duration | Status for the Incident. |
Change status to "Resolved" | Open_and_incident_state_duration_events | Incident State Duration | Status for the Incident. |
Close Incident | Open_and_incident_state_duration_events | Incident State Duration | Status for the Incident. |
Change Priority to # (1 to 4) | Priority_change_events | Priority Change | Changes the level of priority for the Incident. |
Change status to "Pending" | Open_and_incident_state_duration_events | Incident State Duration | Status for the Incident. |
Add Pending reason "Awaiting Scheduled Appointment" | Incident_pending_status_metrics_events | Incident Pending Status Metrics | Identify the reason for a pending status. |
Add Pending reason "Awaiting User Info" | Incident_pending_status_metrics_events | Incident Pending Status Metrics | Identify the reason for a pending status. |
Add Pending reason "Pending Depot Shipping" | Incident_pending_status_metrics_events | Incident Pending Status Metrics | Identify the reason for a pending status. |
Add Pending reason "Vendor Response" | Incident_pending_status_metrics_events | Incident Pending Status Metrics | Identify the reason for a pending status. |
To add new activities:

1. Identify the Definition from the Incident_metric_input table that relates to the activity being added.
2. Add that Definition to the Name filter list in Incident_metric.sql.
3. Create a new SQL file in the 3_events folder, similar to the example SQL below.
4. Add the new events to Events_base.sql by using a UNION ALL function.

The following example shows the SQL for adding a new activity.
with Incident_metric as (
select * from {{ ref('Incident_metric') }}
),
Assignment_group_times as (
select * from {{ ref('Assignment_group_times') }}
),
Assigned_to_duration_times as (
select * from {{ ref('Assigned_to_duration_times') }}
),
New_activity_definition as (
select
Incident_metric.`Creation_date` as `Event_end`,
Incident_metric.`Incident_ID`,
Incident_metric.`Definition`,
'New activity' as `Activity`
from Incident_metric
-- New activity mapped, based on the 'Definition'
where Incident_metric.`Definition` = 'New activity definition'
),
New_activity_definition_events as (
select
New_activity_definition.`Event_end`,
New_activity_definition.`Incident_ID`,
New_activity_definition.`Definition`,
New_activity_definition.`Activity`,
Assigned_to_duration_times.`Value` as `User`,
Assignment_group_times.`Value` as `Team`,
-- 'number' must be filled in according to the logic for ordering the activities.
'number' as `Activity_order`
from New_activity_definition
left join Assigned_to_duration_times ON Assigned_to_duration_times.`Incident_ID` = New_activity_definition.`Incident_ID`
and New_activity_definition.`Event_end` between Assigned_to_duration_times.`Next_start` and Assigned_to_duration_times.`Next_end`
left join Assignment_group_times ON Assignment_group_times.`Incident_ID` = New_activity_definition.`Incident_ID`
and New_activity_definition.`Event_end` between Assignment_group_times.`Next_start` and Assignment_group_times.`Next_end`
)
select * from New_activity_definition_events
Loading a large amount of data may cause a timeout failure in the job. If this happens, follow the instructions below: