The Incident-Management process is the end-to-end issue-tracking process, from the moment an issue is reported until it is resolved. The process consists of several steps: opening new issues, assigning them to the correct users, working on them, and finally resolving and closing them. In some cases, time is spent in a pending status (usually when further information is needed to resolve the issue) or in a waiting-for-approval status. Reassignments to different users may also occur during the process, depending on whether the first assignment was correct, whether the assigned user is actually the best option to resolve the specific issue, or whether the issue needs attention from multiple users during its resolution.
The following tags are available by default.
Tag | Description |
---|---|
Resolved in less than a day | The issue has a "Set resolution to Done" activity within less than a day of being created. This indicates that the issue was resolved in less than a day and gives visibility into how efficiently the problem was handled. |
Estimated time set multiple times | The issue has its estimated time set multiple times. The estimated time identifies how long the consultant is expected to need to complete their tasks. Changes to the estimated time might impact the expected resolution date and can reveal inefficiencies in the process. |
Linked to other issue | Identifies when an Issue_ID is linked to another Issue_ID. Linked issues may show the ramification of a single issue into multiple issues. The resolution of the parent issue might bring about the resolution of multiple other issues. Therefore, all these issues will have similar throughput times and will also be part of the same process. |
Multiple assignments | The issue has multiple user assignments. This can indicate either that the first assigned user lacked the skills to resolve the issue, or inefficiencies in identifying the best-qualified user to work on it. |
Unassigned issue | Identifies whether the issue does not have a user assigned. Opening issues without an assigned user indicates inconsistency in the assignment process. This lack of information makes it harder to track the expert that resolved or worked on the issue. It could also reflect issues that were left without anybody working on them for a long while. |
Reopen | A ticket is reopened after it was resolved (a 'Remove resolution status' activity happens after the resolution is set). The reopening of a "Resolved" incident might indicate that the solution provided for the incident was not satisfactory. This could mean that the support user rushed to close it to comply with the SLA, providing a less than optimal service. |
Multiple users for same activity | An issue has the same activity executed multiple times by different users. |
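These tags are computed in the app template's transformations. Purely as an illustration, the sketch below shows how two of them could be derived from an event log; the column names (Issue_ID, Activity, Event_end) and the pandas approach are assumptions for this sketch, not the template's actual implementation.

```python
import pandas as pd

# Illustrative event log; column and activity names are assumptions for this sketch.
events = pd.DataFrame({
    "Issue_ID": ["A-1", "A-1", "A-1", "A-2", "A-2"],
    "Activity": ["Create issue", "Set resolution to Done", "Remove resolution status",
                 "Create issue", "Set resolution to Done"],
    "Event_end": pd.to_datetime([
        "2022-01-03 09:00:00", "2022-01-03 15:30:00", "2022-01-05 10:00:00",
        "2022-01-04 08:00:00", "2022-01-06 12:00:00"]),
})

# Tag "Resolved in less than a day": a resolution activity within 24 hours of issue creation.
created = events[events["Activity"] == "Create issue"].set_index("Issue_ID")["Event_end"]
resolved = events[events["Activity"] == "Set resolution to Done"].set_index("Issue_ID")["Event_end"]
resolved_in_less_than_a_day = (resolved - created) < pd.Timedelta(days=1)

# Tag "Reopen": a "Remove resolution status" activity after the resolution was set.
def is_reopened(activities):
    acts = list(activities)
    if "Set resolution to Done" not in acts or "Remove resolution status" not in acts:
        return False
    return acts.index("Remove resolution status") > acts.index("Set resolution to Done")

reopened = (events.sort_values("Event_end")
            .groupby("Issue_ID")["Activity"]
            .apply(is_reopened))

print(resolved_in_less_than_a_day)  # A-1: True,  A-2: False
print(reopened)                     # A-1: True,  A-2: False
```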
The following due dates are available by default.
Due date | Description |
---|---|
Time to resolution | Time to resolution SLA. Complying with the SLA is the most important performance indicator when assessing the overall quality of an incident-management service. Incidents that do not meet the expected due dates for their resolution reflect poorly on the service. |
The "Stop_time" and "Breach_time" (actual date and expected date for the due dates) fields are extracted from the json-formatted field "Time to resolution" in Issues
. The SLA is a custom setup in Jira. The due date created will track the exact same Time to resolution SLA from Jira's interface.
Note: The "Stop_time" may not always be the same as the "Resolution_date".
This app template uses Jira Service Management (cloud versions) as the source system. Extraction for the system is done using CData Sync. To set up the extraction, make sure you have a valid license for CData Sync and that CData Sync is installed.
To be able to connect to Jira Service Management, an API token has to be generated for a user who has the appropriate read permissions on each of the objects being extracted. All fields included in the input tables must be visible to the user extracting the data.
The Jira Incident-Management app template requires the following Jira objects: Issues, IssueChangelogs, and Statuses.
Follow these steps to generate an API token that is necessary to connect to the system and retrieve the data.
Step | Action |
---|---|
1 | Go to your Jira Service Management environment URL, click Your profile and settings, and select Manage Account. |
2 | On the new page, select the Security menu and search for Create and manage API tokens. |
3 | Click Create API token, name it as you prefer, and press Create. |
4 | A new API token is created. Copy it and save it for the configuration. |
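Optionally, you can check that the token and permissions work before configuring the extraction. The sketch below is not part of the setup; it assumes a Jira Cloud site URL and uses basic authentication with the Atlassian account email plus the API token, which is the standard authentication scheme for Jira Cloud REST calls.

```python
import requests

# Placeholders: replace with your site URL, account email, and the API token
# generated in the steps above.
JIRA_BASE_URL = "https://your-domain.atlassian.net"
ACCOUNT_EMAIL = "user@example.com"
API_TOKEN = "<api-token>"

# /rest/api/3/myself returns the profile of the authenticated user;
# a 200 response confirms the token is valid for this site.
response = requests.get(
    f"{JIRA_BASE_URL}/rest/api/3/myself",
    auth=(ACCOUNT_EMAIL, API_TOKEN),
    timeout=30,
)
response.raise_for_status()
print("Authenticated as:", response.json().get("displayName"))
```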
To set up the destination connection, follow the steps as defined in Create a destination connection (Automation Cloud). Note: If you are using Automation Suite, make sure to select AUTOMATION SUITE as the DELIVERY option and select the correct version of Automation Suite you are using.
To create the job in CData Sync, make sure to follow the steps below.
In the Advanced tab in the Job Settings panel, define the following setting: copy LongTextSize = 255 and paste it in the Additional Options field. Important: Make sure you edit the Pre-job Event.
CData Sync allows the use of environment variables in order to drive specific extraction logic in each query.
Variable | Description | Comment |
---|---|---|
start_extraction_date | First date for which data will be extracted. | Mandatory |
end_extraction_date | Last date for which data will be extracted. | Optional; defaults to today's date |
project_key_value | Project identifier. | Mandatory |
project_key_value_like | Project identifier pattern used in LIKE filters. | Mandatory |
issue_type_name_value | Issue type identifier. | |
Jira Service Management may have multiple projects. The project_key_value and project_key_value_like should be used to identify the correct project being mapped.
Because the Jira Service Management app template uses the Issue Number as the main object, take care when choosing the extraction date, as it will be used across all objects. To set up the environment variables:
Step | Action |
---|---|
1 | Go to the Events tab in the Job Settings panel of the job you are creating. |
2 | Edit the Pre-Job Event section to add the code displayed below after <!-- Code goes here -->. Do not modify the api:info details that are shown by default. |
3 | Click on Save Changes. |
<!-- Modify variables here. Variable start_extraction_date must be populated. In case a specific end date is needed, replace now() with the required date in yyyy-MM-dd format -->
<api:set attr="out.env:start_extraction_date" value="2022-01-01" />
<api:set attr="out.env:end_extraction_date" value="[null | now() | todate('yyyy-MM-dd')]" />
<!-- Identify and replace the correct project key for the following variables. The project key value identifies the projects from Jira that will be extracted. In case the client decides to have all the projects in a Jira environment mapped, remove the next two lines and the filters for them in the replicate script. -->
<api:set attr="out.env:project_key_value" value="PROJ" />
<api:set attr="out.env:project_key_value_like" value="PROJ%" />
<!-- Identify and replace the correct issue type(s) for the following variable if needed. -->
<api:set attr="out.env:issue_type_name_value" value="%Incident" />
<api:push item="out" />
To modify the environment variables, edit the values within the Events tab. By default, end_extraction_date is set to today's date. start_extraction_date must always be populated.
After the data has been successfully loaded, the Process Mining platform needs to start the data ingestion process. This is done by calling the End of Upload API. In order to set this up, follow the steps below:
Step | Action |
---|---|
1 | Go to the Events tab in the Job Settings panel of the job you are creating. |
2 | Edit the Post-Job Event section to add the code displayed below after <!-- Code goes here -->. Do not modify the api:info details that are shown by default. |
3 | Replace END_OF_UPLOAD_API in the code with the End of Upload API value provided. |
4 | In case you need a debugging log for this call, uncomment the two lines provided in the code below and provide a location for the log file. |
5 | Click on Save Changes. |
<api:set attr="http.url" value="END_OF_UPLOAD_API"/>
<!-- <api:set attr="http.verbosity" value="5"/> -->
<!-- <api:set attr="http.logfile" value="D:/mydir/cdata_log.txt"/> -->
<api:call op="httpPost" in="http"/>
Once the job is correctly set up, click on Add Custom Query under the Tasks tab and paste the following queries (each query needs to keep the semicolon at the end). Make sure you save all changes.
REPLICATE [Issues] SELECT [Id], FORMAT([Updated], 'yyyy-MM-dd hh:mm:ss') as [Updated], [AssigneeDisplayName], FORMAT([Created], 'yyyy-MM-dd hh:mm:ss') as [Created], [CreatorDisplayName], [Description], [IssueTypeName], [Key], [OriginalEstimateSeconds], [PriorityName], [ProjectName], [ReporterDisplayName], FORMAT([ResolutionDate], 'yyyy-MM-dd hh:mm:ss') as [ResolutionDate], [ResolutionName], [SecurityLevel], [StatusName], [Time_to_resolution] FROM [Issues] WHERE ([Created] >= '{env:start_extraction_date}') AND ([Created] <= '{env:end_extraction_date}') AND ([ProjectKey] = '{env:project_key_value}');
REPLICATE [IssueChangelogs] SELECT FORMAT([Created], 'yyyy-MM-dd hh:mm:ss') as [Created], [IssueId], [ItemField], [AuthorDisplayName], [ItemToString] FROM [IssueChangelogs] WHERE (ItemField IN ('resolution', 'status', 'timeestimate','Link','assignee','Product categorization', 'Operational categorization','Impact', 'Urgency', 'labels','Request participants', 'Component', 'Severity', 'priority')) AND ([IssueKey] LIKE '{env:project_key_value_like}');
REPLICATE [Statuses] SELECT [Name], [CategoryKey] FROM [Statuses];
Other scenarios
Optional fields are available
In case optional fields are available, you can add them to the already available replication queries using the syntax [field]. In the following table you can find the fields that can be added and that are used in the transformations:
Field | Target table |
---|---|
Severity | Issues |
In case all the projects should be mapped, the following replicate queries should be used.
REPLICATE [Issues] SELECT [Id], FORMAT([Updated], 'yyyy-MM-dd hh:mm:ss') as [Updated], [AssigneeDisplayName], FORMAT([Created], 'yyyy-MM-dd hh:mm:ss') as [Created], [CreatorDisplayName], [Description], [IssueTypeName], [Key], [OriginalEstimateSeconds], [PriorityName], [ProjectKey], [ProjectName], [ReporterDisplayName], FORMAT([ResolutionDate], 'yyyy-MM-dd hh:mm:ss') as [ResolutionDate], [ResolutionName], [SecurityLevel], [StatusName], [Time_to_resolution] FROM [Issues] WHERE ([Created] >= '{env:start_extraction_date}') AND ([Created] <= '{env:end_extraction_date}');
REPLICATE [IssueChangelogs] SELECT FORMAT([Created], 'yyyy-MM-dd hh:mm:ss') as [Created], [IssueId], [ItemField], [AuthorDisplayName], [ItemToString] FROM [IssueChangelogs] WHERE (ItemField IN ('resolution', 'status', 'timeestimate','Link','assignee','Product categorization', 'Operational categorization','Impact', 'Urgency', 'labels','Request participants', 'Component', 'Severity', 'priority'));
REPLICATE [Statuses] SELECT [Name], [CategoryKey] FROM [Statuses];
In case only certain types of issues should be mapped, use the following replicate queries.
REPLICATE [Issues] SELECT [Id], FORMAT([Updated], 'yyyy-MM-dd hh:mm:ss') as [Updated], [AssigneeDisplayName], FORMAT([Created], 'yyyy-MM-dd hh:mm:ss') as [Created], [CreatorDisplayName], [Description], [IssueTypeName], [Key], [OriginalEstimateSeconds], [PriorityName], [ProjectName], [ReporterDisplayName], FORMAT([ResolutionDate], 'yyyy-MM-dd hh:mm:ss') as [ResolutionDate], [ResolutionName], [SecurityLevel], [StatusName], [Time_to_resolution] FROM [Issues] WHERE ([Created] >= '{env:start_extraction_date}') AND ([Created] <= '{env:end_extraction_date}') AND ([ProjectKey] = '{env:project_key_value}') AND [IssueTypeName] IN ('Issue_type_name_1', 'Issue_type_name_2', 'Issue_type_name_X');
REPLICATE [IssueChangelogs] SELECT FORMAT([Created], 'yyyy-MM-dd hh:mm:ss') as [Created], [IssueId], [ItemField], [AuthorDisplayName], [ItemToString] FROM [IssueChangelogs] WHERE (ItemField IN ('resolution', 'status', 'timeestimate','Link','assignee','Product categorization', 'Operational categorization','Impact', 'Urgency', 'labels','Request participants', 'Component', 'Severity', 'priority')) AND ([IssueKey] LIKE '{env:project_key_value_like}') AND [IssueKey] IN (SELECT [Key] FROM [Issues] WHERE [IssueTypeName] LIKE '{env:issue_type_name_value}');
REPLICATE [Statuses] SELECT [Name], [CategoryKey] FROM [Statuses];
The following tables include the list of fields per input table. Note: Table names and field names are case-sensitive. Always make sure that the field names (column headers) in your dataset match the field names in the tables below and that the file names match the table names.
Below is an overview of the different field types and their default format settings.
Field type | Description |
---|---|
boolean | true , false , 1 , 0 |
date | yyyy-mm-dd |
datetime | yyyy-mm-dd hh:mm:ss[.ms] , where [.ms] is optional. |
double | Decimal separator: . (dot); thousand separator: none |
integer | Thousand separator: none |
text | N/A |
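As an optional sanity check (not part of the app template), the sketch below validates values against the default datetime format listed above.

```python
import re

# Default datetime format: yyyy-mm-dd hh:mm:ss[.ms], where [.ms] is optional.
DATETIME_PATTERN = re.compile(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}(\.\d+)?$")

for value in ["2022-01-03 15:30:00", "2022-01-03 15:30:00.123", "03/01/2022 15:30"]:
    ok = bool(DATETIME_PATTERN.match(value))
    print(f"{value!r}: {'valid' if ok else 'invalid'}")
```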
Table containing changes performed on each issue. This is the historical table from which most activities are created.
Field | Stores | Type | Filtering |
---|---|---|---|
AuthorDisplayName | User that made the change. | text | |
Created | Date in which the history record was created. | datetime | Time based filter |
ItemField | Field that is being changed. | text | in ('resolution', 'status', 'timeestimate', 'Link', 'assignee', 'Product categorization', 'Operational categorization', 'Impact', 'Urgency', 'labels', 'Request participants', 'Component', 'Severity', 'priority') |
IssueId | System’s number ID for each issue. | text | |
ItemToString | New value set to the field described in ItemField. | text | |
This is a transactional table for issues.
Field | Stores | Type | Filtering |
---|---|---|---|
AssigneeDisplayName | Last user assigned to the issue. | text | |
Created | Date the issue started. | datetime | Time based filter |
CreatorDisplayName | User that created the issue on Jira’s system. | text | |
Description | Contains details about the reported issue. | text | |
Id | System’s number ID for each issue. | text | |
IssueTypeName | Name of the corresponding issue type. | text | |
Key | Issue identifier. | text | |
OriginalEstimateSeconds | Planned resolution time (in seconds). | integer | |
PriorityName | Defines the priority level for the reported issue. | text | |
ProjectName | Name of the project. | text | |
ReporterDisplayName | User that reports the issue to the creator of the issue. | text | |
ResolutionName | Type of resolution. | text | |
Severity | Defines the level of severity for the reported issue. | text | |
StatusName | Last status name reported for the issue. | text | |
ResolutionDate | Date for the issue resolution. | datetime | |
Updated | Date of the last update. | datetime | |
Time to resolution | Stores information related to the SLA. | text (JSON) | |
This table provides a categorization for the different statuses applied to the tickets.
Field | Stores | Type | Filtering |
---|---|---|---|
CategoryKey | Category of the corresponding status. | text | |
Name | Name of the status. | text | |
Jira has only one object, which is created based on the Issues table. If any activity is missing, a new "Item_field" value needs to be identified and added in the CData replicate query, IssueChangeLogs_input, and Issues_events. An illustrative mapping from ItemField values to activities is shown after the activity list below.
Objects | Input data |
---|---|
Issue | Issues |
Activity | Description |
---|---|
Assign user | Assign a user to the issue. |
Change status to | Contains a set of activities related to statuses of issues. |
Create issue | Create a new issue in the process. |
Link to other issue | Link issue to another Issue. |
Set component | Identify component. |
Set impact | Establish impact level. |
Set label | Establish label for the issue. |
Set operational category | Identify the operational category. |
Set priority | Identify priority level. |
Set product category | Identify the product category. |
Set time estimate | Set estimated time. |
Set resolution to | Contains a set of activities related to the resolution type. |
Set severity | Establish severity level. |
Set urgency | Establish urgency level. |
Remove resolution status | Remove a previously set resolution. |
Request participant | Request a participant for the issue. |
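The actual mapping lives in the app template's transformations (IssueChangeLogs_input and Issues_events). The sketch below only illustrates, under assumed names, how the ItemField values filtered in the replicate query relate to the activities listed above; it is not the template's transformation code.

```python
# Illustrative only: the real mapping is defined in the template's transformations.
# "Create issue" is derived from the Issues table itself, not from the changelog.
ITEM_FIELD_TO_ACTIVITY = {
    "assignee": "Assign user",
    "status": "Change status to",        # suffixed with the new status (ItemToString)
    "resolution": "Set resolution to",   # suffixed with the resolution (ItemToString)
    "timeestimate": "Set time estimate",
    "Link": "Link to other issue",
    "Component": "Set component",
    "Impact": "Set impact",
    "labels": "Set label",
    "Operational categorization": "Set operational category",
    "priority": "Set priority",
    "Product categorization": "Set product category",
    "Severity": "Set severity",
    "Urgency": "Set urgency",
    "Request participants": "Request participant",
}

def activity_for(item_field: str, item_to_string: str) -> str:
    """Return an activity name for one IssueChangelogs record (sketch only)."""
    base = ITEM_FIELD_TO_ACTIVITY.get(item_field, f"Set {item_field}")
    if item_field in ("status", "resolution"):
        return f"{base} {item_to_string}"
    return base

print(activity_for("status", "In Progress"))  # Change status to In Progress
print(activity_for("resolution", "Done"))     # Set resolution to Done
print(activity_for("assignee", "alice"))      # Assign user
```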
This seed file is used to add automation-related properties to each activity, which are used for the automation potential dashboard. For more information, see Simulating Automation Potential.
Field | Type | Description |
---|---|---|
Activity | Text | Display name for the activity |
Event_cost | Double | Cost associated with the activity |
Event_processing_time | Integer | Processing time associated with the activity (in milliseconds) |
This seed file is used to define properties for the due dates. For more information, see Due Dates.
Field | Type | Description |
---|---|---|
Due_date | Text | The name of the due date |
Due_date_type | Text | The Due date type |
Fixed_costs | Boolean | An indication whether costs are fixed or time based |
Cost | Double | Fixed costs: The amount of costs. Variable costs: The amount of costs per Time and Time_type |
Time | Integer | A number indicating the amount of time in case of time-based costs |
Time_type | Text | Type of time period for cost calculations. This can be any of the following values: day , hour , minute , second or millisecond |
This file is used to define the users that are considered automated.
Field | Type | Description |
---|---|---|
Automated_users | Text | User that is considered automated. |