Observability Dashboard
Observability Dashboard allows you to analyze the AI capabilities used by Copilot, Autopilot Knowledge, and Autopilot (Omelia). The dashboard provides comprehensive insights into the generative response performance, enabling you to identify areas for improvement and optimize your operations.
Data in the Observability Dashboard is kept for one year. This retention period may change in the future based on product updates.
Copilot
The Observability Dashboard for Copilot provides information about AI Agents (Cognigy) for Process Automation (Task Assist) Performance Metrics, generative responses, Copilot for Agents Queries, and Automated Summary (AutoSummary).
- Click the app selector, go to Data & Analytics, and select Actions.
- On Actions, click Observability Dashboard.
By default, the Copilot dashboard appears. It includes the following charts:
- If needed, update the Date Range for the dashboard and select the interaction type in Channel. Choose to view details for Voice interactions, Chat interactions, or All.
- Click Run Query. The three graphs are updated to reflect your desired dates and channels.
AI Agents (Cognigy) for Process Automation (Task Assist) Performance Metrics
The AI Agents (Cognigy) for Process Automation (Task Assist) Performance Metrics view provides a detailed analysis of task execution, including AI-assisted and manual activity, execution status, outcomes, and performance trends. Use this view to explore how tasks progress through different stages, evaluate AI adoption, and identify opportunities to improve execution efficiency.
- Click the AI Agents (Cognigy) for Process Automation (Task Assist) Performance Metrics chart heading to drill down into the statistics.
- The AI Agents (Cognigy) for Process Automation (Task Assist) KPI cards provide a summarized view of the data displayed in the AI Agents (Cognigy) for Process Automation (Task Assist) Performance Metrics graph. Use these cards to quickly assess AI adoption, execution volume, and overall performance without analyzing the full task flow.
- Task Assist Status: Shows how tasks are distributed based on how they were handled.
  - AI-assisted: Total number of tasks where AI provided assistance.
  - Manual: Total number of tasks completed without AI assistance.
  Use this card to understand the level of AI adoption.
- Execution Status: Shows how tasks progressed after being initiated.
  - AI-assisted execution: Number of tasks where AI suggestions were accepted and executed.
  - Manual execution: Number of tasks executed manually.
  - Rejected: Number of tasks where AI suggestions were not accepted.
  Use this card to evaluate how often AI suggestions are used versus rejected.
- Execution Outcome: Shows the final result of executed tasks.
  - Success: Number of tasks completed successfully.
  - Failed: Number of tasks that did not complete successfully.
  Use this card to assess overall execution effectiveness.
- The AI Agents (Cognigy) for Process Automation (Task Assist) Performance Metrics graph shows how tasks move from availability to execution and final outcome. Use this graph to understand how AI assistance impacts task execution and success rates. Each value in the graph is represented as a node, and the connections between nodes show how tasks move between stages. The nodes are described as follows:
- Available: The total number of tasks that could be executed during the selected time period. This is the starting node in the graph. All tasks originate here and move into an execution type.
- Execution type: Tasks from Available are divided into the following categories:
  - AI-assisted: Tasks where AI suggested or assisted the action.
  - Manual: Tasks completed without AI assistance.
  These categories show how tasks are initially handled.
- Execution status: This stage shows whether tasks were executed and how. Tasks from each execution type move into an execution status:
  - From AI-assisted:
    - AI-assisted execution: AI suggestions that were accepted and executed.
    - Rejected: AI suggestions that were not accepted.
  - From Manual:
    - Manual execution: Tasks executed manually.
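The node flow described above amounts to running tallies over task records. A minimal sketch of that bookkeeping, assuming hypothetical record fields (`assist`, `accepted`) that are not part of the actual dashboard schema:

```python
from collections import Counter

# Hypothetical task records: "assist" is how the task was offered,
# "accepted" is whether the agent used the AI suggestion (None for manual tasks).
tasks = [
    {"assist": "ai", "accepted": True},
    {"assist": "ai", "accepted": False},
    {"assist": "manual", "accepted": None},
    {"assist": "ai", "accepted": True},
]

stages = Counter()
stages["Available"] = len(tasks)  # starting node: every task originates here
for t in tasks:
    if t["assist"] == "ai":
        stages["AI-assisted"] += 1
        # accepted suggestions become AI-assisted executions; the rest are rejected
        stages["AI-assisted execution" if t["accepted"] else "Rejected"] += 1
    else:
        stages["Manual"] += 1
        stages["Manual execution"] += 1
```

Each counter corresponds to one node in the graph, and the difference between a node and its downstream nodes is the flow shown by the connecting links.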
- The Total Execution Metrics Over Time chart shows how AI-assisted and manual executions vary over the selected time period, helping you understand trends in AI adoption and usage. Each line represents an execution type (purple for AI-assisted and grey for manual), and each point on the chart shows the distribution of executions at a specific time. This allows you to compare the proportion or volume of AI-assisted versus manual executions across time, making it easier to identify patterns, shifts in behavior, and changes in execution volume.
- The Total Execution Metrics by Task chart shows how AI-assisted and manual executions are distributed across different tasks for the selected filters. Each bar represents a task, with segments indicating execution type (purple for AI-assisted and grey for manual). The height of each segment shows the proportion or number of executions. Use this chart to compare how tasks are handled, identify which tasks rely more on AI assistance, and detect tasks with higher manual involvement.
- The Avg. Execution Time chart shows the distribution of execution times for each task, helping you understand variability and efficiency across tasks. Each box represents a task and displays the range of execution times, including minimum, maximum, and median values. The position and size of the box indicate how long tasks typically take and how much variation exists. Use this chart to identify tasks with longer or inconsistent execution times and to spot opportunities for improving performance.
- If needed, update the Date Range for the dashboard, select the interaction type in Channel, and select the Copilot configuration or persona in the Copilot Profile. Click Run Query. The graphs are updated to reflect your desired dates, channels, and Copilot profiles.
- You can customize data that appears in the graphs:
  - In the Total Execution Metrics Over Time and Total Execution Metrics by Task graphs, you can switch between Absolute Numbers or Percentage.
  - In all the graphs, click Maximize to view the data in full screen.
- Scroll down to view detailed performance data grouped in a table. By default, the data is grouped by action name. The table helps you analyze how specific tasks are being handled. For each grouped row, the table shows values such as:
  - Total Executed: How many times the task was executed.
  - AI-assisted: How many times the task was assisted by AI Agents (Cognigy) for Process Automation (Task Assist).
  - AI-assisted Execution: How many times the agent used the execution suggested by AI Agents (Cognigy) for Process Automation (Task Assist).
  - Average AI-assisted Execution: The average percentage of AI-assisted execution.
  - Total Success: The number of times the task led to a successful outcome.
  - Average Success: The average success rate for that task.
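The per-action values in this table are simple aggregates over individual executions. A hedged sketch of that aggregation, using invented record fields that stand in for the dashboard's actual data model:

```python
from collections import defaultdict

# Hypothetical per-execution records for two actions (invented sample data).
records = [
    {"action": "Reset Password", "ai_assisted": True,  "ai_executed": True,  "success": True},
    {"action": "Reset Password", "ai_assisted": True,  "ai_executed": False, "success": False},
    {"action": "Update Address", "ai_assisted": False, "ai_executed": False, "success": True},
]

rows = defaultdict(lambda: {"Total Executed": 0, "AI-assisted": 0,
                            "AI-assisted Execution": 0, "Total Success": 0})
for r in records:
    row = rows[r["action"]]
    row["Total Executed"] += 1
    row["AI-assisted"] += r["ai_assisted"]          # bools count as 0/1
    row["AI-assisted Execution"] += r["ai_executed"]
    row["Total Success"] += r["success"]

for name, row in rows.items():
    # Average Success: share of executions of this action that succeeded
    row["Average Success"] = row["Total Success"] / row["Total Executed"]
```

Grouping by a different key, such as interaction ID, would only change which field is used as the dictionary key.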
- Clicking an action name reveals the specific queries and their details, such as:
  - A list of individual interactions linked to the selected action.
  - Key details such as interaction ID, query text, agent name, skill set, team name, bot name, and intent status.
  - Timestamped bot answers for deeper context on how the action was executed.
  - Performance indicators like execution time, success count, and success rate, helping users assess the effectiveness of each action.
- For each query, you can click the Info button next to the interaction to view its query feedback.
- You can switch the way you view this data. The default view is by action name. Click Group By to change the grouping from Action name to Interaction ID or any other option listed in the dropdown menu. The data reappears based on the new grouping.
- To download all data, both visible and hidden, based on the filters set in the query builder, see the Export section.
Viewing Data About Generative Responses
Generative responses are AI-generated answers provided during interactions.
- Click the Generative Responses label to drill down into the statistics. Three graphs appear:
  - Over Time: Shows the percentage of answers that were used, modified, ignored, or resulted in no answer over time.
  - By Category: Displays the details by category.
  - Average KB Per Interaction: Shows the average number of knowledge base interactions per day.
- You can customize data that appears in the graphs:
  - In the Over Time and Category graphs, you can toggle the display of different answer statuses by clicking on the legends.
  - In the Over Time and Category graphs, click Absolute Numbers or Percentage to switch between percentages and absolute numbers.
  - In all three graphs, click Maximize to view the data in a table format.
- Scroll down to see the data grouped by categories. The Categories View organizes the knowledge base answers into different categories, offering a structured approach to analysis. Each category presents:
  - Agent name
  - Team
  - Skill
  - Total volume of knowledge base answers
  - Average adherence score
  - Average number of links and images provided
  - Average knowledge score (score assigned by the knowledge base)
- Click a category to view the related queries and their details, such as:
  - Query sent to the Knowledge Base
  - Suggested knowledge base answer
  - Agent's actual response
  - Number of links and images provided
  - Query Feedback
  - Adherence score (the similarity between the suggested and actual response)
  - Offset from the beginning of the interaction
- Click Play Interaction to listen to the audio of the interaction (if available).
- Click the Info button next to the query to view the query feedback of the interaction. In the Response details panel on the right, you can see whether the AI-generated response received positive or negative feedback, along with any comments and tags provided.
- You can change how the data is grouped. The default view is by category. Click Group By to change the grouping from Category to one of the following:
  - Master Contact
  - Team
  - Skill
  - Agent Name
  The data reappears based on the new grouping.
Viewing Data About Agent Queries
Agent Queries displays information about the knowledge base answers generated on demand based on agent questions. It provides graphs that show the status of direct queries and adherence scores.
Click the chart heading, Agent Queries, to drill down into the statistics. The first graph shows the percentage of responses and no responses over time. The second graph displays the details by category.
You can toggle the display of different answer statuses by clicking on the legends.
Click Absolute Numbers or Percentage to switch between percentages and absolute numbers.
Click Maximize to view the graph in full screen.
Scroll down to see the data grouped by categories. The Categories View organizes the knowledge base answers into different categories, offering a structured approach to analysis. Each category presents:
- Total volume of responses
- Number and average of no responses
- Average number of links and images provided
- Average knowledge score (score assigned by the knowledge base)
Clicking on a category reveals the specific queries and their details, such as:
- Agent query sent to the Knowledge Base
- Response to the agent query
- Number of links and images provided
- Date and time of the response
- Average knowledge score
Viewing Data About Automated Summary (AutoSummary) Queries
Automated Summary (AutoSummary) shows how auto-generated summaries perform. You can see graphs that track performance over time, with data grouped by intent and skill. Detailed tables display suggested summaries alongside actual summaries, complete with adherence scores to gauge accuracy. For more comprehensive details, you can play back specific interactions, which will give you a full picture of how summaries are generated and used in real conversations.
Auto-generated summaries are supported in the Observability Dashboard for both CXone ACD and CXone non-ACD environments.
Click on the chart heading, AutoSummary, to drill down into the statistics.
The first graph shows the percentage of summaries that were used over time. Summaries are classified as As Is, Minor Revisions, Revised, and Ignored.
The second graph displays the details By Intent.
The third graph displays the details By Skill.
The fourth graph displays the details By Team.
You can toggle the display of different answer statuses by clicking on the legends.
Click Absolute Numbers or Percentage to switch between percentages and absolute numbers.
Click Maximize to view the graph in full screen.
Scroll down to see the data grouped by categories. The Categories View organizes summaries into different categories. Each category presents:
- Agent Names
- Total volume of responses
- Number and percentage of no responses
- Average number of links and images provided
- Average knowledge score (score assigned by the knowledge base)
Click a category to view the specific queries and their details, such as:
- Agent query sent to the knowledge base
- Response to the agent query
- Overall Feedback
- Number of links and images provided
- Date and time of the response
- Average knowledge score
Adherence Score
In Automated Summary (AutoSummary), the adherence score is determined by the LLM using the following approach:
- The LLM compares the meaning of the text rather than the exact wording.
- If the actual summary and the suggested summary are the same in meaning, the score is high.
- If the suggested summary contains additional details, such as minor revisions not in the actual summary, the score is medium.
- If the suggested summary and the actual summary have different contexts, the score is low.
If the agent does not save the summary, or saves the summary without making any edits, the final summary is treated as identical to the suggested summary. Observability Dashboard updates the columns as follows:
- Actual Summary: Same as Suggested Summary
- Adherence Score: High
- Comment: As Is (until a final summary is received)
If a final summary is received within 90 days and it differs from the originally suggested summary, Observability Dashboard updates the record as follows:
- Actual Summary: Updated actual summary from Automated Summary (AutoSummary)
- Adherence Score: Recalculated based on the difference between the suggested and actual summaries
- Comment: Updated from As Is to Minor Revisions or Revised, depending on the level of change
Currently, Observability Dashboard does not support tracking multiple sequential edits to the same Automated Summary (AutoSummary).
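The update rules above can be read as a small state transition on the summary record. The sketch below illustrates them under stated assumptions: the word-overlap comparison and the 3-word threshold are invented stand-ins for the LLM's semantic scoring, which the source does not specify.

```python
def initial_record(suggested):
    # Until a final summary is received, the record mirrors the suggestion.
    return {"Actual Summary": suggested, "Adherence Score": "High", "Comment": "As Is"}

def apply_final_summary(record, suggested, final, days_elapsed):
    # Only a differing final summary received within 90 days triggers an update.
    if days_elapsed > 90 or final == suggested:
        return record
    # Illustrative stand-in for the LLM comparison: a small word-level change
    # maps to Minor Revisions, a large one to Revised.
    changed = len(set(final.split()) ^ set(suggested.split()))
    minor = changed <= 3  # hypothetical threshold
    return {
        "Actual Summary": final,
        "Adherence Score": "Medium" if minor else "Low",
        "Comment": "Minor Revisions" if minor else "Revised",
    }

record = initial_record("customer asked about a refund")
record = apply_final_summary(record, "customer asked about a refund",
                             "customer asked about a refund policy", days_elapsed=5)
```

Note that a single edit is modeled here; as stated above, multiple sequential edits to the same summary are not tracked.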
Click the Info button next to the query to view the overall feedback of the interaction. In the Response details panel on the right, you can see whether the interaction received positive or negative feedback, along with any comments and tags provided.
As a CX leader, Mark wants to use the Observability Dashboard to identify knowledge gaps in the knowledge base. He notices that many knowledge base suggestions are being modified or ignored. Mark clicks on Generative Responses to view data by category. He focuses on categories with low average adherence scores, indicating misalignment between suggestions and actual responses. By expanding low-scoring categories like Billing and Payments, Mark sees low adherence to queries about payment plans and refund policies, suggesting knowledge gaps in those areas.
Through the Observability Dashboard, Mark can pinpoint topics needing knowledge base enhancements. He can then work with the knowledge base team to address these gaps, improving the quality of suggestions for better customer interactions.
Business Impact View
The Business Impact view is a dynamic panel within the Copilot dashboard that provides a consolidated view of key performance metrics influenced by agent activity. It offers visual insights into operational trends in two agent performance indicators: After Call Work (ACW), the state that allows an agent to complete work requirements after finishing an interaction, and Average Handle Time (AHT), the average amount of time an agent spends handling an interaction. By expanding this menu, you can analyze monthly trends, apply filters to focus on specific teams, skills, or agents, and compare performance averages across selected categories. The date range and filter options in the Business Impact view are independent of those in the Observability Dashboard.
- To open the Business Impact menu, click the upward-facing arrow located at the bottom of the Copilot dashboard. This expands the bottom panel and shows detailed performance metrics.
- At the top of the dashboard, update the Dashboard date range to define the period for which you want to view ACW and AHT data. You can select from preset options like Last 2 days, Last 7 days, or Current month, or set a custom range. By default, the date range is the same range defined in the Copilot dashboard. Once selected, the graph automatically updates to show the average ACW and AHT duration for each month within that range.
- Use the Filter By options to narrow down the data based on Teams, Skills, or Agent Name. You can select up to 5 values per filter type. For example, choose 5 teams, 5 skills, or 5 agent names. Apply filters to compare performance across different groups or individuals.
- To remove a filter category from the ACW and AHT graph, click the X icon next to the selected team, skill, or agent name. Once removed, the graph updates automatically to exclude that category from the visualization.
- In the dashboard, the metrics All Skills Avg., All Teams Avg., and All Agent Names Avg. appear dynamically based on the filters you apply. When you filter by Skills, the graph displays the All Skills Avg. line to show the average ACW and AHT duration across the selected skill groups. Similarly, filtering by Teams or Agent Name shows the All Teams Avg. or All Agent Names Avg. line, respectively.
- Interpreting the graphs:
  - Upward Trend: Indicates an increase in time. For ACW, agents are spending more time on post-call tasks. For AHT, calls are taking longer to handle.
  - Downward Trend: Indicates a decrease in time. For ACW, agents are completing post-call work faster. For AHT, calls are being handled more efficiently.
  - Sudden Spikes or Drops: May signal changes in workload, process, or tool usage. Review these to understand the cause.
Viewing the ACW Data
The ACW (After Call Work) graph helps track how much time agents spend on post-call tasks each month, showing the average duration across all calls. ACW refers to the time an agent spends completing tasks after ending a customer call—such as writing notes, updating systems, or tagging the interaction. The average ACW duration is calculated by dividing the total ACW time by the number of calls handled during the selected period:
Average ACW = Total ACW time / Number of calls
This graph allows you to compare performance across teams, skills, and individual agents, identify patterns or unusual changes in workload, and make informed decisions. Additionally, it helps indicate whether there is an increase or decrease in ACW for a specific team, skill, or agent over time. For example, a rising line may indicate growing post-call workload, while a downward trend may suggest improved efficiency or process changes.
The ACW graph is presented as a line graph, where the X-axis represents time in monthly intervals based on the selected date range, and the Y-axis shows the average ACW duration, typically measured in seconds or minutes. Each data point on the graph reflects the average ACW for that specific month. The graph includes interactive features such as hover-to-view exact values, dynamic updates based on applied filters and date range, and legends that display the selected filters for easy reference.
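Each monthly data point follows the formula above. Applying it to a month of invented per-call values:

```python
# Per-call ACW durations in seconds for one month (invented sample values)
acw_seconds = [45, 90, 30, 75, 60]

# Average ACW = Total ACW time / Number of calls
average_acw = sum(acw_seconds) / len(acw_seconds)  # 300 s across 5 calls
```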
Viewing the AHT Data
The AHT (Average Handle Time) graph helps track how much time agents spend handling customer interactions each month, showing the average duration across all calls. AHT includes the entire interaction time—such as talk time, hold time, and After Call Work (ACW). The average AHT duration is calculated by dividing the total handle time by the number of calls handled during the selected period:
Average AHT = Total Handle Time / Number of Calls
This graph allows you to compare performance between teams, skills, and individual agents, and to identify patterns or unusual changes in interaction duration. Trends in the AHT graph can help identify increases or decreases in AHT for specific teams, skills, or agents over time. For example, a rising line may indicate longer customer interactions due to complexity or inefficiencies, while a downward trend may suggest streamlined processes, improved agent performance, or better system support.
The AHT graph is presented as a line graph, where the X-axis represents time in monthly intervals based on the selected date range, and the Y-axis shows the average AHT duration, typically measured in seconds or minutes. Each data point on the graph reflects the average AHT for that specific month. The graph includes interactive features such as hover-to-view exact values, dynamic updates based on applied filters and date range, and legends that display the selected filters for easy reference.
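Since AHT includes talk, hold, and ACW time, each call's handle time is summed before averaging. A sketch with invented per-call components:

```python
# Invented per-call components, in seconds: (talk, hold, acw)
calls = [(240, 30, 60), (300, 0, 45), (180, 60, 90)]

# Handle time per call = talk time + hold time + ACW
handle_times = [talk + hold + acw for talk, hold, acw in calls]

# Average AHT = Total Handle Time / Number of Calls
average_aht = sum(handle_times) / len(calls)
```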
The Observability Dashboard supports new Copilot features for Engagement Hub, allowing you to monitor feature performance across different client types.
- For CXone ACD clients: You see data for auto-generated summaries, team details, and skill information.
- For non-CXone ACD clients: Team and skill data is not available, and related features are hidden.
- For tenants using both ACD and non-ACD applications: The dashboard displays only ACD-related data.
Autopilot Knowledge
The Observability Dashboard for Autopilot Knowledge shows data about how well your automated system handles customer questions. You'll see a graph that displays performance trends over time. This lets you track changes daily, weekly, or monthly.
- Click the app selector and select Actions.
- On Actions, click Observability Dashboard.
- Click the Autopilot Knowledge tab. Set the desired Date Range for the dashboard, and click Run Query. It displays three charts:
Viewing Data About Overall Effectiveness
This graph displays a high-level summary of the Autopilot Knowledge chatbot's performance and status.
- Click the Overall Effectiveness graph heading to drill down into the statistics. Four graphs appear:
  - Engaged: Displays the number of visitors who engaged with the chatbot, helping you understand the engagement trends over time.
  - Contained: Displays the percentage and count of chatbot users who completed their conversation without needing escalation to a live agent. With this metric, you can assess how effectively the chatbot resolves queries independently.
  - Elevated: Displays the percentage and count of chatbot users who escalated their conversation to a live agent, highlighting cases that required human intervention. With this metric, you can monitor how often the chatbot hands off conversations to human agents.
  - Abandoned: Displays the percentage and count of chatbot users who abandoned an ongoing conversation. With this metric, you can identify drop-off points and improve user engagement.
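Assuming, as the definitions above suggest, that the Contained, Elevated, and Abandoned percentages are shares of engaged conversations, the arithmetic looks like this (counts are invented):

```python
# Invented conversation outcome counts for one period
engaged = 200
outcomes = {"Contained": 140, "Elevated": 40, "Abandoned": 20}

# Each outcome as a percentage of engaged conversations
rates = {name: count / engaged * 100 for name, count in outcomes.items()}
```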
- You can customize the data that appears in the graphs:
  - Click Absolute Numbers or Percentage to switch between percentages and absolute numbers.
  - Click Maximize to view the graph in full screen.
Viewing Data About GenAI Performance
This graph displays the percentage of user queries that were effectively addressed by the generative AI engine.
- Click the GenAI Performance label to drill down into the statistics. Three graphs appear:
  - Over Time: Displays the percentage of chatbot responses over time.
  - By Category: Displays the percentage of chatbot responses by category.
  - Queries to Generative Model: Displays the total number and percentage of chatbot queries processed by the generative engine. This metric provides insight into how frequently the generative engine is used in handling user interactions.
- You can customize the data that appears in the graphs:
  - In the Over Time and Category graphs, click the Response or No Response legends to toggle the display of different answer statuses.
  - Click Absolute Numbers or Percentage to switch between percentages and absolute numbers.
  - Click Maximize to view the graphs in full screen.
- Scroll down to see the data grouped by categories. The Categories View organizes the chatbot responses into different categories. Each category presents:
  - Total volume of responses
  - Total number of no responses
  - Average number of links and images provided
  - Average knowledge score (score assigned by the knowledge base)
- Clicking on a category reveals the specific queries and their details, such as:
  - Contact number of the interaction
  - Query that initiated the chatbot interaction
  - The chatbot's reply, generated based on the user's input, intent, and context
  - Number of links and images provided
  - Date and time of the response
  - Average knowledge score
- You can switch the way you view this data. The default view is by category. Click Group By to change the grouping from Category to Contact Number. The data reappears based on the new grouping.
Viewing Data About Bot Performance
This graph displays the distribution of chatbot intents, highlighting the top six most common user requests along with fallback occurrences. It helps you understand what users ask and how the chatbot responds.
- Click the Bot Performance label to drill down into the statistics. Two graphs appear:
  - All Bot Intent: Displays the most common user requests and fallback cases, helping you improve how your chatbot responds.
  - Abandonment Indicator: Displays which chatbot intents were most common before users abandoned the conversation. It helps you identify drop-off points and improve user retention.
You can customize the data that appears in the graphs:
-
Click Absolute Numbers
or Percentage
to switch between percentages and absolute numbers. -
Click Maximize
to view the graphs in full screen.
-
Autopilot (Omelia)
The Observability Dashboard for Autopilot (Omelia) shows how effectively your knowledge base can handle customer questions. Use this dashboard to identify areas where you can add articles to your knowledge base or improve existing articles to better answer customer questions.
- Click the app selector and select Actions.
- On Actions, click Observability Dashboard.
- Click the Autopilot (Omelia) tab. It displays the GenAI Performance graph.
- Set the desired Date Range for the dashboard, and click Run Query.
Viewing Data About GenAI Performance
This graph shows the number of user questions that received relevant and complete answers from the AI engine.
- Click the GenAI Performance label to view details about the questions asked and the articles provided for the specified time period. Two graphs appear:
  - Over Time graph: Shows the number of successful responses and no-responses over the selected time period. A response indicates the user was shown an article. A no-response indicates that no matching article was found.
  - By Category graph: Shows the number of successful responses and no-responses by category.
You can customize the appearance of data in the graphs:
- To switch between percentages and absolute numbers, click Absolute Numbers or Percentage.
- To view the graph in full screen, click Maximize.
- To toggle the display of different answer statuses, click the Response or No Response legends in the graphs.
- Scroll down beneath the graphs to see a table that provides details on the queries. The queries are grouped by categories.
- Clicking on a category reveals the specific queries and the provided responses. Use this to identify areas where you might be missing articles in your knowledge base.
Generate AI Powered Knowledge Articles
- Click the app selector and select Actions.
- On Actions, click Observability Dashboard.
- Click the Generative Responses label to view detailed statistics.
- Scroll down to the data grouped by categories section. Click a category to view specific queries.
- Select a query and click the Info button.
- In the Response details panel on the right, click Create Article. An AI-generated article is drafted based on the transcript. You can edit the article as needed and then publish it. For complete information on editing and publishing an article, see the knowledge generation help.
- When an article is already published, the Create Article icon appears in purple with a checkmark. This means a knowledge article is available and you can view it, even if it was created by someone else.
Export Data from Observability Dashboard
- Click the app selector and select Actions.
- On Actions, click Observability Dashboard.
- Click the Generative Responses label to view detailed statistics.
- Scroll down to the data grouped by categories section. Click Export. You can download all data, both visible and hidden, based on the filters set in the query builder.
- When you export data from the Observability Dashboard, some fields in the spreadsheet are represented by numeric codes. These codes correspond to specific tags and feedback types, as shown below:
Tag          Value
Accurate     1
Inaccurate   2
Complete     3
Incomplete   4
Relevant     5
Irrelevant   6
Slow         7
Other        8

Feedback type  Value
Positive       1
Negative       2
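When post-processing an exported spreadsheet, the numeric codes can be mapped back to their labels using the tables above. The column names in this sketch are assumptions, not the export's documented field names:

```python
# Code-to-label mappings taken from the export tables
TAG_CODES = {1: "Accurate", 2: "Inaccurate", 3: "Complete", 4: "Incomplete",
             5: "Relevant", 6: "Irrelevant", 7: "Slow", 8: "Other"}
FEEDBACK_CODES = {1: "Positive", 2: "Negative"}

def decode_row(row):
    # "tag" and "feedback_type" are hypothetical column names for the export
    decoded = dict(row)
    decoded["tag"] = TAG_CODES.get(row["tag"], "Unknown")
    decoded["feedback_type"] = FEEDBACK_CODES.get(row["feedback_type"], "Unknown")
    return decoded
```

Running each spreadsheet row through `decode_row` yields human-readable tag and feedback labels.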