Observability Dashboard
The Observability Dashboard lets you scrutinize the AI capabilities employed by Copilot, Autopilot Knowledge, and Autopilot. It provides comprehensive insights into the performance of generative responses, enabling you to identify areas for improvement and optimize your operations.
Copilot
The Observability Dashboard for Copilot provides information about generative responses, Copilot for Agents Queries, and AutoSummary.
- Click the app selector and select Actions.
- On Actions, click Observability Dashboard.
By default, the Copilot dashboard appears. It contains three charts.
- If needed, update the Date Range for the dashboard and select the interaction type in Channel. Choose to view details for Voice interactions, Chat interactions, or All.
- Click Run Query. The three graphs are updated to reflect your desired dates and channels.
Mpower Agent Performance Metrics
Mpower Agent Performance Metrics offers a comprehensive view of how agent actions are suggested, executed, and evaluated across different stages. The graphs help visualize task performance—showing how actions succeed or fail, highlighting trends over time, and revealing how long different tasks take to complete.
- Click the Mpower Agent Performance Metrics label to drill down into the statistics. Four graphs appear:
  - Mpower Agent Performance Metrics: This chart shows how agent actions move through different performance stages (a sketch of these stage rates follows this list):
    - Available: The total number of actions that are eligible or ready to be evaluated or considered for performance metrics.
    - Suggested: A subset of Available actions that are recommended for a specific evaluation or intervention based on performance criteria.
    - Not Suggested: A subset of Available actions that are not recommended for execution, possibly due to meeting performance standards or not qualifying for the criteria.
    - Rejected: From the Suggested actions, these are the ones that were not approved or were declined for execution after further review or decision-making.
    - Executed: From the Suggested actions, these are the recommended actions that were carried out successfully.
    - Success: A subset of Executed actions that meet all required criteria and contribute positively to operational goals.
    - Failure: A subset of Suggested actions that are rejected during the evaluation process and do not meet the final criteria for execution.
  - Total Actions: This graph provides a clear breakdown of how agent actions progress through various stages, displaying the aggregate count of actions across all agents. The center of the circular chart shows the total number of actions available for evaluation. The surrounding segments break this total down into categories such as Suggested, Rejected, Executed, Success, and Failure. This view helps you understand the overall volume of actions and how they are distributed across different performance stages.
  - Task Assist Performance Metrics: This graph provides a comprehensive view of how actions are being suggested, executed, and rejected, both over time and by action type.
  - Avg. Execution Time: This graph shows how long different actions take to complete. It helps you compare the execution time for each action, such as Troubleshooting, Scheduling, or Data Entry, so you can see which ones are quick and which ones take longer.
    - Each action has a box plot that shows the range and average time it takes to complete.
    - Actions with shorter or more consistent execution times have narrower boxes.
    - Actions with more variation or outliers have wider boxes or dots outside the range.
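To make the stage relationships concrete, here is a minimal Python sketch that derives suggestion, execution, and success rates from raw stage counts. The dictionary keys and sample numbers are illustrative assumptions, not fields exposed by the dashboard.

```python
# Illustrative only: the counts and stage relationships below are sample
# assumptions, not a product API. For this sketch, Available splits into
# Suggested and Not Suggested, Suggested into Executed and Rejected, and
# Executed into Success and Failure.
counts = {
    "available": 1000,
    "suggested": 640,
    "rejected": 90,
    "executed": 550,
    "success": 470,
    "failure": 80,
}

def rate(part: int, whole: int) -> float:
    """Percentage of `part` relative to `whole`, guarding against division by zero."""
    return 100.0 * part / whole if whole else 0.0

print(f"Suggestion rate: {rate(counts['suggested'], counts['available']):.1f}%")  # 64.0%
print(f"Execution rate:  {rate(counts['executed'], counts['suggested']):.1f}%")   # 85.9%
print(f"Success rate:    {rate(counts['success'], counts['executed']):.1f}%")     # 85.5%
```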
- If needed, update the Date Range for the dashboard, select the interaction type in Channel, and select the Copilot configuration or persona in the Copilot Profile. Click Run Query. The graphs are updated to reflect your desired dates, channels, and Copilot profiles.
- You can customize data that appears in the graphs:
  - In the Over Time and By Actions graphs, you can toggle the display of how agent tasks are being suggested, executed, and rejected by clicking on the legends.
  - In the Over Time and By Actions graphs, click Absolute Numbers or Percentage to switch between percentages and absolute numbers.
  - In all the graphs, click Maximize to view the data in a table format.
- Scroll down to view performance data grouped by action name, allowing you to analyze how specific tasks are being handled. For each action, the table shows:
  - Total Suggested: How many times the action was recommended to agents.
  - Total Executions: How many times agents actually performed the action.
  - Avg. Execution: The average execution rate, often highlighted in green to indicate high performance.
  - Total Success: The number of times the action led to a successful outcome.
  - Avg. Success: The average success rate, with color highlights, such as blue or red, to indicate performance levels.
- Clicking on an action name reveals the specific queries and their details, such as:
  - A list of individual interactions linked to the selected action.
  - Key details such as interaction ID, query text, agent name, skill set, team name, bot name, and intent status.
  - Timestamped bot answers for deeper context on how the action was executed.
  - Performance indicators like execution time, success count, and success rate, helping you assess the effectiveness of each action.
- For each query, you can:
  - Click Play Interaction to listen to the audio of the interaction (if available).
  - Click the Info button next to the interaction to view the query feedback of the interaction.
- You can switch the way you view this data. The default view is by action name. Click Group By to change the grouping from Action Name to Interaction ID or any other option listed in the dropdown menu. The data reappears based on the new grouping.
- To download all data, both visible and hidden, based on the filters set in the query builder, see the Export section.
Viewing Data About Generative Responses
Generative responses are answers automatically generated during calls.
- Click the Generative Responses label to drill down into the statistics. Three graphs appear:
  - Over Time: Shows the percentage of answers that were used, modified, ignored, or resulted in no answer over time (a sketch of this calculation follows this list).
  - By Category: Displays the details by category.
  - Average Kb Per Interaction: Shows the average number of knowledge base interactions per day.
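As a rough illustration of how the Over Time percentages can be derived, the following Python sketch aggregates hypothetical per-answer status records into daily shares. The event layout and status strings are assumptions for demonstration only, not the dashboard's actual data model.

```python
from collections import Counter, defaultdict

# Hypothetical answer events as (date, status) pairs; the statuses mirror
# the chart legend, but this record layout is an assumption.
events = [
    ("2024-05-01", "used"), ("2024-05-01", "modified"), ("2024-05-01", "ignored"),
    ("2024-05-02", "used"), ("2024-05-02", "no answer"), ("2024-05-02", "used"),
]

by_day = defaultdict(Counter)
for day, status in events:
    by_day[day][status] += 1

# Convert daily counts to the percentage shares plotted in the Over Time graph.
for day in sorted(by_day):
    total = sum(by_day[day].values())
    shares = {status: round(100 * n / total, 1) for status, n in by_day[day].items()}
    print(day, shares)
```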
- You can customize data that appears in the graphs:
  - In the Over Time and Category graphs, you can toggle the display of different answer statuses by clicking on the legends.
  - In the Over Time and Category graphs, click Absolute Numbers or Percentage to switch between percentages and absolute numbers.
  - In all three graphs, click Maximize to view the data in a table format.
- Scroll down to see the data grouped by categories. The Categories View organizes the knowledge base answers into different categories, offering a structured approach to analysis. Each category presents:
  - Agent name
  - Team
  - Skill
  - Total volume of knowledge base answers
  - Average adherence score
  - Average number of links and images provided
  - Average knowledge score (score assigned by the knowledge base)
- Clicking on a category reveals the specific queries and their details, such as:
  - Query sent to the Knowledge Base
  - Suggested knowledge base answer
  - Agent's actual response
  - Number of links and images provided
  - Query Feedback
  - Adherence score (the similarity between the suggested and actual response)
  - Offset from the beginning of the interaction
- Click Play Interaction to listen to the audio of the interaction (if available).
- Click the Info button next to the query to view the query feedback of the interaction. In the Response details panel on the right, you can see whether the AI-generated response received positive or negative feedback, along with any comments and tags provided.
- You can switch the way you view this data. The default view is by category. Click Group By to change the grouping from Category to one of the following:
  - Master Contact
  - Team
  - Skill
  - Agent Name

  The data reappears based on the new grouping.
Viewing Data About Agent Queries
Agent Queries displays information about the knowledge base answers generated on-demand based on agent questions. It provides graphs that show the status of direct queries and adherence scores.
- Click the chart heading, Agent Queries, to drill down into the statistics. The first graph shows the percentage of responses and no responses Over Time. The second graph displays the details By Category.
- You can toggle the display of different answer statuses by clicking on the legends.
- Click Absolute Numbers or Percentage to switch between percentages and absolute numbers.
- Click Maximize to view the graph in full screen.
- Scroll down to see the data grouped by categories. The Categories View organizes the knowledge base answers into different categories, offering a structured approach to analysis. Each category presents:
  - Total volume of responses
  - Number and average of no responses
  - Average number of links and images provided
  - Average knowledge score (score assigned by the knowledge base)
- Clicking on a category reveals the specific queries and their details, such as:
  - Agent query sent to the Knowledge Base
  - Response to the agent query
  - Number of links and images provided
  - Date and time of the response
  - Average knowledge score
Viewing Data About AutoSummary Queries
AutoSummary provides a comprehensive view of summary performance. You can see graphs that track performance over time, with data grouped by intent and skill. Detailed tables display suggested summaries alongside actual summaries, complete with adherence scores to gauge accuracy. For more comprehensive details, you can play back specific interactions, which will give you a full picture of how summaries are generated and used in real conversations.
Auto-generated summaries are supported in the Observability Dashboard for both CXone ACD and CXone non-ACD clients. After an interaction ends, it may take up to 15 minutes for the summary data to appear in the dashboard. This delay ensures the summaries are fully processed and logged before being displayed.
- Click the chart heading, AutoSummary, to drill down into the statistics.
- The first graph shows the percentage of summaries that were used over time. Summaries are identified as being used in one of the following ways: As Is, Revised with Minor Revisions, and Ignored.
- The second graph displays the details By Intent.
- The third graph displays the details By Skill.
- The fourth graph displays the details By Team.
- You can toggle the display of different answer statuses by clicking on the legends.
- Click Absolute Numbers or Percentage to switch between percentages and absolute numbers.
- Click Maximize to view the graph in full screen.
- Scroll down to see the data grouped by categories. The Categories View organizes the knowledge base answers into different categories, offering a structured approach to analysis. Each category presents:
  - Agent Names
  - Total volume of responses
  - Number and average of no responses
  - Average number of links and images provided
  - Average knowledge score (score assigned by the knowledge base)
- Clicking on a category reveals the specific queries and their details, such as:
  - Agent query sent to the knowledge base
  - Response to the agent query
  - Overall Feedback
  - Number of links and images provided
  - Date and time of the response
  - Average knowledge score
Adherence Score
In AutoSummary, the adherence score is determined by the LLM using the following approach:
- The LLM compares the meaning of the text rather than the exact wording.
- If the actual summary and the suggested summary are the same in meaning, the score is high.
- If the suggested summary contains additional details, such as minor revisions not in the actual summary, the score is medium.
- If the suggested summary and the actual summary have different contexts, the score is low.
- If there is no actual summary, for instance, because the agent did not save the summary, no score is given, and it is marked as Summary not saved by the agent.
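The product determines this score with an LLM that compares meaning; as a purely illustrative stand-in, the Python sketch below maps a numeric semantic-similarity value to the same high, medium, and low bands. The similarity input and the thresholds are assumptions for demonstration, not the actual scoring logic.

```python
from typing import Optional

def adherence_band(similarity: Optional[float], extra_details: bool = False) -> str:
    """Map a semantic-similarity value to the adherence bands described above.

    `similarity` is assumed to come from some semantic comparison of the
    suggested and actual summaries (for example, embedding cosine similarity);
    the 0.8 and 0.5 thresholds are illustrative, not the product's values.
    """
    if similarity is None:                      # the agent never saved a summary
        return "Summary not saved by the agent"
    if similarity >= 0.8 and not extra_details:
        return "high"                           # same meaning
    if similarity >= 0.5 or extra_details:
        return "medium"                         # minor revisions or added details
    return "low"                                # different contexts

print(adherence_band(0.92))   # high
print(adherence_band(0.65))   # medium
print(adherence_band(None))   # Summary not saved by the agent
```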
- Click the Info button next to the query. You can view the overall feedback of the interaction. In the Response details panel on the right, you can see whether the interaction received positive or negative feedback, along with any comments and tags provided.
As a CX leader, Mark wants to use the Observability Dashboard to identify knowledge gaps in the knowledge base. He notices that many knowledge base suggestions are being modified or ignored. Mark clicks on Knowledge Base Answers to view data by category. He focuses on categories with low average adherence scores, indicating misalignment between suggestions and actual responses. By expanding low-scoring categories like Billing and Payments, Mark sees low adherence to queries about payment plans and refund policies, suggesting knowledge gaps in those areas.
Through the Observability Dashboard, Mark can pinpoint topics needing knowledge base enhancements. He can then work with the knowledge base team to address these gaps, improving the quality of suggestions for better customer interactions.
Business Impact View
The Business Impact view is a dynamic panel within the Copilot dashboard that provides a consolidated view of key performance metrics influenced by agent activity. It offers visual insights into operational trends for agent performance indicators such as After Call Work (ACW, a state that allows an agent to complete work requirements after finishing an interaction) and Average Handle Time (AHT, the average amount of time an agent spends handling an interaction). By expanding this menu, you can analyze monthly trends, apply filters to focus on specific teams, skills, or agents, and compare performance averages across selected categories. The date range and filter options in the Business Impact view are independent of those in the Observability Dashboard.
- To open the Business Impact menu, click the upward-facing arrow at the bottom of the Copilot dashboard. This expands the bottom panel, revealing the detailed performance metrics, that is, the ACW and AHT graphs.
- At the top of the dashboard, update the Dashboard date range to define the period for which you want to view ACW and AHT data. You can select from preset options like Last 2 days, Last 7 days, Current month, or set a custom range. By default, the date range is the same range defined in the Copilot dashboard. Once selected, the graph automatically updates to show the average ACW and AHT duration for each month within that range.
- Use the Filter By options to narrow down the data based on Teams, Skills, or Agent Name. You can select up to 5 values per filter type. For example, choose 5 teams, 5 skills, or 5 agent names. Apply filters to compare performance across different groups or individuals.
- To remove a filter category from the ACW and AHT graph, click the X icon next to the selected team, skill, or agent name. Once removed, the graph updates automatically to exclude that category from the visualization.
- In the dashboard, the metrics All Skills Avg., All Teams Avg., and All Agent Name Avg. appear dynamically based on the filters you apply. When you filter by Skills, the graph displays the All Skills Avg. line to show the average ACW and AHT duration across the selected skill groups. Similarly, filtering by Teams or Agent Name shows the All Teams Avg. or All Agent Name Avg. line, respectively.
- Interpreting the graphs:
  - Upward Trend: Indicates an increase in time. For ACW, agents are spending more time on post-call tasks. For AHT, calls are taking longer to handle.
  - Downward Trend: Indicates a decrease in time. For ACW, agents are completing post-call work faster. For AHT, calls are being handled more efficiently.
  - Sudden Spikes or Drops: May signal changes in workload, process, or tool usage. These should be reviewed to understand the cause.
Viewing the ACW Data
The ACW (After Call Work) graph helps track how much time agents spend on post-call tasks each month, showing the average duration across all calls. ACW refers to the time an agent spends completing tasks after ending a customer call—such as writing notes, updating systems, or tagging the interaction. The average ACW duration is calculated by dividing the total ACW time for all calls by the number of calls handled during the selected period:
Average ACW = Total ACW time for all calls / Number of calls
This graph allows you to compare performance between teams, skills, and individual agents, identify patterns or unusual changes in workload, and make informed decisions. Additionally, it helps indicate whether there is an increase or decrease in ACW for a specific team, skill, or agent over time. For example, a rising line may indicate growing post-call workload, while a downward trend may suggest improved efficiency or process changes.
The ACW graph is presented as a line graph, where the X-axis represents time in monthly intervals based on the selected date range, and the Y-axis shows the average ACW duration, typically measured in seconds or minutes. Each data point on the graph reflects the average ACW for that specific month. The graph includes interactive features such as hover-to-view exact values, dynamic updates based on applied filters and date range, and legends that display the selected filters for easy reference.
Viewing the AHT Data
The AHT (Average Handle Time) graph helps track how much time agents spend handling customer interactions each month, showing the average duration across all calls. AHT includes the entire interaction time—such as talk time, hold time, and After Call Work (ACW). The average AHT duration is calculated by dividing the total handle time for all calls by the number of calls handled during the selected period:
Average AHT = Total handle time for all calls / Number of calls
This graph allows you to compare performance between teams, skills, and individual agents, and identify patterns or unusual changes in interaction duration. Trends in the AHT graph can help identify increases or decreases in AHT for specific teams, skills, or agents over time. For example, a rising line may indicate longer customer interactions due to complexity or inefficiencies, while a downward trend may suggest streamlined processes, improved agent performance, or better system support.
The AHT graph is presented as a line graph, where the X-axis represents time in monthly intervals based on the selected date range, and the Y-axis shows the average AHT duration, typically measured in seconds or minutes. Each data point on the graph reflects the average AHT for that specific month. The graph includes interactive features such as hover-to-view exact values, dynamic updates based on applied filters and date range, and legends that display the selected filters for easy reference.
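Both formulas reduce to a simple division; this minimal Python sketch applies them to a few hypothetical per-call records. The field names and numbers are assumptions for illustration, not an export schema.

```python
# Hypothetical per-call durations in seconds.
calls = [
    {"talk": 240, "hold": 30, "acw": 45},
    {"talk": 180, "hold": 0, "acw": 60},
    {"talk": 300, "hold": 90, "acw": 30},
]

n = len(calls)
avg_acw = sum(c["acw"] for c in calls) / n
# AHT covers the whole interaction: talk time + hold time + ACW.
avg_aht = sum(c["talk"] + c["hold"] + c["acw"] for c in calls) / n

print(f"Average ACW: {avg_acw:.1f} s")  # 45.0 s
print(f"Average AHT: {avg_aht:.1f} s")  # 325.0 s
```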
The Observability Dashboard supports new Copilot features for Engagement Hub, allowing you to monitor feature performance across different client types.
- For CXone ACD clients: You see data for auto-generated summaries, team details, and skill information.
- For non-CXone ACD clients: Team and skill data is not available, and related features are hidden.
- For tenants using both ACD and non-ACD applications: The dashboard displays only ACD-related data.
Autopilot Knowledge
The Observability Dashboard for Autopilot Knowledge shows data about how well your automated system handles customer questions. You'll see a graph that displays performance trends over time. This lets you track changes daily, weekly, or monthly.
- Click the app selector and select Actions.
- On Actions, click Observability Dashboard.
- Click the Autopilot Knowledge tab. Set the desired Date Range for the dashboard, and click Run Query. It displays three charts:
Viewing Data About Overall Effectiveness
This graph displays a high-level summary of the Autopilot Knowledge chatbot's performance and status.
- Click the Overall Effectiveness graph heading to drill down into the statistics. Four graphs appear (a sketch of how these rates relate follows this list):
  - Engaged: Displays the number of visitors who engaged with the chatbot, helping you understand the engagement trends over time.
  - Contained: Displays the percentage and count of chatbot users who completed their conversation without needing escalation to a live agent. With this metric you can assess how effectively the chatbot resolves queries independently.
  - Elevated: Displays the percentage and count of chatbot users who escalated their conversation to a live agent, highlighting cases that required human intervention. With this metric you can monitor how often the chatbot hands off conversations to human agents.
  - Abandoned: Displays the percentage and count of chatbot users who abandoned an ongoing conversation. With this metric you can identify drop-off points and improve user engagement.
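As a minimal sketch of how these percentages relate, the Python below treats Contained, Elevated, and Abandoned as shares of engaged conversations. The one-outcome-per-conversation model and the labels are assumptions for illustration, not the product's schema.

```python
from collections import Counter

# Hypothetical outcome label per engaged conversation.
outcomes = ["contained", "contained", "elevated", "abandoned", "contained"]

engaged = len(outcomes)
tally = Counter(outcomes)
for outcome in ("contained", "elevated", "abandoned"):
    share = 100 * tally[outcome] / engaged if engaged else 0
    print(f"{outcome}: {tally[outcome]} ({share:.0f}%)")
```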
- You can customize the data that appears in the graphs:
  - Click Absolute Numbers or Percentage to switch between percentages and absolute numbers.
  - Click Maximize to view the graph in full screen.
Viewing Data About GenAI Performance
This graph displays the percentage of user queries that were effectively addressed by the generative AI engine.
- Click the GenAI Performance label to drill down into the statistics. Three graphs appear:
  - Over Time: Displays the percentage of chatbot responses over time.
  - By Category: Displays the percentage of chatbot responses by category.
  - Queries to Generative Model: Displays the total number and percentage of chatbot queries processed by the generative engine. This metric provides insight into how frequently the generative engine is utilized in handling user interactions.
- You can customize the data that appears in the graphs:
  - In the Over Time and Category graphs, to toggle the display of different answer statuses, click the Response or No Response legends in the graphs.
  - Click Absolute Numbers or Percentage to switch between percentages and absolute numbers.
  - Click Maximize to view the graphs in full screen.
- Scroll down to see the data grouped by categories. The Categories View organizes the knowledge base answers into different categories, offering a structured approach to analysis. Each category presents:
  - Total volume of responses
  - Total number of no responses
  - Average number of links and images provided
  - Average knowledge score (score assigned by the knowledge base)
- Clicking on a category reveals the specific queries and their details, such as:
  - Contact number of the interaction
  - Query that initiated the chatbot interaction
  - Chatbot's reply based on the user's input, intent, and context
  - Number of links and images provided
  - Date and time of the response
  - Average knowledge score
- You can switch the way you view this data. The default view is by category. Click Group By to change the grouping from Category to Contact Number. The data reappears based on the new grouping.
Viewing Data About Bot Performance
This graph displays the distribution of chatbot intents, highlighting the top six most common user requests along with fallback occurrences. It helps you understand what users ask and how the chatbot responds.
- Click the Bot Performance label to drill down into the statistics. Two graphs appear:
  - All Bot Intent: Displays the most common user requests and fallback cases, helping you improve how your chatbot responds.
  - Abandonment Indicator: Displays which chatbot intents were most common before users abandoned the conversation. It helps you identify drop-off points and improve user retention.
- You can customize the data that appears in the graphs:
  - Click Absolute Numbers or Percentage to switch between percentages and absolute numbers.
  - Click Maximize to view the graphs in full screen.
Autopilot
The Observability Dashboard for Autopilot shows how well your knowledge base is able to handle customer questions. Use this dashboard to look for areas where you can add more articles to your knowledge base or customize the articles to better answer the customers' questions.
- Click the app selector and select Actions.
- On Actions, click Observability Dashboard.
- Click the Autopilot tab. It displays the GenAI Performance graph.
- Set the desired Date Range for the dashboard, and click Run Query.
Viewing Data About GenAI Performance
This graph shows the number of user questions that received relevant and complete answers from the AI engine.
- Click the GenAI Performance label to view details about the questions asked and the articles provided for the time period specified. Two graphs appear:
  - Over Time graph: Shows the number of successful responses and no-responses over the time period. A response indicates the user was shown an article. A no-response indicates that no matching article was found.
  - By Category graph: Shows the number of successful responses and no-responses by categories.
- You can customize the appearance of data in the graphs:
  - To switch between percentages and absolute numbers, click Absolute Numbers or Percentage.
  - To view the graph in full screen, click Maximize.
  - To toggle the display of different answer statuses, click the Response or No Response legends in the graphs.
- Scroll down beneath the graphs to see a table that provides details on the queries. The queries are grouped by categories.
- Clicking on a category reveals the specific queries and the provided responses. Use this to identify areas where you might be missing articles in your knowledge base.
Generate AI-Powered Knowledge Articles
- Click the app selector and select Actions.
- On Actions, click Observability Dashboard.
- Click the Generative Responses label to view detailed statistics.
- Scroll down to the data grouped by categories section. Click a category to view specific queries.
- Select a query and click the Info button.
- In the Response details panel on the right, click Create Article. An AI-generated article is drafted based on the transcript. You can edit the article as needed and then publish it. For complete information on editing and publishing an article, see the knowledge generation help.
- When an article is already published, the Create Article icon appears in purple with a checkmark. This means a knowledge article is available and you can view it, even if it was created by someone else.
Export Data from Observability Dashboard
- Click the app selector and select Actions.
- On Actions, click Observability Dashboard.
- Click the Generative Responses label to view detailed statistics.
- Scroll down to the data grouped by categories section. Click Export. You can download all data, both visible and hidden, based on the filters set in the query builder.
- When you export data from the Observability Dashboard, some fields in the spreadsheet are represented by numeric codes. These codes correspond to specific tags and feedback types, as shown below:
Tag values:
- Accurate: 1
- Inaccurate: 2
- Complete: 3
- Incomplete: 4
- Relevant: 5
- Irrelevant: 6
- Slow: 7
- Other: 8

Feedback type values:
- Positive: 1
- Negative: 2
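If you post-process an exported spreadsheet, you can translate these numeric codes back into readable labels. The Python sketch below uses pandas; the file name and column names are assumptions, so adjust them to match the columns in your actual export.

```python
import pandas as pd

# Code-to-label mappings taken from the lists above.
TAG_LABELS = {1: "Accurate", 2: "Inaccurate", 3: "Complete", 4: "Incomplete",
              5: "Relevant", 6: "Irrelevant", 7: "Slow", 8: "Other"}
FEEDBACK_LABELS = {1: "Positive", 2: "Negative"}

# The file name and column names are assumptions for illustration.
df = pd.read_excel("observability_export.xlsx")
df["Tag"] = df["Tag"].map(TAG_LABELS)
df["Feedback type"] = df["Feedback type"].map(FEEDBACK_LABELS)
df.to_csv("observability_export_decoded.csv", index=False)
```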
Observability Copilot
Observability Copilot is an AI-powered assistant that helps you interact with the Observability Dashboard using natural language. It enables you to analyze system data, manage categories and intents dynamically, and gain actionable insights to improve operational efficiency.
Accessing Observability Copilot
- Open the Observability Dashboard.
- Select the Sparkle AI icon to launch the Observability Copilot panel.
Start and Manage Conversations in Observability Copilot
To start a conversation:
- In the Copilot panel, locate the input field at the bottom.
- Type your question or command using natural language.
  Example: "What were the generative response statistics for the Miscellaneous category over the last year?"
- Press Enter to submit your query.
- Copilot provides detailed, structured responses based on your query.
- Use the Up (↑) and Down (↓) arrow keys to navigate through previous queries in Observability Copilot.
To edit and resubmit a query:
You can edit and resubmit only the last query in the Copilot panel.
- Locate the most recent query in the conversation history.
- Click on the query text. The selected query becomes editable in the input field at the bottom of the panel.
- Modify the query as needed. Update the wording, correct any typos, change the parameters of your query to refine the results, or update the entire query.
  For example:
  Original query: "Show me the response stats for the Billing category in 2023."
  Updated query: "Show me the response stats for the Billing category in Q1 2023."
- Press Enter to submit the updated query.
- Copilot processes the revised input and returns a new response based on the updated query.
Managing Categories and Intents
You can add, remove, rename, and update categories and intents directly in the dashboard using natural language commands. Just type your command and press Enter to apply the change.
- To Add: "Add a new category called 'Billing Issues' with the description 'Queries related to billing discrepancies.'"
- To Remove: "Remove the category 'Legacy Support' from the dashboard."
- To Rename: "Rename the intent 'Login Help' to 'Authentication Assistance.'"
- To Update: "Update the description of category 'Billing Issues' to 'Includes all queries related to invoices, charges, and payment discrepancies.'" or "Update the description of intent 'Login Help' to 'Assists users with authentication and login-related issues.'"