Auto QA
Overview
AutoQA is a generative AI-powered quality assurance platform designed to transform how organizations evaluate and enhance customer support interactions. Built with its core users in mind, AutoQA combines streamlined workflows, intuitive scoring mechanisms, and actionable coaching tools into a user-friendly product.
View prototype
Background
In customer support, quality assurance is a critical function to ensure high-quality interactions and process adherence. Kapture CX, an enterprise-grade customer experience platform, simplifies this process by leveraging generative AI to evaluate customer support interactions and provide actionable insights.
However, despite high interest in its capabilities, Kapture's QA module suffered from a low adoption rate. Usage data and client feedback highlighted significant usability gaps in the product: users often struggled with its fragmented and overly complex workflows. Additionally, many prospective customers were hesitant to switch from their current ticketing platforms to Kapture solely for its QA capabilities.
I was tasked with reimagining quality assurance as AutoQA — not a module over Kapture's existing platform, but a standalone product that specializes in quality assurance and delivers value to evaluators, agents, managers and administrators alike. The design project aimed to address core pain points from the current solution and transform a disjointed, complex feature into a delightful product experience.
Process
The redesign of AutoQA involved iterative ideation, wireframing, and prototype testing, with every decision tied to resolving user pain points.
User Research
I began the design process by speaking to core users from 15+ client accounts to understand their needs and key challenges while using the current solution. Key findings:
Fragmented workflows
Basic actions like scoring tickets and viewing scorecards required switching between multiple screens.
Overwhelming configurations
Many administrators struggled with the complicated and time-intensive setup process of the evaluation scorecards.
Evaluator fatigue
Evaluators cited experiencing burnout due to the repetitive and manual scoring process.
No coaching mechanisms
Managers couldn’t provide targeted feedback or track agent improvement based on evaluations, and questioned whether the evaluations had any lasting impact.
These insights painted a clear picture of the existing module’s limitations and informed the direction for the redesign.
Competitor Analysis
Next, to benchmark AutoQA and gather inspiration, I analyzed specialized quality assurance platforms like Kaizo, Klaus, and Maestro, studying their interfaces, workflows, and design choices to identify best practices. I was particularly interested in how they approached configuration, scoring, and integrations. Key insights:
AI-driven scoring
Some platforms integrated AI to suggest evaluation responses, reducing manual effort and evaluation time.
Platform terminology
Most platforms had adopted industry-standard terminology to help users understand features more intuitively.
Ideation & Wireframing
With the research insights in mind, I began to map out potential user journeys on FigJam, ensuring each touchpoint was optimized for ease and efficiency.
Configurations
The design process began with rethinking the most challenging aspect — scorecard setup. I explored multiple layouts and workflows to make the setup process seamless. My goal was to move away from the cramped side overlay format in the current platform and define an easier setup workflow for administrators.
After several iterations, I finalized a wireframe structure with segmented tabs that organized complex configurations into manageable sections. This structure significantly reduced cognitive overload. To further ease the setup process, I added the option for administrators to build scorecards by using pre-configured templates or uploading their evaluation policy documents.
Scoring
Scoring was the next focus area. I introduced a Conversations workspace that allowed evaluators to score support interactions and access reports of previously scored tickets within a unified interface, eliminating the need to switch between separate modules. To make the tedious evaluation process more engaging, I replaced the traditional form-filling approach to scorecards with a question-by-question journey: as evaluators progressed through the scorecard, the score and progress bar updated dynamically, providing a sense of accomplishment and continuity throughout the task.
While scoring support interactions, evaluators could reference the conversation transcript to evaluate manually, or use AI-powered suggestions whenever needed, letting them focus on decision-making rather than manual scoring. Scored conversations were then sent to the agent for review; the agent could view an informative scorecard report of their conversation and choose to either accept or rebut the score.
Coaching
The coaching module was another critical addition. I designed a space where administrators could upload training materials, create assessments, and set automated criteria for assigning coaching resources to agents based on evaluation results. This feature closed the loop between evaluation and actionable feedback and ensured that QA evaluations had a lasting impact.
Dashboards
Once the foundation for the core modules was established, I turned my attention to the dashboards. Using data and insights from active users, I designed role-specific dashboards to give everyone a clear view of performance metrics. Agents and evaluators could track their own performance, while managers and leaders could monitor the system's effectiveness to ensure everything ran smoothly.
Style Guide
Drawing inspiration from leading design systems, I created a consistent, accessible style guide with structured typography, color palettes, and interaction patterns. Features like dark mode, adjustable text sizes, and high-contrast themes were also included to ensure inclusivity across the platform.
Design
Results
Prototypes were tested with a select group of clients and met with overwhelmingly positive feedback during the trials. Platform setup time is expected to drop by over 60% with the new workflows, and managers anticipate significant improvements in agent and evaluator performance.
The design was successfully handed off to development with detailed prototypes and documentation. AutoQA now stands as a specialized, independent product beyond Kapture CX’s ecosystem. Stakeholders noted that the redesign not only resolved the pain points of the previous module but also opened new opportunities for market expansion and innovation.