Currently, our voice AI is focused entirely on customer-facing interactions.
When the call is transferred to an agent, the agent can see the transcript of the call so far and can use AI Copilot, but transcription stops at that point.
This new feature keeps the AI listening throughout the call. The main technical change is that, rather than transferring the call back to the phone system, we create a conference that both the customer and the agent join (a sketch of this appears after the list below).
This means that:
- the entire call is transcribed in real-time
- agents get real-time AI Copilot assistance during the call, plus instant summaries and wrap-up forms at the end
- supervisors can use our analytics to monitor sentiment and more
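
Below is a minimal sketch of the conference-based handoff. The article doesn't name a telephony provider, so this assumes Twilio Programmable Voice purely for illustration; the conference name, websocket URL, and webhook flow are placeholders, not the product's actual implementation.

```python
# Sketch only: assumes Twilio Programmable Voice; names and URLs are illustrative.
from twilio.twiml.voice_response import VoiceResponse


def handoff_twiml(conference_name: str) -> str:
    """Build TwiML that keeps the AI listening after handoff.

    Instead of transferring the call back to the phone system, we fork the
    call audio to a transcription endpoint and place the customer into a
    named conference that the human agent also joins.
    """
    response = VoiceResponse()

    # Fork raw call audio to a websocket we control, so real-time
    # transcription continues for the rest of the call.
    start = response.start()
    start.stream(url="wss://transcribe.example.com/media")  # illustrative endpoint

    # Put the customer into a named conference instead of transferring.
    dial = response.dial()
    dial.conference(
        conference_name,
        start_conference_on_enter=False,  # wait for the agent to join
        end_conference_on_exit=True,      # tear down when the customer hangs up
    )
    return str(response)


def agent_twiml(conference_name: str) -> str:
    """The agent leg joins the same conference, which starts the bridged call."""
    response = VoiceResponse()
    dial = response.dial()
    dial.conference(conference_name, start_conference_on_enter=True)
    return str(response)
```

Because the audio stream is attached to the conference leg rather than to a transferred call, transcription, Copilot, and analytics all keep receiving audio until the conference ends.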
The idea is that this would be a core enrichment of the handoff concept after a voice AI is used. Technically, however, it can also be used without any voice AI front end (similar to how AI Copilot can be used with live chat).
The biggest game changer for me would be if you could continue to transcribe voice in real time and offer agent assist for voice in real time: the customer engages with the virtual voice agent, is transferred to a live agent, the transcript loads in Ignite, and the live voice call then continues to be transcribed while Copilot suggestions are offered to the agent.
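
To make that end-to-end flow concrete, here is a rough sketch of the agent-side plumbing under the same assumptions. Everything here is illustrative: `push_to_agent_ui` and `copilot_suggest` are hypothetical stand-ins for the Ignite UI and Copilot integrations, not existing APIs. Finalized transcript segments are appended to the transcript carried over from the voice-AI phase, shown to the agent, and used to refresh Copilot suggestions.

```python
# Sketch only: the transcription, copilot, and agent-UI hooks below are
# placeholders, not an existing API in the product or any named library.
import asyncio
from dataclasses import dataclass, field


@dataclass
class LiveCallSession:
    """Carries one call's transcript from the voice-AI phase into the live-agent phase."""
    call_id: str
    transcript: list[str] = field(default_factory=list)  # starts with the voice-AI turns

    async def on_transcript_segment(self, speaker: str, text: str, is_final: bool) -> None:
        # Only append finalized segments so the agent sees one continuous transcript.
        if not is_final:
            return
        line = f"{speaker}: {text}"
        self.transcript.append(line)
        await push_to_agent_ui(self.call_id, line)             # hypothetical UI hook
        suggestions = await copilot_suggest(self.transcript)   # hypothetical Copilot call
        await push_to_agent_ui(self.call_id, suggestions)


# Hypothetical integration points; real implementations would call the agent
# workspace (e.g. Ignite) and the Copilot service.
async def push_to_agent_ui(call_id: str, payload) -> None:
    print(f"[{call_id}] -> agent UI: {payload}")


async def copilot_suggest(transcript: list[str]) -> str:
    return "Suggested reply based on last turn: " + transcript[-1]


if __name__ == "__main__":
    session = LiveCallSession(call_id="demo-call")
    asyncio.run(session.on_transcript_segment("customer", "I'd like to change my delivery date", True))
```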