Voice Assistant Development for Call Center Replacement
Voice assistants have moved from sci-fi movies to our phones. Google Assistant, Siri, Alexa, and others are quite popular among smartphone users. B2C companies are also starting to use virtual consultants to replace their customer support services. Bank of America launched Erica, a virtual assistant for its clients. Erica communicates with users through a voice channel, a text chat, or a menu with a range of options. The total number of users of this service now exceeds 1 million.
In collaboration with Callverso, we are developing a voice assistant similar to Erica. Callverso is an Israeli company that specializes in voice recognition systems. It has a contract with a bank that wants to create a virtual assistant for its clients, and it brought us in to work on the project together. Our team handles the voice assistant's development and analytics, while Callverso takes care of R&D.
Why does the bank need a voice assistant?
The voice assistant will help the bank to:
- Cut call center expenses
- Increase the average number of processed requests
- Speed up banking processes
The aim of Callverso’s project is to automate the customer support service. The chatbot will take on the role of a call center, giving advice and instructions to clients who call the bank’s hotline. When the bot receives a request from a user, it responds with the needed information. If a user hesitates to speak, the bot starts the conversation itself. At certain points in the dialog, the bot chooses a conversation strategy: it adapts to each user’s behavior and processes calls quickly. As a result, the bank benefits from lower call center expenses and faster call processing.
The assistant’s knowledge base will keep expanding, so in the long term it will grow into the bank’s central information system. This system will help not only the bank’s clients but also its employees: they will find the necessary information faster, and the bank’s processes will become more efficient.
What’s inside a voice assistant?
To create a voice assistant with a wide range of functions (speech recognition, dialog analysis, database management), we used a microservice architecture. Microservices are small applications, each with its own distinct function; together they form the complex structure that is our voice assistant. This type of architecture lets us add new services and update old ones easily. We recently added a module that uses a neural network and the Vowpal Wabbit library; it allows the chatbot to choose its conversation strategy. The bot’s structure includes 28 microservices so far, each performing its own function.
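The strategy module itself is built on Vowpal Wabbit and is not shown here, but the underlying idea of choosing a conversation strategy and learning from feedback can be sketched as a simple epsilon-greedy selector. Everything below is a hypothetical illustration; the class and strategy names are ours, not the production code’s:

```java
import java.util.Random;

// Hypothetical sketch: pick one of several conversation strategies and
// update a running estimate of each strategy's success rate from feedback.
// The real module relies on Vowpal Wabbit; all names here are illustrative.
public class StrategySelector {
    private final String[] strategies;
    private final double[] successRate; // running average reward per strategy
    private final int[] pulls;          // how often each strategy was tried
    private final double epsilon;       // probability of exploring at random
    private final Random random;

    public StrategySelector(String[] strategies, double epsilon, long seed) {
        this.strategies = strategies;
        this.successRate = new double[strategies.length];
        this.pulls = new int[strategies.length];
        this.epsilon = epsilon;
        this.random = new Random(seed);
    }

    // Choose a strategy: explore at random with probability epsilon,
    // otherwise exploit the best-scoring strategy so far.
    public int choose() {
        if (random.nextDouble() < epsilon) {
            return random.nextInt(strategies.length);
        }
        int best = 0;
        for (int i = 1; i < strategies.length; i++) {
            if (successRate[i] > successRate[best]) best = i;
        }
        return best;
    }

    // Feedback: 1.0 if the decision turned out right, 0.0 if not.
    // Updates the incremental average reward for that strategy.
    public void feedback(int strategy, double reward) {
        pulls[strategy]++;
        successRate[strategy] += (reward - successRate[strategy]) / pulls[strategy];
    }

    public String name(int strategy) { return strategies[strategy]; }
}
```

In production, a contextual learner such as Vowpal Wabbit also takes the dialog state into account rather than keeping one global score per strategy, but the explore/exploit trade-off is the same.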
Currently, the bot is starting to learn. We developed a web interface where operators tell the bot whether it understood a client’s request and made the right decision. Based on these corrections, the bot will learn to distinguish correct decisions from wrong ones.
The operators’ role will diminish as the assistant learns to analyze its decisions by itself using data from the applications’ logs. To gather the logs, we use the ELK Stack.
ELK consists of several components which help the chatbot process the unstructured data from the logs. ELK also helps us to track the system status and parameters.
ELK Stack includes:
- Filebeat, Metricbeat — data collecting components
- Logstash — a data processing pipeline that organizes application logs
- Elasticsearch — a search engine that stores and analyzes large volumes of data
- Kibana — a visualization tool that displays data in a web interface as dashboards with graphs, tables, and histograms
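As a rough illustration of how these components fit together, a minimal Logstash pipeline could receive events from Filebeat, parse them, and forward them to Elasticsearch for Kibana to visualize. The port, index name, and grok pattern below are assumptions for the example, not our production configuration:

```conf
input {
  beats {
    port => 5044                 # Filebeat/Metricbeat ship events here
  }
}
filter {
  grok {
    # Parse a simple "LEVEL message" log line into structured fields
    match => { "message" => "%{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "chatbot-logs-%{+YYYY.MM.dd}"   # Kibana dashboards read this index
  }
}
```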
How we dealt with problems
At the start of the project, Callverso provided us with a tech stack. It included some technologies we hadn’t had the chance to work with before, so we learned how to use new tools like the ELK Stack.
Preparing the bot for constant workloads wasn’t an easy task. The bot must be available 24/7 to process the constant stream of clients’ calls; if a single component malfunction could shut down the whole system, the bot would be useless. To ensure fault tolerance and scalability, we deployed it on a computer cluster — a group of computers used as a single resource.
That raised another issue: we needed to monitor the system’s characteristics. That’s where the ELK Stack came in useful once again. Its components collect, analyze, and display the logs of the chatbot’s services, so we can track the system status and adjust the services’ settings based on this data.
Testing the bot was a challenge as well because it speaks only one language — Hebrew. We can’t brag about our fluent Hebrew, but soon the chatbot will talk to us in English. Callverso plans to add more languages if the virtual assistant finds its market in other countries.
What’s done and what’s next
The assistant can answer users’ questions and start a conversation by itself. But before the chatbot talks to the bank’s clients, it will complete a training course via the web interface we developed.
The project is in progress, and Callverso is satisfied with our cooperation, so we are continuing to the next stage.
Next, we plan to enhance the bot’s security. We will use single sign-on (SSO) so users can access different components of the chatbot with their unique identifiers. Which components a user can access will depend on their role. This access control system will make the bot’s components easier to use for both admins and regular users.
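Since the stack already lists OAuth 2.0 and JSON Web Tokens, the role check behind this access control can be sketched with a JWT whose payload carries a "role" claim. This is only an illustration under assumed claim names; a real deployment must verify the token’s signature with a proper JWT library (e.g. via Spring Security), which this sketch deliberately skips:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Hypothetical sketch of role-based access with a JWT. The token payload
// carries a "role" claim; a component checks it before granting access.
// Signature verification is omitted here and MUST be done in production.
public class RoleCheck {

    // Extract a top-level string claim from the JWT payload.
    // A JWT has three base64url parts: header.payload.signature.
    public static String claim(String jwt, String name) {
        String payload = jwt.split("\\.")[1];
        String json = new String(Base64.getUrlDecoder().decode(payload),
                                 StandardCharsets.UTF_8);
        String key = "\"" + name + "\":\"";
        int start = json.indexOf(key);
        if (start < 0) return null;     // claim not present
        start += key.length();
        return json.substring(start, json.indexOf('"', start));
    }

    // Example policy: only admins may open the configuration component.
    public static boolean canConfigure(String jwt) {
        return "admin".equals(claim(jwt, "role"));
    }
}
```

With SSO, the same token is issued once at login and presented to every component, so each microservice only needs this kind of role check rather than its own credential store.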
We have already migrated the system to the latest versions of Spring 5 and Spring Boot 2 and updated the Java libraries we use.
Tech stack: REST, microservices, Business Process Management (BPM), OAuth 2.0, JSON Web Token (JWT), Spring, Akka, Apache UIMA, Stanford CoreNLP, fastText, Vowpal Wabbit, Apache Cassandra, Docker, OpenShift, Activiti, ELK Stack.