Alex Levin is the Co-Founder and CEO of Regal, a voice AI platform that helps enterprises drive revenue through compliant, AI-powered customer conversations. Prior to founding Regal in 2020, he led growth and product teams at Handy, Thomson Reuters, and other startups. A Harvard graduate and member of the Forbes Technology Council, Alex focuses on building scalable, voice-first infrastructure that blends innovation with enterprise-grade guardrails.
Regal provides AI voice agents for sales, support, scheduling, and collections—designed to sound natural, integrate with CRM systems, and handle millions of conversations at scale. The platform features a no-code builder, real-time analytics, A/B testing, and built-in compliance for regulated industries like healthcare, insurance, and financial services.
What inspired you to move from leadership roles at Angi and Handy into founding Regal, and was there a specific moment when you and your co-founder realized the contact center experience needed to be completely rebuilt?
While at Angi/Handy, we saw the power of voice for building trust with customers. Customers told us that when they had an important issue they wanted to call; customers we served over the phone had a higher lifetime value; and when we called customers, they answered at a much higher rate than on any other channel. Yet contact center software vendors were focused on “deflection” and “automation” over what was right for the customer. The result was a never-ending game of hide-the-phone-number that unnecessarily punished customers.
My co-founder and I left because we strongly believed we could make voice the most effective channel by bringing down the cost and making it easier to operate. I wish I had had Regal while I was running a large contact center.
You launched Regal in 2020, just before the generative AI boom. How did you evaluate whether voice AI was technically viable—and what gave you the conviction to act early?
We were convinced long before 2020 that voice was the most important channel. And in 2020 we knew we could build orchestration, A/B testing and personalization tools that would lower costs and simplify managing voice as a channel — whether it was a human, an old-school voice bot or something better at the tip of the spear. So we started by selling tools that helped contact centers better manage human agents. That product grew very quickly.
But to your point, starting a company is a leap of faith, and it took time to really see how we could move beyond the limitations of human agents. It wasn’t until the launch of ChatGPT at the end of 2022 that we really saw “AI” that was good enough to hold a conversation. And it wasn’t until the end of 2023 that we were able to build a demo of a voice agent that we thought a customer would want to talk with.
What were some of the most difficult technical challenges in training voice agents that could match or exceed human performance in natural conversations?
There are so many wonderful technical challenges to work on: keeping latency around 500ms, making sure AI agents have all the context from company knowledge bases and customer data in real time, having AI agents take action during calls and afterward, building guardrails and safety features, and making the agent interaction feel human with turn-taking and the right verbal cues.
One of my favorite projects our team is working on today is improving automated evaluations so an AI agent can be tested more easily before putting it into production. This would cut out the hundreds of hours of manual QA that happen constantly today for every change to every AI agent.
We first create hundreds of varied simulated customer conversations (using AI), then have the AI agent go through them, then have an AI supervisor QA the calls and return suggested improvements to the AI agent or to the company’s policies and knowledge base. We have a working evaluation product now, the customer feedback has been great, and it’s getting better at an amazing clip.
This is critical for the new metric of managers per AI agent. Soon a small number of managers will be able to oversee hundreds of different AI agents.
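For readers who want to see the shape of that loop, here is a minimal Python sketch based only on the description above. Every function name is a hypothetical placeholder rather than Regal's actual API, and llm() stands in for whatever chat-completion client you already use.

```python
# Hypothetical sketch of an automated evaluation loop: simulate customers,
# run the agent against them, then have an "AI supervisor" grade each call.
# None of these names are Regal's real API; llm() is a placeholder client.
from dataclasses import dataclass


def llm(prompt: str) -> str:
    """Placeholder for a chat-completion call to whatever model you use."""
    raise NotImplementedError


@dataclass
class CallReview:
    scenario: str
    transcript: str
    passed: bool
    feedback: str


def evaluate_agent(agent_prompt: str, scenarios: list[str], policies: str) -> list[CallReview]:
    reviews = []
    for scenario in scenarios:
        # 1. Generate a varied simulated customer for this scenario.
        customer = llm(f"Act as a customer in this scenario: {scenario}")
        # 2. Run the AI agent through the simulated conversation.
        transcript = llm(f"Agent instructions:\n{agent_prompt}\n\nCustomer:\n{customer}")
        # 3. Have the AI supervisor QA the call against company policies.
        verdict = llm(
            f"Grade this call against these policies:\n{policies}\n\nCall:\n{transcript}\n"
            "Reply PASS or FAIL, then list suggested improvements."
        )
        passed = verdict.strip().upper().startswith("PASS")
        reviews.append(CallReview(scenario, transcript, passed, verdict))
    # A human manager reviews the failures and decides which suggested changes
    # to accept (prompt edits, knowledge base updates, policy fixes).
    return reviews
```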
How does Regal leverage machine learning to personalize conversations in real-time? Is it based on customer history, tone, intent recognition—or a combination?
We have invested deeply in personalization compared to the rest of the market because we believe in helping brands treat millions of customers like one in a million, not just recreating the generic human-agent handling that is common today.
We started by building a unified customer profile that links every piece of CRM data, event data and conversation history. When building an agent, companies can then give the AI agent access to everything about a customer, or just the specific data points necessary for a particular conversation. The LLM provides a human-like, conversational response using the data on hand.
LLMs are still limited in what they do well, so we needed the ability to leverage other tools like third-party data services, custom applications and ML models. So we built “Custom Actions,” which can be used in an AI agent prompt to take advantage of other services. For instance, many brands have propensity models that indicate which product to suggest to a customer next, and we can hook into those so the suggestion fits the conversation.
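As a rough illustration of the Custom Action idea, the sketch below shows an agent-side helper calling out to a brand's own propensity model mid-conversation rather than letting the LLM guess. The endpoint URL, payload shape, and function name are assumptions for illustration, not Regal's actual interfaces.

```python
# Hypothetical "Custom Action": fetch the brand's next-best-offer from its own
# propensity model so the agent's recommendation is data-driven, not guessed.
import requests

PROPENSITY_API_URL = "https://example-brand.internal/propensity"  # hypothetical endpoint


def next_best_offer(customer_id: str) -> dict:
    """Return e.g. {"product": "Premium Plan", "score": 0.82} for this customer."""
    resp = requests.post(PROPENSITY_API_URL, json={"customer_id": customer_id}, timeout=2)
    resp.raise_for_status()
    return resp.json()


# In the agent prompt, the action is referenced roughly like this:
PROMPT_SNIPPET = """
If the customer asks which plan to choose, call next_best_offer with their
customer_id and recommend the returned product, phrased to fit the conversation.
"""
```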
How does your system use retrieval-augmented generation (RAG) without sacrificing the responsiveness or natural cadence that customers expect from a live call?
RAG is an area of differentiation for us as it needed to be faster for voice AI Agents than for AI agents in chat or other digital channels. A few seconds of dead air would completely ruin the call.
We both lowered the latency of retrieval and ensured that when a lookup does take longer, the AI agent keeps talking with the customer to let them know it will take a moment.
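Here is a minimal sketch of that pattern, assuming a generic async pipeline: start the lookup, and if it misses a latency budget, have the agent speak a short filler line instead of leaving dead air. retrieve() and speak() are hypothetical placeholders, not Regal's internals.

```python
# Hypothetical pattern: never let a slow retrieval produce dead air on a call.
import asyncio

LATENCY_BUDGET_S = 0.5  # roughly the ~500ms conversational budget


async def retrieve(query: str) -> str:
    """Placeholder for a knowledge-base / RAG lookup."""
    await asyncio.sleep(1.2)  # pretend this lookup is slow
    return "Relevant policy text..."


async def speak(text: str) -> None:
    """Placeholder for streaming text to the TTS layer."""
    print(f"AGENT: {text}")


async def answer_with_rag(query: str) -> None:
    lookup = asyncio.create_task(retrieve(query))
    try:
        context = await asyncio.wait_for(asyncio.shield(lookup), timeout=LATENCY_BUDGET_S)
    except asyncio.TimeoutError:
        # Retrieval is running long: keep talking instead of going silent.
        await speak("Let me pull that up for you, one moment.")
        context = await lookup
    await speak(f"Here's what I found: {context}")


asyncio.run(answer_with_rag("What is your cancellation policy?"))
```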
Regal’s agents are modeled after real human voices, including those of actual investors. What does it take—technically and ethically—to build such high-fidelity replicas?
It is surprisingly easy, technically, to “clone” a voice so that an AI agent can sound like a professional voice actor or a friend. Five to ten minutes of high-quality audio is all it takes.
For instance, I was recently asked how to do this for a dying family member so the younger generation could experience that person when they are older. With a bit of guidance, the family is going to record the grandparent now.
To your second point, the grandparent is consenting to this, just as professional voice actors and our investors consent. Bad actors that clone voices without consent (like what happened during the last presidential election) should be shut down.
A piece of advice – if you allow a voice clone (or you’re a public figure who might be cloned by bad actors), make sure you come up with a safe word that only your family knows, so they can identify the real you on a call.
You highlight the importance of integrating Regal into CRMs, payment systems, and internal APIs. What were some of the toughest integration challenges you had to solve?
Integrating with major products, from CRMs like Salesforce to contact center software like NICE, is straightforward. The hardest ask is making sure the brand makes APIs available to us for any action the AI agent might need to take. A human agent might click a button to book a hotel room, but the AI agent really needs a booking API.
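To make that concrete, this is roughly the kind of wrapper an AI agent needs around a brand's booking endpoint; the URL, payload, and helper name are hypothetical examples, not a real brand's API.

```python
# Hypothetical booking tool: the API call that replaces a human agent's button click.
import requests

BOOKING_API_URL = "https://example-hotel.internal/api/bookings"  # hypothetical endpoint


def book_room(customer_id: str, check_in: str, check_out: str, room_type: str) -> dict:
    """Book a room on the customer's behalf; returns e.g. {"confirmation_id": "ABC123"}."""
    payload = {
        "customer_id": customer_id,
        "check_in": check_in,    # ISO dates, e.g. "2026-07-01"
        "check_out": check_out,
        "room_type": room_type,
    }
    resp = requests.post(BOOKING_API_URL, json=payload, timeout=5)
    resp.raise_for_status()
    return resp.json()
```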
How do you approach measuring and improving model performance over time? What role does supervised fine-tuning or reinforcement learning play in this process?
We built an A/B testing suite from the start, so it’s trivial for customers to test AI agents vs. human agents, or the agent on LLM version 1 vs. version 2. That gives us a clear way to see variations in outcome across different models.
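A bare-bones sketch of the assignment side of such a test, which assumes nothing about Regal's internals: deterministically bucket each call into a variant, then compare outcomes afterward.

```python
# Hypothetical A/B harness: stable variant assignment plus a simple outcome rollup.
import hashlib
from collections import defaultdict

VARIANTS = ["human_agent", "ai_agent_llm_v1", "ai_agent_llm_v2"]


def assign_variant(call_id: str) -> str:
    """Stable assignment so the same call (or customer) always hits the same arm."""
    bucket = int(hashlib.sha256(call_id.encode()).hexdigest(), 16) % len(VARIANTS)
    return VARIANTS[bucket]


def conversion_rate(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (variant, converted) pairs recorded after each call."""
    totals, wins = defaultdict(int), defaultdict(int)
    for variant, converted in outcomes:
        totals[variant] += 1
        wins[variant] += int(converted)
    return {v: wins[v] / totals[v] for v in totals}
```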
However, we do not use reinforcement learning today because it makes legal teams uncomfortable (they do not want a situation where there is an unintended change to the agent). I think we are 13 months from legal teams allowing reinforcement learning in our use case. Instead, today we focus on suggesting changes that a human manager can accept. These could be changes to a prompt, a knowledge base, the fine-tuning of an LLM, etc.
Talking to a VC—or a voice clone of one—is a bold concept. What was your intention in making these AI advisors available to founders, and how are they being used today?
We have been lucky to have access to wonderful investors and we wanted to pay it forward with this project. I have fun talking to Satya AI anytime, and I’ve heard great feedback from execs who have used the AI VCs for everything from advice on how to make a product roadmap to what pricing model to use.
We love to show instead of tell, and this project really highlights the power of our RAG/knowledge base capabilities. We even had two of our investors’ parents give us the thumbs up!
But a word to the wise – you can’t delegate decision-making to advisors, and one of the harder parts of being an exec is deciding between two bad options or even two seemingly good options.
Do these investor agents rely on generalized startup knowledge, or are they trained on firm-specific advice and philosophies tied to the individual VC?
All AI agents have some generic knowledge from the LLM training. But to get the results we needed, we uploaded the investors’ prolific writings into the respective AI Agent Knowledge Bases.
Beyond that and the voice cloning, I also think we were able to capture some of the investors’ unique personalities or essence, like Jake Saper’s positivity or Alexa Von Tobel’s ebullience.
Looking ahead, how do you see Regal’s AI evolving—will we see more autonomous decision-making, more emotional intelligence, or even multimodal support?
The most exciting part of the last year has been seeing our AI agents perform better than human agents. I think that in the next year, improvements in the underlying AI models and advances in Regal’s application will result in AI agents that are indistinguishable from humans and, more importantly, that far exceed human agent abilities. Companies that lean into AI agents will drop their costs and improve customer experience faster than anyone anticipated.
Thank you for the great interview, readers who wish to learn more should visit Regal.