Tavus Raises $40M to Build the Next Frontier of Intelligence: Human Computing
Tavus is bringing sci-fi to life with PALs and the models that power them—emotionally intelligent AI humans that can see, hear, act, and even look like us.
SAN FRANCISCO, November 12, 2025 -- Today, Tavus announced $40 million in Series B funding to build the future of human computing, led by CRV with participation from Scale Venture Partners, Sequoia Capital, Y Combinator, HubSpot Ventures, and Flex Capital. This vision takes shape with the launch of PALs: AI humans built by Tavus with emotional intelligence, agentic capabilities, and true multimodality across text, voice, and face-to-face video.
Human-computer interfaces haven’t fundamentally evolved since the 1980s. We moved from command-line interfaces to graphical user interfaces: from typing commands to clicking buttons. Today’s AI chatbots feel like a return to the command-line era, text-based interfaces where humans must spell out every action and instruction. For decades, science fiction, from Star Trek to Her, promised something better: computers that could not only see and hear us but also look like us, respond with emotion, and feel alive. Tavus is fulfilling that promise by creating AI that makes conversation with computers feel like second nature, like talking to a friend.
“We’ve spent decades forcing humans to learn to speak the language of machines,” said Hassaan Raza, CEO of Tavus. “With PALs, we’re finally teaching machines to think like humans: to see, hear, respond, and look like we do. To understand emotion, context, and all the messy, beautiful stuff that makes us who we are. It’s not about more intelligent AI; it’s about AI that actually meets you where you are.”
Meet the PALs
Tavus launched PALs (Personal Affective Links): agentic AI humans that see, hear, evolve, remember, and act, just like humans do. Powered by foundational models for rendering, conversational intelligence, and perception, PALs represent the next era of human computing.
PALs are built to communicate the way people do. They maintain a lifelike visual presence, read expressions and gestures, and understand emotion and timing in real time. They remember context, pick up on subtle social cues, and move fluidly between video, voice, and text, so interaction always feels natural. And like humans, they have agency—taking initiative, reaching out, and acting on your behalf to manage calendars, send emails, and follow through without supervision.
For years, computers made us speak their language. PALs finally speak ours, forming genuine connections by learning individual habits, adapting to personality, and improving with every interaction.
The Models Powering PALs
Behind every PAL is a suite of foundational models that teach machines to see, feel, and act the way people do. These proprietary, state-of-the-art systems were built entirely in-house by the Tavus research team to understand and simulate human behavior with unprecedented depth. Each model sets a new standard for realism and intelligence, expanding the boundary of what “human-like” AI can become.
• Phoenix-4 — A state-of-the-art rendering model that drives lifelike expression, head-pose control, and emotion generation at conversational latency.
• Sparrow-1 — An audio understanding model that combines deep conversational intelligence with audio- and semantics-based emotional understanding, managing timing, tone, and intent and adapting in real time to know not just what to say, but when to say it.
• Raven-1 — A contextual perception model that interprets people, environments, emotions, expressions, and gestures, giving PALs a sense of presence and enabling them to see and understand the way humans do.
These models, paired with a state-of-the-art orchestration and memory management system, bring face-to-face video, speech, text, and agentic capabilities to life, enabling the world’s first AI humans. What makes them powerful isn’t just how they look or talk; it’s that they understand, remember, and act, just as a human would. This is the beginning of computers that finally feel alive.
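To make the architecture concrete, here is a minimal, purely illustrative sketch in Python of how a perception model, a conversational model, and a rendering model could be composed around shared memory for a single conversational turn. All class and function names (Memory, perceive, converse, render, turn) are hypothetical placeholders, not Tavus interfaces or APIs; the comments note which Tavus model each stage loosely corresponds to.

from dataclasses import dataclass, field

@dataclass
class Memory:
    """Toy conversation memory: a running list of observations and replies."""
    events: list = field(default_factory=list)

    def remember(self, event: str) -> None:
        self.events.append(event)

def perceive(video_frame: str, memory: Memory) -> str:
    """Stand-in for a Raven-1-style perception model: turns raw video into a
    description of people, environment, emotion, and gestures."""
    observation = f"user appears engaged; visible context: {video_frame}"
    memory.remember(observation)
    return observation

def converse(user_audio: str, observation: str, memory: Memory) -> str:
    """Stand-in for a Sparrow-1-style conversational model: decides what to
    say, and when, given the user's speech, the visual context, and memory."""
    reply = f"Responding to '{user_audio}' in light of: {observation}"
    memory.remember(reply)
    return reply

def render(reply_text: str) -> str:
    """Stand-in for a Phoenix-4-style rendering model: would synthesize an
    expressive, lip-synced video of the PAL speaking the reply."""
    return f"<video of PAL saying: {reply_text}>"

def turn(video_frame: str, user_audio: str, memory: Memory) -> str:
    """One conversational turn, orchestrated as perceive -> converse -> render."""
    observation = perceive(video_frame, memory)
    reply = converse(user_audio, observation, memory)
    return render(reply)

if __name__ == "__main__":
    memory = Memory()
    print(turn("a whiteboard sketch", "Can you summarize my notes?", memory))

In a real system each stage would be a streaming model running at conversational latency; the sketch only illustrates how their outputs could feed one another and a shared memory.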
Get started for free at https://www.tavus.io/
About Tavus
Tavus is a San Francisco-based AI research lab pioneering human computing: the art of teaching machines to be human. Backed by CRV, Scale Venture Partners, Sequoia Capital, Y Combinator, HubSpot Ventures, and Flex Capital, Tavus builds the foundational models behind AI humans, teaching machines to see, hear, respond, and act the way people do. The company’s research team brings experience from leading universities and top AI labs and is led by researchers specializing in rendering, perception, and affective computing, including Professor Ioannis Patras and Dr. Maja Pantic. Over one hundred thousand developers and enterprises use Tavus to deploy AI for recruiting, sales, education, and customer service.
Contact:
Leigh Disher
GMK Communications for Tavus
leigh@gmkcommunications.com