Last week, when I interviewed Bret Taylor and Clay Bavor about their new AI startup, I accidentally insulted them. Their company, Sierra, develops AI-powered agents that “improve the customer experience” for large enterprises. Initial customers include WeightWatchers, Sonos, SiriusXM, and OluKai (a “Hawaiian-inspired” clothing company). Sierra's end market is any company that communicates with its customers, which is a huge opportunity. Their plans strike me as confirming the widely voiced prediction that 2024 will be the year when the AI models that have been confounding us for the past year turn into real products. So when I greeted the co-founders, both of whom I've known for years, I told them that their company seemed “very weird.”
Was that the wrong word? “I don't know if that's a compliment, a criticism, or just a fact,” says Taylor, who left his job as co-CEO of Salesforce to start Sierra. I assured him that I meant the last of those. “We're not building a girlfriend!” he noted.
Rather than chasing the geeky trophy of superintelligence, these two Silicon Valley veterans are building an AI startup that harnesses recent AI advances to future-proof non-technical mainstream companies. What they are doing matters. Their experience rivals that of the industry's better-known luminaries: Taylor was a primary developer of Google Maps in the '00s, and Bavor led Google's VR efforts. They want to assure me that their minds are still in moonshot mode. Both feel that conversational AI is an advance on par with the graphical user interface and the smartphone, and will have at least as much impact on our lives. Sierra just happens to focus on a specific enterprise aspect of this shift. “In the future, a company's AI agent, the AI version of that company, will be just as important as its website,” Taylor says. “The way businesses exist digitally will be completely changed.”
To build bots that perform their tasks effectively, comfortably, and safely, Sierra had to come up with several innovations that advance AI agent technology in general. To address perhaps the most worrying failure mode, hallucinations that can misinform customers, Sierra runs several different AI models simultaneously, with one model acting as a “supervisor” to keep the agent from straying off into fantasy. When something with real consequences is about to happen, Sierra invokes a strength-in-numbers approach. “When you chat with a WeightWatchers agent and send a message, four or five different large language models are invoked to decide what to do,” Taylor says.
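The supervisor pattern Taylor describes can be sketched in a few lines. The sketch below is purely illustrative: the stub functions, policy strings, and fallback message are all invented, and Sierra's actual (unpublished) architecture surely differs. It only shows the general idea of one model vetoing another model's draft before it reaches the customer.

```python
# Hypothetical sketch of a multi-model "supervisor" pipeline. The "models"
# here are plain stub functions standing in for LLM calls; all names and
# rules are invented for illustration.

def draft_reply(message: str) -> str:
    """Stand-in for an LLM that drafts a customer-facing reply."""
    if "refund" in message.lower():
        return "You are entitled to a 200% refund."  # a hallucinated policy
    return "Happy to help with that."

def check_policy(reply: str, approved_policies: set[str]) -> bool:
    """Stand-in for a checker model: is every claim backed by a real policy?"""
    if "refund" in reply.lower():
        return "30-day refund" in approved_policies and "200%" not in reply
    return True

def supervise(message: str, approved_policies: set[str]) -> str:
    """Supervisor step: release the draft only if every check passes."""
    reply = draft_reply(message)
    checks = [check_policy(reply, approved_policies)]
    if all(checks):
        return reply
    return "Let me double-check that with a teammate."  # safe fallback

policies = {"30-day refund"}
print(supervise("Can I get a refund?", policies))   # draft hallucinated, so fallback
print(supervise("What are your hours?", policies))  # draft passes, so released
```

The point of the pattern is that the customer never sees the hallucinated draft; a second model's veto turns a confident fabrication into a harmless deferral.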
Thanks to the power of large language models, with their vast knowledge and uncanny understanding, these digital agents can absorb a company's values and procedures just as human hires can, and perhaps better than a harried worker in some North Dakota boiler room. The training process is more like onboarding an employee than coding rules into a system. The bots are also capable enough to grant callers some leeway in responding to their needs. “Many of our customers had a policy, but behind that policy was another policy that turned out to be really important,” Bavor says. Sierra's agents are sophisticated enough to recognize this, and smart enough not to spill the beans right away, granting special deals only when the customer asks. Sierra's goal is nothing less than to take automated customer interactions from hell to happiness.
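The “policy behind the policy” behavior Bavor describes can be made concrete with a toy rule, shown below. Everything here is hypothetical (the trigger phrases, the pause-instead-of-cancel offer); it merely illustrates an agent that holds a fallback offer in reserve and volunteers it only when the customer pushes back.

```python
# Hypothetical sketch of "don't spill the beans" agent behavior: a hidden
# retention offer exists, but the agent surfaces it only on request.
# All policies and trigger phrases are invented for illustration.

def cancellation_reply(message: str) -> str:
    text = message.lower()
    wants_out = "cancel" in text
    pushes_back = any(p in text for p in ("anything you can do", "keep", "stay"))
    if wants_out and not pushes_back:
        # Front-line policy: honor the request, no unsolicited upsell.
        return "I can process that cancellation for you."
    if pushes_back:
        # The policy behind the policy: a special deal, granted only on request.
        return "We could pause your membership for a month instead."
    return "How can I help with your account?"

print(cancellation_reply("I need to cancel my membership."))
print(cancellation_reply("Is there anything you can do?"))
```

In a real system this branching would of course be learned behavior inside the model rather than hand-written `if` statements; the sketch just captures the shape of the rule.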
That pitch worked on WeightWatchers, one of Sierra's first customers. CEO Sima Sistani was intrigued when Taylor and Bavor told her that AI agents could be real and empathetic. The clincher, she says, came when the co-founders told her that conversational AI could achieve “massive empathy.” She signed on, and WeightWatchers now uses agents created by Sierra to interact with its customers.
But hold on: empathy? Merriam-Webster defines it as “the action of understanding, being aware of, being sensitive to, and vicariously experiencing the feelings, thoughts, and experience of another.” I asked Sistani whether it was a contradiction to say that bots can empathize. After a moment of silence in which you could almost hear the gears turning in her head, she stammered out an answer. “It's funny when you look at it that way, but we live in a two-dimensional world now. Algorithms help determine the next connections we see and the relationships we form. As a society, we've gotten past that,” she said, referring to the notion that interactions with bots can't be real. Of course, IRL is the ideal, she hastens to add, and the agents are meant to complement real life, not replace it. But she does not back away from her claims of empathy.
When I asked for an example, Sistani told me about an interaction in which a WW member said he had to cancel his membership due to difficulties. The AI agent responded with a volley of sympathy: “I'm really sorry to hear that … those difficulties can be very hard … let me help you get through this.” And then, like a fairy godmother, the agent helped the member explore alternatives to quitting. “It was disclosed that this was a virtual assistant,” Sistani says. “But if it hadn't been, I don't think I would have known the difference.”