Interview

May 10, 2023

What the 'Father of Alexa' did next

William Tunstall-Pedoe built the AI that powered Amazon's voice assistant — and his new startup has just raised $20m to counter problems "intrinsic" to ChatGPT

Tim Smith


A lot has changed for William Tunstall-Pedoe since he launched his first AI startup in 2005. At the time, the most sophisticated AI in anyone’s home was a Roomba. 

Since his business Evi was bought by Amazon seven years later, its voice assistant technology has powered the sale of more than 100m Alexa units worldwide. Today Tunstall-Pedoe is working on a new AI startup, UnlikelyAI, in a world where the power of the AI available to the average person has jumped exponentially — for better and for worse.

UnlikelyAI raised a $20m seed round last year — co-led by Octopus Ventures and Amadeus Capital Partners — and is still in stealth mode, so Tunstall-Pedoe can’t share details on exactly what the company’s building. But the "Father of Alexa" sat down with Sifted to share his concerns about generative models, why god-like artificial general intelligence (AGI) isn’t our most pressing worry and some sly hints about what he’s working on now.


Just say 'no'

Like many founders behind big ideas, Tunstall-Pedoe says that the spark of inspiration that would later lead to Alexa came from science fiction.

“The Star Trek computer, or the computers in Blake's 7, which was the sort of sci-fi I watched as a child — all the computers spoke, you had fluid conversations with them. It is the ultimate user interface,” he says. “Part of the reason why voice assistants like Alexa are so popular is that everybody instantly knew how to use the device.”

But, while large language models (LLMs) like ChatGPT have broadened the scope of the conversations we can have with our computers, Tunstall-Pedoe says they also represent a step backwards.

“The worst possible user experience is to give the user a wrong answer, but one that the user believes looks plausible,” he says. “That is almost exactly what LLMs do by design — they produce answers to everything, they're always extremely confident and when they're wrong they produce something that looks right.”

Voice assistants like Alexa might cause frustration by telling us “Sorry, I can’t help with that,” when posed with a tricky question, but Tunstall-Pedoe believes no answer is better than a wrong answer. 

He adds that generative AI’s tendency to produce false but plausible-sounding answers — known as hallucination — “appears to be an intrinsic property” of the technology, one that will be difficult to eliminate and a “major concern”.

We are the losers

Tunstall-Pedoe isn’t ready to divulge too much about UnlikelyAI’s work, but he does say that the company is “looking to build safe, capable, explainable artificial intelligence”.

This mission echoes what the founder has to say about what he believes are the main risks for society from advanced AI — an increase in misinformation and a “world we trust less”. Think more “deepfake” content, as well as hallucinated information creeping into online search. 

He isn’t the only one trying to make more reliable models — European startups like Iris.AI, Zeta Alpha and Aleph Alpha are also working on explainable AI — but Tunstall-Pedoe seems to think LLMs might not be compatible with those aims.

LLMs' hallucination problem is compounded by the fact that no one — including the people who made them — has any concrete idea how these models formulate their responses.


This, says Tunstall-Pedoe, is because each response from a tool like ChatGPT is the product of a “formula with a trillion numbers… almost the definition of ‘not understandable’”.

“There's no real way of turning that into something that concisely explains to the user how it was generated. Citing sources is definitely possible — Bing does that — but even that hallucinates frequently as well. I don't think it's a problem that can be completely removed.”

Tunstall-Pedoe adds that he thinks the likes of Google and Microsoft are “deeply worried about this as well”, but are unlikely to slow down product rollouts, given the huge value of the online search market.

“Microsoft is on record saying that every 1% of the search market that they take from Google is worth $2bn per year in revenue to them. That is a very major commercial incentive to roll out an alternative search experience. Similarly Google, their trillion-dollar market cap is based on their search dominance,” he says. “The losers are potentially us, being exposed to a technology that will expose false information to us much more frequently than we're currently getting.”

God-like AI

Asked whether we should be worried about the risk of god-like AI, he says he personally finds it hard to see how LLMs could lead to runaway AI in the near future, but that doesn’t mean we shouldn’t take these risks seriously.

“If there's a 1% chance [AI] is going to wipe out the human race in the next 10 years, that's worth spending an awful lot of time worrying about,” he says.

For now, Tunstall-Pedoe is focused on building AI that will help us harness the benefits of the technology — finding solutions to problems like disease and the climate crisis — without the tendency to mislead us.

It’s a big challenge, but he says that if his work on Alexa taught him one thing, it’s the need to be working on something big.

“Building a startup is really tough. It's lots of hard work, it's lots of uncertainty, it's lots of risk,” he says. “If you're not working on something enormous that has a really big outcome, it's not worth it.”

Tim Smith

Tim Smith is news editor at Sifted. He covers deeptech and AI, and produces Startup Europe — The Sifted Podcast. Follow him on X and LinkedIn.