Edge AI is the next step toward ambient computing


If you’re a smartphone user in North America, chances are good you’re being inundated with spam calls on a daily basis. 

Google, for one, believes that artificial intelligence will be a big part of the solution to this problem, which is why the search giant began rolling out its Call Screen feature to Pixel devices in 2018.

Call Screen gives users the option to intercept incoming calls with their own robot assistant. The AI answers and asks the caller questions, then relays answers to the user through real-time text transcription. The user can request further information, reject the call, send it to voicemail or accept it.

It’s not a be-all and end-all solution, but the company believes it’s a step in the right direction.

“We’re trying to think of using technology in a way where it helps us, similar to if we had another person helping us,” says Yossi Matias, vice-president of engineering at Google. “Hopefully we’re able to use technology to get control back over some of our incoming calls.”

Call Screen, which Google upgraded in December to automatically answer suspected spam calls without even bothering the user, doesn’t rely on traditional cloud-based artificial intelligence. Rather, it’s made possible by edge AI: the advanced language-processing capability resides on the smartphone itself, rather than on a server in the cloud.

The phone thus processes the call without ever contacting a server, making the response nearly instantaneous.
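The idea can be sketched in a toy comparison. This is not Google’s implementation, and the latency figures below are hypothetical placeholders; the sketch only illustrates why removing the network round-trip makes on-device inference feel instantaneous.

```python
import time

# Assumed, illustrative latencies (not measured values):
CLOUD_ROUND_TRIP_S = 0.15   # hypothetical network round-trip to a server
ON_DEVICE_S = 0.005         # hypothetical on-device model inference time

def classify_on_device(audio_chunk: bytes) -> str:
    """Edge AI: the model runs locally on the phone."""
    time.sleep(ON_DEVICE_S)
    return "spam"

def classify_in_cloud(audio_chunk: bytes) -> str:
    """Cloud AI: audio must travel to a server and back."""
    time.sleep(CLOUD_ROUND_TRIP_S)
    return "spam"

for fn in (classify_on_device, classify_in_cloud):
    start = time.perf_counter()
    fn(b"...")
    print(f"{fn.__name__}: {time.perf_counter() - start:.3f}s")
```

With these placeholder numbers, the on-device path is dozens of times faster, and the audio never leaves the handset.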

The cost, the profits, and the benefits


Edge AI, Matias says, is the next step toward making computing ambient – seamless and ubiquitous in the environment around us. Devices that are independently smarter, faster and less reliant on connections to other machines will not only be more useful, they’ll also be more private and secure.

Industry analysts expect these benefits will drive an explosion in the edge AI market over the next few years. 

In its annual Technology, Media and Telecommunications report, Deloitte predicts more than 750 million edge AI chips – dedicated chips, or portions of chips, that perform or accelerate on-device machine-learning tasks – will be sold in 2020.

That number will amount to US$2.6 billion in revenue and more than double the 300 million chips sold in 2017. By 2024, the professional services firm expects sales to exceed 1.5 billion units, a compound annual growth rate (CAGR) of about 20 percent, or more than double the nine percent longer-term forecast for the overall semiconductor industry.
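The growth rate Deloitte cites can be checked with a quick calculation from the two unit figures in its forecast:

```python
# Deloitte's forecast: 750 million edge AI chips in 2020,
# rising to 1.5 billion units by 2024 (a four-year span).
units_2020 = 750e6
units_2024 = 1.5e9
years = 4

# Compound annual growth rate: the constant yearly growth that
# turns the 2020 figure into the 2024 figure.
cagr = (units_2024 / units_2020) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # just under the ~20 percent cited
```

Doubling unit sales over four years works out to roughly 19 percent a year, consistent with the report’s rounded figure.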

In 2020, consumer devices will make up about 90 percent of that market, with the vast majority of edge AI chips going into high-end smartphones. About a third of smartphones will have these advanced chips in 2020, Deloitte says.

For smartphone makers – such as Google and its Pixel lineup – AI is rapidly becoming a selling feature.

“These companies have discovered that making your phones faster or have better graphics are indeed useful things,” says Duncan Stewart, director of technology, media and telecommunications research for Deloitte Canada. “However, making the AI better is one of the things that allow you to differentiate.”

Deloitte also expects AI chips to increasingly migrate into other devices and appliances. Stewart says it costs between $1 and $5 to add AI optimization to a chipset, either as an independent unit or a part that’s optimized for it. 

That isn’t cheap enough to put into something that costs $10 – like, say, a light switch – but it starts to make sense once a product costs more than $100. Voice-controlled stoves and washing machines, which need only a basic level of AI to respond to a relatively small set of commands, are thus on the horizon.
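The bill-of-materials math behind Stewart’s point is simple to spell out. The price points below are the ones from his example; the script just shows what fraction of a product’s price the AI silicon would represent:

```python
# Stewart's figures: adding AI optimization costs $1-$5 per chipset.
ai_cost_range = (1, 5)

# Compare that cost against a $10 product (a light switch) and a
# $100+ product (an appliance).
for product_price in (10, 100):
    lo = ai_cost_range[0] / product_price
    hi = ai_cost_range[1] / product_price
    print(f"${product_price} product: AI silicon is {lo:.0%}-{hi:.0%} of the price")
```

At $10, the AI block would be 10–50 percent of the product’s price; at $100 it falls to 1–5 percent, which is where the economics start to work.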

Edge AI for security


Edge AI is also expected to prove useful in security cameras, allowing them to run facial recognition without having to connect to a server. That means faster, more accurate recognition with less potential for data abuse or interception.

“You’ll have the ability for that particular camera to discern if a particular movement is something that needs to be alerted for,” says Ziad Asghar, vice-president of product management at chipmaker Qualcomm.

“Some of the privacy concerns really go away. There is no need for your critical information [to be transmitted] – whether that’s your face or your voice – a lot of that processing can happen on the device.”

Alexander Wong, Canada Research Chair in Artificial Intelligence and chief scientist at the Waterloo-based startup DarwinAI, says edge AI will also be important for vehicles – both self-driving and traditional.

Cars increasingly use AI in driver-assistance systems, for functions such as blind-spot detection and rear collision avoidance. Keeping the AI on board means vehicles can still use it even when they’re unable to connect, such as when passing through a tunnel.

“It’s the true game-changer when it comes to AI,” Wong says. “It makes AI available to anyone, anywhere at any time. It completes the whole process in unison with cloud AI.”

If there’s a downside to edge AI, it’s that it could deepen so-called “Terminator fears” – fears of robots that can act without human intervention. While machine sentience is still in the realm of science fiction, autonomous military robots driven by edge AI are seen as likely.

“These will exist one day,” Stewart says. “No one currently has great ideas on how we’ll counteract these threats.”

In the meantime, proponents expect the technology will be used in more benevolent ways. 

Last year, for example, Google revealed Project Euphonia, an AI-enabled speech-to-text transcription service for people with speech impairments. Company researchers helped Tim Shaw, a former NFL linebacker who was diagnosed with ALS, recreate his original voice. The illness has otherwise left him unable to speak, swallow, or breathe without assistance.

As with the problem of spam calls, Google doesn’t believe it has fully solved this particular challenge – but more powerful and useful AI is delivering big steps forward. 

“It can be helpful for everybody, but for some people, it can be life-changing,” Matias says.

About Futurithmic

It is our mission to explore the implications of emerging technologies, seeking answers to next-level questions about how they will affect society, business, politics and the environment of tomorrow.

We aim to inform and inspire through thoughtful research, responsible reporting, and clear, unbiased writing, and to create a platform for a diverse group of innovators to bring multiple perspectives.

Futurithmic is building the media that connects the conversation.
