Generative AI – a new arena in phone chip design


Generative AI has exploded over the past year, with a wave of applications released to create text, photos, music and even general-purpose assistants. Smartphone and semiconductor companies are building new hardware so as not to miss the wave. Leading the way is Google's Pixel 8, while Qualcomm's Snapdragon 8 Gen 3 processor is set to launch in the coming days.

The latest sign that phone manufacturers are embracing generative AI comes from Google. The Pixel 8 series are the first smartphones able to run and process Google's generative AI foundation models directly on the device, without needing an Internet connection. The company says the on-device model is heavily cut down compared to its cloud services, but operating on the device provides more security and keeps working reliably even when a connection is not available.

SoC chip on Google phones. Photo: Android Authority

This is made possible by the Tensor G3 chip, whose Tensor Processing Unit (TPU) is much improved over last year's. The company usually keeps the inner workings of its AI chips secret, but has revealed some details, such as that the Pixel 8 runs twice as many on-device machine learning models as the Pixel 6, and that generative AI on the Pixel 8 requires 150 times more computing power than the largest model on the Pixel 7.

Google is not the only phone company building artificial intelligence into its hardware. Samsung announced earlier this month that the Exynos 2400 chipset is being developed with AI computing performance 14.7 times higher than the Exynos 2200. The company is also developing AI tools for new phones using the 2400 chip, allowing users to run text-to-image generation applications directly on the device without an Internet connection.

Qualcomm's Snapdragon chips power many of the world's leading Android smartphones, so users have high expectations for the generative AI capabilities of the Snapdragon 8 Gen 3.

Earlier this year, Qualcomm demonstrated a version of the Stable Diffusion text-to-image application running on Snapdragon 8 Gen 2 devices. This suggests that on-device image generation could be a new feature of the Gen 3 chipset, especially since Samsung's Exynos 2400 offers similar features.

Qualcomm senior director Karl Whealton said upcoming devices "can do almost anything you want" if their hardware is powerful, efficient and flexible enough. He said people often look at specific generative AI features and question whether existing hardware can handle them, and emphasized that Qualcomm's current chip lines are powerful and flexible enough to meet user needs.

Several smartphones with 24 GB of RAM were also launched this year, a signal that such capacity could be leveraged for generative AI models. "I won't speak for device manufacturers, but large RAM capacity will bring many benefits, including increased performance. The capability of an AI model is often related to the size of the model," Whealton said.

AI models are usually loaded into RAM and kept resident there, because loading them from conventional flash storage would significantly increase application load times.

"People want to reach 10-40 tokens per second. That will ensure good results, providing a conversational experience close to real life. This speed can only be achieved when the model is in RAM, which is why RAM capacity is so important," he said.
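To see why RAM capacity tracks model size, the following is illustrative back-of-the-envelope arithmetic, not a figure from the article or Qualcomm: a rough estimate of how much memory a language model's weights occupy at common quantization levels, assuming weights dominate the footprint.

```python
def model_memory_gb(num_params_billion: float, bits_per_weight: int) -> float:
    """Approximate weight memory in GB for a model of the given size."""
    bytes_per_weight = bits_per_weight / 8
    return num_params_billion * 1e9 * bytes_per_weight / (1024 ** 3)

# Compare a few hypothetical model sizes and quantization levels.
for params_b in (3, 7, 13):
    for bits in (16, 8, 4):
        gb = model_memory_gb(params_b, bits)
        print(f"{params_b}B params at {bits}-bit ~= {gb:.1f} GB of RAM")
```

Under these assumptions, a 7-billion-parameter model quantized to 4 bits needs roughly 3.3 GB just for weights, which can stay resident on a phone with 12-24 GB of RAM, while the same model at 16 bits needs around 13 GB and would not.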

Close-up of Snapdragon 8 Gen 2 chip. Photo: Qualcomm

However, this does not mean that smartphones with low RAM will be left behind.

"On-device generative AI will not set a minimum RAM requirement, but the amount of RAM will be proportional to how much the features are enhanced. Phones with little RAM will not be out of the game, but generative AI results will be much better on products with large RAM capacity," Whealton commented.

Qualcomm communications director Sascha Segan proposed a hybrid approach for smartphones that cannot accommodate large AI models on the device: they can run small models locally, then compare and validate the results against large models in the cloud. Many AI models are also being miniaturized or quantized, allowing them to run on mid-range and older phones.
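A minimal sketch of the kind of hybrid routing Segan describes, with entirely hypothetical function names, confidence values and thresholds: try a small on-device model first, and only defer to a large cloud model when the local result is not good enough and a connection is available.

```python
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    confidence: float  # 0.0-1.0, self-reported by the local model

def run_local_model(prompt: str) -> Answer:
    # Stand-in for a small, quantized model running on the phone itself.
    return Answer(text=f"[on-device draft for: {prompt}]", confidence=0.6)

def run_cloud_model(prompt: str) -> str:
    # Stand-in for a large model hosted in the cloud.
    return f"[cloud answer for: {prompt}]"

def answer(prompt: str, network_available: bool, threshold: float = 0.7) -> str:
    local = run_local_model(prompt)        # always try on-device first
    if local.confidence >= threshold or not network_available:
        return local.text                  # good enough, or offline
    return run_cloud_model(prompt)         # defer to the larger model

print(answer("Summarize this page", network_available=True))
```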

According to experts, generative AI models will play an increasingly important role on upcoming mobile devices. Most phones still rely on the cloud, but on-device processing will be key to improving security and expanding functionality. That requires more powerful chips, more memory and smarter AI model compression.

Diep Anh (according to Android Authority)
