Whether you realize it or not, you probably benefit from machine learning on your phone every day. Where could the technology go next?
How on-device machine learning is changing the way we use mobile phones
Smartphone chipsets have come a long way since the early days of Android. While most budget phones were woefully underpowered just a few years ago, today's mid-range smartphones perform on par with flagships from one or two years ago.
Now that the average smartphone is more than capable of handling everyday tasks, chipmakers and developers have set higher goals for themselves. Seen in this light, it's clear why auxiliary technologies such as artificial intelligence and machine learning (ML) are now taking center stage. But what does on-device machine learning mean, especially for end users like you and me?
In the past, machine learning tasks required sending data to the cloud for processing. This approach has many downsides, from slow response times to privacy concerns and bandwidth limitations. However, thanks to advancements in chipset design and machine learning research, modern smartphones can now generate predictions completely offline.
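To make that tradeoff concrete, here is a deliberately simplified sketch in Python. The "model" is a made-up three-weight classifier, not any vendor's actual pipeline, and the cloud round trip is simulated with a sleep; the point is simply that the on-device path runs the same math locally, with no network delay and no data leaving the phone.

```python
import math
import time

# Toy "model": weights for a 3-feature binary classifier. In a real app,
# these would come from a trained, quantized model file shipped on-device.
WEIGHTS = [0.8, -0.4, 0.3]
BIAS = 0.1

def predict_on_device(features):
    """Run inference locally: no network, no data leaves the phone."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid -> probability

def predict_via_cloud(features, network_delay_s=0.15):
    """Simulate the old approach: upload data, wait for the server."""
    time.sleep(network_delay_s)          # simulated round-trip latency
    return predict_on_device(features)   # server runs the same math

features = [1.0, 2.0, 0.5]

start = time.perf_counter()
local = predict_on_device(features)
local_ms = (time.perf_counter() - start) * 1000

start = time.perf_counter()
remote = predict_via_cloud(features)
remote_ms = (time.perf_counter() - start) * 1000

print(f"on-device: {local:.3f} in {local_ms:.1f} ms")
print(f"cloud:     {remote:.3f} in {remote_ms:.1f} ms")
```

Both paths produce the identical prediction; only the latency (and where the data travels) differs, which is exactly why chipmakers invested in running the math locally.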
To understand the implications of this progress, let’s look at how machine learning can change the way we use smartphones every day.
The birth of on-device machine learning: better photos and text prediction
In the mid-2010s, the industry raced year after year to improve camera image quality. That, in turn, became a key driving force for the adoption of machine learning, which helps bridge the gap between smartphones and dedicated cameras, even though the former have far inferior hardware.
For this reason, almost every major technology company began making its chips more efficient at machine learning tasks. By 2017, Qualcomm, Google, Apple, and Huawei had all released SoCs or smartphones with accelerators designed for machine learning. Cameras have improved in leaps and bounds since then, particularly in dynamic range, noise reduction, and low-light photography.
More recently, manufacturers such as Samsung and Xiaomi have found new use cases for the technology. The former's Single Take feature, for instance, uses machine learning to automatically turn a single 15-second video clip into a gallery of high-quality shots. Xiaomi, meanwhile, has gone from simply recognizing objects in the camera app to replacing the entire sky in a photo with one of your choosing.
Many Android original equipment manufacturers (OEMs) are now using machine learning on their devices to automatically tag faces and objects in smartphone galleries. This feature was previously only provided by cloud services such as Google Photos.
Of course, on-device machine learning is about far more than just photography. It's safe to say that text-based applications have been around for just as long, if not longer.
SwiftKey was arguably the first to use a neural network for better keyboard predictions, back in 2015. The company said it trained its model on millions of sentences to better understand the relationships between different words.
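SwiftKey's actual model is a proprietary neural network, but the core idea of learning word relationships from example sentences can be illustrated with a much simpler stand-in: a bigram model. This toy Python sketch (with a four-sentence corpus, versus millions in production) counts which word most often follows each word and uses that to suggest the next one:

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus; a real keyboard trains on millions of sentences.
corpus = [
    "see you later today",
    "see you soon",
    "see you later tonight",
    "talk to you later",
]

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        following[current_word][next_word] += 1

def predict_next(word):
    """Suggest the most frequent follower of `word`, or None if unseen."""
    candidates = following.get(word)
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

print(predict_next("you"))  # -> "later" ("later" follows "you" 3 of 4 times)
print(predict_next("see"))  # -> "you" ("you" always follows "see")
```

A neural network generalizes far beyond exact word pairs seen in training, but the goal is the same: rank likely next words from patterns in text.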
Another novel feature arrived a couple of years later, when Android Wear 2.0 (now Wear OS) gained the ability to predict replies to incoming chat messages. Google later brought Smart Reply to the mainstream with Android 10. You likely take the feature for granted every time you reply to a message straight from a notification on your phone.
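Google's actual Smart Reply runs an on-device neural model, but its user-facing behavior, mapping an incoming message to a few ranked quick replies, can be sketched with a simple keyword-matching stand-in. Everything here (the rule table, the reply strings) is hypothetical, purely for illustration:

```python
# Hypothetical keyword -> suggested-replies table. Google's real Smart
# Reply uses an on-device neural network, not hand-written rules.
REPLY_RULES = [
    ({"dinner", "lunch", "eat"}, ["Sounds good!", "What time?", "Can't today"]),
    ({"late", "delayed", "running"}, ["No problem", "See you soon", "OK"]),
    ({"thanks", "thank"}, ["You're welcome!", "Anytime", ":)"]),
]
DEFAULT_REPLIES = ["OK", "Got it", "Thanks!"]

def suggest_replies(message, max_suggestions=3):
    """Return up to `max_suggestions` quick replies for an incoming message."""
    words = set(message.lower().replace("?", "").replace("!", "").split())
    for keywords, replies in REPLY_RULES:
        if words & keywords:  # any trigger keyword present in the message
            return replies[:max_suggestions]
    return DEFAULT_REPLIES[:max_suggestions]

print(suggest_replies("Want to grab dinner tonight?"))
# -> ['Sounds good!', 'What time?', "Can't today"]
```

The neural version learns these associations from data instead of rules, and can suggest replies for messages it has never seen, but the interface contract (message in, a handful of tappable replies out, all computed on the device) is the same.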
While on-device machine learning has matured for text and photo prediction, speech recognition and computer vision are two areas that still see significant, impressive improvements every few months.
Take Google's instant camera translation feature, which overlays real-time translations of foreign-language text directly onto the live camera feed. Although the results aren't as accurate as the online equivalent, the feature is invaluable for travelers on limited data plans.
High-fidelity body tracking is another futuristic augmented reality feature that can be achieved with performant on-device machine learning. Imagine the Air Motion gestures on the LG G8, but infinitely smarter and applied to larger ambitions such as full-body motion tracking and sign language interpretation.