The world of artificial intelligence (AI) is primarily housed in cloud-computing facilities and rarely touches smartphones directly. When you use a tool like ChatGPT, for example, the model behind it was trained weeks or months in advance in massive AI data centers built by companies like Microsoft.
However, efforts are underway in 2024 to bridge this gap so that AI can begin to learn on personal devices, independent of the cloud. On-device training has clear benefits: the model can adapt faster using a constant stream of personalized local information, and privacy is preserved because personal data never has to leave the device for a cloud data center. On-device training could transform what neural networks can do, allowing AI to adapt to an individual's habits and learn from daily environments, among other things.
Companies like Apple and Google are developing technology to run larger neural networks locally on smartphones, enabling on-device learning despite tight hardware constraints. Researchers are exploring how to run training tasks on memory-constrained devices and how to update a neural network with new training data without involving data centers.
In addition, various initiatives address these technical challenges, notably federated learning and binary neural networks. Federated learning keeps training data on the device and shares only model updates with a central server, while binary neural networks reduce the memory and processing required for each network weight.
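To make the federated-learning idea concrete, here is a minimal sketch of federated averaging: each client trains a copy of the model on its own private data, and the server only combines the resulting weights, never seeing the raw data. This is an illustrative toy (plain logistic regression as the local model, hypothetical function names), not any vendor's actual implementation.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=5):
    """One client's local training step: logistic-regression SGD on private data."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-(data @ w)))   # sigmoid predictions
        grad = data.T @ (preds - labels) / len(labels)
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server step: average client models, weighted by dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Toy example: two clients whose data never leaves the "device".
rng = np.random.default_rng(0)
global_w = np.zeros(3)
clients = []
for n in (40, 60):
    X = rng.normal(size=(n, 3))
    y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)  # synthetic labels
    clients.append((X, y))

for _ in range(10):  # communication rounds
    updated = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updated, [len(y) for _, y in clients])
```

Only the weight vectors cross the network here; the server's `federated_average` step is the only place the clients' contributions meet, which is the core of the privacy argument behind the approach.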
Use cases for training a neural network locally have also been explored in cybersecurity and mobile-app UI development. Apple, for example, has developed methods to learn mobile-app qualities and has used its federated learning approach to train privacy-preserving neural network parameters for speech recognition.
The big takeaway is that researchers are working to move the training of neural networks out of the cloud and onto personal devices, which run on batteries and have far less memory and processing power.