Google has introduced a new AI model that allows smartphones to be controlled with natural language. The work was presented in a paper at the Association for Computational Linguistics (ACL) 2020 conference.
The researchers propose this method for training such models and expect it to be especially useful for people with visual impairments.
The researchers created a source dataset of commands to help the AI interact with devices. The models handle an incoming request, predict a sequence of in-app actions, and identify the screens and interactive elements needed to navigate from one screen to another.
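As an illustration of the pipeline described above, the sketch below grounds a natural-language command into a sequence of UI actions, each tied to a screen and an interactive element. This is a toy rule-based stand-in, not Google's actual model; all names (the `UIAction` fields, the command table, the `ground` function) are hypothetical.

```python
# Toy sketch of "grounding" a command into UI actions.
# NOT Google's model: a hand-written lookup standing in for the
# learned mapping from language to (screen, element, action) steps.

from dataclasses import dataclass

@dataclass
class UIAction:
    screen: str   # screen the action happens on
    element: str  # interactive element to operate
    action: str   # e.g. "tap", "toggle"

# Hypothetical mapping from normalized commands to action sequences.
COMMANDS = {
    "turn on wi-fi": [
        UIAction("home", "settings_icon", "tap"),
        UIAction("settings", "network_menu", "tap"),
        UIAction("network", "wifi_switch", "toggle"),
    ],
}

def ground(command: str) -> list[UIAction]:
    """Return the predicted action sequence for a command, or []."""
    return COMMANDS.get(command.strip().lower(), [])

if __name__ == "__main__":
    for step in ground("Turn on Wi-Fi"):
        print(f"{step.screen}: {step.action} {step.element}")
```

In the real system this lookup is replaced by a learned model that generalizes to unseen commands and UI layouts; the sketch only shows the shape of the input and output.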
Using AI, they created three sets of instructions for multi-step tasks on smartphones. In addition, the researchers have collected almost 300,000 single-step commands tied to user-interface elements, which work on almost all Android devices.
The scientists said that in their experiments the AI translated users' natural speech into actions with 89.21% accuracy. However, as speech patterns grew more complex, or when artificial interference was introduced while commands were spoken, accuracy fell sharply to 70.59%. Google is confident the model will come to handle its task better.
The researchers will release all datasets, models, and results publicly on GitHub. They invite other scholars to participate and hope this will be a first step toward solving device control with natural language.