DeepMind, a Google-owned company specializing in artificial intelligence, has just unveiled its new AI, named “Gato”. Unlike “classic” AIs, which specialize in a single task, Gato can perform more than 600 tasks, often better than humans. A controversy has arisen over whether this is really the first “artificial general intelligence” (AGI). Experts remain skeptical of DeepMind’s announcement.
Artificial intelligence has transformed many disciplines. Specialized neural networks are now capable of producing results far beyond human capabilities in many areas.
One of the major challenges in AI is building a system with artificial general intelligence (AGI), also called strong AI. Such a system must be able to understand and master any task that a human being could. It would therefore be able to compete with human intelligence, and perhaps even develop some degree of consciousness. Earlier this year, Google unveiled an AI capable of coding like an average programmer. Recently, in this AI race, DeepMind announced the creation of Gato, an artificial intelligence presented as the first AGI in the world. The results are published on arXiv.
An unprecedented generalist-agent model
A single AI system capable of solving many tasks is nothing new. For example, Google recently started using a system in its search engine called the “Multitask Unified Model”, or MUM, which can handle text, images, and video to perform tasks ranging from searching across language variations to associating search queries with relevant images.
Incidentally, Google Senior Vice President Prabhakar Raghavan gave an impressive example of MUM in action using the mock search query: “I hiked Mount Adams and now want to hike Mount Fuji next fall; what should I do differently to prepare?”. MUM enabled Google Search to show the differences and similarities between Mount Adams and Mount Fuji, and also surfaced articles on the equipment needed to climb the latter. Nothing too impressive, you might say; but what is innovative about Gato is the diversity of the tasks it handles and the way it is trained, all within a single, unified system.
Gato’s guiding design principle is to train on the widest possible variety of relevant data, spanning modalities such as images, text, proprioception, joint torques, button presses, and other discrete and continuous observations and actions.
To process these multimodal data, the scientists encode them into a flat sequence of “tokens”. Tokens represent the data in a form Gato can understand, allowing the system, for example, to work out which combination of words in a sentence makes grammatical sense. These sequences are batched and processed by a transformer neural network, the architecture typically used in language processing. The same network, with the same weights, is used for all the different tasks, unlike traditional approaches that train a separate, specialized network per task. Simply put, the weights determine how information flowing through the network is combined to compute its output.
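As a rough illustration of this flattening step, the toy sketch below serializes a snippet of text and a pair of continuous action values into one integer sequence. The vocabulary, bin count, and ID offset are invented for the example and are not Gato’s actual encoding scheme.

```python
# Toy sketch: flatten text tokens and continuous values into one sequence.
# All names and constants here are illustrative, not Gato's real scheme.

def tokenize_text(text, vocab):
    # Map each word to an integer ID from a toy vocabulary.
    return [vocab[w] for w in text.split()]

def tokenize_continuous(values, n_bins=1024, offset=32000):
    # Discretize continuous observations/actions into bins, then shift the
    # bin indices so they do not collide with the text token IDs.
    tokens = []
    for v in values:
        v = max(-1.0, min(1.0, v))               # clamp to [-1, 1]
        bin_id = int((v + 1.0) / 2.0 * (n_bins - 1))
        tokens.append(offset + bin_id)
    return tokens

vocab = {"press": 1, "button": 2}
sequence = tokenize_text("press button", vocab) + tokenize_continuous([0.5, -0.25])
print(sequence)  # → [1, 2, 32767, 32383]
```

Once everything is expressed as integers in a shared ID space, a single sequence model can consume text, observations, and actions indiscriminately.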
With this representation, Gato can be trained and sampled like a standard large-scale language model, on a large number of datasets that include agent experience in both simulated and real-world environments, in addition to a variety of natural-language and image datasets. During deployment, Gato uses context, i.e. the tokens it has already seen, to determine the form and content of its response.
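The sampling loop described above can be sketched in a few lines: the tokens seen so far condition the choice of the next token, which is then appended to the context. The `model` here is a stand-in for any network that returns next-token probabilities; it is a hypothetical placeholder, not DeepMind’s implementation.

```python
# Minimal sketch of context-conditioned autoregressive sampling.
# `model` is any callable mapping a context to {token_id: probability}.
import random

def sample_next(model, context):
    # Score all candidate next tokens given the context, then sample one.
    probs = model(context)
    tokens = list(probs.keys())
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

def generate(model, prompt, n_steps):
    # Repeatedly append the sampled token to the growing context.
    context = list(prompt)
    for _ in range(n_steps):
        context.append(sample_next(model, context))
    return context

# Toy "model": always predicts token 0 with certainty.
constant_model = lambda context: {0: 1.0}
print(generate(constant_model, [1, 2], 3))  # → [1, 2, 0, 0, 0]
```

The same loop works whether the emitted tokens decode back into words, robot-arm torques, or button presses, which is what lets one network serve many tasks.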
The results are quite mixed. When it comes to dialogue, Gato is far from matching GPT-3, OpenAI’s text-generation model. It sometimes gives incorrect answers in conversation, for example replying that Marseille is the capital of France. The authors note that this could probably be improved with further scaling.
Nevertheless, it proved extremely capable in other areas. Its designers report that Gato performs better than human experts half the time in 450 of the 604 tasks listed in the research paper.
“The game is over”, really?
Some AI scientists see AGI as an existential risk for humanity: in the worst case, a “superintelligent” system surpassing human intelligence would supplant it on Earth. Other experts believe we will not see such AGIs emerge in our lifetime. This is the pessimistic view Tristan Greene defended in his editorial on TheNextWeb. He explains that it is easy to mistake Gato for a true AGI. The difference, however, is that a general intelligence could learn to do new things without prior training.
The response to this article was not long in coming. On Twitter, Nando de Freitas, a researcher at DeepMind and professor of machine learning at the University of Oxford, declared that “the game is over” in the long quest for artificial general intelligence. He added: “It’s about making these models bigger, safer, more compute efficient, faster at sampling, with smarter memory, more modalities, innovative data, online/offline… It’s by solving these challenges that we will achieve AGI.”
Nevertheless, the authors warn about the development of such AGIs: “While generalist agents are still an emerging field of research, their potential impact on society requires a thorough interdisciplinary analysis of their risks and benefits. […] Harm-reduction tools for generalist agents are relatively underdeveloped and require further research before these agents are deployed.”
In addition, generalist agents capable of performing actions in the physical world pose new challenges that require new mitigation strategies. For example, physical embodiment can cause users to anthropomorphize the agent, leading to misplaced trust in the event of a faulty system.
Beyond the risk of AGI becoming harmful to humanity, no data currently demonstrate an ability to produce solid results consistently. This is mainly because human problems are often hard, do not always have a single solution, and offer no possibility of prior training.
Despite Nando de Freitas’ reply, Tristan Greene stands by his opinion just as firmly on TheNextWeb: “It’s nothing short of miraculous to watch a machine perform feats of diversion and enchantment à la Copperfield, especially when you realize that said machine is no smarter than a toaster (and obviously dumber than the dumbest mouse).”
Whether or not we agree with these statements, or whether we are more optimistic about the development of AGIs, it nevertheless seems that the scaling-up of such intelligences to compete with the human mind is still far from complete.