Recently, we reported that the day has come when machines build other machines. To be more concrete, it is the beginning of an era in which artificial intelligence systems can build other artificial intelligence systems. Google’s AutoML project has made this advance a reality by designing a computer vision system that outperforms the most cutting-edge human-designed models.
It was in May of this year that researchers at Google Brain announced the creation of this initiative: a machine learning algorithm that learns to build other machine learning algorithms.
The intention was to see whether an artificial intelligence could create another artificial intelligence without human intervention, with the ultimate aim of making these technologies easier to deploy more widely.
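The core idea behind this kind of automated machine learning can be illustrated with a minimal sketch: a search loop proposes candidate model configurations and keeps the one that scores best. Everything here is an assumption for illustration; the search space, the toy scoring function, and the random-search strategy are stand-ins, not Google’s actual AutoML system, which uses far more sophisticated controllers and trains a real "child" network for every candidate.

```python
import random

# Hypothetical search space for a small vision model (assumed values).
SEARCH_SPACE = {
    "layers": [2, 4, 8],
    "filters": [16, 32, 64],
    "kernel": [3, 5],
}

def evaluate(config):
    """Toy stand-in for 'train the candidate model and measure its
    validation accuracy'. Deeper, wider configs score slightly higher."""
    return 0.5 + 0.01 * config["layers"] + 0.001 * config["filters"]

def automl_search(trials=20, seed=0):
    """Random search: sample configurations, keep the best-scoring one."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(trials):
        config = {key: rng.choice(opts) for key, opts in SEARCH_SPACE.items()}
        score = evaluate(config)
        if score > best_score:
            best, best_score = config, score
    return best, best_score

best, score = automl_search()
```

In practice the expensive step is `evaluate`, since each call means training a full network; the research effort goes into making the search smarter than the random sampling shown here.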
Few humans are capable of developing such systems, and those who can are highly coveted; projects like this would help bring artificial intelligence to many more fields and companies much more quickly. Moving slowly, by contrast, would pose a great risk to AI itself, according to experts such as Dave Heiner, an advisor at Microsoft, since part of its success depends on broad adoption.
If machines can now build other machines, it was only a matter of time before they developed an awareness that goes beyond simply performing the task for which they were programmed. One of the first examples of this evolution could be Vestri.
This robotic arm, developed by a group of engineers at the University of California, Berkeley, has the ability to “imagine”: to project the future consequences of its own actions.
Using this capability to preview the immediate future, Vestri can predict what its cameras will record before it executes a specific sequence of movements, and use those predictions to determine how to manipulate objects without failure.
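The planning loop this describes can be sketched in miniature: “imagine” the outcome of each candidate action sequence with a predictive model, then pick the sequence whose predicted end state is closest to the goal. This is only an illustrative toy; the real system predicts camera images with a learned video-prediction model and uses a sampling-based optimizer, whereas here a hand-written displacement function and exhaustive enumeration over a tiny action space stand in for both.

```python
import itertools

def predict(state, action):
    """Toy stand-in for the learned video-prediction model: an action
    simply displaces the object's (x, y) position on the table."""
    return (state[0] + action[0], state[1] + action[1])

def rollout(state, actions):
    """'Imagine' the end state of a whole action sequence."""
    for action in actions:
        state = predict(state, action)
    return state

def plan(start, goal, horizon=3):
    """Enumerate short action sequences and keep the one whose
    predicted outcome lands nearest the goal position."""
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1), (0, 0)]
    best_seq, best_cost = None, float("inf")
    for seq in itertools.product(moves, repeat=horizon):
        end = rollout(start, seq)
        cost = abs(end[0] - goal[0]) + abs(end[1] - goal[1])
        if cost < best_cost:
            best_seq, best_cost = list(seq), cost
    return best_seq, best_cost

seq, cost = plan(start=(0, 0), goal=(2, 1))
```

The key property is that no physical trial is needed during planning: every candidate sequence is evaluated purely in the model’s “imagination”, and only the winning sequence is executed.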
Sergey Levine, assistant professor in Berkeley’s Department of Electrical Engineering and Computer Sciences, whose lab developed the technology, said: “In the same way that we can imagine how our actions will move the objects in our environment, this method can enable a robot to visualize how different behaviors will affect the world around it. This can enable intelligent planning of highly flexible skills in complex real-world situations.”
A demonstration can be seen in the video above, where Vestri’s arm moves different items on a test table without colliding with obstacles. The robot “imagined” each of these movements before making them, achieving an accuracy close to 90%.
This technology makes it possible for Vestri to learn to perform simple tasks on its own, without human intervention, although the robot’s initial learning process was slow and complicated. Video prediction, the basis of Vestri’s “imagination”, is a field with a great deal of potential, and the university will continue to investigate its scope.