AI and robotics are now being trained to replace human chefs

● Simple demonstration videos can be used to train a properly calibrated machine to learn and follow recipes.
● The actions of human cooks can be copied by a combination of algorithms for object recognition and physical postural analysis.
● The tasks currently mastered by the system are relatively simple, but in time online videos could be used to train it to prepare many more complex dishes.

Mixers, graters, and blenders… Although they are growing in sophistication and increasingly common in our kitchens, today’s food processors are little more than improved utensils. However, they may soon be joined by an entirely new generation of “real” cooking robots, developed by the Bio-Inspired Robotics Laboratory at the University of Cambridge, that make use of neural networks and artificial intelligence to learn and follow recipes. Last year, researchers from the laboratory presented a robot with the surprising ability to assess the saltiness of a dish at different stages of the chewing process. Their latest project, detailed in a scientific article published in June 2023, aims to create a robotic salad chef that can prepare and mix ingredients and autonomously learn new recipes. At this stage, the robot can make eight different salads composed of two or three ingredients selected from a choice of five fruits and vegetables, the idea being to demonstrate the robustness of the underlying technologies rather than to create a machine already capable of highly complex cooking tasks.

Recognition of objects and the actions of human chefs

Training for the robot was based on visual observation of human cooks. In concrete terms, this involved presenting the robot with footage of a member of the research team following recipes, which it analysed frame by frame with real-time computer-vision algorithms. For this purpose, the team made use of two existing neural networks developed for artificial intelligence research. The first, YOLO (published in 2016), was designed for the detection and recognition of objects and trained on the COCO database of realistic everyday scenes; in the context of the project, this component took charge of identifying cooking utensils and ingredients as well as their relative positions. A second algorithm, OpenPose (2018), was deployed to recognise and analyse the posture and actions of the cook, notably the movement and position of their right wrist. “The coordinates of each body part and each object were saved and formed a path when extracted from multiple frames”, explains the article. Correlating data on the position of the cook’s right hand with object recognition enabled the system to correctly identify the tool being used and to take this information into account in its training.
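To make the pipeline concrete, here is a minimal Python sketch of this pairing of object detection with wrist tracking. It is not the paper’s implementation: the ultralytics YOLO package and MediaPipe Pose stand in here for the YOLO and OpenPose models named above as more accessible analogues, and the video file name is hypothetical.

```python
# Sketch: per-frame detection of objects and the cook's right wrist,
# then matching the wrist to the nearest detected object.
import cv2
import mediapipe as mp
from ultralytics import YOLO

detector = YOLO("yolov8n.pt")                    # pretrained on COCO, as in the article
pose = mp.solutions.pose.Pose(static_image_mode=False)
WRIST = mp.solutions.pose.PoseLandmark.RIGHT_WRIST

def nearest_object(frame):
    """Return the label of the detected object closest to the right wrist."""
    h, w = frame.shape[:2]
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

    landmarks = pose.process(rgb).pose_landmarks
    if landmarks is None:
        return None                              # no cook visible in this frame
    wrist = landmarks.landmark[WRIST]
    wx, wy = wrist.x * w, wrist.y * h            # landmark coords are normalised

    best_label, best_dist = None, float("inf")
    for box in detector(frame, verbose=False)[0].boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2    # centre of the bounding box
        dist = (cx - wx) ** 2 + (cy - wy) ** 2
        if dist < best_dist:
            best_label, best_dist = detector.names[int(box.cls)], dist
    return best_label

cap = cv2.VideoCapture("demo_recipe.mp4")        # hypothetical demonstration video
trajectory, idx = [], 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    trajectory.append((idx, nearest_object(frame)))
    idx += 1
```

Run over a whole demonstration, the per-frame (frame index, object) pairs form exactly the kind of “path” the article describes.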

Saving time on video training

When analysing each new video, the system compares the actions and objects it sees with data gathered from previous videos, so that it does not have to store information on processes it already recognises, which saves training time. If it is presented with genuinely new data, however, it understands that it is dealing with a new recipe and identifies it as such.
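Below is a minimal sketch of that known-versus-new decision, assuming each demonstration has already been reduced by the vision stage to an ordered list of (action, object) steps. The similarity measure and the 0.6 threshold are illustrative choices, not the paper’s exact method.

```python
# Match a demonstration against stored recipes; register it only if new.
from difflib import SequenceMatcher

known_recipes: dict[str, list[tuple[str, str]]] = {}

def similarity(a, b):
    """Ratio of matching steps between two demonstrations (0.0 to 1.0)."""
    return SequenceMatcher(None, a, b).ratio()

def classify(demo, threshold=0.6):
    for name, steps in known_recipes.items():
        if similarity(demo, steps) >= threshold:
            return name                          # recognised: no need to re-store it
    name = f"recipe_{len(known_recipes) + 1}"
    known_recipes[name] = demo                   # genuinely new data: remember it
    return name

# Usage: a tomato-and-cucumber salad demo, then a close variation.
demo1 = [("slice", "tomato"), ("slice", "cucumber"), ("mix", "bowl")]
demo2 = [("slice", "tomato"), ("slice", "cucumber"),
         ("slice", "orange"), ("mix", "bowl")]   # one extra ingredient
print(classify(demo1))   # recipe_1 (new, so it is stored)
print(classify(demo2))   # recipe_1 (variation of the same recipe)
```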

Once their ability to identify ingredients has been perfected, these robot cooks could make use of sites like YouTube to learn a huge variety of recipes.

Rather than relying on live training scenarios, the researchers favoured a video-based approach in a bid to save time. As doctoral student and project co-author Grzegorz Sochacki explains: “I can do demos on one day and then code the rest of the system step by step afterwards. […] Also, you can run the analysis part separately so it can be done on any laptop at home or during travel.”

Separating new recipes from variations of existing ones

The system was able to recognise recipes in 94% of demonstrations. Better still: it was not confused when researchers multiplied the quantities of ingredients in known recipes by three or reversed the order in which they were prepared. Even the addition of a new ingredient (a slice of orange) to a known recipe did not throw the robot, which still understood that it was dealing with a slight variation on existing data.
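One simple way to obtain this kind of tolerance, purely as an illustration rather than the paper’s actual metric, is to compare demonstrations as unordered sets of steps: unlike the sequence comparison sketched earlier, a Jaccard score ignores both repetition and ordering.

```python
# Hypothetical order- and quantity-insensitive comparison of two
# demonstrations, each reduced to (action, object) steps.
def jaccard(a, b):
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

demo = [("slice", "tomato"), ("slice", "cucumber"), ("mix", "bowl")]

tripled = demo * 3                           # quantities multiplied by three
reordered = list(reversed(demo))             # preparation order reversed
with_orange = demo + [("slice", "orange")]   # one new ingredient added

print(jaccard(demo, tripled))       # 1.0  -> same recipe
print(jaccard(demo, reordered))     # 1.0  -> same recipe
print(jaccard(demo, with_orange))   # 0.75 -> still above a typical threshold
```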

However, the robot did detect new recipes when there were extensive differences from previous demonstrations, and although it was not the aim of the project, it even managed to prepare some of them using a robotic arm and a vegetable slicer. As Grzegorz Sochacki points out, the system still has many limitations: “Our robot isn’t interested in the sorts of food videos that go viral on social media – they’re simply too hard to follow.” However, he adds, “as these robot chefs get better and faster at identifying ingredients in food videos, they might be able to use sites like YouTube to learn a whole range of recipes.”


