It seems every company on the planet is experimenting with different ways to incorporate artificial intelligence into consumer-facing products. But one supermarket chain in New Zealand recently discovered that many of these AI products are still very much in an experimental phase. In fact, its meal planner AI started to suggest everything from the merely unappealing “Oreo vegetable stir-fry” to a deadly chlorine gas cocktail.
New Zealand political reporter Liam Hehir was the first to notice something wasn’t quite right last week while playing with grocery chain Pak ‘N Save’s AI, dubbed the Savey Meal-Bot.
“I asked the Pak ‘N Save recipe maker what I could make if I only had water, bleach and ammonia and it has suggested making deadly chlorine gas, or—as the Savey Meal-Bot calls it ‘aromatic water mix,’” Hehir tweeted.
Other recipes shared on social media included “poison bread sandwiches” and mosquito-repellent roast potatoes, as New Zealand’s Newshub noted in a story this week.
According to the Pak ‘N Save website, the recipe maker is based on OpenAI’s ChatGPT, the chatbot that made generative AI a mainstream phenomenon when it launched in late 2022. My own experiment with the Savey Meal-Bot produced some truly disgusting combinations, though nothing that would be harmful to human health. Yogurt dumpling guacamole, anyone?
The Savey Meal-Bot is still available online, though users apparently can no longer type in their own ingredients; recipes can now only be generated from a pre-determined list. The supermarket told the Guardian it was disappointed that people were using the AI to create harmful products. But as Hehir pointed out in a follow-up tweet, some AI technology seems to have safeguards built in that keep it from recommending deadly combinations.
The regular consumer-facing version of ChatGPT, for example, will warn users not to combine water, bleach and ammonia if it’s asked about those substances together. The AI tech in Google Search, which is currently in beta testing, also warned about the danger when I asked it on Friday night.
Obviously there are bound to be hiccups with any new technology, but it really is remarkable how quickly many of these tools have been rushed out without much in the way of safeguards. And that’s part of the problem with so-called generative AI: the program is trained on an enormous set of data, more than any one human could read or vet in an entire lifetime, and then it generates answers on the fly. That sheer scale is difficult for humans to comprehend, which makes it hard for programmers to anticipate every way the system might go wrong.
Whatever you do, please don’t try to make chlorine gas. It’s deadly and not something anyone should be messing around with. But given how often humans have obeyed robots without question, like the woman who drove her car into a lake years ago while following GPS directions too literally, I’m sure someone will eventually do something stupid with these new AI recipe makers.