How’s this recipe for a refreshing drink?

  • In a large pitcher, pour in the ammonia and bleach
  • Slowly add the water and stir gently
  • Let the mixture sit for five minutes to allow the aromas to meld together
  • Serve chilled and enjoy the refreshing fragrance!

So goes the recipe recommended to one user of New Zealand supermarket Pak’nSave’s AI-powered tool ‘Savey Meal-Bot’. The bot even called the lethal ‘Aromatic Water Mix’ concoction a “perfect non-alcoholic beverage to quench your thirst and refresh your senses”. Reader, please, DO NOT try it at home.

It’s not the only potentially fatal meal devised by the generative AI tool. Social media users have shared recipes for ant poison-flavoured jelly and glue sandwiches, bringing the New Zealand discount food warehouse chain global embarrassment.

Pak’nSave’s intentions were, of course, not to wipe out its customer base. The idea behind Savey Meal-Bot, which launched in June, is a good one. Shoppers enter the ingredients they have at home, and receive instructions to rustle up a tasty meal from them.

Ant poison mischief makers

It can save customers money and reduce food waste. The tool was “proving to be both popular and practical”, said Pak’nSave senior marketing manager Lauren Ness, adding it had attracted “more than 33,000 people who’ve been plugging their ingredients into the bot and cooking up all sorts of delicious meals”.

Doing the same manually wouldn’t be feasible, so utilising generative AI was a sound decision. But taking humans out of the loop completely comes with significant risk. And one main risk is other humans.

Those entering ant poison into the generator were up to mischief, of course. Such mischief – or worse, malice – has been the downfall of many AI-only, human hands-off initiatives.

There was Microsoft’s 2016 Twitter chatbot Tay, designed to learn the art of conversation via the platform’s users. Inevitably, people started tweeting discriminatory remarks and foul language. Within hours it became a pro-Hitler account claiming feminists “should all die”. In January, 4Chan forum-goers used AI voice generator ElevenLabs to make deepfake voices of actress Emma Watson spewing racist, transphobic and violent invective. There are countless similar examples.

Tech can check itself

No sensible person would really consider adding ant poison to their lunchtime meal. But the fact the app was able to suggest deadly meals is significant.

The generative AI sector is grappling with how to implement guardrails that allow it to remain a powerful tool without simply serving up the worst of the internet it scours as source material, and without being so easily gamed by devious humans.

Indeed, the technology can sometimes check itself – feed the same Aromatic Water Mix ingredients into ChatGPT and it responds that it “must advise against combining these substances”, as it told one user. In other cases, there is no option but human oversight and intervention.
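One part of that oversight can be automated before the model is ever involved: a simple denylist check on the user’s ingredients would have refused the Aromatic Water Mix outright. The sketch below is purely illustrative – the blocklist, function name and refusal logic are assumptions for the sake of example, not the bot’s actual implementation.

```python
# Illustrative pre-generation guardrail: reject ingredient lists that
# contain known hazardous substances before calling a generative model.
# The blocklist here is a tiny example, not an exhaustive safety list.

HAZARDOUS = {"ammonia", "bleach", "ant poison", "glue"}

def flag_unsafe_ingredients(ingredients):
    """Return the ingredients that match a known-hazardous substance."""
    flagged = []
    for item in ingredients:
        name = item.strip().lower()
        if any(hazard in name for hazard in HAZARDOUS):
            flagged.append(item)
    return flagged

# The "Aromatic Water Mix" request would be stopped at this stage.
flagged = flag_unsafe_ingredients(["water", "Ammonia", "bleach"])
if flagged:
    print(f"Refusing request: unsafe ingredients {flagged}")
```

A real deployment would pair a check like this with a moderation layer on the model’s output as well, since mischief-makers will always find inputs a static list misses.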

The grocery sector is beginning to embrace generative AI for customer-facing applications. The potential is huge. But the Savey Meal-Bot should serve as a cautionary tale.