Artificial intelligence — especially generative AI — is a buzzy term these days, and retailers and e-commerce providers have jumped at the chance to innovate with the technology. But it hasn’t always worked out.
With grocers facing tight profit margins and seeking ways to improve efficiency and drive down costs, AI solutions have been particularly attractive for tasks ranging from estimating the freshness of produce to prevent food waste to bolstering retail media.
Amid this ongoing evolution and trialing of new solutions, Grocery Dive is here with a reminder that learning from others can be vital to sidestep mistakes.
Here are some recent examples of companies’ AI usage that haven’t gone well.
Instacart’s unappetizing food images
The grocery technology company came under fire on Reddit for using AI-generated “pictures” of food that include quirks like a hot dog slice that resembles a tomato and an abnormally shaped chicken, Insider reported in January. Instacart confirmed to Insider that it was, in fact, using AI-generated visuals along with AI-generated recipes.
The media buzz soon followed with headlines like, “Please don’t make me eat this terrifying AI-generated food” and “Instacart Got Caught Using Gross AI Images In Place Of Real Food.”
A few days later, Insider said that Instacart had quietly taken down the weird images — but seemingly missed some.
While those images flopped, perhaps Instacart was onto something: Findings from a recent University of Oxford study suggest that consumers think AI-generated food images look tastier than real photos.
Instacart is using AI art for some of its food pics. The results are pretty gross. https://t.co/K1DEGiRK16
— Jake Swearingen (@JakeSwearingen) January 28, 2024
Rite Aid’s use of facial recognition
The retailer is in the hot seat for tapping facial recognition technology in a manner the Federal Trade Commission called “reckless.”
Rite Aid used third-party AI-powered facial recognition technology from 2012 to 2020 to identify potential shoplifters and flag other problematic behavior, but failed to enact reasonable measures to prevent harm to consumers, the FTC claimed.
Workers erroneously accused people of wrongdoing after the technology falsely flagged them as matching previously identified shoplifters or other troublemakers, the agency said in its complaint.
In a recent settlement over the FTC’s charges against Rite Aid, the retailer agreed to a five-year ban on using facial recognition technology for surveillance.
A New Zealand grocer’s recipe for disaster
Supermarket chain Pak ‘n’ Save’s use of AI on its app to create meal plans for its customers using ingredients they had on hand backfired when the technology served up a recipe that would create chlorine gas, The Guardian reported last summer.
But that’s not all: The technology also offered up “poison bread sandwiches” and mosquito-repellent roast potatoes — suggestions that definitely did not meet the supermarket’s stated intention of helping customers think of ways to save money by creatively using leftovers, the paper reported.
A Pak ‘n’ Save spokesperson told The Guardian that the supermarket would “keep fine tuning our controls” of the bot to make it safe and useful to its adult users. Pak ‘n’ Save has since appended a warning notice to the meal planner saying that the recipes are not reviewed by a human being and that the company does not guarantee “complete or balanced” meals that are “suitable for consumption,” according to Fox News.