The modern cook lives in a golden age of recipes. Anyone looking to make a particular dish, explore a certain cuisine or cook for a specialized diet has thousands of recipes at their fingertips.
Now, though, they might encounter some that were created not by a professional recipe developer, or even an enthusiastic home cook. Recipes generated by artificial intelligence are increasingly popping up — and following, or trying to follow, them might lead to unexpected results, and not necessarily in a good way.
Last month, grocery-delivery service Instacart drew horrified reactions for its AI-generated recipes after a report in the tech publication 404 Media. Some included ingredients that didn’t exist (what, people wondered, is “monito sauce”?) and stomach-churning images (a dubious “hot-dog stir-fry” depicted the interior of a sausage that more closely resembled that of a tomato).
Apps are springing up promising to make AI an indispensable kitchen tool. And food brands such as Heinz and Avocados From Mexico are seeking to integrate the technology into their websites.
Curious cooks are tinkering with AI on their own, too, using systems such as OpenAI’s ChatGPT and Google’s Gemini. Ethan Mollick, a University of Pennsylvania professor known for his viral experiments with AI systems, said writing new recipes — along with creating children’s stories and birthday cards — has become one of the most common ways people start out using the technology. “It is both an illustration of the power of AI and one of the worst use cases,” he said. “That doesn’t mean it doesn’t work. But your chance of weirdness and nonsense is very high.”
That may be, experts say, because AI is not as smart as we expect it to be — especially about food. You might think you’re asking it a question and it’s answering you, said Emily Bender, a University of Washington computational linguist who studies AI. But all it’s doing is spitting out a sequence of words similar to what it has “read” before.
Thanks to years of blogs and social media posts, the internet is loaded with recipe webpages and food photos, allowing AI models to offer up facsimiles that seem close to right. But because they are designed to match two-dimensional patterns of words and colors, they can’t suss out all the multisensory experiences that make good food great. They also can’t check their work, given their lack of noses and taste buds.
Bender noted that an AI system might have often come across, for example, chicken and beef in similar contexts. “So when the language model is accessing the parts of its training data that have to do with recipes, it doesn’t have a good way of differentiating between things,” she said. “You talk about beef being rare or well done, but not chicken — but that could come out.”
Undercooked chicken is just one potential hazard. “It is also very capable of putting together things that could be poisonous, or harmful, or interact with people’s medications and have no way of flagging that or noticing that,” said Margaret Mitchell, a computer scientist at Hugging Face, a prominent open-source AI start-up. Last year, a New Zealand supermarket made news when a chatbot it offered customers to help them use up the ingredients they had on hand provided recipes for “dishes” including toxic chlorine gas, “bleach-infused rice surprise” and turpentine-flavored French toast. And AI-produced books on mushroom foraging being sold on Amazon were found to include potentially deadly advice.
While some AI-generated recipes have obvious and possibly lethal problems, others might have more subtle issues. For example, many recipes generated on the Instacart website don’t list ingredients in the order that they are used, a technique that most well-crafted recipes employ to make them easier to follow. They don’t always offer readers cooking times or visual cues for different steps, possibly leaving less experienced cooks confused. And serving sizes can be off: One serving of “Cowboy Steak with Herb Butter” — a two-inch-thick rib-eye steak — would probably feed at least a couple of cowpokes.
One problem with these bot-generated recipes is that the people who need help the most — that is, novice cooks — are the least likely to spot their shortcomings.
Things are even weirder when it comes to food images. AI generators tend to invent dishes that are physically impossible, with disappearing ingredients and gravity-defying shapes. Food-delivery apps DoorDash and Grubhub have been using the technology to sometimes laughable effect, with reports calling out photos of shrimp with two tails or a pizza described as a “pie” accompanied by an image of a sweet dessert. And that cowboy steak from Instacart? It looked like a Frankensteinian butcher had pieced it together, with mismatched grill marks.
Just as with political news, there has always been a hierarchy when it comes to recipes. There are mainstream, legacy publications (including The Washington Post) as well as leading specialty sources, such as Bon Appétit and Cook’s Illustrated, whose recipes are thoroughly tested and carefully constructed. Cooks can also find vetted recipes on a handful of well-respected sites such as Serious Eats or Food52. Cookbook authors are often a go-to source for competently created offerings. Then there are the legions of food blogs and influencers, some of whom offer high-quality recipes and others whose content is dubious at best.
The internet tends to flatten all that, though; a casual user might not know the difference between a trusted source and a slapdash amateur. Plenty of cooks grabbing the first recipe that pops up from a search might soon discover that just because a human created a recipe, that doesn’t mean it’s any good. Couldn’t AI at least do better than that?
David Eastwell, a British physicist who has a side hustle as a stock photographer, started tinkering around with generative AI tools during pandemic lockdowns and quickly realized he could create photographs as good as the ones he’d shot himself. Curious about what more the technology could do, he set out to create an entire website and picked cooking as his topic, just for fun. “I’m actually a very average cook,” Eastwell said. The resulting product, the Air Fryer Chef, offers recipes for such dishes as nachos and soy-marinated cod, all of which Eastwell generated using AI tools.
Eastwell said he runs the site as an experiment. He was interested in the technology’s potential as well as its limitations. AI can create workable recipes, he said, but it can also go awry — and readers should be careful, just as they are with the stuff flesh-and-blood people come up with. “You can go online and look for a recipe for roasted potatoes, and you might find one from Jane’s Cookery School, and maybe you don’t know who Jane is, but because it’s made by a person, there’s an implicit trust written into it,” he said. “It still could be rubbish.”
Mollick assesses AI-generated recipes and other creations by what he calls the “best available human” standard: The AI probably won’t beat the creation of a trained recipe developer, but an amateur cook might find something to like, even love, based on what an AI trained on the world’s recipes might craft. When Wired magazine in January ran a test in which bar patrons were given two cocktails — one from a bartender, the other from AI — based on their preferred flavors, half the patrons couldn’t tell which was man-made. Some even preferred the AI mixologist’s drink.
Some users are turning to AI for highly tailored recipes, whether for specialized diets or to combine specific ingredients they have on hand. One example is DishGen, a subscription service that offers an AI recipe-generation tool for $8 a month and promotes itself as helping reduce food waste. Still, experts say users should exercise caution: DishGen’s recipes come with a disclaimer saying that the company “has not verified it for accuracy or safety” and that people should use their “best judgment when making AI-generated dishes.”
Heather John Fogarty, a food writer and recipe developer who teaches journalism at the University of Southern California, is confident that bots will never be able to replace humans when it comes to telling other humans how to prepare food. For one, she noted, there are often factors she takes into consideration that a robot wouldn’t. Scaling a restaurant recipe that serves 40 people to be appropriate for a home cook isn’t just a matter of simple division, she said. It requires considering the differences between commercial and home equipment, and it demands an understanding of the ingredients — something that AI just can’t replicate.
More importantly, she said, there’s something intangible about the creativity and sensory awareness that goes into a recipe. “The question is why do we still buy cookbooks or look to publications like The Washington Post for food content? And the answer is the human element,” Fogarty said. “You can’t underestimate context. There is a true art to a recipe headline or a recipe note that I don’t see AI being able to generate.”
Mitchell sees more novelty than utility in AI-generated recipes. Companies might be rushing to embrace the technology, offering features on their websites or developing apps that promise to make AI as much a part of people’s kitchens as their ovens, she noted, when there are already options such as tools that can sort through preexisting human-created recipes and pick out the most suitable one rather than create a new one. “Now there’s a hammer, and so people are saying everything is a nail,” she said. “There are so many other ways to have automated recipe generation that are much, much more likely to work well. Generative AI should not be your starting point.”