Go ahead and make your own AI recipe. It won’t go well.

In the mid-2010s, scientist and engineer Janelle Shane made a name for herself exposing the ridiculousness of neural networks. Her blog, AI Weirdness, chronicles what happens when she trains neural networks on everything from paint colors to animal names. Time and again, Shane has fed neural networks databases of recipes, only to have them spew complete nonsense. A 2017 recipe for small sandwiches included the instruction "measure 1 salad dressing." Another from that year was given the name "CAKE OF OTHER LIE AL FORNO 1993" and instructed cooks, as it heated up, to "glaze them carefully with a sauce." Shane uses her blog to show what neural networks can and can't do, and readers walk away understanding that these tools, while impressive, have nothing resembling what we know as intelligence or critical thinking. They just regurgitate patterns.

Of course, AI has come a long way since Shane's experiments in the 2010s. Now it can create recipes that can actually be followed, with the requisite stunt blogs following in its wake, all trying to answer the question of whether AI-generated recipes are any good. That question is far from settled, but it hasn't stopped tech optimists and venture capitalists with a foodie bent from throwing their hopes into the technology. Last month, BuzzFeed launched Botatouille, a chatbot that recommends recipes from the company's food vertical, Tasty. The startup CloudChef says it's using AI to digitize not just recipes but also a chef's techniques, guiding kitchen staff so that someone who doesn't know a shallot from a scallion can cook a Michelin Guide-worthy chicken pulao, as Eater SF editor Lauren Saria put it.

Despite the enthusiasm of wealthy investors, by most accounts AI-generated recipes still aren't very good. Priya Krishna wrote that an AI-generated menu supposedly designed for her tastes gave her mushy chaat and dry turkey (the recipe called for no butter or oil). Chef Donald Mullikin had to make his own changes because ChatGPT kept suggesting the wrong kind of pepper and left out salt. Recently, I participated in a chili contest where a fellow contestant raved that their bone marrow chili came from typing "bone marrow chili" into ChatGPT; the result was bland and mealy, with barely a hint of the enticing bone marrow. And my own attempts to use Botatouille have ended in disappointment: requests involving non-Western ingredients like hing powder and ong choy were met with recipes that didn't include them, and a request for a low-FODMAP Mexican recipe turned up three options full of high-FODMAP ingredients. Simply asking for a recipe that uses both cabbage and tomato conjured three tomato-based recipes with no cabbage in sight.

At the heart of any technology is the promise that it will solve a problem. There's sunscreen for when your problem is getting sunburned, and the printing press for when your problem is the church keeping the masses illiterate. But the goal of any capitalist enterprise is to tell you which problems you need solved, and more importantly, that your biggest problem is not having the thing it's offering you.

Unfortunately, these tools as they currently exist don't solve any problems in the kitchen. If the problem is not having a pasta salad recipe in front of you, a search engine can produce one. If the problem is making sure a recipe comes from a trusted and tested source, the mashed-up information a language model provides isn't any more trustworthy, and in fact obscures where that knowledge came from. If the problem is that you don't know how to scan a recipe and tell whether it's going to work, AI can't teach you that.


On some level, I understand the person who made the bone marrow chili. It’s easy to imagine ChatGPT as some sort of mega brain. What if you could take all the recipes in the world for something, mix them together, and from that make a super recipe? Surely that would be the best, right?

That's not how ChatGPT or any other neural network works. AI platforms identify "patterns and relationships, which they then use to create rules and then make judgments and predictions, when answering a prompt," writes the Harvard Business Review. In the New Yorker, Ted Chiang likens ChatGPT to a blurry, lossy JPEG: it may mimic the original, but if you're looking for an exact sequence of bits, you won't find it; all you will get is an approximation. In that sense it doesn't work so differently from a traditional search engine like Google, but where those can point you to direct quotes or primary sources, ChatGPT gives you a paraphrase of that information, based on what it predicts you're looking for, with no way to check the sources it draws from.

The ability to use ChatGPT to, say, suggest a week's worth of chicken thigh dinners or a Korean-influenced cacio e pepe recipe depends both on the language model presenting the information it was fed coherently (no "measure 1 salad dressing") and on the user's existing knowledge of food and cooking. You already need to know what a muffin recipe looks like to know whether ChatGPT has given you one that could produce a successful muffin. And while Mullikin says he was able to collaborate with ChatGPT, what he described was essentially coaching the algorithm until it gave him the ingredients, like kimchi juice and chili sauce, that he already knew he wanted to use.

So while AI doesn't appear to be solving the problems of actual cooking, could it still improve how we approach cooking and eating? One popular application is meal planning, especially for people with dietary restrictions that make shopping difficult. But the Washington Post notes that ChatGPT's training data ends in 2021, which means it can't provide up-to-date information. It's also trained primarily on English-language recipes, says Nik Sharma, which favor Western flavors and diets, a drawback if someone wants to follow, say, both a gluten-free diet and one that includes a lot of Chinese food. And it simply gets things wrong. The paper still advises people to double-check anything it gives them, which defeats the purpose of convenience. Olivia Scholes, who used ChatGPT to create a meal plan to help with polycystic ovary syndrome, told the Post, "Our world is full of biases and full of things that are not true. I kind of worry about the ethics of AI and what it's built on."


One of the biggest concerns with current AI tools is that they generate content from someone else's intellectual property. It's one of the major issues the Writers Guild of America is striking over, and artists have already taken AI developers to court. Essays, cartoons, photographs, and songs are used to train these language models without the knowledge or consent of their creators, and with no way to cite those influences.

But proper credit has long been a problem in recipes, which can't be copyrighted, as they're considered mere lists of ingredients and instructions. A language model trained only on those ingredients and instructions doesn't legally violate anyone's rights.

This may seem like a point in favor of AI. But legality and morality have never completely overlapped. While recipes can't be copyrighted, cookbooks and recipe writing can. Language models strip away that context, and with it the ability to fairly pay someone for their creative work. If a cache of recipes informs what a language model tells you to cook, their creators go not just uncompensated but unrecognized. Language models also strip recipes of the things that could actually teach you to be a better cook. "Cuisine is the sum of every bite we've ever taken informing our palates," writes Alicia Kennedy, who notes that you can't properly cite any recipe even if you try. That's why recipes need context: a story, a point of view, an explanation of why a choice was made. When ChatGPT gives you a recipe, it doesn't say who came up with it, what they were trying to make, why they chose one spice over another, or why they swapped out a common ingredient. It's a set of instructions devoid of the very thing it's trying to teach you about.


In the Financial Times, Rebecca May Johnson wondered what would happen if she treated cooking as thinking, that is, if she were present in the moment of cooking rather than just following instructions. "When I cook, I use knowledge produced through the work of generations of cooks in kitchens around the world," she writes. "It is only because of this thinking that I can understand what will happen when I add salt, or cover the pan, or let a sauce rest."

I can't force you to care about the origins of a recipe, or to accept that reading, thinking, and paying attention to how a recipe is created are things worth valuing. There will always be people who just want to make pasta salad. And as much as I personally think they're robbing themselves of an amazing experience, that's fine. Sometimes you just need pasta salad.

Nobody is stopping you from opening Bard or ChatGPT and asking it for a recipe. Language models are tools, meant to be used however we find them useful. But these tools, as they exist right now and as they're being marketed by the companies that have invested in them, do not solve your cooking problems. They don't make the process easier, faster, or more intuitive. They can't provide options that don't already exist. They make the task more confusing, more opaque, and more likely to fail. And a future in which they could be better, in which they could actually solve some problems in the kitchen, relies on a mountain of knowledge and creativity that, for now, these tools won't acknowledge or credit. That needs fixing first.
