When you ask an AI to recreate your mother’s French toast, you’re testing more than just recipe accuracy—you’re testing how well the model understands sensory detail and kitchen technique. AI recipe prompting has become a genuine kitchen utility in 2025, but not all AI assistants handle nostalgic, texture-focused requests equally.
Key Takeaways
- Gemini delivered a crispy-exterior, caramelized-finish French toast using double-dipping and post-fry sugar coating techniques.
- ChatGPT provided a basic, functional recipe lacking the crispiness method, resulting in softer toast.
- Detailed sensory prompts (crispy, caramelized, custardy) yielded significantly better AI output than generic requests.
- Both recipes use common pantry staples available globally; no specialized equipment required beyond a skillet.
- Prompt specificity proved more influential than model choice in achieving kitchen success.
Why This Test Matters Right Now
AI assistants are increasingly positioned as kitchen helpers, yet most users prompt them with vague requests: “give me a French toast recipe.” This test—asking both ChatGPT and Gemini to replicate a specific childhood memory with precise texture and flavor goals—reveals how much AI recipe prompting depends on the detail you provide. The author used a hyper-specific prompt: “Make me French toast exactly like my mother used to make it. It had a crispy exterior with a sweet cinnamon-sugar coating that caramelized perfectly, soft and custardy inside, made with thick bread slices soaked just right, cooked in butter until golden.” That level of sensory specificity fundamentally changed the output quality.
Gemini’s Winning Approach to Crispy-Sweet Finish
Gemini’s recipe succeeded because it incorporated two critical techniques absent from ChatGPT’s response: double-dipping and immediate post-fry cinnamon-sugar coating. The method begins with a custard of eggs, whole milk, heavy cream, sugar, vanilla, cinnamon, and salt—whisked smooth. Each slice of thick challah or brioche bread (day-old preferred for better absorption) gets soaked 20 to 30 seconds per side, allowed to drip, then dipped again briefly for 5 to 10 seconds. This double-dip approach prevents sogginess while ensuring the interior stays custardy.
The frying technique matters equally. Gemini’s recipe specifies medium-high heat with 2 tablespoons of unsalted butter until foaming—not browned—then cooks two slices for 2 to 3 minutes per side until deep golden-brown and crispy. The decisive step comes immediately after: the hot toast is transferred to a cinnamon-sugar mixture (half a cup granulated sugar mixed with one teaspoon cinnamon) and flipped to coat both sides fully. The residual heat caramelizes the sugar coating, creating that crispy, candied exterior the author remembered. The recipe calls for fresh butter for the second batch to maintain consistent crisping.
ChatGPT’s Generic Formula and Why It Failed
ChatGPT’s recipe read like a textbook introduction to French toast. It called for eight slices of unspecified bread, four eggs, one cup milk, two tablespoons sugar, one teaspoon vanilla, and cinnamon (quantity unspecified). The custard instructions were identical to Gemini’s, but the cooking method diverged critically. ChatGPT suggested a 10 to 15 second soak per side—shorter than Gemini’s 20 to 30 seconds—with no double-dip mention. It specified medium heat (not medium-high) and 2 to 3 minutes per side until golden, but included no post-fry technique. The result, when tested: softer, less crispy toast lacking the caramelized sugar crust. There was no mention of cinnamon-sugar dredging, no guidance on bread thickness, no emphasis on butter temperature. ChatGPT delivered functionality without finesse.
The author reported that the first bite of Gemini’s version transported them straight back to their mother’s kitchen, with that crispy, caramelized edge hitting exactly right. ChatGPT’s version was adequate—edible, recognizably French toast—but generic. It tasted like a recipe you’d find on any cooking website, not like a memory.
The Real Lesson: Prompt Engineering Beats Model Loyalty
This test reveals a counterintuitive truth about AI recipe prompting in 2025: the model matters less than what you ask it. Both ChatGPT (available free or via Plus at $20 USD monthly for GPT-4o access) and Gemini (free via gemini.google.com, or Advanced at $19.99 USD monthly for Gemini 1.5 Pro) are capable systems. The difference emerged because the author supplied sensory anchors—crispy, caramelized, custardy, perfectly soaked—that Gemini translated into technique-specific steps while ChatGPT treated them as flavor descriptors only.
This pattern suggests that users asking AI for recipes should move beyond ingredient lists and basic instructions. Instead, describe the texture you want, the browning level you remember, the mouthfeel of success. Tell the AI what went wrong with past attempts. Specify bread type and thickness. Mention heat intensity preferences. The more sensory detail you provide, the more likely the AI is to generate instructions that account for technique, not just ingredients and timing.
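The advice above—name the textures, the bread, the heat, the past failures—can be sketched as a small helper that assembles a sensory-rich prompt. This is a hypothetical illustration, not a tool from the original test; the function name and fields are assumptions.

```python
# Hypothetical helper illustrating the article's advice: build a recipe prompt
# from explicit sensory targets instead of just a dish name.

def build_recipe_prompt(dish, textures=None, flavors=None, bread=None,
                        heat=None, past_failures=None):
    """Assemble a sensory-specific prompt string for an AI assistant."""
    parts = [f"Make me {dish}."]
    if textures:
        parts.append("Target textures: " + ", ".join(textures) + ".")
    if flavors:
        parts.append("Target flavors: " + ", ".join(flavors) + ".")
    if bread:
        parts.append(f"Use {bread}.")
    if heat:
        parts.append(f"Cook over {heat} heat.")
    if past_failures:
        parts.append("Avoid past problems: " + ", ".join(past_failures) + ".")
    return " ".join(parts)

prompt = build_recipe_prompt(
    "French toast",
    textures=["crispy exterior", "custardy interior"],
    flavors=["caramelized cinnamon-sugar coating"],
    bread="1-inch-thick day-old challah",
    heat="medium-high",
    past_failures=["soggy center", "pale crust"],
)
print(prompt)
```

Paste the resulting string into either assistant; the point is that every field forces the model toward technique, not just ingredients.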
Ingredients and Cost Breakdown
Gemini’s winning recipe serves two people with four thick slices and requires: four 1-inch-thick slices of challah or brioche (day-old preferred), three large eggs, three-quarters cup whole milk, two tablespoons heavy cream, two tablespoons granulated sugar, one teaspoon vanilla extract, half a teaspoon ground cinnamon, one-quarter teaspoon salt, four tablespoons unsalted butter for frying, and one-half cup granulated sugar plus one teaspoon cinnamon for the coating. All ingredients are pantry staples available globally. Eggs cost roughly $3 USD per dozen in most Western markets; a challah loaf runs approximately $5 USD. No specialized equipment is needed beyond a skillet, bowl, and whisk.
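Since the recipe as written serves two, scaling it for a larger table is simple multiplication. A quick sketch, with quantities copied from the list above (the doubling itself is an illustration, not part of the original recipe):

```python
# Scale the two-serving ingredient list from the article to four servings.
base = {  # quantities for 2 servings
    "challah slices (1-inch)": 4,
    "large eggs": 3,
    "whole milk (cups)": 0.75,
    "heavy cream (tbsp)": 2,
    "sugar for custard (tbsp)": 2,
    "vanilla extract (tsp)": 1,
    "cinnamon for custard (tsp)": 0.5,
    "salt (tsp)": 0.25,
    "butter for frying (tbsp)": 4,
    "sugar for coating (cups)": 0.5,
    "cinnamon for coating (tsp)": 1,
}

servings = 4
factor = servings / 2  # recipe is written for 2 servings
scaled = {item: qty * factor for item, qty in base.items()}

for item, qty in scaled.items():
    print(f"{item}: {qty:g}")
```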
Should You Trust AI for Nostalgic Recipes?
The honest answer: only if you prompt carefully. AI excels at recipe adaptation when you supply the emotional or sensory goal, not just the dish name. Asking “How do I make French toast?” will yield a serviceable recipe. Asking “How do I make French toast with a crispy, caramelized cinnamon-sugar crust and a soft, custardy center like my grandmother made?” invites the AI to think in techniques, not just ingredient ratios. This test suggests Gemini currently performs better at technique-rich, texture-focused recipes, while ChatGPT defaults to reliable but generic formulas. Yet that gap narrows when you provide explicit sensory direction.
Frequently Asked Questions
Does bread thickness really affect the texture of French toast?
Yes. Thin bread soaks through too quickly and becomes soggy; thick slices (1 inch or more) absorb custard without falling apart and develop a crispy exterior while staying custardy inside. Day-old bread also absorbs better than fresh because it has less moisture to begin with.
Can you make this recipe with regular white bread instead of challah?
Regular white bread will work but produces a less rich result. Challah and brioche contain more eggs and butter, which creates a richer custard flavor and better browning. If using standard sandwich bread, reduce the milk by two tablespoons and increase the eggs to four to compensate for the bread’s lower fat content.
Why does the prompt matter more than the AI model?
AI models are pattern-matching systems trained on millions of recipes and cooking instructions. When you specify only a dish name, the model defaults to the most common recipe pattern in its training data—usually generic. When you describe sensory outcomes (crispy, caramelized, custardy), you’re giving the model concrete targets to build techniques around. Gemini and ChatGPT both have access to similar recipe data, but detailed prompts force them to prioritize technique over convenience.
The takeaway is this: AI recipe prompting works best when you treat the AI like a sous chef who needs clear sensory direction, not a cookbook you’re browsing. Gemini’s success here came not from superior recipe knowledge but from translating a specific textural request into a method. ChatGPT fell short not because it lacks cooking knowledge, but because it didn’t translate the sensory scaffolding it was given into technique-heavy steps. Next time you ask an AI for a nostalgic recipe, skip the vague request. Describe exactly what you remember tasting, and watch the output improve.
Edited by the All Things Geek team.
Source: TechRadar


