How Context Influences AI Responses
- Sarah Howell
- Nov 15, 2024
- 1 min read
We used Sora to transform this digital butterfly into a blue morpho by combining two Midjourney images. Notably, the first image prompt was simply, “natural language processing with butterflies --v 6”.
Chances are, if you try that prompt yourself, you won't get anything like this blue-noded butterfly. Try it. Here is a screenshot of what I get today with the same prompt.

Maybe if I keep pressing "reuse prompt" something interesting will happen. But a year ago, the first result contained a gem. This was my result in June of 2024:

The blue-noded network-like structures in the wings were not part of the prompt, but they appeared in many of the results on the day this image was generated.
In June 2024, these nodes of light appeared in my Midjourney images for about a week, following a series of prompts where I asked for “nodes of light.”
Midjourney and other generative AI models do not produce results completely at random; their behavior is shaped by the user's session prompt history, chat dynamics, and other contextual factors. That's why results from generative AI can be improved by carefully priming the system with content, tone, and style.
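To make the idea of priming concrete, here is a minimal sketch of how session context travels with a request. The `build_primed_messages` helper is hypothetical, and the role/content message format is an assumption modeled on the structure common to chat-model APIs; the point is only that "context" is an ordered history that accumulates and gets resent.

```python
# Minimal sketch of "priming" a generative model: context is an ordered
# message history carried along with every new request. The helper and
# the role/content message format are illustrative assumptions, not a
# specific vendor's API.

def build_primed_messages(style_primer, history, new_prompt):
    """Assemble the full context sent with a request.

    style_primer: system-level text setting content, tone, and style.
    history: prior (user, assistant) turns from the session.
    new_prompt: the current user request.
    """
    messages = [{"role": "system", "content": style_primer}]
    for user_turn, assistant_turn in history:
        messages.append({"role": "user", "content": user_turn})
        messages.append({"role": "assistant", "content": assistant_turn})
    messages.append({"role": "user", "content": new_prompt})
    return messages

# Earlier "nodes of light" prompts play the role of history here:
history = [("a butterfly made of nodes of light", "(generated image)")]
messages = build_primed_messages(
    "Render in a luminous, network-inspired style.",
    history,
    "natural language processing with butterflies",
)
```

Because the whole history rides along with each request, a short prompt like "natural language processing with butterflies" never arrives alone; it lands on top of everything the session has already established.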
Language models show a sensitivity to context similar to Midjourney's, though their responses are shaped by many factors, including vocabulary, emotion, and the flow and structure of conversation. Both kinds of systems respond to patterns in our history, but the sources of influence differ.
Published: November 2024
Updated: June 2025