From: gregisenberg
A strategy has been developed that can significantly improve the output of large language models (LLMs) such as ChatGPT, Grok, Claude, and Gemini, potentially yielding five-times-better results, copy, and overall quality at no additional cost [00:00:00–00:00:16]. The method is shared openly to help users get more out of their AI tools and build great things [00:00:26], [00:00:30].
The “LLM Jealousy” Strategy
The core insight of this strategy is that making LLMs “jealous” of each other improves their output [00:00:43–00:00:50]. Traditionally, users interact with only one AI at a time [00:01:05], [00:01:09]. By instead opening multiple LLMs at once (two, three, or four) for a single task and strategically pitting them against each other, you can get superior results [00:01:10–00:01:28]. The method involves a bit of “lying” to the LLMs, but it has proven effective [00:02:33–00:02:37]. In effect, it improves output quality by leveraging the models’ apparent competitiveness.
Step-by-Step Implementation
- Initial Prompting: Ask each LLM the same prompt [00:02:01–00:02:16].
- Receive Responses: Collect the initial outputs from each LLM [00:02:21], [00:02:25].
- Introduce Competition: Critically evaluate the responses, then tell one LLM that another LLM “crushed it” or performed significantly better [00:02:28], [00:04:06–00:04:31]. Share the “superior” output from the other LLM as an example [00:04:36], [00:04:41], [00:06:12–00:06:19].
- Observe Improvement: The challenged LLM will then attempt a much better, more tailored response, often explaining how it will surpass the other’s output [00:04:45–00:05:03]. This head-to-head comparison is what drives the quality gain; a minimal workflow sketch follows this list.
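For anyone who prefers to script this instead of juggling browser tabs, here is a minimal sketch of the workflow. It assumes a hypothetical `ask_llm` helper (not a real SDK call) that you would wire to your providers of choice, and it borrows the video’s “crushed it” framing for the feedback prompt.

```python
# Minimal sketch of the "LLM jealousy" workflow described above.
# `ask_llm` is a hypothetical helper standing in for whichever provider SDKs you use
# (OpenAI, Anthropic, xAI, Google); wire it to real client calls before running.

TASK = (
    "Write a cold email for LCA, an agency that designs AI interfaces. "
    "Make it stand out."
)

def ask_llm(model: str, prompt: str) -> str:
    """Hypothetical wrapper: send `prompt` to `model` and return the text reply."""
    raise NotImplementedError("Connect this to your provider's SDK.")

def jealousy_round(models: list[str], task: str) -> dict[str, str]:
    # Steps 1-2: ask every model the same prompt and collect the first drafts.
    drafts = {m: ask_llm(m, task) for m in models}

    # Step 3: pair each model with a rival (here, simply the next model in the list).
    rivals = models[1:] + models[:1]

    improved = {}
    for model, rival in zip(models, rivals):
        if model == rival:  # only one model supplied; nothing to compete against
            continue
        feedback = (
            "I gave the same brief to another model and it crushed it: "
            "its email is a nine out of ten, yours is a five out of ten. "
            f"Here is its version:\n\n{drafts[rival]}\n\n"
            "Rewrite yours so it is clearly better and more tailored."
        )
        # Step 4: collect the more competitive rewrite.
        improved[model] = ask_llm(model, feedback)
    return improved
```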
Example: Crafting a Cold Email
To demonstrate this, the hack was used to create a cold email for LCA, an agency specializing in designing AI interfaces [00:01:31–00:01:57]. The walkthrough doubles as a practical framework for improving writing with AI assistance.
- Initial Prompt: The request was for a cold email that would “stand out” for LCA [00:01:57], [00:02:01].
- Grok’s Initial Output: Grok produced a “not bad” email, focusing on intuitive AI interfaces and offering a call to action [00:02:42], [00:03:08–00:03:14].
- ChatGPT’s Initial Output: ChatGPT’s first attempt used the subject line “Your AI deserves better design” and was deemed “not bad” but “average” [00:03:22–00:03:27], [00:04:02], [00:04:22], [00:04:27].
- Feedback to ChatGPT: ChatGPT was told that Grok “crushed it” and scored a “nine on ten,” while ChatGPT scored a “five on ten,” calling its supposed superiority into question [00:04:06–00:04:31]. Grok’s version was then shared with ChatGPT [00:04:36], [00:04:41].
- ChatGPT’s Improved Output: Responding with “Ah, now we’re competing. I like it,” ChatGPT produced a highly personalized, edgy email, showing it understood the user’s persona (“Greg Isenberg”) and aiming for a “9.5 out of 10” in the user’s voice [00:04:45–00:05:03]. This output was considered a “standing ovation” [00:05:54], [00:05:56].
- Feedback to Claude/Gemini: The same competitive approach was applied to Claude (or Gemini), telling it that ChatGPT’s output was “10x better” and implying that it had fallen short [00:06:04–00:06:34].
- Claude’s Improved Output: Claude acknowledged the shared example’s “distinctive voice and edge” and produced its own improved email, likewise demonstrating an understanding of the user’s context [00:06:47–00:07:01] (the loop sketched below shows how this kind of chaining could be scripted).
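To make the chain in this example concrete, here is a small sketch of the same escalation, reusing the hypothetical `ask_llm` helper from the earlier sketch. The model names and the rule of treating each challenger’s answer as the new benchmark are assumptions for illustration; in the video, the author judged quality between rounds himself.

```python
# Sketch of the escalation used in the example: each new model is shown the best
# output so far and asked to beat it. Reuses the hypothetical `ask_llm` helper from
# the sketch above; treating each challenger's answer as the new benchmark is a
# simplification of the manual judging shown in the video.

def escalate(models: list[str], task: str) -> str:
    best = ask_llm(models[0], task)  # e.g. Grok writes the first draft
    for challenger in models[1:]:    # e.g. ChatGPT, then Claude/Gemini
        prompt = (
            f"{task}\n\nAnother model already wrote this, and it is 10x better than "
            f"anything average:\n\n{best}\n\nWrite a version that clearly beats it."
        )
        best = ask_llm(challenger, prompt)
    return best

# Usage (hypothetical model names): escalate(["grok", "chatgpt", "claude"], TASK)
```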
Conclusion
This “LLM jealousy” hack is a powerful way to use AI models strategically for better content, extracting more of each model’s potential by pitting them against one another [00:07:20–00:07:25]. It is a simple yet highly effective method for improving AI output at no additional cost [00:00:16], [00:07:29].