From: aidotengineer
This article outlines strategies for intentionally hindering communication in AI deployment, based on an “inverted” presentation designed to show how to spectacularly mess up an AI strategy [00:00:27]. Embracing these “worse practices” is the surest way to torpedo projects and alienate colleagues [00:00:54].
Divide and Conquer Your Company
The foundational step to failure in AI project implementation is to actively divide and conquer your own company [00:02:27].
Create Impenetrable Silos
- Attend every AI industry conference, but never share what you learned with your team [00:02:48].
- Actively create “impenetrable silos” and incentivize secrecy among your teams [00:03:01].
Make Unreasonable Promises
When communicating with customers about AI, make “wishful thinking promises” (WTP) [00:03:32]. Tell them AI will do absolutely everything, from writing emails to solving climate change, without worrying about the details—just promise the moon [00:03:35].
Define Your Strategy Vaguely
When defining your AI strategy, ensure it is ambiguous and vague to guarantee confusion and hinder effective communication.
Fake Any Diagnosis
- Grab an old annual report or operating plan and highlight random, poorly understood paragraphs, declaring them as “must fix” [00:05:09].
- Crucially, avoid talking to anyone who actually does the work [00:05:20].
Craft Ambiguous Policies
Your guiding policy should be incredibly ambiguous, such as “become the global AI leader in everything” without defining what “everything” means, leaving it as “someone else’s problem” [00:05:27].
Embrace Perpetual Beta
- Forget about timelines, which are for companies that intend to finish projects [00:06:07].
- Instead, embrace “Perpetual Beta” by creating a massive GitHub backlog and stuffing it with highlighted financial reports [00:06:13].
- Alternatively, create a 4,000-page document, post it across all Slack channels, and erode people’s willpower to engage with the material [00:06:25].
Drown Everyone in Jargon
One of the most effective ways to cause dysfunction is to communicate in a way that nobody understands, drowning everyone in a “tsunami of jargon” [00:06:48].
“Our multimodal agentic Transformer-based system leverages few-shot learning and Chain-of-Thought reasoning to optimize the synergistic potential of our dynamic hyperparameter space” [00:06:53].
The goal is to look incredibly smart, even if no one understands a word, prioritizing obfuscation [00:07:08].
Hide Jobs to Be Done
Strategically use jargon to hide the actual “jobs to be done” [00:07:41]. For example, instead of saying “we need to write a prompt,” say “we’re building agents” [00:07:47]. This ensures that relevant domain experts, like mental health experts in one case, are not in the room and do not know how to participate, which is the desired outcome [00:07:52].
Other examples of strategic jargon:
- Instead of “make sure the AI has the right context,” say “RAGs” [00:08:12].
- Instead of “make sure users can trick the AI into doing something bad,” say “prompt injections” [00:08:17].
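To see how little the jargon hides, here is a minimal sketch (all names and data are illustrative, not from the talk) of what “RAG” boils down to in plain language: retrieve a relevant document and paste it into the prompt as context.

```python
# A naive retrieval-augmented prompt builder. "Retrieval" here is just
# word overlap between the query and each document -- the plain-language
# job to be done is "make sure the AI has the right context".

def retrieve(query: str, docs: list[str]) -> str:
    """Pick the document sharing the most words with the query."""
    query_words = set(query.lower().split())
    return max(docs, key=lambda doc: len(query_words & set(doc.lower().split())))

def build_prompt(query: str, docs: list[str]) -> str:
    """Paste the retrieved document into the prompt as context."""
    context = retrieve(query, docs)
    return f"Use this context to answer:\n{context}\n\nQuestion: {query}"

docs = [
    "Refunds are processed within 5 business days.",
    "Our office is closed on public holidays.",
]
print(build_prompt("How long do refunds take?", docs))
```

Described this way, a domain expert can immediately see where they fit in: choosing which documents belong in `docs` is their job, not an engineering mystery.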
Disempower Non-Engineers
Encourage engineers, rather than those who understand customers best, to write prompts, as “what could possibly go wrong?” [00:08:24]. The objective is to make everything, even writing prompts, seem “super technical and out of reach” for everyone else [00:08:50].
Avoid Data and Rely on Gut Feelings
To truly torpedo AI efforts, avoid looking at data and ensure no one else does either [00:13:42].
Trust Tools Blindly
- When a RAG system fails, don’t analyze the problem; just buy a new, more expensive vector database [00:10:51].
- If agents aren’t working, simply pick a new framework and vendor, then fine-tune without any measurement or evaluation [00:11:15]. Assume it will be better because it’s “kind of like alchemy with a lot more electricity” [00:11:22].
- Adhere to a “one size fits all solution” mentality for evaluations, letting vendors figure it out as “you’re too busy being an executive” [00:12:06].
Disregard Custom Metrics
- When measuring progress, use every off-the-shelf evaluation metric you can find [00:11:01]. Never bother customizing them to business needs; blindly trust the numbers even if they make no sense [00:11:05].
- Create a dashboard with unintelligible numbers that obscure the difference between success and failure [00:12:35]. Keep hoarding random metrics until one goes up and to the right, then claim success [00:12:44].
- Adopt all eval frameworks blindly, never asking if they actually measure success [00:13:04]. Prioritize metrics like “cosine similarity, BLEU, and ROUGE,” ignoring actual user experience [00:13:17].
- Never cross-check with domain experts or users [00:13:24]. If an LLM says it’s accurate, who are we to argue? [00:13:30]
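A toy illustration of the point about blindly trusted metrics (the texts and helper below are hypothetical, not from the talk): bag-of-words cosine similarity scores two answers with opposite meanings as nearly identical, which is exactly how a dashboard can go “up and to the right” while users get wrong answers.

```python
# Cosine similarity over word counts: a surface metric that cannot
# distinguish "safe" from "not safe".
import math
from collections import Counter

def cosine(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two strings."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm_a = math.sqrt(sum(v * v for v in va.values()))
    norm_b = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (norm_a * norm_b)

reference = "the medication is safe for daily use"
answer = "the medication is not safe for daily use"

# One negation flips the meaning entirely, yet the score stays high.
print(f"cosine similarity: {cosine(reference, answer):.2f}")
```

Only a domain expert reading the actual answer would catch the failure, which is precisely why the “worse practice” is to keep them away from the data.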
Isolate Data Access
- Ensure that engineers handle everything, even if they haven’t spoken to a customer in years [00:14:54]. Quickly forget simpler options like spreadsheets for data annotation and review [00:15:05].
- Insist on putting data into complex systems that only engineers can access, making it unavailable to domain experts [00:15:26]. As an executive, insist on buying a custom data analysis platform requiring a team of PhDs to operate and understand [00:15:37]. Bonus points if it takes six months to load and has incessant errors [00:15:49].
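For contrast, a minimal sketch of the “simpler option” the talk alludes to, assuming a hypothetical three-column layout: a plain CSV that any domain expert can open in a spreadsheet and label, no PhD-operated platform required.

```python
# Write AI outputs to a CSV for expert review. Column names are
# illustrative; the point is that annotation can be this simple.
import csv
import io

rows = [
    {
        "question": "How long do refunds take?",
        "ai_answer": "Refunds take 5 business days.",
        "expert_label": "",  # expert fills in "good" / "bad" plus notes
    },
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["question", "ai_answer", "expert_label"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

Swapping `io.StringIO` for a real file handle yields a review sheet domain experts can annotate the same afternoon.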
Rely on Gut Feelings
- Trust your gut, as it got you this far in life [00:14:34]. Feelings are always a reliable substitute for data, especially for million-dollar decisions [00:14:40].
- Remember, customers are your best QA [00:14:23].