From: aidotengineer

The definitive guide to completely, utterly, and spectacularly messing up your AI strategy focuses on embracing “worse practices” to achieve “full-blown company-crippling, career-ending failure” [00:00:43]. The goal is to torpedo projects and alienate colleagues [00:00:57].

Presenters

  • Greg: An executive leader who has spent years crafting AI strategies in the C-Suite, including as Chief Product Officer at Pluralsight. He has witnessed how executive teams can turn “clear strategic opportunities into labyrinthine disasters” [00:01:35].
  • Hamel: A machine learning engineer and independent consultant who has worked with many companies on AI. He has seen “every conceivable way AI strategies can fail” [00:01:50].

Together, they refer to themselves as the “dream team of disaster” [00:01:59], advising on how to “invert, always invert,” following Charlie Munger’s wisdom [00:02:15].

Steps to Achieve AI Project Failure

1. Divide and Conquer Your Company

A key step to failure is to actively divide and conquer your own company [00:02:27].

  • Embrace Disconnect: Foster a disconnect between the willingness to pay (customer value) and the actual cost of implementation [00:02:35].
  • Unreasonable Goals: Contemplate unreasonable goals to create value [00:02:40].
  • Siloed Knowledge: Attend every AI industry conference, but never discuss what was learned with your team [00:02:50]. The objective is to create “impenetrable silos and incentivize secrecy between your teams” [00:03:01].

2. Adhere to the Anti-Value Stick

Embrace the “anti-value stick,” which is the opposite of good and useful principles in value creation and strategy [00:03:18].

  • Wishful Thinking Promises (WTP): Tell customers that AI will do everything for them, like writing emails, walking dogs, solving climate change, and achieving world peace, without worrying about details [00:03:32].
  • Particularly Ridiculous Infrastructure Costs Everywhere (Price): Buy the most expensive GPUs and avoid cost-benefit analysis, maxing out the company credit card [00:04:03].
  • Cascade of Spectacular Technical Debt (Cost): Build systems so convoluted and intertwined that even executives can barely understand them, ensuring job security when they inevitably break [00:04:16].
  • Why This System? (WTS): The answer to why a system is being built should always be “because AI,” with no further explanation needed, treating it like “magic but much more expensive and less reliable” [00:04:45].

3. Define Your Strategy Poorly

This is a critical step in orchestrating AI strategy pitfalls.

  • Fake Diagnosis: Grab last year’s annual report, highlight random, least-understood paragraphs, and declare them “must fix” items without consulting anyone who does the work [00:05:06].
  • Ambiguous Guiding Policy: Create an incredibly ambiguous and vague guiding policy, such as “become the global AI leader in everything,” without defining what “everything” means [00:05:27].
  • Unrealistic Action Plan: Propose an AI-powered SEO tool guaranteeing top Google results (even for garden gnomes), a generative art plug-in for NFTs of the CEO’s cat, and an AI drone lunch delivery service [00:05:41]. Announce these at an All-Hands meeting, using buzzwords like “disruptive” [00:05:54].
  • Embrace Perpetual Beta: Disregard timelines, create a massive GitHub backlog, and stick all highlighted financial reports into it, eroding people’s willpower to engage [00:06:07]. A 4,000-page document posted in all Slack channels can also achieve this [00:06:25].

4. Communicate Through Jargon

To ensure ineffective communication in AI deployment, drown everyone in a “tsunami of jargon” [00:06:48].

  • Obfuscation: Use complex phrases like “our multimodal agentic Transformer-based system leverages few-shot learning and chain-of-thought reasoning to optimize the synergistic potential of our dynamic hyperparameter space” [00:06:53]. The goal is to appear smart without anyone understanding [00:07:08].
  • Hide “Jobs to Be Done”: Use jargon strategically to hide the actual tasks, for example, calling prompt writing “building agents” to exclude mental health experts from participation [00:07:38].
  • Misdirect Expertise: Instead of saying “make sure the AI has the right context,” say “RAG” [00:08:12]. Instead of “make sure users can’t trick the AI into doing something bad,” say “prompt injections” [00:08:17].
  • Engineer-Centric Prompting: Encourage engineers, rather than those who understand customers, to write prompts [00:08:24]. The desired outcome is that everything, including prompt writing, seems technical and out of reach for everyone else [00:08:53].
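The jargon inversion above is easy to see in code: what gets labeled “building agents” is often plain-text prompt writing that a customer-facing expert could own. A minimal sketch, assuming a hypothetical support use case (the product name, policy text, and template are illustrative, not from the talk):

```python
# Keep the prompt in an editable plain-text template, not buried in
# engineering code, so domain experts can review and rewrite it.
from string import Template

SUPPORT_PROMPT = Template(
    "You are a customer support assistant for $product.\n"
    "Answer the question below using only the provided policy text.\n"
    "Policy: $policy\n"
    "Question: $question\n"
)

def build_prompt(product: str, policy: str, question: str) -> str:
    """Fill the template -- the part a non-engineer can own."""
    return SUPPORT_PROMPT.substitute(
        product=product, policy=policy, question=question
    )

prompt = build_prompt(
    product="Acme Cloud",
    policy="Refunds are available within 30 days of purchase.",
    question="Can I get a refund after two weeks?",
)
print(prompt)
```

Nothing here requires an engineer: the template is just text, which is exactly what the jargon is meant to obscure.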

5. Mobilization - Zoning to Lose

Pioneer a “revolutionary framework” about “zoning to lose,” designed specifically for failure [00:09:19].

  • Random Task Assignment: Randomly assign AI tasks to people with no relevant experience [00:09:24].
  • Outsource Without Context: Outsource data review to offshore QA teams with little business context [00:09:32].
  • Launch Untested Products: Launch completely untested, bug-ridden AI chatbots directly to customers from the incubation zone, disregarding quality assurance and beta testing [00:09:45].
  • Organizational Collapse: Yank best engineers from revenue-producing products, leading to total collapse [00:10:12].

6. Burn It All to the Ground - Focus on Tools, Not Processes

When your organization is in disarray, burn it down by focusing on tools rather than processes [00:10:30].

  • Throw Tools at Problems: Don’t analyze or understand problems; just throw tools at them [00:10:44]. If a RAG system isn’t retrieving documents, buy a new, more expensive vector database [00:10:51].
  • Blindly Trust Metrics: Use every off-the-shelf evaluation metric without customizing it to business needs, trusting the numbers even when they make no sense [00:11:01].
  • Framework Hopping: If agents aren’t working, pick a new framework and vendor. Fine-tune without measurement or evaluation, assuming it will be better, “like alchemy, but with a lot more electricity” [00:11:15].
  • Generic Metrics: Adopt all evaluation metrics from frameworks, letting them guide blindly without questioning whether they measure success [00:13:04]. Prioritize metrics like cosine similarity, BLEU, and ROUGE over actual user experience [00:13:17].
  • No Cross-Checking: Never cross-check with domain experts or users, because “if an LLM says it’s accurate, who are we to argue?” [00:13:24].

7. Avoid Looking at Data

This is presented as the most potent failure technique of all.

  • Blindfold Approach: Actively avoid looking at data [00:13:42]. Trust the AI’s output 100% without review [00:13:58].
  • Delegate Data Responsibility: Declare data analysis an “engineering problem”; leaders have more important strategic tasks, like meetings about meetings [00:14:03].
  • Trust Gut Over Data: Trust your gut feelings over data, especially for million-dollar decisions, as feelings are a reliable substitute [00:14:34].
  • Engineer-Only Access: Assume engineers are coding wizards with more domain expertise than business teams [00:14:54]. Forget simpler options like spreadsheets for data annotation [00:15:05].
  • Inaccessible Data Systems: Put data in complex systems that only engineers can access, making it unavailable to domain experts [00:15:26]. Insist on buying custom data analysis platforms requiring a team of PhDs to operate, especially if they take six months to load and have incessant errors [00:15:39].
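The spreadsheet option the talk mentions really is this simple: exporting AI outputs to a CSV that a domain expert can open in Excel takes a few lines, no custom platform or team of PhDs required. A minimal sketch (the example rows and column names are invented for illustration):

```python
# Export AI outputs to a plain CSV so domain experts -- not just
# engineers -- can review and annotate them in any spreadsheet tool.
import csv
import io

outputs = [
    {"question": "Can I get a refund after two weeks?",
     "ai_answer": "Yes, refunds are available within 30 days."},
    {"question": "Do you ship internationally?",
     "ai_answer": "We ship to over 200 countries."},
]

buffer = io.StringIO()  # in a real export: open("review.csv", "w", newline="")
writer = csv.DictWriter(
    buffer,
    fieldnames=["question", "ai_answer", "correct? (yes/no)", "notes"],
)
writer.writeheader()
for row in outputs:
    # Leave the annotation columns blank for the expert to fill in.
    writer.writerow({**row, "correct? (yes/no)": "", "notes": ""})

csv_text = buffer.getvalue()
print(csv_text)
```

The point is the inverse of the “inaccessible data systems” advice: the simplest possible tooling is often enough to get domain experts looking at the data.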

Conclusion

Following this advice meticulously guarantees wasted time and resources, and the alienation of colleagues, which is presented as the ultimate success in achieving total AI failure [00:16:05]. For real advice on implementing AI in enterprises, resources like ai-exec.com and an O’Reilly book are available [00:16:20].