From: redpointai

Rahul Roy-Chaudhury, CEO of Grammarly, discusses the transformative impact of AI on communication and productivity, drawing from Grammarly’s extensive experience as an AI writing assistant with over 30 million daily active users [00:00:10].

A Vision for AI in Communication

The future of communication, enabled by AI, is envisioned as more meaningful and connection-focused [00:01:40]. AI is expected to eliminate the “drudgery” of day-to-day work, allowing humans to concentrate on creativity, synthesizing ideas, and deeper human connections [00:01:45]. Rahul hopes for a future with less email and fewer documents, where the goal is not to create more content volume, but to make existing communication better, more memorable, and precise [00:02:00]. This contrasts with the current reality where the average person switches contexts 1,200 times in a workday [00:02:29]. AI should enable individuals to enter a “flow state” and focus on their strengths, making each conversation measurably more valuable [00:02:39].

It’s crucial for humans to have agency in shaping this future, avoiding a dystopian scenario where AI generates and consumes most content, as writing and communicating are fundamental to human nature [00:03:14].

Grammarly’s Product Evolution

Grammarly, founded in 2009, has evolved through various technology waves, from rules-based systems and Natural Language Processing (NLP) to deep learning models and now Large Language Models (LLMs) and generative AI (GenAI) [00:04:13]. The company focuses on solving user needs by applying the most suitable technology [00:04:32].

Communication can be viewed in four stages:

  1. Ideation/Conceptualization: Thinking about what to say [00:04:53].
  2. Composition: Writing down the message [00:04:58].
  3. Revision: Polishing and improving the text [00:05:04].
  4. Comprehension: The recipient understanding the message [00:05:08].

Historically, Grammarly primarily focused on the revision phase, helping with correctness, adherence to style guides, tone, and brevity [00:05:23].

With LLMs, Grammarly is expanding its capabilities in two main ways:

  1. Tying communication to business outcomes: Suggestions will become more strategically aligned with desired results. Correctness and tone adjustments will often be auto-applied, allowing users to focus on achieving goals, such as adding a “call to action” to an email [00:06:04].
  2. Supporting the entire communication lifecycle: Grammarly will assist users from ideation and composition through revision and comprehension (e.g., summarizing long email threads and identifying action items) [00:07:22].

Developing AI Features

Grammarly approaches AI development with a strong emphasis on quality and safety due to the critical nature of user communication [00:08:33]. This requires significant work beyond just “throwing a model over the wall” [00:08:52].

Key aspects of their development process:

  • Fine-tuning: Models are fine-tuned for specific use cases [00:08:56].
  • Evals: Extensive quality and safety evaluations are conducted [00:09:00].
  • User Feedback Loop: User acceptance/rejection of suggestions and engagement levels provide critical quality input [00:10:02].
  • Human Experts: Side-by-side evaluations by linguistic experts compare model outputs to human-idealized text [00:10:28].
  • Experiments: Features are initially rolled out to small percentages of users as A/B tests [00:10:52] (a rough sketch of this pattern appears after the list).
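
As a concrete illustration of the feedback-loop and experiment bullets above, here is a minimal sketch of a deterministic percentage rollout combined with accept/reject tracking. The bucketing scheme, feature name, and acceptance metric are assumptions made for illustration, not Grammarly's implementation.

```python
import hashlib

# Hypothetical sketch: the bucketing scheme and metric are illustrative only.
# It shows the "small-percentage rollout plus accept/reject feedback" pattern
# described above, not Grammarly's actual system.

def in_experiment(user_id: str, feature: str, rollout_pct: float) -> bool:
    """Deterministically assign a user to a feature's rollout bucket."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0x100000000  # map hash to [0, 1)
    return bucket < rollout_pct

class SuggestionFeedback:
    """Accumulates accept/reject signals as a rough quality proxy."""
    def __init__(self) -> None:
        self.accepted = 0
        self.rejected = 0

    def record(self, accepted: bool) -> None:
        if accepted:
            self.accepted += 1
        else:
            self.rejected += 1

    @property
    def acceptance_rate(self) -> float:
        total = self.accepted + self.rejected
        return self.accepted / total if total else 0.0

# Usage: expose a hypothetical rewrite feature to 5% of users and track acceptance.
feedback = SuggestionFeedback()
if in_experiment("user-123", "tone-rewrite-v2", rollout_pct=0.05):
    feedback.record(accepted=True)  # the user kept the suggestion
print(f"acceptance rate: {feedback.acceptance_rate:.0%}")
```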

Accuracy requirements vary by context: an occasional hallucination might be tolerable in marketing copy, but sensitive communications demand high precision [00:09:23]. One lesson from the user feedback loop came from the tone detector, which sometimes made inappropriate suggestions for sensitive documents like police reports, which led to suppression rules [00:11:42].

Best Practices for AI Tool Adoption

Effective users of AI tools like Grammarly understand that communication is a strategic imperative linked to personal and organizational goals [00:13:03]. For instance, ModMed, an e-health company, uses Grammarly strategically to improve patient outcomes through enhanced communication in sensitive documents [00:13:31]. The key is to define the goal and measure how the AI solution helps achieve it, moving beyond the “cool factor” of new technology [00:14:03].

Response to LLM Advancements

The release of ChatGPT was a watershed moment, surprising many with its speed, scale, and pace of quality improvement [00:15:54]. Grammarly views this as a significant enabler for its mission [00:16:32]. As domain experts in communication, they harness the latest technology to provide the best solutions for user problems, adapting from rules-based NLP (which couldn’t support ideation) to current LLMs [00:16:47]. While earlier LLMs showed lower precision than Grammarly’s NLP systems on grammar correction, quality has improved dramatically, and LLMs now match the rule-based systems [00:17:35].

Product Roadmap and Challenges

When building with rapidly evolving AI models, Grammarly focuses on solving current user problems and upholding its brand promise [00:18:37]. Safety, for example, is a non-negotiable aspect that requires continuous work, even if models improve in the future [00:18:45].

Areas of exploration and future development include:

  • On-device inference: As models become more efficient, moving inference to the edge can improve security, privacy, latency, and user experience [00:19:34]. This is expected to become broadly viable by next year [00:35:52].
  • Multi-step reasoning and agentic workflows: Future models capable of complex, multi-step reasoning will enable Grammarly to orchestrate elaborate communication flows (e.g., assisting with drafting a comprehensive board email by pulling in various contexts and applying domain expertise) [00:20:41]. This aims to reduce the “drudgery” of cutting and pasting context, promoting a flow state [00:21:42].
  • Idiosyncratic Models: Different LLMs behave uniquely, requiring significant work to make each model fit for purpose, rather than being simply “plug and play” [00:23:12].
  • Model Orchestration: Grammarly uses a combination of half a dozen or so closed and open-source models in production, mostly fine-tuned on user data [00:24:08]. The goal is to distill models to the smallest, most efficient size for each use case without compromising quality [00:24:33]. Cost and latency are significant considerations, especially for a large user base [00:25:09].
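
The model-orchestration point above lends itself to a small sketch: pick the cheapest model that clears a use case's quality bar within its latency budget. The model names, scores, costs, and latencies below are invented for illustration; Grammarly has not published its routing logic.

```python
from dataclasses import dataclass

# Hypothetical sketch: candidate models and their numbers are assumptions.
# The idea is choosing the smallest, cheapest model that meets a use case's
# quality and latency requirements, as described above.

@dataclass
class ModelProfile:
    name: str
    quality_score: float     # offline eval score for this use case, 0-1
    latency_ms: float        # typical p50 latency
    cost_per_1k_tokens: float

CANDIDATES = [
    ModelProfile("small-distilled-grammar", 0.92, 40, 0.0002),
    ModelProfile("medium-finetuned-rewrite", 0.95, 180, 0.002),
    ModelProfile("large-hosted-general", 0.97, 900, 0.02),
]

def route(min_quality: float, max_latency_ms: float) -> ModelProfile:
    """Return the cheapest model that clears the quality and latency bars."""
    eligible = [
        m for m in CANDIDATES
        if m.quality_score >= min_quality and m.latency_ms <= max_latency_ms
    ]
    if not eligible:
        raise ValueError("no model satisfies the constraints")
    return min(eligible, key=lambda m: m.cost_per_1k_tokens)

# Inline grammar checks need low latency; long-form rewrites can tolerate more.
print(route(min_quality=0.90, max_latency_ms=100).name)  # small-distilled-grammar
print(route(min_quality=0.95, max_latency_ms=500).name)  # medium-finetuned-rewrite
```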

Grammarly’s competitive advantage lies in its vast repository of relevant, contextual, and high-quality user data, processing 75 billion user events daily [00:25:51]. This data is used to fine-tune models and personalize experiences. They enable users to fine-tune their personal voice and assist organizations in enforcing style guides, brand tones, and corporate values across internal and external communications [00:26:36]. This can automate compliance with strict communication rules, like those for loan officers [00:27:54].
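
As a rough illustration of how style-guide or compliance rules could be expressed declaratively and applied uniformly to outgoing text, here is a minimal sketch. The rules and the checker are hypothetical and not Grammarly's implementation.

```python
import re

# Hypothetical sketch: rules invented for illustration, e.g. constraints a
# lending organization might impose on loan-officer communications.

STYLE_RULES = [
    # (description, regex that flags a violation)
    ("avoid guaranteeing outcomes", re.compile(r"\bguaranteed approval\b", re.I)),
    ("use 'customer', not 'client'", re.compile(r"\bclient\b", re.I)),
]

def check_compliance(text: str) -> list[str]:
    """Return descriptions of any style-guide rules the text violates."""
    return [desc for desc, pattern in STYLE_RULES if pattern.search(text)]

draft = "Your client is eligible for guaranteed approval on this loan."
for violation in check_compliance(draft):
    print("flagged:", violation)
```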

Competitive Landscape

Grammarly views competition positively, as it brings attention to the problem space they’ve long addressed, leading to increased interest in their product [00:31:04]. Their key differentiators are:

  1. Quality of user data: Leveraging their vast data to continuously improve product quality [00:31:43].
  2. Omnipresence: Operating across the fragmented landscape of modern work tools (e.g., Gmail, Microsoft Word, Slack, Salesforce), offering a uniform AI communication stack [00:32:12].

AI Team Structure

Grammarly employs a hybrid approach to AI team structuring:

  • Core Research Team: Focuses on longer-term exploration (18-24 months out), building foundational capabilities and infrastructure for future features like on-device AI [00:34:33].
  • Embedded AI Engineers: AI engineers are integrated into full-stack product and feature teams to launch immediate AI-powered features [00:34:54].

The transformation of work by AI is seen as a long-term journey, similar to the shift from on-premise to cloud, rather than a single deployment [00:36:03]. Enterprises seek trusted partners for this multi-year transformation [00:37:05].

While there’s significant investment and excitement around AI, widespread productivity gains beyond specific use cases have been elusive [00:37:21]. Clear examples of impact include software engineering (code generation) [00:37:43] and customer support [00:38:43]. Grammarly emphasizes measuring and demonstrating value: the average organizational user saves 19 working days per year, roughly a month’s worth of workdays [00:38:03].

Rahul has changed his mind on the distinction between consumer and enterprise businesses. He now sees it as an artificial line, with many “consumer” users buying Grammarly for work. The company is building for a seamless customer journey from free consumer usage to large-scale enterprise adoption [00:49:11].

AI in Education

AI presents a crucial moment for education, posing the question of how to responsibly incorporate this powerful tool into pedagogical methods [00:39:53]. Initial fears of banning AI have largely dissipated, with educators now eager to partner with industry to equip graduates with essential AI skills for the workforce [00:40:51].

Grammarly is a responsible partner in this space:

  • AI Citations: They launched a feature allowing users to cite their use of AI in a work product. This helps educators distinguish between students who merely auto-generate content and those who actively engage with the tool to deepen their understanding [00:41:16].
  • Authorship: A new feature, “Authorship,” provides the provenance of every part of a document (manually written, copy-pasted, or AI-generated), offering transparency and tools for institutions to set their own guardrails [00:42:27].
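
A minimal sketch of what span-level provenance tracking could look like follows. The enum values and record shape are assumptions for illustration, not the actual Authorship format.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical sketch: illustrates tracking the origin of each span of a
# document (typed, pasted, or AI-generated), as the Authorship feature is
# described above; the data model here is assumed.

class Origin(Enum):
    TYPED = "manually written"
    PASTED = "copy-pasted"
    AI_GENERATED = "AI-generated"

@dataclass
class Span:
    start: int      # character offset where the span begins
    end: int        # character offset where the span ends (exclusive)
    origin: Origin

def summarize(spans: list[Span]) -> dict[Origin, int]:
    """Total characters attributed to each origin."""
    totals: dict[Origin, int] = {o: 0 for o in Origin}
    for span in spans:
        totals[span.origin] += span.end - span.start
    return totals

doc_spans = [Span(0, 120, Origin.TYPED), Span(120, 200, Origin.AI_GENERATED)]
for origin, chars in summarize(doc_spans).items():
    print(f"{origin.value}: {chars} characters")
```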

AI is viewed as a tool of augmentation, providing “superpowers” rather than causing displacement [00:44:26]. It’s a powerful “leveler” and democratizer of skills, especially for students and workers globally who lack access to traditional educational resources, enabling them to upskill significantly [00:44:47].

AI in Web Browsers

The web browser of the future will be significantly changed by AI [00:45:40]. AI can embed into the browsing experience to synthesize information, remember past browsing activities, and surface relevant context at the right moments, potentially solving issues like having “too many tabs” open [00:46:04].

Quickfire Insights

  • Overhyped in AI: Chat interfaces, which he considers a “subpar command line interface” likely to fade into the background [00:46:41].
  • Underhyped in AI: AI’s potential as a “force multiplier” to upskill and uplevel people around the world, democratizing access to skills and aiding those struggling in the workforce [00:46:52].
  • Biggest Surprise in Building AI Features: The strong user resonance with the tone detector and tone AI feature, with many users sharing stories of how it “saved their life” in various scenarios [00:47:59].
  • Most Exciting AI Startup (outside of Grammarly’s space): AI for healthcare, specifically AlphaFold, for its ability to improve healthcare outcomes and accelerate drug discovery by identifying fruitful research areas [00:50:07].