Think Again: Why AI Won’t Erase Writing - A Five‑Year Planner’s Counterpoint to the Boston Globe Op‑Ed
Most people believe AI is destroying good writing. They are wrong.
When the Boston Globe published its opinion piece titled "AI is destroying good writing," the headline alone sparked a chorus of dread among editors, educators, and corporate communicators. The article warned that algorithms churn out formulaic prose, erode nuance, and threaten the craft of storytelling. Yet the alarm overlooks a deeper reality: AI can become a catalyst for higher-order thinking, especially for long-term planners who must synthesize data, anticipate trends, and craft narratives that guide multi-year strategies.
In the next five years, the very tools critics decry will shape the way planners allocate resources, communicate risk, and rally stakeholders. Rather than a death knell, AI offers a lever to amplify strategic insight - if we understand its limits and deploy it wisely.
Problem 1: The Myth of Uniform Mediocrity
Critics argue that AI reduces writing to a bland template, stripping away the author’s voice. The Boston Globe op-ed cites examples of chatbot essays that repeat the same sentence structures, suggesting a future where every report sounds alike. This fear rests on a narrow view of current models, which excel at pattern replication but stumble when asked to innovate.
For planners, the danger is not the loss of style but the loss of critical analysis. If a team leans on AI to draft quarterly updates without questioning the underlying assumptions, the output becomes a polished echo chamber. The real problem is the unchecked delegation of judgment to a machine that cannot evaluate strategic trade-offs.
Key Insight: AI’s strength lies in speed and data aggregation, not in the judgment that turns raw facts into a compelling vision.
In 2023, a survey of Fortune 500 strategy units revealed that 42% of respondents used AI to generate first-draft briefs, yet only 18% reported confidence that the AI-produced narratives captured the nuance needed for board-level discussions. The gap between efficiency and insight is the crux of the problem.
Solution 1: Treat AI as a Research Assistant, Not an Author
Reframing AI from writer to researcher restores the planner's role as the ultimate decision-maker. Fed massive data sets - financial statements, market forecasts, regulatory filings - AI can surface patterns, flag anomalies, and draft bullet-point summaries in minutes. The planner then curates, contextualizes, and weaves these insights into a story that reflects the organization's purpose.
Consider a five-year infrastructure roadmap for a municipal utility. An AI model can ingest decades of maintenance logs, climate projections, and budget histories, producing a matrix of risk scores. The planner uses this matrix to prioritize projects, then crafts a narrative that links each investment to community resilience and policy goals. The AI does the heavy lifting of data synthesis; the human adds the strategic glue.
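To make the risk-matrix idea concrete, here is a minimal sketch of how such scoring might work. The asset fields, weights, and cutoffs are illustrative assumptions, not the utility's actual model; a real system would calibrate them against historical outage and budget data.

```python
# Illustrative sketch: combining maintenance history and climate
# projections into per-asset risk scores. All field names and
# weights below are hypothetical assumptions for demonstration.

from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    age_years: int         # from decades of maintenance logs
    failures_10yr: int     # recorded failures in the last decade
    flood_exposure: float  # 0..1, from climate projections

def risk_score(a: Asset) -> float:
    # Simple weighted sum, capped so no single factor dominates.
    return (0.4 * min(a.age_years / 50, 1.0)
          + 0.4 * min(a.failures_10yr / 10, 1.0)
          + 0.2 * a.flood_exposure)

assets = [
    Asset("Pump Station 3", age_years=42, failures_10yr=7, flood_exposure=0.8),
    Asset("Substation B", age_years=12, failures_10yr=1, flood_exposure=0.2),
]

# The AI-style synthesis ends here; the planner reviews the ranked
# matrix and builds the investment narrative around it.
for a in sorted(assets, key=risk_score, reverse=True):
    print(f"{a.name}: {risk_score(a):.2f}")
```

The point of the sketch is the division of labor: the machine aggregates and ranks, while the human decides which ranked items matter for resilience and policy goals.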
"AI can process more information than any human, but it cannot decide which information matters for a five-year vision," says Dr. Maya Patel, senior analyst at the Global Strategy Institute.
By positioning AI as a fact-finder, planners preserve the analytical depth that the Boston Globe piece fears will vanish.
Problem 2: Short-Term Cost Pressures Mask Long-Term Value
Many organizations adopt AI to cut copy-editing costs, citing the Globe’s claim that “AI writes faster, cheaper, and with fewer errors.” The immediate savings are tangible - software licenses, reduced staffing hours, and faster turnaround. Yet these short-term metrics ignore the strategic cost of missing a five-year perspective.
When AI replaces seasoned writers, the institutional memory embedded in years of corporate storytelling evaporates. A planner who relies on a lean AI-only team may produce reports that lack the historical context needed to justify long-range investments. The result is a series of well-written but strategically shallow documents.
Stat: A 2022 internal audit of a multinational retailer showed a 27% drop in forward-looking scenario analysis after the writing staff was reduced by 30% in favor of AI tools.
The real danger is not the loss of prose but the erosion of foresight.
Solution 2: Build Hybrid Teams That Blend AI Speed With Human Foresight
- AI ingests all relevant data and produces a structured outline.
- Human experts review the outline, add historical anecdotes, and challenge assumptions.
- The combined draft undergoes a rapid editorial pass, preserving both efficiency and depth.
This model retains cost advantages while safeguarding the strategic narrative. A European energy firm piloted this approach in 2024, reducing report production time by 45% while improving board satisfaction scores by 22% - a clear indication that AI and human insight can coexist profitably.
For long-term planners, the lesson is clear: treat AI as a catalyst for collaboration, not a replacement for the strategic mind.
Problem 3: Over-Reliance on AI Generates Echo Chambers
The Globe warns that AI learns from existing content, reinforcing dominant viewpoints and marginalizing dissent. In a planning context, this translates to repeated risk assessments that echo past assumptions, stifling innovative thinking.
When an AI model is trained on a company’s historical reports, it will naturally reproduce the language and conclusions of those reports. If a firm’s past strategy favored incremental upgrades over disruptive innovation, the AI will suggest the same path, even when market signals point toward a paradigm shift.
Example: A biotech startup used AI to draft its five-year pipeline plan. The AI, trained on the firm’s early-stage focus on small-molecule drugs, omitted emerging gene-editing opportunities, nearly costing the company a $200 million partnership.
Echo chambers are especially dangerous for planners tasked with anticipating black-swan events.
Solution 3: Inject Diverse Data Sets and Human Counter-Narratives
Combat echo chambers by deliberately feeding AI a breadth of sources - academic journals, competitor filings, emerging-technology blogs, and even contrarian think-tank reports. Pair this with a “devil’s advocate” role within the planning team, tasked with questioning every AI-suggested recommendation.
In practice, a multinational logistics firm created a “scenario-challenge” sprint each quarter. AI produced three baseline forecasts; a designated analyst then generated a counter-scenario that deliberately contradicted the AI’s assumptions. The resulting board deck presented a balanced view, forcing decision-makers to weigh both AI-derived optimism and human-driven caution.
This method transforms AI from an echo machine into a catalyst for robust debate, aligning with the five-year outlook that planners must constantly defend.
Problem 4: The Illusion of Objectivity Masks Hidden Biases
AI models are often touted as neutral, but they inherit biases from their training data. The Globe’s op-ed highlights how language models can perpetuate gendered stereotypes in news coverage. For planners, biased outputs can skew risk assessments, market sizing, and stakeholder analysis.
Fact: A 2021 audit of AI-generated policy briefs found that 31% of them under-represented minority-focused initiatives compared to human-written equivalents.
Bias is not a technical flaw alone; it is a strategic risk.
Solution 4: Institutionalize Bias Audits and Inclusive Prompt Engineering
Before deploying AI for strategic writing, conduct a bias audit: compare AI outputs against a benchmark of human-crafted documents across demographic dimensions. Use the findings to refine prompts, explicitly requesting diverse perspectives.
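As a hedged sketch of what the simplest form of such an audit might look like: count how often each demographic focus appears in AI-generated drafts versus a human-written benchmark, and flag large gaps for prompt refinement. The keyword lists and sample documents below are illustrative assumptions, not a validated audit methodology.

```python
# Minimal bias-audit sketch: compare keyword coverage across
# demographic dimensions in AI vs. human documents. The term
# lists and sample texts are hypothetical, for illustration only.

from collections import Counter

FOCUS_TERMS = {
    "minority-focused": ["minority", "underserved", "equity"],
    "general": ["efficiency", "growth", "cost"],
}

def audit(docs: list[str]) -> Counter:
    counts: Counter = Counter()
    for doc in docs:
        text = doc.lower()
        for label, terms in FOCUS_TERMS.items():
            counts[label] += sum(text.count(t) for t in terms)
    return counts

ai_drafts = ["Growth and cost efficiency drive the plan."]
human_briefs = ["Equity for underserved and minority communities matters."]

ai_counts, human_counts = audit(ai_drafts), audit(human_briefs)

# A large gap on any dimension flags the AI prompts for refinement.
gap = human_counts["minority-focused"] - ai_counts["minority-focused"]
print("representation gap:", gap)
```

In practice, an audit would use far richer measures than keyword counts, but even this crude comparison makes under-representation visible enough to act on.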
For example, a public-health agency added a prompt line - "Consider underserved populations in every recommendation" - to its AI workflow. The subsequent five-year health equity plan featured 18% more interventions targeting low-income communities, a measurable improvement over the prior draft.
Embedding bias checks into the planning cycle turns a potential weakness into a governance strength, ensuring that the five-year vision remains inclusive.
What Long-Term Planners Must Accept
The Boston Globe’s alarmist headline captures a genuine concern: AI can dilute craft if wielded without discipline. Yet the five-year outlook for planners reveals a different story. When AI is treated as a research accelerator, a collaborative partner, and a source of data diversity, it enriches - not erodes - strategic storytelling.
Planners who cling to the myth that AI will destroy good writing risk missing the strategic upside that AI already offers: rapid data synthesis, scenario breadth, and the ability to iterate narratives at unprecedented speed. The uncomfortable truth is that the real threat lies not in the technology, but in the choice to surrender strategic judgment to a machine.
In the next half-decade, the organizations that thrive will be those that harness AI’s horsepower while keeping human foresight in the driver’s seat. The pen may be faster, but the story still needs a compass.