Artificial intelligence (AI) appears to be following the “move fast and break things” ethos of Silicon Valley. Popularized by Facebook’s (now Meta) Mark Zuckerberg, “move fast and break things” prioritizes rapid innovation and rollout over all else to secure a competitive edge. While Facebook did secure a competitive advantage in the digital world, it has also offered us a cautionary tale: its swift rollout without meaningful regulation contributed to documented detrimental impacts on mental health and the proliferation of harmful disinformation. Now, the drive to move fast and break things in the AI race has led leaders, including those in government, to advocate further for deregulation to ensure market dominance. The Trump Administration’s Executive Order from December 2025, Ensuring a National Policy Framework for Artificial Intelligence, illustrates this dynamic, proposing a “minimally burdensome national policy framework for AI.”
There are growing concerns that AI might dramatically change employment as we know it. Last year’s UNCTAD Technology and Innovation Report forecast that AI could affect roughly 40% of global jobs within 10 years, though effects will vary by national readiness. The ‘ideas industry’ will not be immune to this. Historically, public policy think tanks have served as hubs of expertise and research, offering policymakers, leaders, and members of the public carefully curated insights on some of society’s most pressing challenges. Their reputations rest on rigorous analysis, deep subject-matter knowledge, and the ability to synthesize complex information into actionable recommendations. But as artificial intelligence rapidly transforms the landscape of research and knowledge production, think tanks face an existential question: what is their role when AI agents can conduct research at speeds no human could match?
The exponential growth in AI’s capacity to analyze information, identify patterns, and generate insights is reshaping the research landscape. Researchers worldwide are already embracing AI tools at scale. A Frontiers report titled “Unlocking AI’s untapped potential: Responsible innovation in research and publishing” revealed that more than 50% of researchers now use AI in manuscript peer review. But this enthusiasm warrants scrutiny. In “Artificial intelligence and illusions of understanding in scientific research,” scholars Lisa Messeri and M.J. Crockett caution that researchers risk a paradox: generating more content while comprehending less. Recently, Moltbook offered a window into an AI-only digital space. On the surface, one could be impressed by the “intelligence” the AI agents displayed; under scrutiny, however, AI slop was pervasive, and the agents’ conversations were essentially a reflection of their training data. The fundamental danger, therefore, lies in treating AI as a trusted intellectual partner rather than as a powerful tool demanding constant human evaluation and oversight.
Think tanks can therefore reposition themselves as essential curators and conveners in an era of information overload. They should become what society urgently needs: trusted arbiters of expertise in a world saturated with both information and disinformation. When anyone can generate a sophisticated-looking research paper with AI in minutes, the ability to distinguish rigorous analysis from polished nonsense becomes invaluable. The role of think tank researchers may no longer involve spending months writing reports, because an AI could draft them in hours. Instead, their expertise may be needed to critically evaluate AI-generated research and to identify its limitations and biases.
Perhaps most critically, think tanks are positioned to resolve a central paradox of our era: the more AI-generated content saturates our lives, the more urgently we crave authentic, human dialogue with credible experts. As our world grows increasingly digital, the capacity to disconnect and engage face-to-face becomes only more valuable. Trust cannot be built through algorithms. It requires sustained human connection, particularly in international disputes, where empathy and understanding develop only through genuine face-to-face interaction. Part of the answer requires democratizing a historically closed, exclusive field and fostering greater openness in knowledge sharing, so that beneficiaries can hone what may be the most critical skill for our AI-driven future: the ability to ask the right questions.
Stuart Russell, a Professor of Computer Science at the University of California, Berkeley, warned against “AI solutionism” while speaking at the World Economic Forum’s Annual Meeting of the Global Future Councils. While Russell acknowledged that AI applications hold significant promise across diverse fields, he emphasized that AI is not a cure-all. He stated, “if we are creating systems more powerful than ourselves, then there is an obvious question: How do we retain power over entities more powerful than ourselves forever?” Perhaps one solution to retaining power over AI is to champion spaces that prioritize human dialogue and connection. We need more such spaces, as traditional third places such as libraries, community centers, and cafes have become increasingly inaccessible to many communities. Think tanks now have the opportunity to evolve into accessible, neutral grounds where diverse perspectives converge and where trust and understanding deepen through face-to-face exchange. As the digital world continues to streamline our work, promising to optimize our time, we must use that reclaimed time to engage in human connection and dialogue.
Katherine Salinas is Senior Program Coordinator for the Technology Policy program at ORF America.

