Every disruptive technology forces policymakers to answer the same question: how do we protect the public without slowing innovation? History shows that clarity, adaptability, and enforcement must move together. Otherwise, regulation risks becoming either a brake on progress or an empty promise. Looking back across industries—energy, biotechnology, finance, transportation, telecommunications—we can see recurring lessons. Those lessons should inform how policymakers approach artificial intelligence today.
What Other Technologies Teach Us
Nuclear energy demonstrated that without transparency and oversight, public trust collapses. In the United States, the creation of the Nuclear Regulatory Commission (NRC) formalized rigorous safety reviews and open reporting. Events such as Three Mile Island in 1979 and Fukushima in 2011 underscored how a single failure can define public perception for decades. Countries that adopted strict oversight frameworks were able to expand nuclear safely, while others froze development entirely. For AI, transparency in training, auditing, and deployment will be equally critical, since even a single widely reported failure could undermine trust.
Key Takeaway: Oversight and transparency build public trust. Without them, industries face resistance that slows both public and private adoption.
Biotechnology showed the importance of early dialogue. In 1975, the Asilomar Conference brought together scientists, ethicists, and policymakers to establish rules for recombinant DNA research. This early consensus prevented a public backlash and allowed genetic engineering to progress responsibly. In recent years, debates around CRISPR have again shown the value of proactive engagement. For AI, this means convening stakeholders before regulations are locked in place, not after crises emerge.
Key Takeaway: Inclusive dialogue reduces risk of backlash. Policymakers who engage stakeholders early create durable and accepted frameworks.
Autonomous vehicles revealed the costs of fragmentation. More than 35 U.S. states enacted their own AV laws, often conflicting with one another. Internationally, different testing standards further complicated development. The result has been delayed adoption and confusion for manufacturers, with several companies scaling back testing programs after costly setbacks. For AI, fragmented rules across jurisdictions could similarly hinder progress, increase compliance costs, and create uncertainty for innovators.
Key Takeaway: Fragmented rules delay adoption. A patchwork of regulations creates inefficiency, discourages investment, and erodes trust.
Financial technology illustrated the risks of regulatory lag. Global fintech investment topped $160 billion in 2024, yet regulators often relied on statutes drafted before smartphones existed. Cryptocurrencies in particular highlighted the tension: innovation raced ahead while oversight struggled to catch up. In Europe, the Markets in Crypto-Assets (MiCA) regulation sought to fill this gap. In the United States, that uncertainty began to resolve in 2025: the House passed the Digital Asset Market Clarity Act by a 294–134 vote, defining whether tokens are regulated by the SEC as securities or by the CFTC as commodities; the bill now heads to the Senate. Another key step was the passage of the GENIUS Act in July 2025, which established clear standards for stablecoins, including one-to-one reserve backing and dual federal-state oversight. Together these measures represent progress, although debates over enforcement boundaries persist. For AI, outdated rules will not only slow adoption but also create enforcement gaps that undermine accountability.
Key Takeaway: Regulatory lag undermines effectiveness. When laws fall behind technology, both innovators and the public lose confidence.
Aviation demonstrated the value of consistent global safety standards. International Civil Aviation Organization (ICAO) protocols and national aviation authorities made air travel one of the safest modes of transportation. Because enforcement mechanisms were clear and coordinated, public confidence grew even after high-profile accidents. Between 1970 and 2020, global aviation accident rates fell by more than 90 percent, largely due to strict adherence to international standards and continuous oversight. This measurable improvement underscores how consistent enforcement can build trust even in high-risk sectors.
Key Takeaway: Consistent enforcement builds confidence, allowing innovation to scale.
Telecommunications regulation illustrated how oversight can accelerate access and innovation. The breakup of monopolies and the introduction of competition fueled massive investment and expanded services to consumers, yet oversight remained necessary to ensure fair pricing, protect privacy, and maintain security. By the mid-2010s, mobile subscriptions exceeded the global population, and broadband expansion accelerated following deregulation and competition. Balancing market freedom with regulatory guardrails enabled both rapid growth and the universal service obligations that expanded connectivity. For AI, striking the same balance between competition and oversight will determine whether its benefits reach all sectors of society.
Key Takeaway: Balanced competition and oversight expand access while protecting fairness.
Pharmaceutical regulation highlighted the necessity of rigorous pre-market approval. In the United States, the Food and Drug Administration (FDA) requires extensive clinical trials to demonstrate safety and efficacy before new drugs are approved. This process, while sometimes criticized for slowing innovation, has prevented public health disasters and built confidence in modern medicine. For AI, careful pre-deployment testing could provide the same assurance without halting progress.
Key Takeaway: Rigorous testing before deployment builds confidence and prevents harm.
AI Faces the Same Dilemma
Artificial intelligence is not the first transformative technology, but it may be the most far-reaching. As of 2025, more than 30 countries have launched national AI strategies, yet enforcement remains inconsistent. Investment is projected to surpass $300 billion globally by 2030. Meanwhile, public concern grows: a 2023 Pew Research Center survey found that 52 percent of Americans are more concerned than excited about AI in daily life. The challenge is balancing rapid innovation with credible safeguards that address both technical and social risks.
Key Takeaway: AI will test whether lessons from history are applied.
Rules Only Matter if They Can Be Enforced
Regulation without enforcement is symbolic. Aviation safety standards work because enforcement mechanisms are robust and continuous. For AI, enforcement must cover auditing, testing, and accountability structures; without such mechanisms, guidance will remain aspirational rather than operational. Data privacy offers a cautionary example: even with strong rules such as the EU’s General Data Protection Regulation (GDPR), limited enforcement capacity in some jurisdictions weakens outcomes. Environmental policy shows that even foundational frameworks can shift dramatically. In July 2025, the EPA proposed rescinding its 2009 Endangerment Finding, the scientific and legal basis for greenhouse gas regulation under the Clean Air Act, illustrating how quickly established frameworks can be reconsidered and how much regulatory uncertainty that creates.
Key Takeaway: Rules must be enforceable to matter. Strong laws without enforcement remain symbolic gestures.
Why Global Rules Matter
Technology crosses borders, but rules often do not. Nuclear frameworks required international treaties to prevent proliferation. Cybersecurity similarly exposed the risks of weak global standards, with uneven protections leaving openings for malicious actors. For AI, global alignment will determine whether companies face one clear rulebook or a patchwork of conflicting demands. The United Nations, OECD, and G20 have all convened discussions on AI ethics, but enforcement power remains limited. The Paris Climate Agreement illustrates the challenge: ambitious goals without hard enforcement. The European Union has moved forward with its AI Act, which entered into force in August 2024. By February 2025, bans on prohibited AI practices and AI literacy obligations were active, while rules for general-purpose AI models became applicable in August 2025. Requirements for high-risk systems embedded in regulated products will follow by 2027. These staged milestones show how global governance can evolve, though challenges remain.
Key Takeaway: Global coordination prevents fragmentation. Without alignment, even well-meaning efforts falter.
Bring Everyone to the Table
Effective governance requires multiple perspectives. Public health, finance, energy, and transportation all taught the same lesson: when regulations are drafted in isolation, they fail to reflect operational realities. A growing number of organizations are making this principle central to their work. One example is RegulatingAI, a nonprofit working with Diakon, whose vision, mission, and goals emphasize that everyone must have a seat at the table: policymakers, industry, academics, and communities alike. This co-governance model makes regulation both inclusive and enduring. For AI, bringing civil society, industry, academia, and government, from developers to end users, into one dialogue ensures regulations are both practical and legitimate.
Key Takeaway: Broad participation makes regulation realistic. The most effective rules emerge from inclusive processes.
Turning Guidance Into Real Rules
Voluntary guidance can provide direction, but only codified rules ensure compliance. History shows that without a transition from guidance to enforceable law, rules remain optional. The AI field has produced abundant principles, including fairness, accountability, and transparency, but most remain voluntary. For AI, the move from principles to statutes will mark the real test of governance, and delay risks leaving society unprotected.
Key Takeaway: Guidance must evolve into law to be effective. Without legal force, principles remain aspirational.
Where We Could Be in 10 Years
If the lessons of other technologies are applied, AI could advance with strong public trust and global alignment. In that scenario, innovation would scale responsibly, with clear accountability measures and international cooperation. If not, AI governance may become fragmented, under-enforced, and politicized, creating uneven adoption and rising risks. The difference will be whether policymakers act with foresight rather than hindsight, and whether they take the time to build durable frameworks now.
Key Takeaway: The future depends on foresight, not hindsight. Policymakers must anticipate challenges rather than react to crises.
Three Things Policymakers Must Do
- Ensure transparency and enforceability in all rules.
- Align regulations globally to prevent fragmentation.
- Build inclusivity into governance from the start.
Key Takeaway: Balance clarity, adaptability, and enforceability. Sustainable frameworks must deliver all three.
AI will test whether policymakers can learn from history. Nuclear energy taught us safety and transparency. Biotech showed us the power of early dialogue. Fintech exposed the risks of regulatory lag. Autonomous vehicles revealed the costs of fragmentation. Aviation proved the value of consistent enforcement, telecommunications showed the benefits of balanced oversight, and pharmaceuticals demonstrated the need for rigorous pre-deployment testing.
The choice now is clear: repeat old mistakes, or craft systems that are adaptive, enforceable, and globally aligned. History is not destiny. Policymakers have the benefit of hindsight—examples from multiple industries that show what works and what fails. The question is whether those lessons will be applied with urgency before crises force a reaction. Which path do you believe we will take?
By Michael Taylor – Managing Partner, Diakon Partners, Inc. Advisor on policy and advocacy strategy for public and private stakeholders across transportation, logistics, infrastructure, technology, sustainability, energy, and manufacturing sectors.
