In the absence of AI regulation, AI literacy is necessary

By: Katherine Salinas

The United States’ innovative leadership in artificial intelligence (AI) has been enabled both by an open, regulation-free environment and by significant investment. In 2020 alone, the U.S. government spent nearly $5 billion on unclassified AI, including funding from agencies such as the Defense Advanced Research Projects Agency and the National Institutes of Health. This has allowed Silicon Valley giants and countless startups to push the boundaries of what machines can do. But the uncomfortable truth is that while U.S. entities have mastered the art of building smarter AI, they’ve failed at building wiser AI governance.

The repercussions of unregulated AI are unfolding in real time, with potentially life-altering consequences for millions of people. Every day, AI systems identify and treat us while harboring significant flaws and operating without meaningful oversight. Facial recognition technology exemplifies this crisis: National Institute of Standards and Technology research reveals that most industry software is 10 to 100 times more likely to misidentify Black and Asian individuals than white people, owing to difficulties distinguishing features across different skin tones — biases that have already resulted in multiple wrongful arrests. The problem extends beyond surface-level recognition into critical healthcare decisions. One AI sepsis model failed to detect the condition in 67% of patients who developed sepsis, while generating false alerts for thousands of patients who did not have it. While the FDA has begun addressing some medical AI failures by issuing new guidelines to regulate such tools as medical devices, the broader landscape remains largely unregulated. These AI systems now fundamentally shape how we perceive ourselves and others through the information they curate and the interactions they enable. Yet they continue to operate in a regulatory vacuum where profit motives take precedence over ethical considerations, leaving our society vulnerable.

Recognizing the critical nature of this issue, former U.S. President Joe Biden highlighted the need for protective regulations with the release of the "Blueprint for an AI Bill of Rights." This initiative aimed to safeguard individuals' rights and opportunities amidst the rise of powerful automated technologies. However, on January 20, 2025, President Donald Trump reversed Biden's October 2023 executive order on AI. President Trump also announced the Stargate Project, which intends to funnel $500 billion into private AI infrastructure over the next four years. At the same time, the Republican tax bill passed by the House would bar states from enforcing AI regulations for a decade.

Some believe the solution lies in what experts call Human-Centered AI (HCAI) — an approach that puts human welfare at the center of every AI decision. But the challenge isn't identifying ethical principles — it's implementing them. Too many ethical guidelines gather dust on corporate shelves while AI systems perpetuate bias and invade privacy. Frameworks must be broad enough to address diverse concerns yet specific enough to guide real-world decisions, and flexible enough to accommodate different AI applications while being applied iteratively throughout a system's lifecycle.

Education is equally crucial. Currently, only one in ten AI and data science courses addresses ethics in any meaningful way. The opacity of AI decision-making — what experts call the "black box" problem — compounds these challenges. When even the creators don't fully understand how their systems reach conclusions, accountability becomes nearly impossible. For all these reasons, AI literacy is becoming increasingly important. To facilitate productive discussions, we need to democratize knowledge rather than keep it confined to experts who may not prioritize ethical considerations. This is particularly crucial with AI, which now touches so many aspects of our lives.

Katherine Salinas is Senior Program Coordinator for the Technology Policy program at ORF America.