Understanding AI Regulation and the Potential Impact of Proposed Budget Cuts


A Pragmatic Look at the Intersection of Law, Technology, and Policy

Before we get too much further along, full disclosure: this post was written with the help of "Copilot," Microsoft's "Screaming Goat" AI engine. That said, efforts were made to fact-check the output. Please let me know of any inaccuracies or contradictions you find.

Artificial intelligence (AI) is no longer the stuff of futuristic dreams. From recommendation algorithms on your favorite streaming platforms to facial recognition at airports, AI services are becoming an integral part of our daily lives. However, as transformative as AI is, its rapid growth raises significant ethical, legal, and societal concerns, making regulation crucial to ensuring its development aligns with the public interest. But what happens when funding for the very bodies overseeing AI regulation is cut, as proposed in policy initiatives like those from the Trump Administration? Let’s explore.

How Are AI Services Regulated?

AI regulation is a complex, multi-jurisdictional effort that involves balancing innovation with safety, privacy, and fairness. Governments, international organizations, and private entities work together—sometimes in harmony but often at odds—to establish frameworks for AI governance. Let’s break down the key elements of regulation.

1. Ethical Guidelines

Governments and organizations worldwide have issued ethical principles to guide AI development. For instance, the European Commission’s guidelines emphasize transparency, human oversight, and accountability, while the United States focuses on innovation-friendly, risk-based approaches that avoid over-regulation.

2. Data Protection Laws

AI relies heavily on data, making data protection laws a cornerstone of its regulation. The General Data Protection Regulation (GDPR) in the EU is a gold standard, mandating consent for data collection and granting individuals the right to access and delete their data. (1) In the U.S., regulations are more fragmented, with state-level laws such as the California Consumer Privacy Act (CCPA). (2)
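To make those rights a little more concrete, here is a minimal sketch in Python of how a service might handle GDPR/CCPA-style consent, access, and deletion requests. The class and function names are hypothetical illustrations, not drawn from any real compliance library, and real implementations involve far more than this.

```python
from dataclasses import dataclass, field

@dataclass
class UserRecord:
    """Hypothetical stored profile for a single user."""
    user_id: str
    email: str
    consented_to_processing: bool = False
    data: dict = field(default_factory=dict)

class PrivacyRequestHandler:
    """Illustrative handler for GDPR/CCPA-style data-subject requests."""

    def __init__(self):
        self._store: dict[str, UserRecord] = {}

    def record_consent(self, record: UserRecord) -> None:
        # Rule of thumb under GDPR: store and process only after explicit consent.
        record.consented_to_processing = True
        self._store[record.user_id] = record

    def access_request(self, user_id: str) -> dict:
        # Right of access: return everything held about the user.
        record = self._store.get(user_id)
        if record is None:
            return {}
        return {"email": record.email, "data": dict(record.data)}

    def deletion_request(self, user_id: str) -> bool:
        # Right to erasure ("right to be forgotten"): remove the record entirely.
        return self._store.pop(user_id, None) is not None

# Example: a user consents, inspects their data, then asks to be forgotten.
handler = PrivacyRequestHandler()
handler.record_consent(UserRecord("u1", "ada@example.com", data={"plan": "free"}))
print(handler.access_request("u1"))    # {'email': 'ada@example.com', 'data': {'plan': 'free'}}
print(handler.deletion_request("u1"))  # True
print(handler.access_request("u1"))    # {}
```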

3. Sector-Specific Regulation

Certain domains, like healthcare or autonomous vehicles, have bespoke regulations tailored to the risks inherent to those fields. For example, the Federal Aviation Administration (FAA) oversees AI in aviation, while the Food and Drug Administration (FDA) monitors AI used in medical devices. (3) (4)

4. Oversight Bodies

Regulatory agencies and advisory councils are often charged with monitoring AI developments. In the U.S., bodies like the National Institute of Standards and Technology (NIST) establish technical standards, while the Federal Trade Commission (FTC) addresses consumer protection issues related to AI. (5) (6)

The Proposed Cuts: What’s at Stake?

Proposals for budget cuts, like those floated during the Trump Administration, often target regulatory agencies in the name of reducing government spending. While specifics vary, the overarching consequences of such cuts could be profound when it comes to AI governance. Here are key areas of concern:

1. Reduced Oversight

If agencies like the FTC or NIST face significant budget cuts, their ability to monitor and enforce AI-related laws could diminish. An article from the Brookings Institution suggests that this could lead to unchecked development of AI systems, increasing risks of bias, discrimination, and privacy breaches. (7)

2. Slower Development of Standards

Technical standards are essential for ensuring interoperability, safety, and fairness in AI systems. Budget cuts could slow the development and adoption of such standards, creating a fragmented ecosystem where bad actors exploit loopholes. (8)

3. Weakened Consumer Protections

With fewer resources, consumer protection agencies may struggle to keep up with AI-related complaints, such as issues with algorithmic transparency or unfair outcomes in lending or hiring decisions. This could erode public trust in AI technologies. (9)

4. Global Leadership at Risk

Countries like China and the EU are investing heavily in AI regulation to establish themselves as global leaders in the field. Budget cuts could compromise the United States' ability to compete, leaving it behind in shaping the international AI agenda. (10)

Real-World Consequences

The consequences of weakened AI regulation are not hypothetical—they would manifest in tangible ways that affect individuals, businesses, and society at large.

1. Increased Algorithmic Bias

AI systems are only as good as the data they are trained on. Without robust oversight, systems could perpetuate or exacerbate biases, leading to discriminatory outcomes in areas like employment, housing, or law enforcement. (11)
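As a concrete illustration of what an auditor or regulator might look for, the short Python sketch below computes selection rates for two groups in a made-up hiring dataset and compares them using a disparate impact ratio. The data, function names, and thresholds here are purely illustrative assumptions; the "four-fifths" figure is a commonly cited rule of thumb, not any agency's definitive test.

```python
# Minimal sketch: measuring group disparity in a hypothetical hiring model's decisions.
# All data and thresholds below are illustrative only.

def selection_rate(decisions: list[int]) -> float:
    """Fraction of applicants in a group who received a positive decision (1 = hired)."""
    return sum(decisions) / len(decisions)

# Hypothetical outcomes from an automated screening tool, split by demographic group.
group_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # 70% selected
group_b = [1, 0, 0, 0, 1, 0, 0, 0, 1, 0]  # 30% selected

rate_a = selection_rate(group_a)
rate_b = selection_rate(group_b)

# Disparate impact ratio: rate of the less-selected group over the more-selected group.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"Group A rate: {rate_a:.0%}, Group B rate: {rate_b:.0%}, ratio: {ratio:.2f}")

# A common rule of thumb (the "four-fifths rule") flags ratios below 0.8 for review.
if ratio < 0.8:
    print("Disparity exceeds the four-fifths rule of thumb; the system warrants review.")
```

With fewer resources devoted to oversight, disparities like the one flagged above are less likely to be caught before a system is deployed at scale.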

2. Privacy Breaches

With less stringent data protection, companies might misuse personal information, exposing individuals to identity theft, surveillance, or other forms of harm. (12)

3. Stifled Innovation

Paradoxically, under-regulation can stifle innovation. Without clear guidelines, companies may hesitate to invest in AI technologies for fear of future litigation or public backlash. (13)

4. Public Safety Risks

In sectors like transportation or healthcare, poorly regulated AI could lead to catastrophic failures, such as accidents involving autonomous vehicles or errors in AI-driven medical diagnoses. (14)

Conclusion

Regulating AI is a delicate balancing act that requires adequate resources, expertise, and foresight. Budget cuts to regulatory agencies risk tipping the scales in favor of unchecked development, with potentially dire consequences for individuals and society. As we move forward, it is imperative to prioritize investment in AI governance to ensure that this transformative technology serves the greater good rather than becoming a source of harm or inequality.

In the end, the goal should not be to stifle AI but to guide it responsibly. After all, the future of AI is not just about machines—it’s about people.

References

1.) What is the GDPR? Retrieved May 18, 2025, from https://gdpr.eu/what-is-gdpr/

2.) General information about the CCPA. Retrieved from https://oag.ca.gov/privacy/ccpa

3.) Artificial Intelligence and Machine Learning in Software as a Medical Device. Retrieved from https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device

4.) Technical Discipline: Artificial Intelligence – Machine Learning. Retrieved from https://www.faa.gov/aircraft/air_cert/step/disciplines/artificial_intelligence

5.) NIST: Information Technology/Artificial Intelligence. Retrieved from https://www.nist.gov/artificial-intelligence

6.) FTC Announces Crackdown on Deceptive AI Claims and Schemes. Retrieved from https://www.ftc.gov/news-events/news/press-releases/2024/09/ftc-announces-crackdown-deceptive-ai-claims-schemes

7.) States are legislating AI, but a moratorium could stall their progress. Retrieved from https://www.brookings.edu/articles/states-are-legislating-ai-but-a-moratorium-could-stall-their-progress/

8.) The Need for and Pathways to AI Regulatory and Technical Interoperability. Retrieved from https://www.techpolicy.press/the-need-for-and-pathways-to-ai-regulatory-and-technical-interoperability/

9.) Fairness and Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, and Mitigation Strategies. Retrieved from https://www.mdpi.com/2413-4155/6/1/3

10.) The global AI race: Will US innovation lead or lag?

11.) Human Rights Research Center: Harnessing Technology to Safeguard Human Rights: AI, Big Data, and Accountability. Retrieved from https://www.humanrightsresearch.org/post/harnessing-technology-to-safeguard-human-rights-ai-big-data-and-accountability

12.) Palo Alto Networks: What is sensitive data? Retrieved from https://www.paloaltonetworks.com/cyberpedia/sensitive-data

13.) Thomson Reuters: Why AI still needs regulation despite its impact. Retrieved from https://legal.thomsonreuters.com/blog/why-ai-still-needs-regulation-despite-impact/

14.) U.S. Department of Transportation: Understanding AI Risks in Transportation. Retrieved from https://www.transportation.gov/sites/dot.gov/files/2024-09/HASS_COE_AI_Assurance_Whitepaper_AI_Risk_Sep2024.pdf
