The Rise of AI Systems

An introduction to AI systems at the intersection of agentics, governance, and safety


Artificial Intelligence (AI) may be the new hype, but it is actually not new at all. Early applications of AI, although quite limited, have been around since the 1950s. Over the last several decades, the capabilities of artificial intelligence have grown exponentially, evolving from traditional, predictive AI to generative AI, to AI agents and agentic AI systems. In the future, this evolution may give rise to artificial general intelligence.

It’s helpful to clarify the terms used to describe various AI systems, since they are often used interchangeably but mean different things. Traditional AI, among the first and simplest of the popularized AI systems, requires human guidance and direction. If you ask it a question, it produces a specific response. It is best used for simple, one-task-at-a-time requests. Like most software, traditional AI is enhanced only through formal updates.

A close sibling of traditional AI is generative AI (GAI), the technology behind popular models like OpenAI’s ChatGPT, Anthropic’s Claude, and others. GAI is more sophisticated and has the power to create new content, including images, video, audio, and even 3-D models. Like traditional AI, in its current state GAI remains primarily reactive in nature and requires user directives at each step before generating a response.

The next phase is where things start to get tricky. The rise in popularity of AI agents over the past six to nine months has occurred at breakneck speed. Familiar examples include virtual assistants such as Amazon’s Alexa and Apple’s Siri, as well as ChatGPT’s Deep Research. These agents draw on large language models (LLMs) and knit together information to create more seamless, responsive, and conversational user experiences. AI agents can also reason through unexpected scenarios that are not explicitly covered in their training, which requires flexible “thinking” rather than rigid pattern recognition. As these capabilities evolve, we will likely see the advent of complex AI systems that integrate, in whole or in part, generative AI, agentic AI, and even the holy grail of artificial general intelligence: AI that can learn and understand like a human being. Such systems could, in theory, handle any task with human-like flexibility.

Against this backdrop, it’s no wonder that the global market for artificial intelligence is expanding so rapidly. In 2024, that market was valued at nearly $280 billion, with some aggressive estimates projecting it could surpass $800 billion by 2030. These staggering figures seem almost unimaginable. However, for a glimpse of the future, we need only look at Microsoft’s and SoftBank’s recent $1 billion and $40 billion investments, respectively, in OpenAI. With such robust funding in hand, OpenAI announced an ambitious projection that would roughly triple its revenue to $12.7 billion in 2025. As these systems become more sophisticated, they present unprecedented opportunities across diverse sectors.

At the same time, they also raise complex governance challenges that current state and federal policy and regulatory frameworks struggle to adequately address. 

 

AI at the intersection of governance, advanced reasoning, and agentics

The work of government is significantly more complicated when it involves AI systems and agentics. Part of the remedy for establishing trustworthy governance is to adopt an all-hands-on-deck approach to AI, spanning its creation, iteration, implementation, and accountability. Indeed, technological development and policy must evolve simultaneously and symbiotically. For this to occur, decision-makers should involve a diverse range of stakeholders, including state and federal governments, legislative and regulatory bodies, companies and developers, academia, labor, civil society, and international bodies. This intersectional and inter-sector approach gives us a more authentic and holistic view of AI, allowing us to advance the technology in deep service to humanity, facilitate more meaningful democratic participation in its governance, and safely evolve society’s relationship to AI.

While an all-hands-on-deck approach would better address the needs of society, not everyone agrees. In May, the House Energy and Commerce Committee voted to advance a budget reconciliation bill that included a range of provisions impacting technology. Specifically, the bill contained a proposed 10-year moratorium on states’ AI efforts. If passed by Congress, the moratorium would prohibit (with limited exceptions) all states and political subdivisions “from enforcing any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act.” Though these provisions may not survive the reconciliation process, they reflect a misguided approach that fails to recognize the important role states can and should play in preparing us for the coming AI revolution.

 

What state legislators can do now

States have been leading the way in the technology space, ahead of the federal government, serving as early adopters and champions for some time. To date, more than half the states have enacted some form of AI legislation, and more than two dozen, red and blue alike, have addressed deepfakes, including those involving non-consensual intimate images (such as Virginia’s HB 2678, Iowa’s HF 2240, Texas’s SB 1361, Hawaii’s SB 309, Washington’s HB 1999, and Minnesota’s HF 1370). Following this steady stream of state action, it took the House of Representatives until 2025 to introduce the Take It Down Act (H.R.633/S.146), which addresses the same issue. This is a clear example of how and why state and federal governments must have agency over complementary policy and regulatory issues arising from artificial intelligence.

Fast-forwarding to agentic AI systems, we’re confronted by novel governance issues that existing regulatory frameworks are ill-equipped to address. The variation in autonomy across different systems further complicates governance. Systems with greater independence require more agile oversight mechanisms. When an agentic AI system or an AI agent makes a decision with minimal human input, complex questions arise regarding risks, safety, and legal and ethical responsibilities. Do such responsibilities fall to the developer, the deployer, or the user, or are they distributed across multiple stakeholders? Who makes sure that privacy and other rights are truly protected? 

These questions have inspired states to take action in recent years, in the absence of federal legislation. These efforts were made with the intention of coordinating across party and state lines to promote, not stifle, innovation, and to work in unison rather than settle for a patchwork of laws around the nation. 

One example was the effort to promote algorithmic fairness (otherwise known as “mitigation of algorithmic discrimination”) to address the use of AI in key areas such as education, health, employment, and lending services. In 2024, Colorado became the first state to pass such a bill, SB 24-205. Other states have pursued more streamlined approaches. In Virginia, where I serve in the House of Delegates, I brought forward the “High-Risk Artificial Intelligence Developer and Deployer Act” in the 2025 session, which required algorithmic impact assessments. Although it passed the General Assembly, it was ultimately vetoed by Virginia’s governor in the spring of 2025.

These bills, and others like them, have been targeted by certain parts of industry and members of the venture-capital community who claim that regulation would stifle innovation and harm businesses. While these are understandable concerns, history does not support the argument. When California and Virginia passed early state privacy-protection acts in 2018 and 2021, respectively, similar claims of harm were made. Those warnings were not borne out: in both states, businesses have continued to thrive and people enjoy important data protections.

Conversely, when social-media platforms were first introduced and gained popularity years ago, the government failed to create meaningful policy and regulatory frameworks to help guide their application in society. As a result, we are now faced with the great challenge of trying to address adverse mental-health impacts on our children and young adults, harms that should have been preventable. The challenge of playing governance catch-up is only exacerbated by the arrival of more sophisticated technologies, like agentic AI systems and AI agents, which evolve quickly. In the last few years, we’ve seen a significant leap in AI capabilities. A decade is like a lifetime in tech, so a 10-year moratorium on state AI action, like the one under consideration by Congress, would likely be regressive rather than progressive for large segments of society outside the business sector. It would be nearly impossible to catch up.

Governance efforts naturally include considerations of the risk, safety, privacy, and responsibility associated with agentic AI systems and AI agents, among others. An early international example is the EU AI Act passed in 2024, which classifies AI systems according to risk level. 

In this framework, autonomous systems are typically placed into higher-risk categories subject to stricter requirements. While the more rigid risk-category frameworks may not squarely fit U.S. culture or its form of capitalism, many organizations, policymakers, and regulators in the U.S. are in the early stages of developing methodologies tailored to AI and agentic systems. These efforts broadly include the use of adversarial testing and red-teaming to identify potential harms and misuse scenarios. While not a mandatory regulatory framework, and one that remains in flux as the current presidential administration continues to develop its AI policies, the AI Risk Management Framework (AI RMF) developed by the National Institute of Standards and Technology (NIST) helps organizations assess potential risks and build sound practices as they design, operate, and retool their AI systems.

History has taught us that as government institutions advance their work in the field of AI, it’s important for the private and nonprofit sectors to collaborate on standardized approaches to building, testing, implementing, and certifying agentic systems, approaches that meet domestic needs while also facilitating understanding, coordination, competitiveness, and security at the global level.

Admittedly, organizations that develop agentic AI systems are implementing novel internal-governance practices to manage associated risks. Among these are real-time oversight mechanisms, which are critical for detecting when AI and related agentic systems operate outside expected parameters and for intervening when they do. However, mandatory human checkpoints, also referred to as “human-in-the-loop,” remain essential to providing additional safeguards against unintended consequences, particularly when these systems are being used to make high-risk or consequential decisions.
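To make the “human-in-the-loop” idea concrete, the sketch below shows, in simplified Python, how such a checkpoint might work: an agent’s proposed action proceeds automatically when its estimated risk is low, but is held for human sign-off when it crosses a threshold. The names, threshold, and review step are hypothetical illustrations, not a description of any particular company’s system.

```python
from dataclasses import dataclass


@dataclass
class ProposedAction:
    description: str
    risk_score: float  # 0.0 (routine) to 1.0 (highly consequential)


# Hypothetical cutoff separating routine actions from "high-risk or consequential" ones.
RISK_THRESHOLD = 0.7


def human_review(action: ProposedAction) -> bool:
    # Placeholder for a real review workflow; here we simply ask on the console.
    answer = input(f"Approve high-risk action '{action.description}'? [y/N] ")
    return answer.strip().lower() == "y"


def run_with_checkpoint(action: ProposedAction) -> str:
    # Low-risk actions proceed autonomously; high-risk ones require human sign-off.
    if action.risk_score >= RISK_THRESHOLD and not human_review(action):
        return f"Blocked pending review: {action.description}"
    return f"Executed: {action.description}"


if __name__ == "__main__":
    print(run_with_checkpoint(ProposedAction("send routine status email", 0.2)))
    print(run_with_checkpoint(ProposedAction("approve a loan application", 0.9)))
```

In practice, the review step would route to a governance team or an audit log rather than a console prompt; the point is simply that the checkpoint sits inside the decision path rather than being added after the fact.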

Even with this laudable enhanced testing and evaluation, the lens through which a private-sector organization or government agency views its technology may not adequately balance the spectrum of risks, privacy needs, and preferences of consumers and other members of society. Nor may it fully account for the way agentic AI systems designed for beneficial purposes can be repurposed for harmful applications, creating tension between open research and security considerations. For these reasons, it’s important for state legislators to continue to do the good and hard work of bringing forward legislation that addresses six key areas: 

  1. Enhanced privacy protections, along with transparency and consumer-rights legislation governing the use, type, and scope of personal consumer data, including its use in training data sets for LLMs (while protecting trade secrets). (Consider Virginia’s proposed HB 2250, the Artificial Intelligence Training Data Transparency Act; Chapter 58, Section 59.1-607.)
  2. Digital-content authenticity and cross-platform interoperability. (Consider Virginia’s proposed HB 2121, the Digital Content Authenticity and Transparency Act.)
  3. Guardrails to promote algorithmic fairness that incorporate balanced accountability mechanisms and operational flexibility (such as Virginia’s HB 2094).
  4. State government infrastructure, cybersecurity, and the use of AI across agencies, systems, and services. (Consider models for AI use in government bodies: Texas’s HB 149 and Virginia’s proposed SB 1214.)
  5. Considerations for small and start-up businesses.
  6. Workforce development strategies to augment and upskill American workers to mitigate large-scale displacement and replacement.

 

Regardless of the legislative action chosen, it’s important to remember that privacy issues are foundational to AI. As a result, a highly promising path forward is for state and federal legislators to partner with industry to encourage the integration of governance and privacy considerations into the development of AI reasoning. Such a collaboration would promote understanding of and respect for diverse perspectives and positions, and influence the design, operation, and outcomes of AI systems and their corresponding agentic components. This approach is referred to as “AI safety by design,” a term borrowed from a similar engineering concept, and one further inspired by the Thorn and All Tech Is Human Safety by Design Initiative, the Safety by Design Lab founded by Dr. Tomomi Tanaka, the work of Dr. Joy Buolamwini of the Algorithmic Justice League, and many others. A common thread within the AI safety-by-design approach is the belief that integrating ethical constraints into AI reasoning during development can reduce unintended harmful outputs, compared with systems in which safety measures are added later.

An AI safety-by-design approach is meant to create systems with discerning reasoning capabilities that respect ethical and legal boundaries, systems capable of maintaining transparency about their decision-making processes while also protecting trade secrets and ensuring regulatory compliance. As a society, we are not quite there yet. Indeed, state and congressional actions must play out on parallel tracks to reveal the clearest, most sustainable path forward. We are in the process of synthesizing diverse and often competing interests to ensure that however we use AI in its myriad permutations, we do so with a human-centric approach.

 

Conclusion

The rise of AI, extending to agentic AI systems and AI agents and fueled by fast-paced advancements in reasoning capabilities, represents both a profound opportunity and a significant challenge for humanity. These technologies and their governance practices promise to transform how we live, work, play, and govern. From healthcare and scientific discovery to environmental and energy resources, from entertainment to education, agentic AI systems and AI agents within the broader AI ecosystem offer solutions to some of society’s most pressing problems.

What we do now matters. Proper governance requires intentional and persistent collaboration among technological communities, state and federal policymakers, educators, and civil society to develop frameworks that evolve alongside AI’s capabilities. While we must not stifle innovation, the integration of ethical considerations into the design of agentic AI and related systems, along with flexible oversight mechanisms, is essential. The evolution of agentic AI systems, reasoning capabilities, and governance frameworks presents some of the most consequential technological and policy issues of our time. How we respond in this moment will shape the trajectory of artificial intelligence, and its relationship to and impact on society, for decades to come.

About The Author

Delegate Michelle Lopes Maldonado, representing Virginia’s 20th District, is a former tech lawyer and a champion of AI, emerging technology, and data privacy. As the founding chair of the Technology & Innovation Caucus, she has been named a 2024 “Impact Maker” and “Legislative Champion.”
