As research and innovation drive progress and U.S. competitiveness in fields like AI, biotech, and cybersecurity, the need for research security has never been more pressing. Today, researchers and institutions face a growing array of threats to their work, from intellectual property theft to nation-state espionage. In this session, our expert panel unpacked the latest risks and strategies for protecting research and innovation.
AI, as a dual-use technology, presents both significant opportunities and serious risks. It can enhance decision-making, streamline operations, and support strategic objectives, but it also introduces vulnerabilities, from technical flaws to gaps in governance and oversight.
“As AI technology continues to accelerate, the divergence between good outcomes and bad outcomes has never been wider.”
The Hon. Will Tobey
Center Director
Center for National Security and
International Studies
Los Alamos National Laboratory
A key concern raised by Mr. Nicholas Generous, Deputy Group Leader at Los Alamos National Laboratory, was AI’s capacity to magnify systemic weaknesses. “AI can scale harm massively,” he warned, especially in environments already exposed to risk. He likened this risk exposure to debates in biotechnology, such as gain-of-function research, where advances come with heightened security concerns. Insider threats, he added, are particularly troubling — those with privileged access to AI systems may exploit them in ways that are hard to detect and mitigate.
These challenges point to a critical shortcoming in current policy. Traditional regulatory models are no longer adequate for managing AI’s rapid pace of change. Rather than trying to keep up by regulating each new tool or application, Dr. Kevin Dixon, Program Director at Sandia National Laboratories, made the case that rules should be based on the outcomes of AI processes. He described this concept as an “outcomes-based risk framework.” Instead of constantly rewriting regulations for every emerging innovation, this model focuses on assessing the broader impacts of AI deployment. Such a shift would allow policymakers to address risk at the systems level, supporting more adaptable and durable oversight.
Dr. Dixon argued that securing AI is much like securing human researchers. “In my perspective, there is no difference between artificial intelligence and ‘real intelligence,’” he noted. Rather than treating AI as a separate category requiring its own ethical or regulatory framework, Dr. Dixon maintained that institutions should apply consistent accountability standards — whether a decision is made by a person or a machine. If an AI system causes harm, the deploying organization should be held responsible, just as it would be for failures resulting from human judgment.
Ms. Rebecca Jackson, Chief Privacy Officer and Senior Counsel at Sandia National Laboratories, extended this discussion by turning to the human side of research security. She defined research security as “a set of practices that preserves information and data in research applications and contexts,” and noted that many breaches are not the result of foreign actors or sophisticated attacks, but of avoidable internal mistakes. “There might be cybersecurity threats that we may be facing, but most of the breaches Sandia faces are due to human error,” she said.
“The AI landscape, and the legal landscape around it, is rapidly evolving.”
Ms. Rebecca Jackson
Chief Privacy Officer and Senior Counsel
Sandia National Laboratories
These internal vulnerabilities, Ms. Jackson warned, have consequences far beyond the immediate loss of information. Breaches that begin with simple human error can escalate into system-wide disruptions, especially when critical infrastructure is affected. “We need to think about the potential disruption of work from ransomware attacks. These threaten our supply chain,” she cautioned. In this way, cybersecurity is not only a digital issue, but also a logistical and organizational one. Protecting scientific institutions requires embedding security across both technical and operational domains.
To address these evolving risks, Ms. Jackson advocated for a shift toward governance that is flexible and grounded in risk-based thinking. Rather than relying on static rules, she pointed to frameworks like those developed by the National Institute of Standards and Technology (NIST) as tools for more adaptive decision-making. The NIST frameworks, including the widely adopted Cybersecurity Framework (CSF) and the Risk Management Framework (RMF), offer structured, modular approaches to identifying, assessing, and mitigating security risks. These frameworks are built around core principles such as continuous assessment, institutional accountability, and outcome-based decision-making — all of which are crucial in managing emerging technologies like AI.
“The real balance for AI rule makers is trying to balance security and innovation.”
Mr. Nicholas Generous
Deputy Group Leader
Los Alamos National Laboratory
Building on this institutional foundation, Ms. Jackson further argued that navigating the complexity of systems like artificial intelligence requires expanded training in research security. Institutional leaders must be able to understand how AI models function, ask critical questions, and anticipate downstream consequences. Mr. Generous reinforced this view, stressing the need to formalize best practices in AI use to ensure consistency and accountability in a fast-changing landscape. He also asserted that the government has a central role in safeguarding national security, particularly where AI technologies developed in the private sector may have dual-use applications with the potential to be misused or weaponized.
“The advantage the United States has is our likeminded friends. We cannot harness the full power of data and data tools all by ourselves.”
Dr. Kevin Dixon
Program Director
Sandia National Laboratories
In parallel, data security is integral to AI risk governance: internal and external threats alike demand the same adaptive frameworks and careful management. Data is a critical asset in AI systems, and its quality and protection are vital for the safe and responsible deployment of AI technologies. Dr. Dixon noted that any data of value to the United States quickly becomes a target for adversaries. Describing data as “intellectual capital,” he positioned it as a core asset to protect. The risk of exposure, whether through internal breaches or cyberattacks by foreign adversaries, underscores the need for robust data governance within AI risk management. Dr. Dixon advocated for smart data-sharing structures that safeguard sensitive information while enabling collaboration, urging institutions to manage carefully how they govern access to research outputs.
Robust data protection, however, must not tip into scientific isolation. Collaboration across borders is itself a competitive advantage — one that, if managed carefully, can coexist with strong information controls. Dr. Dixon framed collaboration with democratic allies as a critical asset, especially when facing global competitors with centralized, authoritarian research systems. He pointed to the United States’ partnerships with other free nations as an underused strength — one that not only expands access to research talent and infrastructure but also reinforces shared values around transparency and accountability. At the same time, he cautioned that collaboration must be purposeful and secure. “We need to be intentional with the information we choose to disseminate and the information we share,” he added, calling for tighter alignment between researchers and institutional bodies such as technology transfer offices and innovation hubs. These internal mechanisms, he argued, can help steward knowledge exchange in ways that balance openness with strategic restraint.
The need for careful deliberation in international cooperation becomes more complicated, however, when considering how emerging technologies evolve globally. Mr. Generous suggested that, in certain cases, engagement with adversarial nations is not only inevitable but may be strategically necessary. He pointed to fields like artificial intelligence, quantum computing, and synthetic biology, where technological advancements in one region can rapidly have global consequences. The nature of these technologies means that developments in one country — whether friend or adversary — can quickly influence global norms, security, and the overall pace of innovation. As such, Mr. Generous argued that engaging with strategic competitors in these areas can offer valuable insight into their technological advancements, potentially helping to prevent the misuse of these technologies on a global scale.
Whether the United States works with an ally or a rival, its international scientific partnerships must rest on thoughtful guidelines. Ms. Jackson argued that collaboration cannot be improvised — it must be governed by clearly defined agreements that articulate mutual expectations around data handling, security, and research ethics. Without shared protocols, she warned, even well-intentioned projects can falter due to misalignment between partners operating under different legal and cultural systems. Such guidelines for information sharing become even more important when the United States must extend its partnerships beyond its traditional allies.
As the panel wrapped up, there was agreement that artificial intelligence has the potential to radically reshape the social and economic order by lowering the barriers to innovation and enabling new players to challenge established industries. This heightened pace of disruption, while promising, also introduces new vulnerabilities within research institutions and the national security enterprise — vulnerabilities that require proactive and adaptive governance. Dr. Dixon warned that future breakthroughs may emerge in forms that elude traditional detection and oversight systems, making preparedness even more critical. Although the benefits of AI — from advanced models to collaborative research tools — are vast, their development and deployment must be approached with clear intent and strategic foresight.