Flexible Governance Urged to Curb AI Biosecurity Risks

American Association for the Advancement of Science (AAAS)

In a Policy Forum, Doni Bloomfield and colleagues discuss the need for expanded, yet tailored and flexible, governance of the biological data used to develop powerful artificial intelligence (AI) models. Rapidly advancing AI systems trained on biological data have enabled researchers to design new molecules, predict protein structure and function, and probe vast, highly complex biological datasets for novel insights that could greatly expand our understanding of nature and human health. However, these same tools could be misused for dangerous purposes, such as designing harmful pathogens or generating genetic sequences that bypass safety checks. Despite these widely recognized risks, current governance is severely lacking, and increasingly powerful models are often released without safety evaluation.

Here, Bloomfield et al. discuss how governance of biological data could mitigate the potential risks of biological AI systems without impeding their research potential. Just as researchers accept limits on access to personal information in genetic datasets in order to protect privacy without halting research, the authors say, similar frameworks could restrict only a narrow class of especially sensitive pathogen data while leaving most scientific data openly available. Such targeted controls would make it harder for malicious actors to obtain the rare and costly datasets needed to train dangerous models, without significantly impeding legitimate research, especially if paired with secure digital research environments.

Bloomfield et al. argue, however, that this oversight should remain limited, targeted, and flexible so that governance frameworks can adapt to keep pace with both technological and scientific advances. Moreover, to prevent abuse or excessive bureaucratic control, the research community should be able to appeal data classifications, and governing agencies should commit to fast, transparent review processes so that much-needed safety measures do not become obstacles to legitimate science.

"Formalizing a system of data access would allow researchers to scrutinize and develop these controls and would give scientists and companies clarity in an environment that is currently somewhat unpredictable," write the authors. "Starting this work will also allow scientists and governments to learn more about the nature of AI risk and revise data-access controls in light of tangible evidence, rather than guesswork."
