
Biden issues ambitious executive order on AI

By Mark Lanterman

On October 30, 2023, the Biden administration issued its Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.1 Coming near the end of what many dubbed “the year of AI,” the order acknowledges both the risks and the manifold benefits of AI technology, as well as the need for governance and oversight to manage it as responsibly as possible. The order states:

“Artificial Intelligence must be safe and secure. Meeting this goal requires robust, reliable, repeatable, and standardized evaluations of AI systems, as well as policies, institutions, and, as appropriate, other mechanisms to test, understand, and mitigate risks from these systems before they are put to use… Testing and evaluations, including post-deployment performance monitoring, will help ensure that AI systems function as intended, are resilient against misuse or dangerous modifications, are ethically developed and operated in a secure manner, and are compliant with applicable Federal laws and policies. Finally, my Administration will help develop effective labeling and content provenance mechanisms, so that Americans are able to determine when content is generated using AI and when it is not.”

In the “misinformation” age, marked by deepfakes, voice cloning, and the unsettling idea that seeing should not always be believing, a labeling system that allows Americans to spot AI-generated content would certainly be a game-changer. Within a year, the government is expected to have a better idea of how best to identify and label “synthetic content produced by AI systems, and to establish the authenticity and provenance of digital content, both synthetic and not synthetic, produced by the Federal Government or on its behalf.” While these efforts appear to be directed primarily at digital content produced by the United States government, it is less clear how such measures would be applied to AI-produced content more generally.

The idea of an identification system is promising in light of current challenges, and the executive order signals progress in the right direction, but it remains to be seen how these objectives will come to fruition. For example, the order describes watermarking as “the act of embedding information, which is typically difficult to remove, into outputs created by AI.” However, as noted by MIT Technology Review, “The trouble is that technologies such as watermarks are still very much works in progress. There currently are no fully reliable ways to label text or investigate whether a piece of content was machine generated. AI detection tools are still easy to fool. The executive order also falls short of requiring industry players or government agencies to use these technologies.”2 For now, enabling Americans to distinguish AI-generated content from authentic content will still require a substantial amount of time and effort on several fronts.
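To make the labeling idea concrete, the sketch below is a minimal Python illustration, entirely hypothetical and not drawn from the executive order or any existing standard, of how a verifiable provenance tag might be attached to government-issued content. The signing key, function names, and sample text are invented for the example; real provenance frameworks, and the statistical watermarks researchers embed in AI-generated text itself, are considerably more complex and, as the MIT Technology Review critique above suggests, far from reliable today.

```python
# Hypothetical illustration only: attaching a provenance tag to a piece of
# content so a recipient can verify who issued it and that it was not altered.
# Real labeling and watermarking schemes are far more involved; this sketch
# demonstrates only the underlying idea of a verifiable label.
import hmac
import hashlib

SECRET_KEY = b"issuing-agency-signing-key"  # stand-in for a real, protected key


def label_content(text: str) -> str:
    """Return a provenance tag derived from the content and the issuer's key."""
    return hmac.new(SECRET_KEY, text.encode("utf-8"), hashlib.sha256).hexdigest()


def verify_content(text: str, tag: str) -> bool:
    """Check that the content still matches the tag it was issued with."""
    return hmac.compare_digest(label_content(text), tag)


if __name__ == "__main__":
    statement = "Official notice issued by a federal agency."
    tag = label_content(statement)
    print(verify_content(statement, tag))               # True: content is intact
    print(verify_content(statement + " (edited)", tag)) # False: content was altered
```

Even a scheme like this only confirms that labeled content is intact and came from the issuer; it says nothing about content that was never labeled in the first place, which is why detecting arbitrary AI-generated material remains an open problem.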

Furthermore, the order’s call for AI applications to be made resilient against misuse or dangerous modifications will be similarly difficult to satisfy. As is common with rapidly evolving technology, the methods for misusing or adapting it for nefarious purposes tend to develop at the same pace. Though the objectives of the order are welcome, and likely reflect the wishes of the American people when it comes to navigating a world infiltrated by “fake news,” they will be challenging to achieve. In the meantime, especially in the courtroom, policies and procedures should be considered for the here and now. From the deepfake defense (“That’s not me, prove it is”) to fake content being submitted as evidence, methodologies should be established for managing AI in the courtroom in the absence of widely available, standardized technological detection methods.

The executive order indicates that AI’s inherently double-edged nature is being acknowledged within government. However, legislation is still required to effectively combat its risks and maximize its benefits. Some of the proposed objectives remain elusive, and it is unclear when individuals can expect to consistently spot a deepfake in daily life, or at the very least be assured that the government communications they receive are genuine. That said, improved governance, safety protocols, transparency, and a commitment to testing are all positive goals that would help make better protections for consumers a reality.


NOTES

1 https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

2 https://www.technologyreview.com/2023/10/30/1082678/three-things-to-know-about-the-white-houses-executive-order-on-ai/



Mark Lanterman is CTO of Computer Forensic Services. A former member of the U.S. Secret Service Electronic Crimes Task Force, Mark has 28 years of security/forensic experience and has testified in more than 2,000 matters. He is a member of the Minnesota Lawyers Professional Responsibility Board.