
The Gladstone AI Reports: a draft AI Bill, and AI Trajectories

Recently, the AI Industry Foundation ran a two-part Masterclass on the Gladstone AI Report.

Gladstone’s press release claims the report is “an analysis of catastrophic AI risks, and a first-of-its-kind, government-wide Action Plan for what we can do about them”. Hidden in a footnote of the report was also a draft AI Bill. Do its recommendations align with other concrete U.S. government actions on AI, such as the NIST AI Risk Management Framework, the U.S. AISIC, and the U.S. Executive Order on AI?

Nestled alongside this report was the “Survey of AI Technologies and AI R&D Trajectories”, which examines the landscape of notable AI companies and warns about hedge-fund hijinks in the industry. But does any of it have teeth?

In this session, Blaine Rogers and Harold Godsoe will examine Gladstone AI’s report: the proposed draft U.S. AI Bill at its core, set in context, and the accompanying “Survey of AI Technologies and AI R&D Trajectories”.

The text discusses the paradoxical nature of artificial intelligence (AI) as both a significant driver of economic growth and innovation, particularly in critical fields like medicine, energy, and climate science, and a source of potential risks equivalent to weapons of mass destruction due to its capability for weaponization, accidents, or loss of control. It highlights the astonishing pace at which AI capabilities have evolved, from struggling to generate coherent text to enabling high-risk activities such as automated phishing, identity theft, autonomous hacking, and potential bioweapon design. This rapid advancement is primarily attributed to increases in computational power and data, which have led to larger and more capable AI models.

Concerns are raised about the development of artificial general intelligences (AGIs) that could match or exceed human performance across all tasks, posing unprecedented risks if weaponized. The potential for such AI systems to execute catastrophic attacks or operate with autonomy in ways harmful to human welfare is noted as plausible within the next few years. The proliferation of AI technology is facilitated by both proprietary innovations by major labs and the AI community's open-source culture, leading to an irreversible spread of weaponizable AI systems. The text emphasizes the complex challenge that advanced AI poses to national security and public policy, calling for immediate attention to manage its dual-use nature and prevent catastrophic outcomes.

— ChatGPT-4’s summary of Harris et al., “Survey of AI Technologies and AI R&D Trajectories”, Gladstone AI Inc., 2024

(If you have trouble accessing the link, please go to Gladstone AI’s website and scroll to the “Survey of AI R&D Trajectories”.)
