
AIIF Weekly Masterclass: AI Use

Welcome to the AI Industry Foundation’s Weekly Masterclass.

The AIIF will be running Weekly Masterclasses to bring AI Executives up to speed. Each Masterclass provides a structured distillation of the technical and non-technical aspects of cutting-edge AI, preparing you to hold conversations at the frontier of AI and to make business decisions based on those conversations.

This week, we’ll be focusing on AI Use. How should you think about modern generative AI in order to get the best use out of it?

Good mental models of generative AI systems are hard to find. Some people describe GPT as a “code interpreter for natural language”; others describe it as a surprisingly worldly five-year-old. A bad mental model leads to bad inferences about what generative AI can do. Sometimes GPT gives the wrong answer to a question; if you think of it as a code interpreter you’ll expect the same wrong answer every time, when in fact slightly rephrasing the prompt can elicit the right one. Sometimes a generative AI refuses an innocuous request; if you think of it as a knowledgeable five-year-old you might try asking again more politely, when it is often easier to get past the safeguards by starting or ending your query with a string of apparently random words.
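To make the first point concrete, here is a minimal sketch of asking the same question several times at a nonzero sampling temperature, and again with a rephrased prompt. It assumes the OpenAI Python client, and the model name is purely illustrative. Unlike a deterministic code interpreter, the model can return different answers on different runs:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Two phrasings of the same question; the second sometimes succeeds
# where the first fails.
prompts = [
    "What is the 10th Fibonacci number?",
    "Write out the Fibonacci sequence to its 10th term, then state the 10th term.",
]

for prompt in prompts:
    for attempt in range(3):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name; use whatever you have access to
            messages=[{"role": "user", "content": prompt}],
            temperature=1.0,  # sampling enabled: answers can differ run to run
        )
        print(f"Attempt {attempt + 1} for {prompt!r}:")
        print(response.choices[0].message.content)
        print()
```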

A useful metaphor that we find ourselves returning to time and time again is that of the simulator. Popularized by the pseudonymous janus in late 2022, the simulator is posed as an alternative to Bostrom’s Oracle, Genie, and Sovereign. A simulator is a piece of software designed to mimic the evolution of another system: a physics simulator might simulate two billiard balls colliding, or sand falling through an hourglass. The billiard balls and the hourglass are the “simulacra”, the things being simulated. The metaphor carries over neatly to generative AIs like GPT: GPT simulates text, and the documents it produces are its simulacra. Framed this way, we can cleanly disentangle questions about what capabilities GPT has and make good predictions about which tasks it is well suited to.
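The distinction between simulator and simulacrum is easy to see in practice: one fixed model, conditioned on different prompts, rolls out very different simulacra. Here is a toy sketch, assuming the Hugging Face transformers library and the small GPT-2 model (any autoregressive language model behaves analogously; the prompts are invented for illustration):

```python
from transformers import pipeline, set_seed

# One simulator, many simulacra: the same model, conditioned on different
# prompts, rolls out very different "characters".
simulator = pipeline("text-generation", model="gpt2")
set_seed(0)

simulacra_prompts = [
    "Dr. Alvarez, the ship's physicist, explained her plan:",  # an agentic simulacrum
    "SERVER ERROR LOG\n[00:00:01] ",                           # a non-agentic one
]

for prompt in simulacra_prompts:
    rollout = simulator(prompt, max_new_tokens=40, do_sample=True)
    print(rollout[0]["generated_text"])
    print("---")
```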

In this session, we’ll review the simulator metaphor as it applies to modern generative AI, with special attention to large language models, and see how thinking of generative AIs as simulators can help us decide which tasks to delegate to AI, how to work around its flaws, and how best to spend our API credits.

TL;DR: Self-supervised learning may create AGI or its foundation. What would that look like?

Unlike the limit of RL, the limit of self-supervised learning has received surprisingly little conceptual attention, and recent progress has made deconfusion in this domain more pressing.

Existing AI taxonomies either fail to capture important properties of self-supervised models or lead to confusing propositions. For instance, GPT policies do not seem globally agentic, yet can be conditioned to behave in goal-directed ways. This post describes a frame that enables more natural reasoning about properties like agency: GPT, insofar as it is inner-aligned, is a simulator which can simulate agentic and non-agentic simulacra.

The purpose of this post is to capture these objects in words so GPT can reference them and provide a better foundation for understanding them.

— “Simulators”, LessWrong, 2022-09-02

Previous event (13 March): Fundamental Limitations of Reinforcement Learning from Human Feedback

Next event (27 March): Reinforcement Learning from AI Feedback