
The Gemini Report

Or, “Blaine read the Gemini Report, so you don’t have to.”

On December 6th, Google announced its new and “most powerful LLM model ever”: Gemini. Excitement and hype followed, then waves of scrutiny. Even allowing for Google’s cherry-picked demos, Gemini Ultra appears to beat GPT-4 on almost all benchmarks. That, along with its native multimodal capabilities, makes Gemini a genuine improvement on the state of the art, and worthy of our attention.

This week, Blaine will give a close reading of the Gemini report, with a healthy degree of skepticism. What technical details can we glean from Google DeepMind’s vague descriptions? Will an improvement from 67% to 74% on a benchmark be noticeable to users? Is the Gemini release healthy competition, or irresponsible accelerationism? Should the working professional prepare to cancel their GPT-4 subscription and jump ship? Let’s find out.

This report introduces a new family of multimodal models, Gemini, that exhibit remarkable capabilities across image, audio, video, and text understanding. The Gemini family consists of Ultra, Pro, and Nano sizes, suitable for applications ranging from complex reasoning tasks to on-device memory-constrained use-cases. Evaluation on a broad range of benchmarks shows that our most-capable Gemini Ultra model advances the state of the art in 30 of 32 of these benchmarks — notably being the first model to achieve human-expert performance on the well-studied exam benchmark MMLU, and improving the state of the art in every one of the 20 multimodal benchmarks we examined. We believe that the new capabilities of Gemini models in cross-modal reasoning and language understanding will enable a wide variety of use cases and we discuss our approach toward deploying them responsibly to users.

- “Gemini: A Family of Highly Capable Multimodal Models”

Previous
13 December

Progress Measures for Grokking via Mechanistic Interpretability

Next
10 January

Limits of Transformers on Compositionality