Interested in AI Safety?
We run irregular social events where you can learn more, chat to interested people, network, and generally have a good time. Scroll down to see some of the events that we or our sister organisations have organised. Subscribe to our mailing list to stay up to date.
TAIS 2024
The Technical AI Safety Conference will bring together specialists in AI and technical safety to share their research and benefit from each other's expertise. Attendees will be able to join cutting-edge conversations throughout the conference and network with some of the brightest minds in AI Safety.
ACX Tokyo: The Extinction Tournament
ACX Tokyo is a thriving rationalish discussion group based in Nakameguro. They meet once a month, and this month’s topic is inside view / outside view conflicts around AI Safety.
ALIFE 2023: (In)human Values and Artificial Agency
ALIFE 2023 will feature a special session and workshop on alignment-related topics. More information can be found at https://humanvaluesandartificialagency.com/workshop/.
This event is organised by Simon McGregor, Rory Greig and Chris Buckley, and is unaffiliated with AI Safety 東京.
Conversational AI Safety: Is AI as dangerous as pandemics?
Join us for an afternoon of mostly unstructured casual conversation. We’ll provide coffee, snacks, and prompts to get you talking to some fresh faces about new ideas:
What are governments currently doing to prevent…
AI risk?
Pandemics?
Nuclear war?
What more could we be doing to mitigate those risks?
Is existential risk from AI under-prioritised, over-prioritised, or about right?
Conversational AI Safety: GPT-4
Join us for an afternoon of mostly unstructured casual conversation. We’ll provide coffee, snacks, and prompts to get you talking to some fresh faces about new ideas:
What have you used GPT for? Proofreading, programming, poetry, pottery?
How reliable have you found GPT?
What new regulations and laws are being passed in response to GPT? Why?
Foreseeable risks from GPT: Mass misinformation? Technological unemployment? Everyone gets an A in their coursework?
FLI’s open letter calling for a pause on training AI systems more powerful than GPT-4
How can we improve on GPT? What is it missing?
TEDxOtemachi
Blaine William Rogers, founder and organiser of AI Safety 東京, will be speaking at TEDxOtemachi on large language models and why they can’t (yet) be trusted.