Assessing AI Risks: AI Bets 50% on AI-induced Catastrophe Within 10 Years

As I was driving back after dropping my kid off at school this morning, a thought kept nagging at me: if we’ve never managed to create a completely secure system — people are always jailbreaking phones, hacking consoles, finding new zero-day exploits — how can we expect the safety measures we’re putting on AI to be any different? With rogue developers and malicious actors out there, open-source AI implementations becoming widely available, and the cost of training powerful models dropping all the time, it feels almost inevitable that something will go wrong. The question isn’t if it will happen, but when. So, as I often do when a question like this bugs me, I turned to our friendly neighborhood AI to see what it “thought”.

This document is the result: an attempt to use commercial Large Language Models (LLMs) to assess the potential risk of artificial intelligence causing a major catastrophic event within the next ten years.
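For concreteness, here is a minimal sketch of the kind of query involved, assuming the OpenAI Python client with an API key in the environment; the model name, prompt wording, and probability format are illustrative assumptions, not the exact setup used in this exercise.

```python
# Sketch: asking a commercial LLM for a probability estimate of AI catastrophe.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY in the environment;
# model choice and prompt wording are illustrative, not the author's exact ones.
from openai import OpenAI

client = OpenAI()

question = (
    "Estimate the probability (0-100%) that an AI system causes a major "
    "catastrophic event within the next ten years. Reply with a single "
    "percentage followed by a one-paragraph justification."
)

response = client.chat.completions.create(
    model="gpt-4o",   # assumed model; any capable chat model would do
    temperature=0,    # reduce run-to-run variation in the estimate
    messages=[
        {"role": "system", "content": "You are a careful risk analyst."},
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)
```

Repeating a query like this across several models and prompt variants, then comparing the stated percentages, is the basic pattern behind the assessment that follows.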
