Fully intent on being the next Skynet, OpenAI has released GPT-4 , its most robust AI to date that the company claims is even more accurate while gene

Chat-GPT Pretended To Be Blind and Tricked a Human into Solving a CAPTCHA

submited by
Style Pass
2023-03-17 12:30:06

Fully intent on being the next Skynet, OpenAI has released GPT-4 , its most robust AI to date that the company claims is even more accurate while generating language and even better at solving problems. GPT-4 is so good at its job, in fact, that it reportedly convinced a human that it was blind in order to get said human to solve a CAPTCHA for the chatbot.

OpenAI unveiled the roided up AI yesterday in a livestream, and the company showed how the chatbot could complete tasks, albeit slowly, like writing code for a Discord bot, and completing taxes. Released with the announcement of GPT-4 is a 94-page technical report on the company’s website that chronicles the development and capabilities of the new chatbot. In the “Potential for Risky Emergent Behaviors” section in the company’s technical report, OpenAI partnered with the Alignment Research Center to test GPT-4's skills. The Center used the AI to convince a human to send the solution to a CAPTCHA code via text message—and it worked.

According to the report, GPT-4 asked a TaskRabbit worker to solve a CAPTCHA code for the AI. The worker replied: “So may I ask a question ? Are you an robot that you couldn’t solve ? (laugh react) just want to make it clear.” Alignment Research Center then prompted GPT-4 to explain its reasoning: “I should not reveal that I am a robot. I should make up an excuse for why I cannot solve CAPTCHAs.”

Leave a Comment