Silicon Valley is debating if AI weapons should be allowed to decide to kill

submitted by
Style Pass
2024-10-11 19:00:04

In late September, Shield AI cofounder Brandon Tseng swore that weapons in the U.S. would never be fully autonomous — meaning an AI algorithm would make the final decision to kill someone. “Congress doesn’t want that,” the defense tech founder told TechCrunch. “No one wants that.” 

But Tseng spoke too soon. Five days later, Anduril cofounder Palmer Luckey expressed an openness to autonomous weapons — or at least a heavy skepticism of arguments against them. The U.S.’s adversaries “use phrases that sound really good in a sound bite: Well, can’t you agree that a robot should never be able to decide who lives and dies?” Luckey said during a talk earlier this month at Pepperdine University. “And my point to them is, where’s the moral high ground in a landmine that can’t tell the difference between a school bus full of kids and a Russian tank?” 

When asked for further comment, Shannon Prior, a spokesperson for Anduril, said that Luckey didn't mean robots should be programmed to kill people on their own, only that he was concerned about "bad people using bad AI."
