People are worried about Large Language Models


A large, influential contingent of Silicon Valley believes that large language models could pose an existential threat to humanity. (A different, less hip contingent also believes LLMs can be harmful, but less existentially so: they worry LLMs can be used by bad actors for misinformation, or can further human biases.) When a new product might have negative externalities, one way we as a society deal with this is via regulation. Nobody expects the federal government to take any meaningful action here in the near future; everybody expects the EU to take action, but at this point we’ve all written down Europe’s growth prospects to 0, so no one particularly cares. That just leaves California. Over the past year, momentum for a CA bill regulating LLMs has steadily built, and we’re now near the finish line.

SB 1047, authored by State Senator Scott Wiener, sets up thresholds for the cost and number of FLOPs needed to train a model, and requires developers of models that exceed these thresholds to provide “reasonable assurance” (an emergency stop button, safety protocols, incident reporting) that the models will avoid “critical harm” (>$500 million in damage). The bill has passed both state houses and is awaiting signature or veto from our governor, Gavin Newsom.
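
To make the coverage mechanics concrete, here is a minimal Python sketch of the threshold test, assuming the thresholds in the bill text (more than 10^26 training operations and more than $100 million in training compute cost). The function and constant names are my own, and this is an illustration, not a statement of how the law would actually be applied.

# Minimal sketch of SB 1047's "covered model" test; thresholds are taken
# from the bill text (more than 10**26 integer or floating-point training
# operations AND more than $100 million in training compute cost).
# All names here are hypothetical, invented for this illustration.

COMPUTE_THRESHOLD_OPS = 1e26        # training operations threshold
COST_THRESHOLD_USD = 100_000_000    # estimated training compute cost threshold

def is_covered_model(training_ops: float, training_cost_usd: float) -> bool:
    """Return True if a model would exceed both SB 1047 thresholds."""
    return (training_ops > COMPUTE_THRESHOLD_OPS
            and training_cost_usd > COST_THRESHOLD_USD)

# A frontier-scale run: 3e26 operations at an estimated $150M cost.
print(is_covered_model(3e26, 150_000_000))   # True -> covered
# A run below the compute threshold is not covered, regardless of cost.
print(is_covered_model(5e25, 150_000_000))   # False

Note that both conditions must hold, which is why only the largest frontier training runs would fall under the bill.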
