It Looks Like You’re Trying To Take Over The World

submitted by
Style Pass
2022-06-22 19:30:07

Fictional short story about Clippy & AI hard takeoff scenarios grounded in contemporary ML scaling, self-supervised learning, reinforcement learning, and meta-learning research literature.


It might help to imagine a hard takeoff scenario using solely known sorts of NN & scaling effects… Below is a story which may help stretch your imagination and defamiliarize the 2022 state of machine learning.

To read the alternate annotated version of this story, scroll to the end or manually disable ‘reader-mode’ in the theme toggle in the upper-right corner. Note: reader-mode requires JavaScript.

Specifically, a MoogleBook researcher has gotten a pull request from Reviewer #2 on his new paper on evolutionary search in auto-ML, asking for error bars on the auto-ML hyperparameter sensitivity, like larger batch sizes, because more can be different and there’s high variance in the old runs, with a few anomalously high gains of function. (“Really? Really? That’s what you’re worried about?”) He can’t see why to worry, and wonders what sins he committed to deserve this asshole Chinese (given the Engrish) reviewer, as he wearily kicks off yet another HQU experiment…
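As an aside for readers unfamiliar with the reviewer’s request: “error bars on hyperparameter sensitivity” typically means quantifying run-to-run variance across repeated experiments. A minimal illustrative sketch (all scores and thresholds here are hypothetical, not from the story) might bootstrap a confidence interval over per-run gains and flag the anomalously high outliers that worry Reviewer #2:

```python
# Illustrative sketch only: error bars (a percentile-bootstrap confidence
# interval) over per-run scores from repeated auto-ML experiments.
# All numbers below are made up for illustration.
import random
import statistics


def bootstrap_ci(scores, n_resamples=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean of `scores`."""
    rng = random.Random(seed)
    means = sorted(
        statistics.fmean(rng.choices(scores, k=len(scores)))
        for _ in range(n_resamples)
    )
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi


# Hypothetical per-run "gain" scores: mostly similar, one anomalously high.
runs = [0.61, 0.59, 0.63, 0.60, 0.58, 0.62, 0.60, 0.97]
mean = statistics.fmean(runs)
lo, hi = bootstrap_ci(runs)
print(f"mean gain {mean:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")

# Flag runs far from the mean (here, > 2 sample standard deviations):
# the high-variance outliers the reviewer is nervous about.
sd = statistics.stdev(runs)
outliers = [s for s in runs if abs(s - mean) > 2 * sd]
```

With a heavy-tailed outlier like the 0.97 run, the bootstrap interval is wide, which is exactly the point of asking for error bars before trusting the headline number.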
