Possibilities for low-fidelity mind uploading | A Cosmic Echo

Submitted by Style Pass · 2024-06-15 19:30:06

About a year ago, researchers applied deep learning models to EEG data to decode imagined speech with 60% accuracy. They accomplished this with only 22 study participants contributing 1,000 datapoints each. Research applying deep learning to EEG is still early and the resulting models are not very powerful, but this paper presents an early version of what is essentially mind-reading technology. As scaling laws have taught us, when machine learning models express a capability in a buggy and unreliable manner, larger versions will express it strongly and reliably. So we can expect that the primary bottleneck for improving this technology—apart from some algorithmic considerations—is gathering data and compute at a large enough scale. That is, if we managed to collect millions of datapoints on imagined speech, we could plausibly build robust mind-reading technology.
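
To make the shape of the task concrete, here is a minimal sketch of imagined-speech decoding as a supervised classification problem. Everything here is invented for illustration: the channel count, window size, two-word vocabulary, and synthetic "EEG" features are stand-ins, and a nearest-centroid classifier stands in for the deep model in the actual paper.

```python
import math
import random

random.seed(0)

N_CHANNELS = 4        # hypothetical EEG channel count
WORDS = ["yes", "no"]  # toy imagined-speech vocabulary

def synth_window(word):
    """Synthetic stand-in for one window of per-channel EEG features.
    Each imagined word shifts the channel means slightly, buried in noise."""
    offset = 0.5 if word == "yes" else -0.5
    return [offset + random.gauss(0, 1) for _ in range(N_CHANNELS)]

def centroid(vectors):
    """Per-dimension mean of a list of feature vectors."""
    return [sum(xs) / len(xs) for xs in zip(*vectors)]

def train(dataset):
    """Nearest-centroid 'decoder': one mean feature vector per word."""
    return {w: centroid([x for x, y in dataset if y == w]) for w in WORDS}

def predict(model, x):
    """Classify a window as the word whose centroid is closest."""
    return min(WORDS, key=lambda w: math.dist(model[w], x))

# Roughly 1,000 labeled windows, echoing the per-participant scale in the study.
train_set = [(synth_window(w), w) for w in WORDS for _ in range(500)]
test_set = [(synth_window(w), w) for w in WORDS for _ in range(100)]

model = train(train_set)
accuracy = sum(predict(model, x) == y for x, y in test_set) / len(test_set)
```

The point of the toy setup is the data economics, not the classifier: every labeled window requires a person in a lab imagining a specific word on cue, which is exactly why the datasets stay small.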

But this sort of data/compute scaling has historically been difficult! In particular, it's very hard to get clean, labeled data at scale, unless there's some preexisting repository of such data that you can draw on. Instead, the best successes in deep learning—like language and image models—have come from finding vast pools of unlabeled data and building architectures that can leverage them.
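
One way to see why unlabeled data is so usable: a raw signal labels itself. In language modeling the "label" for each position is simply the next token, and the same trick works for time series. Below is a toy next-sample-prediction pretext task on a synthetic sequence (a noisy sine wave standing in for an unlabeled biosignal; the model, learning rate, and sequence are all made up for illustration):

```python
import math
import random

random.seed(1)

# Unlabeled 'signal': a noisy periodic sequence. No human annotation needed —
# the prediction target at each step is just the signal's own next sample.
signal = [math.sin(0.3 * t) + random.gauss(0, 0.1) for t in range(400)]

K = 3           # context length: predict sample t from samples t-K .. t-1
w = [0.0] * K   # weights of a tiny linear autoregressive model
lr = 0.01

def mse(weights):
    """Mean squared next-sample prediction error over the whole sequence."""
    errs = []
    for t in range(K, len(signal)):
        pred = sum(wi * xi for wi, xi in zip(weights, signal[t - K:t]))
        errs.append((pred - signal[t]) ** 2)
    return sum(errs) / len(errs)

before = mse(w)

# Plain SGD on the self-supervised next-sample loss.
for epoch in range(20):
    for t in range(K, len(signal)):
        ctx = signal[t - K:t]
        pred = sum(wi * xi for wi, xi in zip(w, ctx))
        err = pred - signal[t]
        w = [wi - lr * err * xi for wi, xi in zip(w, ctx)]

after = mse(w)
```

Training drives the prediction error well below its starting value with zero labels. The hope implicit in the post's argument is that a pretext task like this, run over vast amounts of unlabeled EEG, could do most of the representation-learning work, so that the scarce labeled imagined-speech data only has to fine-tune a small head on top.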
