A Viewpoint on the Frontiers in Science Lead Article
Organoid intelligence (OI): the new frontier in biocomputing and intelligence-in-a-dish

The sentient organoid?



The review—or perhaps White Paper—by Smirnova et al. (1) offers an incredibly useful orientation to the emerging world of organoids and the exciting opportunities ahead. This viewpoint picks up on three cardinal themes as viewed through the lens of the free energy principle and active inference, namely: the potential for organoids as sentient artefacts with (artificial) generalised intelligence, as experimental models in neurobiology, and as in vitro patients.

The first of these themes can be framed in terms of machine learning and engineered intelligence, i.e., the use of organoids to study sentient behaviour and to serve as active computers. Smirnova et al. start their overview by comparing current approaches in artificial intelligence and machine learning research with natural intelligence, noting the six orders of magnitude difference between in silico and in vivo computers (i.e., von Neumann architectures and functional brain architectures). These differences are expressed in terms of computational and thermodynamic efficiency, speaking to a change in the direction of travel for machine learning—a direction of travel that may be afforded by the introduction of organoids.
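To convey the scale of that gap, a back-of-the-envelope comparison may help; the wattages below are rough, commonly cited ballpark figures used purely for illustration, not values quoted from the lead article.

```latex
% Order-of-magnitude illustration only: ~20 W for the human brain and ~20 MW
% for an exascale supercomputer are rough ballpark figures, not numbers taken
% from Smirnova et al.
\frac{P_{\text{exascale machine}}}{P_{\text{human brain}}}
  \approx \frac{2 \times 10^{7}\ \mathrm{W}}{2 \times 10^{1}\ \mathrm{W}}
  = 10^{6} \quad \text{(roughly six orders of magnitude)}
```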

From a theoretical perspective, this can be usefully understood in terms of the physics of self-organisation. If one commits to the free energy principle, then one can describe the self-organisation of cells, organoids, and brains as minimising a variational (free energy) bound on surprise, namely, the negative log evidence or log marginal likelihood of sensory inputs (2). The self-organisation comes into play when the cell, organoid, or brain actively samples or selects its inputs (a.k.a. active inference). But why inference? Here, inference speaks to the fact that—to define this kind of universal objective function—one needs a model that defines the likelihood of any sensory exchanges with the environment. This model is variously known as an internal, world, or generative model and is entailed by the structure of, and message passing within, the system in question. This formalisation generalises things like reinforcement learning by absorbing preferred states into the prior preferences of the generative model (3). In this way, one can use variational calculus to derive a lower bound on log model evidence, known as the evidence lower bound (ELBO) in machine learning (4). The negative of this bound is the variational free energy, and the dynamics, plasticity, and morphogenesis of the system can then be described as a gradient flow on that free energy (5). So, why is this a useful formulation?
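Before turning to that question, it may help to make the bound explicit. The following is a minimal sketch in standard variational notation; the symbols q(s), p(o, s) and μ are generic choices made here for illustration and are not drawn from the lead article or this viewpoint.

```latex
% A minimal sketch in standard variational notation (q, p, o, s and mu are
% generic symbols chosen for illustration, not taken from the article).
% Variational free energy F is an upper bound on surprise (negative log
% evidence); its negative is the evidence lower bound (ELBO).
\begin{aligned}
F[q,o] &= \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o,s)\big] \\
       &= D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big] - \ln p(o)
          \;\geq\; -\ln p(o), \\[4pt]
\dot{\mu} &= -\,\partial_{\mu} F(\mu, o)
\end{aligned}
```

The last line expresses the gradient flow: internal states μ, which parameterise the approximate posterior q, descend the free energy landscape, and it is in this sense that dynamics, plasticity, and morphogenesis can all be read as free energy minimisation.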
