
Modern generative machine learning models are able to create realistic outputs far beyond their training data, such as photorealistic artwork, accurate protein structures or conversational text. These successes suggest that generative models learn to effectively parametrize and sample arbitrarily complex distributions. Beginning half a century ago, foundational works in nonlinear dynamics used tools from information theory for a similar purpose, namely, to infer properties of chaotic attractors from real-world time series. This Perspective article aims to connect these classical works to emerging themes in large-scale generative statistical learning. It focuses specifically on two classical problems: reconstructing dynamical manifolds given partial measurements, which parallels modern latent variable methods, and inferring minimal dynamical motifs underlying complicated data sets, which mirrors interpretability probes for trained models.
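The first classical problem, reconstructing a dynamical manifold from partial measurements, is the setting of delay-coordinate (Takens) embedding: a scalar measurement series is unfolded into vectors of time-lagged copies of itself. A minimal sketch in Python follows; the Lorenz system, the Euler integrator, and the delay and embedding-dimension choices are all illustrative assumptions, not anything prescribed by the article.

```python
import numpy as np

def lorenz_trajectory(n_steps=20000, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with a crude forward-Euler step (illustration only)."""
    xyz = np.array([1.0, 1.0, 1.0])
    out = np.empty((n_steps, 3))
    for i in range(n_steps):
        x, y, z = xyz
        xyz = xyz + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
        out[i] = xyz
    return out

def delay_embed(series, dim=3, tau=10):
    """Build delay vectors [s(t), s(t + tau), ..., s(t + (dim - 1) * tau)]."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(dim)])

traj = lorenz_trajectory()
x = traj[:, 0]                      # partial measurement: only the x-coordinate
embedded = delay_embed(x, dim=3, tau=10)
print(embedded.shape)               # one 3-D point per delay vector
```

Under mild conditions, Takens' theorem guarantees that such delay vectors trace out a diffeomorphic copy of the original attractor, which is the sense in which a single measured coordinate can stand in for the full latent state.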

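The second classical problem, inferring minimal dynamical motifs, is exemplified by the symbolic dynamics of one-dimensional maps: a chaotic orbit is coarse-grained into a symbol stream, and block entropies of that stream summarize the underlying dynamics. The sketch below, an illustrative assumption rather than any specific published procedure, uses the logistic map at r = 4 with the binary partition at x = 1/2, where the entropy rate is known to be ln 2 nats per symbol.

```python
import numpy as np
from collections import Counter

def logistic_symbols(n=200_000, r=4.0, x0=0.3):
    """Iterate the logistic map and coarse-grain each point into a binary
    symbol: 0 if x < 1/2, else 1 (a generating partition at r = 4)."""
    x = x0
    symbols = np.empty(n, dtype=np.int8)
    for i in range(n):
        x = r * x * (1.0 - x)
        symbols[i] = 1 if x >= 0.5 else 0
    return symbols

def block_entropy_rate(symbols, block=8):
    """Shannon entropy of length-`block` words divided by the block length:
    a crude finite-sample estimate of the entropy rate (in nats)."""
    words = [tuple(symbols[i : i + block]) for i in range(len(symbols) - block + 1)]
    counts = np.array(list(Counter(words).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p)) / block

h = block_entropy_rate(logistic_symbols())
print(h)    # should sit near ln 2 ≈ 0.693 for the fully chaotic logistic map
```

The point of such coarse-graining is that a handful of symbol statistics can capture the essential structure of a complicated orbit, which is the classical analogue of probing a trained generative model for the minimal mechanisms it has learned.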

Read more nature.com/a...