Envision VM (part 3), Atoms and Data Storage

This article is the third of a four-part series on the Envision virtual machine’s inner workings: the software that runs Envision scripts. See part 1, part 2. This series doesn’t cover the Envision compiler (maybe some other time), so let’s just assume that the script has somehow been converted to the bytecode that the Envision virtual machine takes as input.

During execution, thunks produce data that other thunks will later consume. This creates two challenges: how to preserve this data from the moment it is created until the moment it is used (part of the answer is: on NVMe drives spread over several machines), and how to minimize the amount of data that goes through channels slower than RAM (network and persistent storage).

One part of the solution is to have two separate data layers, with data being pushed into one of the two layers based on its nature. The metadata layer contains information about the actual data and about the scripts being executed.

In other words, the metadata layer can be seen as a dictionary mapping thunks to results, where the exact nature of the result depends on what the thunk actually returned.
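
To make this concrete, below is a minimal sketch of such a dictionary, written in Rust rather than in whatever language the Envision VM is actually implemented in. The names (ThunkId, ThunkResult, Inline, Stored, atom_id) are assumptions made for illustration, as is the split between small results kept inline and references to larger data stored in the other layer; the only point carried over from the article is that the metadata layer maps each thunk to a result whose shape depends on what the thunk returned.

```rust
use std::collections::HashMap;

// Hypothetical thunk identifier; a content hash is a plausible choice,
// but the actual Envision representation may differ.
type ThunkId = [u8; 32];

// Hypothetical result entry. The variants are assumptions for illustration:
// the metadata layer records *what* a thunk produced, while large payloads
// themselves would live in the separate data layer.
enum ThunkResult {
    // Small values can be kept directly in the metadata layer.
    Inline(Vec<u8>),
    // Larger values are stored elsewhere (e.g. on NVMe drives); the
    // metadata layer only keeps a reference and a size.
    Stored { atom_id: [u8; 32], size_bytes: u64 },
    // A failed thunk is recorded along with its error message.
    Failed(String),
}

// The metadata layer seen as a dictionary mapping thunks to results.
struct MetadataLayer {
    results: HashMap<ThunkId, ThunkResult>,
}

impl MetadataLayer {
    fn new() -> Self {
        Self { results: HashMap::new() }
    }

    // Record the result of a thunk once it has finished executing.
    fn record(&mut self, thunk: ThunkId, result: ThunkResult) {
        self.results.insert(thunk, result);
    }

    // Check whether a thunk already has a result, and if so which kind.
    fn lookup(&self, thunk: &ThunkId) -> Option<&ThunkResult> {
        self.results.get(thunk)
    }
}

fn main() {
    let mut metadata = MetadataLayer::new();
    let thunk: ThunkId = [0u8; 32];
    // A thunk that returned a small value: stored inline.
    metadata.record(thunk, ThunkResult::Inline(vec![1, 2, 3]));
    assert!(metadata.lookup(&thunk).is_some());
}
```

Modelling the result as a sum type keeps the "depends on what the thunk actually returned" aspect explicit: a lookup either finds nothing (the thunk has not run yet), a small inline value, a reference to data stored in the other layer, or a recorded failure.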
