Inline caching in our AST interpreter

Submitted by Style Pass, 2024-06-06 20:30:04

This is part of a series of blog posts about my experience pushing the performance of programming language interpreters written in Rust. For added context, read the start of last week’s blog post.

In short: we optimize AST (Abstract Syntax Tree) and BC (Bytecode) implementations, written in Rust, of a Smalltalk-based research language called SOM, in hopes of getting them fast enough to meaningfully compare them with other SOM implementations.

As a general rule, all of my changes to the original interpreter that led to speedups and don’t require code cleanups are present here. This week’s code lives in its own branch, since it relies on ugly code that I don’t want on the main branch (explanations further below).

…and benchmark results are obtained using ReBench; I then get performance increase/decrease numbers and cool graphs using ReBenchDB. In fact, you can check out ReBenchDB in action and all of my results for yourself here, where you can also admire the stupid names I give my git commits to amuse myself.

Inline caching is a very widespread optimization in dynamic programming language implementation. If you’re reading this kind of blog, chances are you’ve already heard of it. Say you have this code:
