Data becomes more valuable as it compounds. A constant stream of newly generated data makes the existing snapshot more useful because it reveals patterns that weren't clearly visible before. Organizations realize this; hence, they have analysts working to extract valuable insights from the data they collect and produce.
Although the value is clear, not everyone within an organization is equally happy to work with data. To software engineers, data is a burden. Or more accurately: state is a burden. Stateless code is easy to reason about because there are no external factors at play. External factors increase system complexity, and complexity makes the process of writing code slower.
Managing data is a necessary evil; software engineers carry the task of ensuring data is consistent, up-to-date, and available to the rest of the organization. For an organization to ship quickly, it's important to unburden engineers as much as possible, for a simple reason: the heavier the load on an engineer, the slower their project progresses. Software needs to be maintainable to ship fast, and dealing with data is part of that.
This article explores how to unburden software engineers when working with data in larger organizations. By making working with data easier, I believe organizations can ship high-quality software faster, with deeper integration between products. In the metaphor of compounding value, we change the formula to extract more value from the data already available.