SCALE GPGPU Programming Language


It seems several companies agree with us: we've been contacted by over a dozen organizations about commercial deals and partnerships, all while we're still in beta! We weren't quite expecting this level of interest, but we're happy to report that talks are underway, and we look forward to sharing more once the details have been ironed out.

In the same vein, the consulting side of our business has been booming as well. We do a little something we like to call, internally, "cost-imization". In reality this is performance and compute optimization, but the end goal is typically to reduce costs, and the name has a nice ring to it.

Sometimes this means optimizing the software running an AI model, without changing the model's behavior, to massively reduce cloud compute costs. Other times, we're reworking older software to resolve longstanding technical debt, freeing up large amounts of compute to "do the thing". Cost-imization isn't always the goal, though; sometimes it's just about speed. In cases where every millisecond counts, we squeeze every last bit of performance we can out of the software, bringing latencies down as low as they can go.

We enjoy doing this kind of work because our team is primarily composed of performance specialists and GPU experts with deep experience building low-latency software. That description doesn't really do the team justice, but going into it would be a blog post all on its own. Suffice it to say that the intertwined, complementary specialties of our developers are the whole reason we set out to build SCALE in the first place: we had the right people for the job.
