Moore’s law has this idea that, like, computers for a long time - single-thread performance just got faster and faster and faster for free. But then physics intervened - power consumption and other things started to matter.

And so what ended up happening is we went from single-core computers to multi-core. Then we went to accelerators, right? This trend towards specialization of hardware is only going to continue. And so for years us programming language nerds and compiler people have been saying, “okay, well how do we tackle multi-core,” right? For a while it was “multi-core is the future, we have to get on top of this thing,” then it was “multi-core is the default, what are we doing with this thing,” and then it’s “there’s chips with hundreds of cores in them - what happened?” Right?

So I’m super inspired by the fact that, you know, in the face of this, those machine learning people invented this idea of a tensor, right? A tensor is an arithmetic and algebraic concept - an abstraction around a gigantic parallelizable data set. And because of that, and because of things like TensorFlow and PyTorch, we’re able to say: okay, express the math of the system. That enables you to do automatic differentiation; it enables, like, all these cool things. And it’s an abstract representation, and because you have that abstract representation you can now map it onto these parallel machines without having to control “okay, put that right here, put that right there, put that right there.” And this has enabled an explosion in AI compute accelerators - like all of this stuff.

(~1:02 in the recording)
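To make the “express the math of the system” point concrete, here’s a minimal sketch in PyTorch (one of the frameworks mentioned in the quote). The model, shapes, and names are just illustrative - the point is that the math is written once as tensor operations, and the framework differentiates it automatically.

```python
import torch

# Write the math of the system once, as operations on tensors.
W = torch.randn(3, 3, requires_grad=True)   # parameters we want gradients for
x = torch.randn(3)

y = torch.tanh(W @ x)   # the "math of the system" as a tensor expression
loss = y.sum()

# Automatic differentiation: the framework recorded the expression above,
# so it can compute d(loss)/dW without any hand-written calculus.
loss.backward()
print(W.grad)
```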

There’s so much going on in this stream of thought.

This idea - that matrices and tensors are innately parallel data structures, enabling parallel computing on GPUs or across machines - is very interesting.
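As a rough sketch of that (again assuming PyTorch, and a CUDA device only if one happens to be present): the same tensor expression runs unchanged on a CPU or a GPU, and the runtime decides how to spread the matrix multiply across the hardware - the program never places individual operations.

```python
import torch

def step(W, x):
    # The math is written against the tensor abstraction,
    # not against any particular chip or core layout.
    return torch.tanh(W @ x)

W = torch.randn(4096, 4096)
x = torch.randn(4096)

# If an accelerator is available, move the data there; the expression
# itself does not change.
if torch.cuda.is_available():
    W, x = W.cuda(), x.cuda()

y = step(W, x)   # the runtime parallelizes the matmul across whatever cores exist
```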

www.joshbeckman.org/notes/686547738