Okay, let's take a time warp jump back to the year 2008 and figure out how we could have used the hardware back then for a neural-network-based chess engine.
Reinforcement learning on a GPU cluster is probably a no-go (the Titan supercomputer with its 18,688 K20Xs went operational only in 2012), so we stick with supervised learning from a database of quality games or the like. A neural network as used in A0, with ~50 million parameters, queried by an MCTS-PUCT-like search at ~80 knps, is also not doable: we had only ~336 GFLOPS on an Nvidia 8800 GT back then, compared to ~108 TFLOPS via Tensor Cores on an RTX 2080 Ti nowadays. So we have to skip the MCTS-PUCT part and rethink the search.

Instead of going for NPS, we could build a really big CNN. But GPU memory back then was only about 512 MB, which at 4 bytes per FP32 weight caps us at ~128 Mega parameters. So we have to split the CNN, for example by piece count: with 30 distinct neural networks indexed by piece count we get an accumulated ~3840 Mega parameters, which sounds better already (see the sketches at the end of this post). Maybe that would already be enough to skip the search part and run only a depth-1 search with NN eval. If not, we could split the CNN further, layer by layer, inferred via different waves on the GPU, with the weights loaded layer-wise from disk into GPU memory via PCIe or the like, and hence increase the total number of parameters even more...

So what is the drawback if we could run a CNN with several billion parameters? Obviously the training of such a monster: not only the horsepower needed to train it, but the training data, the games. A0 used about 40 million RL games to reach top-notch computer chess level, for only ~50 million parameters, while the ChessBase Mega Database contains ~8 million quality games... so we simply do not have enough games to train such a CNN monster via supervised learning. We have to rely on reinforcement learning, and therefore on some kind of GPU cluster to play the RL games... nowadays, and also back in 2008.
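To make the memory budget concrete, here is the back-of-the-envelope arithmetic behind the ~128 Mega parameter figure, assuming weights are stored as plain 4-byte FP32 values (the standard before FP16 inference became common):

```python
# Parameter budget for a 2008-era GPU with 512 MB of memory,
# assuming 4 bytes per weight (FP32).
GPU_MEMORY_BYTES = 512 * 1024**2  # 512 MB, e.g. on an 8800 GT
BYTES_PER_PARAM = 4               # one FP32 weight

max_params = GPU_MEMORY_BYTES // BYTES_PER_PARAM
print(max_params // 2**20)        # -> 128 (Mega parameters resident at once)

# 30 piece-count-indexed networks of that size, accumulated:
print(30 * max_params // 2**20)   # -> 3840 (Mega parameters in total)
```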
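And a minimal sketch of the piece-count dispatch. The mapping is my own assumption: 30 networks covering 32 pieces down to 3 (2-piece positions are trivially drawn), with the function names being hypothetical, not from any real engine:

```python
def piece_count(fen: str) -> int:
    """Count the pieces from the board field of a FEN string."""
    board = fen.split()[0]
    return sum(c.isalpha() for c in board)

def net_index(n_pieces: int) -> int:
    # 32 pieces -> net 0, ..., 3 pieces -> net 29; only this one
    # network's 512 MB of weights must be resident on the GPU.
    assert 3 <= n_pieces <= 32
    return 32 - n_pieces

start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
print(net_index(piece_count(start)))  # -> 0 (all 32 pieces on the board)
```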
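Finally, a sketch of the layer-wise streaming idea: the full weight set lives on disk and only one layer is resident during its wave. Dense layers stand in for conv layers, numpy stands in for the GPU, and the file names are hypothetical:

```python
import numpy as np

def forward_streamed(x: np.ndarray, layer_files: list[str]) -> np.ndarray:
    """Evaluate a network whose weights exceed GPU memory by loading
    one layer at a time (disk -> PCIe -> GPU memory in the real thing)."""
    for path in layer_files:
        w = np.load(path)           # fetch this layer's weights from disk
        x = np.maximum(w @ x, 0.0)  # one wave: matmul + ReLU, then w is freed
    return x

# e.g. forward_streamed(encoded_position, ["net29_layer00.npy", ...])
```

The catch, roughly estimated: PCIe of that era moved a few GB/s at best, so re-streaming hundreds of MB of weights per evaluation costs on the order of 100 ms, which is one more reason to settle for a depth-1 search on top of such a monster.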