Eta Chess

I see...

I see three paths which neural networks for chess may take in the future...

1. With more processing power available, network sizes will grow, and at one end of the spectrum we will have really big nets which drop the search part altogether and perform only a depth-1 evaluation.

2. With lower-latency neural network accelerators, at the other end we will see engines with multiple, smaller neural networks which perform deeper AlphaBeta searches.

3. Something in between 1. and 2.

Eta - v0500

Another solution would be to perform a Best-First-MiniMax search on the CPU and do the ANN evaluation on the GPU. I could couple the nodes of a qsearch at the leaf nodes to be evaluated in one batch to gain some nps... that's pretty much how A0 and LC0 work.
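A minimal sketch of that batching idea in C; the Position type and the gpu_eval_batch() stub are hypothetical stand-ins for the real board representation and the host-device inference call:

    #include <stdio.h>

    #define BATCH_SIZE 256

    /* Hypothetical board representation. */
    typedef struct { unsigned long long bb[8]; } Position;

    /* Stub for the real host->device inference: one round trip per batch. */
    static void gpu_eval_batch(Position *positions, int n)
    {
        printf("evaluating batch of %d positions on the GPU\n", n);
    }

    static Position batch[BATCH_SIZE];
    static int fill = 0;

    /* The best-first search hands over each qsearch leaf here instead of
     * evaluating it immediately; scores come back one batch at a time. */
    static void push_leaf(const Position *p)
    {
        batch[fill++] = *p;
        if (fill == BATCH_SIZE) {
            gpu_eval_batch(batch, fill);  /* evaluate 256 leaves in one call */
            fill = 0;
        }
    }

    int main(void)
    {
        Position p = {{0}};
        for (int i = 0; i < 1000; i++)
            push_leaf(&p);                 /* -> three full batches */
        if (fill > 0)
            gpu_eval_batch(batch, fill);   /* flush the 232 leftover leaves */
        return 0;
    }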

Eta - v0401 - nested parallelism

Running 1024 threads per worker will probably not work, due to register file size limitations. With the OpenCL 2.x feature 'nested parallelism' it could be possible to run one thread for the best-first search, which calls another kernel with 64 threads for move generation and another kernel with 1024 threads for ANN inference. But current Nvidia and older AMD devices support only OpenCL 1.x, so this is not a real option.
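For illustration, a sketch of how that could look with OpenCL 2.x device-side enqueue; kernel and argument names are made up, the child bodies are placeholders, and it needs to be built with -cl-std=CL2.0:

    /* One best-first driver work-item launches child NDRanges on the
     * device itself, without any host round trip. */
    __kernel void bestfirst_driver(__global int *tree)
    {
        queue_t q = get_default_queue();

        /* 64 work-items for move generation */
        enqueue_kernel(q, CLK_ENQUEUE_FLAGS_NO_WAIT, ndrange_1D(64),
                       ^{ /* generate moves for the selected node */ });

        /* 1024 work-items for ANN inference */
        enqueue_kernel(q, CLK_ENQUEUE_FLAGS_NO_WAIT, ndrange_1D(1024),
                       ^{ /* evaluate the resulting positions */ });
    }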

Eta - v0302 - batches

LC0 uses batch sizes of 256 or 512 to utilize a GPU. I did a quick bench with 256 positions to be evaluated per run...

4096 nps on Nvidia GTX 750
16640 nps on AMD Fury X
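For scale, taking those numbers at face value: 4096 nps at a batch size of 256 is only 16 batches per second, i.e. roughly 62 ms per host-device round trip including inference.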

Note that an NN cache could double these values, but this is still far less than I could achieve when doing all computations directly on the GPU, without host-device interaction.

And waiting for 256 positions to be evaluated at once goes against the serial nature of AlphaBeta search...

Eta - v0301 - host-device latencies

One reason GPUs are not used as accelerators for chess engines is the host-device latency.

AFAIK the latencies are in the range of 5 to 10, or even hundreds, of microseconds, so you can get at most ~200K kernel calls per second per thread (1 s / 5 µs), even if the GPU is able to process its task much faster.

Therefore, Eta v0300, a CPU-based AlphaBeta search with the GPU as ANN accelerator, is flawed by design.
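A minimal host-side microbenchmark sketch to measure that launch latency (error handling omitted; clCreateCommandQueue is the plain OpenCL 1.x entry point):

    /* Enqueue an empty kernel n times with a blocking clFinish per launch,
     * as a strictly serial search would, and report launches per second. */
    #include <stdio.h>
    #include <time.h>
    #include <CL/cl.h>

    static const char *src = "__kernel void noop() { }";

    int main(void)
    {
        cl_platform_id plat;
        cl_device_id dev;
        clGetPlatformIDs(1, &plat, NULL);
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
        cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
        clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "noop", NULL);

        const size_t gws = 64;
        const int n = 100000;
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < n; i++) {
            clEnqueueNDRangeKernel(q, k, 1, NULL, &gws, NULL, 0, NULL, NULL);
            clFinish(q);  /* force one full host-device round trip */
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%.0f kernel launches per second\n", n / s);
        return 0;
    }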

Eta - v0301

Back to CPU-based AlphaBeta search with GPU ANN evaluation.

On a Nvidia GTX 750 I achieve about 2 Knps with one single CPU thread, and up to ~20 Knps with 256 parallel CPU threads.

This sounds far too slow for an AlphaBeta search...

Eta - v0400 - benchs

Okay, some further, not-so-quick-n-dirty benchmarks showed

~240 nps for Nvidia GTX 750 and
~120 nps for AMD Fury X

per worker.

I assume about 200 nps per worker on modern GPUs.

While an NN cache could double these values, this is IMO a bit too slow for the intended search algorithm: with about 36x10 ≈ 360 qsearch positions on average per expanded node, one worker at a few hundred nps would need about a second to get a node score.

Back to pen n paper.

Eta - v0400 - Feature List

wip...will take some time...

* GPGPU device based
- host handles only the IO; search and ANN inference run on the GPU
- GPU computation will be limited by node count to about 1 second per
  repeated iteration, to avoid any system timeouts

* parallel BestFirstMiniMax-Search on GPU
- game tree held in GPU memory
- best node selected via score + UCT formula (visit count based; see the
  sketch after this list)
- AlphaBeta Q-Search performed at leaf nodes to get a node score

* multiple small MLP neural networks
- about 4 million weights per network
- 30 networks in total, split by piece count

* trained via TD-leaf on PGN games
- 6/7-men EGTBs could be used for training?

* 64 GPU threads are coupled to one worker
- used during move gen, move pick and ANN eval in parallel
- 64 to 2048 workers in total, depending on GPU core count

Some quick and dirty benchmarks showed that with this design ~1 Knps per worker is possible.
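For the 'score + UCT' node selection named in the list above, here is a minimal CPU-side sketch; the node layout and the exploration constant are my assumptions, and in the real design the selection runs over the tree in GPU memory:

    #include <math.h>
    #include <stdio.h>

    /* Hypothetical node layout; the real tree lives in GPU memory. */
    typedef struct {
        float score;   /* backed-up minimax/qsearch score, parent's view */
        int   visits;  /* visit count */
    } Node;

    /* Pick the child maximizing score + exploration bonus; C_UCT is an
     * assumed tuning constant, and unvisited children are tried first. */
    static int select_child(const Node *child, int n, int parent_visits)
    {
        const float C_UCT = 1.0f;
        int best = 0;
        float best_val = -INFINITY;
        for (int i = 0; i < n; i++) {
            float val = (child[i].visits > 0)
                ? child[i].score
                  + C_UCT * sqrtf(logf((float)parent_visits) / (float)child[i].visits)
                : INFINITY;
            if (val > best_val) { best_val = val; best = i; }
        }
        return best;
    }

    int main(void)
    {
        Node children[3] = { {0.10f, 8}, {0.25f, 2}, {-0.05f, 1} };
        printf("selected child: %d\n", select_child(children, 3, 11));
        return 0;
    }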

Eta - v0200

This was an attempt to use Zeta v099, a GPU AlphaBeta-search with hundreds of parallel workers, with ANNs. The overall nps throughput looked good, but the parallel AlphaBeta-search is not able to make efficient use of up to thousands of workers.

Eta - Changelog

Here is an overview of what happened before...

Eta (0700)

* BestFirstMiniMax-Search on CPU with NNUE eval on CPU

Eta (0600)

* CNN monster with billions of parameters, w/o search, relying on ~billions of RL games

Eta (0500)

* parallel BestFirstMiniMax-Search on CPU with ANN evaluation on GPU

Eta (0400)

* parallel BestFirstMiniMax-Search on GPU with ANN evaluation on GPU

Eta (0300)

* CPU based AlphaBeta search with GPU ANN eval

Eta (0200)

* fork of Zeta v099 but with neural networks

Eta (0100)

* fork of Zeta v098 but with neural networks

Eta - a neural network based chess engine

Ever since I read the paper about NeuroChess by Sebastian Thrun, I have pondered how to improve on his results.

It was obvious that the compute power available in the 90s limited his approach, both in training and in inference.

He had only 120K games for training and a relatively small neural network, and could test his approach only at limited search depths.

Recent results with A0 and LC0 show how deep learning methods profit from GPGPU, so I think the time has come to give a GPU ANN based engine a try...

--
Srdja
