Friday, 28 August 2020
Eta - v0600 - The next step for LC0?
I know, LC0's primary goal was an open-source adaptation of A0, and I do not follow the Discord development discussions and the like. Anyway, my 2 cents on this:
- MCTS-PUCT search is a descendant of AlphaGo's search, generalized to play Go, Shogi and Chess. It can utilize a GPU via batches, but has its known weaknesses: tactics in the form of successive "shallow traps", and the endgame (see the PUCT sketch after this list).
- A CPU AB search will not work with an NN on the GPU queried via batches.
- NNUE makes no sense on a GPU.
- LC0 has a GPU cloud cluster to play reinforcement learning games.
- LC0 already plays at ~2400(?) Elo with a depth-1 search alone.
- It is estimated that the NN eval is worth about 4 plies of AB search.
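To make the first point concrete, here is a minimal sketch of PUCT child selection in the A0 spirit; the node layout and the cpuct value are simplified assumptions for illustration, not LC0's actual implementation:

```cpp
// Minimal PUCT child selection in the A0/LC0 spirit. Node layout and the
// cpuct constant are simplified assumptions for illustration only.
#include <cmath>
#include <cstddef>
#include <vector>

struct Node {
    float prior = 0.0f;         // P(s,a) from the policy head
    int   visits = 0;           // N(s,a)
    float value_sum = 0.0f;     // sum of backed-up values
    std::vector<Node> children;

    float q() const { return visits ? value_sum / visits : 0.0f; }
};

// Select the child maximizing Q(s,a) + U(s,a),
// with U(s,a) = cpuct * P(s,a) * sqrt(N(s)) / (1 + N(s,a)).
std::size_t select_child(const Node& parent, float cpuct = 1.5f) {
    const float sqrt_parent = std::sqrt(static_cast<float>(parent.visits));
    std::size_t best = 0;
    float best_score = -1e9f;
    for (std::size_t i = 0; i < parent.children.size(); ++i) {
        const Node& c = parent.children[i];
        const float u = cpuct * c.prior * sqrt_parent / (1.0f + c.visits);
        const float score = c.q() + u;
        if (score > best_score) { best_score = score; best = i; }
    }
    return best;
}
```

The shallow-trap weakness follows directly from the formula: a move with a tiny prior P(s,a) gathers visits only slowly, so a forcing tactical refutation can stay unexplored for many playouts.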
Looking at the above points, it seems pretty obvious what the next step for LC0 could be: drop the weak part, the MCTS-PUCT search, ignore AB and NNUE, and focus on what LC0 is good at. Increase the number of plies encoded in the NN, increase the Elo of the depth-1 eval.
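One way to read a depth-1 "search" is: run one batched forward pass of the value head over all legal successor positions, then take the argmax. A minimal sketch under that assumption:

```cpp
// One possible reading of a depth-1 "search": one batched forward pass of the
// value head over all legal successor positions, then an argmax.
// successor_values[i] is the value for the position after move i, from the
// opponent's point of view, hence the negamax sign flip. Illustrative only.
#include <cstddef>
#include <vector>

std::size_t pick_move_depth1(const std::vector<float>& successor_values) {
    std::size_t best = 0;
    float best_score = -1e9f;
    for (std::size_t i = 0; i < successor_values.size(); ++i) {
        const float score = -successor_values[i];
        if (score > best_score) { best_score = score; best = i; }
    }
    return best;
}
```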
To take it to an extreme: drop the search part completely, increase the CNN size 1000-fold, decrease NPS from ~50K to ~50, and add multiple NNs of increasing size to be queried stepwise for time control, as sketched below.
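A rough sketch of how such a stepwise query could look under a per-move time budget; the network callbacks and all names are hypothetical, each callback standing for one batched forward pass of one of the increasingly sized nets:

```cpp
// Sketch of "multiple NNs of increasing size, queried stepwise": always run
// the smallest net, then keep escalating to the next bigger (slower) net as
// long as the per-move time budget allows. Each net is modeled as a callback
// that does one batched forward pass and returns its depth-1 best-move index
// (e.g. via pick_move_depth1 above). All names are hypothetical, not LC0 code.
#include <chrono>
#include <cstddef>
#include <functional>
#include <vector>

using NetBestMove = std::function<std::size_t(int /*position id*/)>;

std::size_t pick_move_stepwise(int position,
                               const std::vector<NetBestMove>& nets_small_to_large,
                               std::chrono::milliseconds budget) {
    const auto start = std::chrono::steady_clock::now();
    // Always query the smallest (fastest) net so there is a move to play.
    std::size_t best = nets_small_to_large.front()(position);
    for (std::size_t k = 1; k < nets_small_to_large.size(); ++k) {
        const auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(
            std::chrono::steady_clock::now() - start);
        // A real scheme would estimate whether the next, slower net still fits
        // into the remaining budget before querying it.
        if (elapsed >= budget) break;
        best = nets_small_to_large[k](position);  // trust the bigger net's choice
    }
    return best;
}
```

Querying the smallest net first guarantees a legal move even under extreme time pressure; the bigger nets only overwrite that choice when time is left.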
Just thinking out loud...