is a little free Qt[fn:1] GUI for manipulating backpropagation neural networks[fn:2]. It is meant to be fed with specific files created by the log modes of the neuroStock[fn:3] or neuroGrape[fn:4] chess engines. It creates/manipulates neural network files that can be used by the neuro modes of these engines.
At startup a neural network is created with the following characteristics: 4 layers - an input layer with 262 neurons, a hidden layer with 66 neurons, a hidden layer with 256 neurons and an output layer with 1 neuron. Neurons in all layers except the input have a bias. All neurons have a sigmoid output function ranging in (-1; 1). The network output is scaled by 29744, which is Grapefruit’s internal maximum. The learning rate is 0.35, the acceleration 0.3. This somewhat arbitrary specification (everything except the number of neurons in the input and output layers) can be changed by saving the network to a file, editing that file and then loading it back. In fact, this network seems to be too small for chess.
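For reference, a sigmoid with range (-1; 1) is just the logistic function rescaled; here is a minimal C++ sketch of how such an output neuron and the scaling could fit together (the names and the exact formula are assumptions for illustration, not code from the trainer):

#+BEGIN_SRC C++
#include <cmath>
#include <cstdio>

// One plausible reading of "sigmoid with range (-1; 1)":
// the logistic function 1/(1+e^-x) stretched to (-1; 1).
double sigmoidSigned(double x) {
    return 2.0 / (1.0 + std::exp(-x)) - 1.0;
}

int main() {
    const double kScale = 29744.0;        // Grapefruit's internal maximum
    double raw = sigmoidSigned(0.8);      // network output, in (-1; 1)
    std::printf("scaled eval: %.0f\n", raw * kScale);
    return 0;
}
#+END_SRC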
specifies the directory of log files to be used.
converts raw .log files to .bpnl files in which the FEN[fn:5] notation is recoded as 262 bits plus an evaluation number, allowing slightly faster training and assessment. This conversion is optional and doesn’t seem to give a substantial advantage (I’ll have to think of some binary format). If there is even one .bpnl file in the `Train directory’, only .bpnl files will be used for training or assessment; otherwise .log files will be used.
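The exact bit layout of the 262-bit recoding isn’t documented here. Purely for illustration, the following sketch packs a FEN string into 262 bits under an assumed layout (4 bits per square for the piece code, one side-to-move bit, four castling bits and an en-passant flag, which happens to add up to 262); the real .bpnl encoding may well differ:

#+BEGIN_SRC C++
#include <bitset>
#include <cctype>
#include <iostream>
#include <string>

// Hypothetical 262-bit layout (an assumption, not the actual format):
// 64 squares x 4 bits piece code (0 empty, 1-6 white P..K, 7-12 black),
// then 1 side-to-move bit, 4 castling bits, 1 en-passant bit.
std::bitset<262> encodeFEN(const std::string &fen) {
    std::bitset<262> bits;
    static const std::string pieces = "PNBRQKpnbrqk";
    std::size_t i = 0, sq = 0;
    // 1) piece placement field
    for (; i < fen.size() && fen[i] != ' '; ++i) {
        char c = fen[i];
        if (c == '/') continue;                       // rank separator
        if (std::isdigit(static_cast<unsigned char>(c))) {
            sq += static_cast<std::size_t>(c - '0');  // run of empty squares
            continue;
        }
        std::size_t code = pieces.find(c) + 1;        // 1..12
        for (int b = 0; b < 4; ++b)
            bits[sq * 4 + b] = (code >> b) & 1;
        ++sq;
    }
    ++i;                                              // skip space
    if (i < fen.size())
        bits[256] = (fen[i] == 'w');                  // 2) side to move
    i += 2;                                           // skip to castling field
    for (; i < fen.size() && fen[i] != ' '; ++i)      // 3) castling rights
        switch (fen[i]) {
        case 'K': bits[257] = true; break;
        case 'Q': bits[258] = true; break;
        case 'k': bits[259] = true; break;
        case 'q': bits[260] = true; break;
        }
    ++i;
    if (i < fen.size() && fen[i] != '-')              // 4) en passant possible
        bits[261] = true;
    return bits;
}

int main() {
    auto v = encodeFEN("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1");
    std::cout << v.count() << " bits set\n";
}
#+END_SRC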
writes the current network to a file.
loads a network from a file.
invokes backpropagation learning over the `Train directory’. Just in case, the current state of the network is backed up to a bpn~ file after each file is processed. On error, the network state is restored from there.
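The backup amounts to a simple checkpoint/rollback pattern. A self-contained toy sketch (all names and the trivial network state here are hypothetical stand-ins for the trainer’s internals):

#+BEGIN_SRC C++
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

// Toy stand-ins; everything here only illustrates the bpn~ checkpointing.
static std::vector<double> weights(8, 0.0);          // pretend network state

static void saveNetwork(const std::string &path) {
    std::ofstream out(path);
    for (double w : weights) out << w << ' ';
}

static void loadNetwork(const std::string &path) {
    std::ifstream in(path);
    for (double &w : weights) in >> w;
}

static bool trainOnFile(const std::string &file) {
    std::cout << "training on " << file << '\n';     // real code would
    return true;                                     // backpropagate here
}

int main() {
    const std::string backup = "bpn~";
    const std::vector<std::string> files = {"a.log", "b.log"};
    saveNetwork(backup);                             // initial checkpoint
    for (const std::string &f : files) {
        if (trainOnFile(f))
            saveNetwork(backup);   // back up after each processed file
        else {
            loadNetwork(backup);   // on error, restore last good state
            break;
        }
    }
}
#+END_SRC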
specifies the number of training iterations over the set of files.
calculates the mean error that the current network gives over each of the files in the `Train directory’, as well as the total mean error.
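The text doesn’t pin down which error measure is used; assuming it is the mean absolute difference between the network’s scaled output and the evaluation recorded in the log, the per-file computation would look roughly like this:

#+BEGIN_SRC C++
#include <cmath>
#include <cstdio>
#include <vector>

// Hypothetical record: target eval from the log and the network's
// (already scaled) output for the same position.
struct Sample { double target; double predicted; };

double meanError(const std::vector<Sample> &samples) {
    if (samples.empty()) return 0.0;
    double sum = 0.0;
    for (const Sample &s : samples)
        sum += std::fabs(s.predicted - s.target);
    return sum / samples.size();
}

int main() {
    const std::vector<Sample> file1 = {{35, 50}, {-120, -90}};
    std::printf("mean error: %.1f\n", meanError(file1));  // prints 22.5
}
#+END_SRC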
specifies the number of threads to be used for training or testing.
The structure of these files fully specifies a neural network. The first line describes the main characteristics and the remaining lines hold the weights of all neurons. The elements of the first line specify, respectively:
- absolute value of the range from which random numbers are drawn when initializing weights
- learning rate
- acceleration
- output scaling constant (29744 for Grapefruit, 30001 for Stockfish)
- total number of layers including input and output
- then, sequentially for each layer:
  - number of neurons
  - whether there is a bias
  - a constant specifying the output function; for now the functions are: 0 - linear, 1 - sigmoid with range (0; 1), 2 - sigmoid with range (-1; 1)
- if the line ends with `x’ - initialization of all weights will be performed once the file is passed through `Load’. This is a way of creating new networks with characteristics different from the default.
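As an illustration, a first line describing the default network from above might look as follows (whitespace-separated fields are an assumption, and the leading 0.5 weight-initialization range is an arbitrary placeholder):

#+BEGIN_EXAMPLE
0.5 0.35 0.3 29744 4 262 0 2 66 1 2 256 1 2 1 1 2 x
#+END_EXAMPLE

Here 4 is the total number of layers, each following triple is (neurons, bias, output function) for one layer, and the trailing `x’ requests weight initialization on `Load’.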
Just invoke `make’ within the source directory.
There is a less obtrusive and script-friendly command line interface that can be built by invoking `make cli’ within the source directory. To see the available arguments and options, just run it with no arguments.
neuroChessTrainer has been developed and tested only on GNU/Linux thus far.
[fn:1] http://qt.nokia.com

[fn:2] http://en.wikipedia.org/wiki/Backpropagation

[fn:3] https://github.com/m00natic/neuroStock

[fn:4] https://github.com/m00natic/neuroGrape

[fn:5] http://en.wikipedia.org/wiki/Forsyth%E2%80%93Edwards_Notation