This is my attempt at making a chess engine from scratch, with the help of a lot of research.
TODO:
- Improve pruning
- Improve evaluation
Useful sources:
- https://www.chessprogramming.org/Main_Page
- https://www.chessprogramming.org/Perft_Results
Requirements:
- .NET SDK (10.0 or later)
1. Build the project (required before running any command):

   ```shell
   cd path\to\ChessEngine
   dotnet build -c Release
   ```

2. Run the engine (choose one):

   - From source (no publish step): run with `dotnet run -c Release --` followed by a command. Examples:

     ```shell
     dotnet run -c Release --                      # UCI mode
     dotnet run -c Release -- perft                # perft tests
     dotnet run -c Release -- tune mydata.csv 50
     ```

   - From a published executable: build once, then run the exe:

     ```shell
     dotnet publish -c Release -r win-x64 --self-contained true -o publish
     .\publish\ChessEngine.exe                     # UCI mode
     .\publish\ChessEngine.exe perft               # perft tests
     .\publish\ChessEngine.exe tune mydata.csv 50
     ```

All commands below use `dotnet run -c Release --` for "run from source". If you use the published exe, replace that with `.\publish\ChessEngine.exe` (Windows) or `./publish/ChessEngine` (Linux/macOS).
Run everything from the project directory (where ChessEngine.csproj is).
| Command | Description |
|---|---|
| (no args) | Start the engine in UCI mode. If `eval_params.json` exists in the current directory, evaluation parameters are loaded from it. |
| `perft` | Run built-in perft tests. |
| `convert` | Convert a CSV dataset to `FEN;result` format for tuning. |
| `tune` | Run Texel-style evaluation tuning on a dataset. |
| `eval-error` | Report mean squared error and prediction accuracy on a dataset. |
| `save-params` | Save current evaluation parameters to a JSON file. |
| `load-params` | Load evaluation parameters from a JSON file and print them. |
Perft

```shell
dotnet run -c Release -- perft
```

Convert (CSV → positions file)
Converts a CSV file (with FEN and result columns) into `FEN;result` lines. Output defaults to `positions.txt`.
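As a rough sketch of what the output looks like, the transformation below produces one `FEN;result` line per row. The sample data, the two-column `fen,result` layout, and the result encoding (1 / 0.5 / 0 from White's perspective) are assumptions for illustration; the engine's `convert` command handles the real dataset.

```shell
# Create a tiny sample CSV (header + two rows) -- illustrative data only.
cat > sample.csv <<'EOF'
fen,result
rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1,0.5
rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR b KQkq - 0 1,1.0
EOF

# FEN strings contain no commas, so field 1 is the FEN and field 2 the result.
# Skip the header row (NR > 1) and join the fields with a semicolon.
awk -F',' 'NR > 1 { print $1 ";" $2 }' sample.csv > positions.txt

cat positions.txt
```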
```shell
dotnet run -c Release -- convert <input.csv> [output.txt] [max_positions]

# Examples:
dotnet run -c Release -- convert tuning_dataset_16m.csv
dotnet run -c Release -- convert tuning_dataset_16m.csv positions.txt 500000
```

Tune
Runs evaluation tuning on a dataset. The format is auto-detected: CSV (`fen,result`) or `FEN;result` text. Writes tuned parameters to `eval_params_tuned.json` (and a backup to `eval_params_tuning.json`).
```shell
dotnet run -c Release -- tune <dataset_file> [iterations] [max_positions]

# Examples:
dotnet run -c Release -- tune tuning_dataset_16m.csv 50
dotnet run -c Release -- tune tuning_dataset_16m.csv 50 500000
dotnet run -c Release -- tune positions.txt 100
```

Eval-error
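For background, Texel-style tuning minimizes the mean squared error between game results and a sigmoid of the static evaluation. A common formulation is shown below; the exact form and scaling constant used by this engine are assumptions (engines typically fit K to the dataset first):

```latex
E = \frac{1}{N} \sum_{i=1}^{N} \left( r_i - \frac{1}{1 + 10^{-K q_i / 400}} \right)^2
```

where q_i is the static evaluation of position i in centipawns from White's perspective, r_i is the game result for White (1, 0.5, or 0), and the tuner adjusts evaluation parameters to reduce E. This is presumably also the mean squared error that `eval-error` reports.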
Measures how well the current evaluation matches game results on a dataset (same file formats as tune).
```shell
dotnet run -c Release -- eval-error <dataset_file>

# Example:
dotnet run -c Release -- eval-error positions.txt
```

Save / load parameters
```shell
# Save current parameters (default: eval_params.json)
dotnet run -c Release -- save-params [path]
dotnet run -c Release -- save-params eval_params.json

# Load parameters from a file (prints values; does not start UCI)
dotnet run -c Release -- load-params <params_file>
dotnet run -c Release -- load-params eval_params_tuned.json
```

Using tuned parameters in the engine
After tuning, the parameters are in `eval_params_tuned.json`. The engine only loads `eval_params.json` at startup. To use the tuned values:
- Copy the tuned file over the default (from the project directory):

  ```shell
  copy eval_params_tuned.json eval_params.json
  ```

  (PowerShell: `Copy-Item eval_params_tuned.json eval_params.json`)

- Or load the tuned params, then save them as the default:

  ```shell
  dotnet run -c Release -- load-params eval_params_tuned.json
  dotnet run -c Release -- save-params eval_params.json
  ```
Then run the engine (with no arguments or from a GUI); it will use the tuned evaluation.
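On Linux/macOS the copy step uses `cp` instead of `copy`/`Copy-Item`. A minimal sketch of the promotion step is below; the JSON content is a placeholder standing in for a real tuning output, and the demo runs in a scratch directory so nothing real is overwritten.

```shell
# Demo in a scratch directory so no real parameter files are touched.
mkdir -p tuned_demo && cd tuned_demo

# Stand-in for a real tuning output (the actual keys are engine-specific).
printf '{ "placeholder": true }\n' > eval_params_tuned.json

# Promote the tuned file to the name the engine loads at startup.
# Equivalent Windows commands: copy (cmd) or Copy-Item (PowerShell).
cp eval_params_tuned.json eval_params.json
```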
This engine implements the UCI (Universal Chess Interface) protocol. To use the engine with a GUI (Arena, CuteChess, etc.), add it as a UCI engine and point the GUI to the executable.
Example UCI command exchange (illustrative):

- GUI -> engine: `uci`
- Engine -> GUI: `id name ChessEngine`
- Engine -> GUI: `id author marcl`
- Engine -> GUI: `uciok`
- GUI -> engine: `isready`
- Engine -> GUI: `readyok`
- GUI -> engine: `position startpos moves e2e4 e7e5`
- GUI -> engine: `go movetime 1000`
- Engine -> GUI: `bestmove g1f3`
Open a terminal and run the executable; then type UCI commands like uci, isready, position, go, and quit to interact with the engine. On Unix-like shells you can also pipe a small script of commands into the executable for automated tests.
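For example, a scripted session can be piped in like this. The script contents mirror the exchange above; the engine path assumes a prior publish step, so the pipe line is shown commented out:

```shell
# Write a short scripted UCI session to a file.
cat > uci_smoke.txt <<'EOF'
uci
isready
position startpos moves e2e4 e7e5
go movetime 1000
quit
EOF

# Pipe it into the engine and check for a best move
# (path assumes the publish step from the build instructions):
# ./publish/ChessEngine < uci_smoke.txt > uci_out.txt
# grep bestmove uci_out.txt
```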
- The engine currently implements a basic UCI interface and a simple time allocation heuristic.
- For development or inspection, see source files such as `Uci.cs`, `Search.cs`, `MoveGenerator.cs`, and `Perft.cs`.
| File | Purpose |
|---|---|
| `eval_params.json` | The one the engine uses. Loaded at startup when you run UCI (no args or from a GUI). Keep your chosen evaluation parameters here. Create or update it with `save-params` or by copying from a tuned file. |
| `eval_params_tuned.json` | Written at the end of each tuning run. This is the final tuned result. Use it by copying to `eval_params.json` (or run `load-params` then `save-params`) so the engine uses these values. |
| `eval_params_tuning.json` | Written during tuning (after each iteration) as a backup. You can ignore or delete it; it is only useful if a tuning run is interrupted and you want to recover the last iteration’s parameters. |
Summary: for normal use, only `eval_params.json` matters. The engine reads that file when it starts. The other two are tuning outputs; copy `eval_params_tuned.json` to `eval_params.json` when you want to apply new tuned values.
This README was generated with AI assistance. Marc Lampron
Features currently in the engine:
Search
- Transposition table (hash table, configurable size)
- Iterative deepening with aspiration windows (PVS)
- Move ordering (TT move, good captures, killers, countermoves, history)
- Killer moves
- Countermove heuristic
- History heuristic
- Quiescence search (captures, promotions, passed-pawn pushes; delta/SEE pruning)
- Null move pruning (with endgame verification)
- Late move reduction (LMR)
- Late move pruning (LMP)
- Futility pruning
- Static Exchange Evaluation (SEE) for ordering and pruning
- Internal iterative deepening (IID)
- Check extension; recapture and capture extensions
- Multi-threaded search (Lazy SMP)
- Ponder (best move + ponder from TT)
- Draw detection (repetition, fifty-move rule, insufficient material)
Time management
- Variable time allocation (phase, complexity, stability-based early exit)
Evaluation
- Phase-based (tapered) evaluation
- Material and piece-square tables (MG/EG)
- Pawn structure (doubled, isolated, passed pawns)
- Bishop pair
- Rooks on open/semi-open files
- Eval cache; tunable parameters (JSON load/save)
- Quick eval for pruning; insufficient material detection
UCI & interfaces
- UCI protocol (options: Hash, Threads, BookFile, BookDepth, BookEvalLimit)
- Opening book (Polyglot .bin)
- Perft testing
- Evaluation tuning (Texel-style: convert, tune, eval-error, save/load params)
- Lichess bot mode
Not yet implemented
- Endgame/opening databases (e.g. Syzygy)
- Critical positions / contempt
- SPRT testing