This project is an implementation of the game Othello (also known as Reversi) featuring several AI agents as well as human play. It offers different strategies for choosing moves: Random, Minimax, Alpha-Beta Pruning, and Monte Carlo Tree Search (MCTS).
- Human vs AI: Play as a human against any AI agent.
- AI vs AI: Watch two AI agents compete against each other.
- Custom AI Depth: Adjust the depth or iterations for AI computations.
- Multiple AI Agents:
  - RandomAgent: Makes random moves.
  - MinimaxAgent: Implements the Minimax algorithm with a heuristic evaluation.
  - AlphaBetaAgent: Extends Minimax with Alpha-Beta pruning for efficiency.
  - MCTSAgent: Uses Monte Carlo Tree Search for decision-making.
- agent.py: Contains implementations of the AI agents and the human player.
- game.py: Manages the game loop and interactions between players and the game state.
- main.py: Entry point for running the game.
- mcts.py: Implements the Monte Carlo Tree Search agent.
- othello.py: Contains the game logic, including the board representation and valid move generation.
- Python 3.8 or above.
- Clone the repository:
git clone <repository_url>
cd <repository_name>
- Run the game with specified agents:
python main.py <player1> <player2> [depth_or_iterations]
<player1> and <player2> can be one of: human, random, minimax, alphabeta, or mcts. [depth_or_iterations] is optional and specifies the search depth for minimax and alphabeta, or the number of iterations for mcts.
- Human vs RandomAgent:
python main.py human random
- Minimax vs AlphaBeta with depth 4:
python main.py minimax alphabeta 4
- The game starts with the standard Othello board setup.
- Players alternate turns, flipping opponent pieces by placing their own in valid positions.
- The game ends when no valid moves remain for either player.
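To make the flipping rule above concrete, here is a minimal, illustrative sketch of Othello move validation (not the project's actual othello.py; names like `flips_for` and the list-of-lists board are assumptions for this example):

```python
# Illustrative Othello move validation: a move is legal only if it brackets
# at least one straight run of opponent pieces between the new piece and an
# existing piece of the mover's color.
EMPTY, BLACK, WHITE = 0, 1, 2
DIRECTIONS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]

def flips_for(board, row, col, player):
    """Return the opponent pieces that placing `player` at (row, col) would flip."""
    if board[row][col] != EMPTY:
        return []
    opponent = BLACK if player == WHITE else WHITE
    flipped = []
    for dr, dc in DIRECTIONS:
        run = []
        r, c = row + dr, col + dc
        while 0 <= r < 8 and 0 <= c < 8 and board[r][c] == opponent:
            run.append((r, c))
            r, c = r + dr, c + dc
        # The run only counts if it ends on one of the player's own pieces.
        if run and 0 <= r < 8 and 0 <= c < 8 and board[r][c] == player:
            flipped.extend(run)
    return flipped

def valid_moves(board, player):
    return [(r, c) for r in range(8) for c in range(8)
            if flips_for(board, r, c, player)]

# Standard opening position: two white and two black pieces in the center.
board = [[EMPTY] * 8 for _ in range(8)]
board[3][3] = board[4][4] = WHITE
board[3][4] = board[4][3] = BLACK
print(sorted(valid_moves(board, BLACK)))  # → [(2, 3), (3, 2), (4, 5), (5, 4)]
```

From the standard setup, Black has exactly the four classic opening moves.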
- RandomAgent: Picks a move randomly from the list of valid moves.
- MinimaxAgent: Uses the Minimax algorithm with a heuristic that evaluates board control and mobility.
- AlphaBetaAgent: Optimized Minimax algorithm that prunes unnecessary branches.
- MCTSAgent: Uses simulations to estimate the best move, balancing exploration and exploitation.
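To show the pruning idea behind AlphaBetaAgent, here is a hedged, toy-sized sketch of minimax with alpha-beta pruning over a nested-list game tree (the project's real agent searches Othello positions with a heuristic instead; this is only an illustration):

```python
import math

def alphabeta(node, depth, alpha, beta, maximizing):
    # Leaves are plain numbers (heuristic values); internal nodes are lists of children.
    if depth == 0 or isinstance(node, (int, float)):
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:  # prune: the minimizer will never allow this branch
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:  # prune: the maximizer already has a better option
                break
        return value

# Classic textbook tree: the maximizer can guarantee a value of 3.
tree = [[3, 5], [2, 9], [0, 1]]
print(alphabeta(tree, 2, -math.inf, math.inf, True))  # → 3
```

The cutoffs skip branches that cannot change the final decision, which is why AlphaBetaAgent can search deeper than plain Minimax in the same time.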
You can create custom agents by extending the game.Player class and implementing the choose_move method.
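As a hypothetical sketch of such an agent: the real base class and the exact choose_move signature live in game.py, so the `(board, valid_moves)` parameters below are an assumption to adjust against the project's actual interface (subclassing of game.Player is omitted here to keep the example self-contained):

```python
import random

class CornerFirstAgent:  # in the project, this would extend game.Player
    """Prefers corner squares when available; otherwise plays a random valid move."""
    CORNERS = {(0, 0), (0, 7), (7, 0), (7, 7)}

    def choose_move(self, board, valid_moves):
        # Corners can never be flipped back, so grab one whenever it is legal.
        corner_moves = [m for m in valid_moves if m in self.CORNERS]
        return random.choice(corner_moves or valid_moves)
```

Any strategy fits this shape: inspect the state, pick one of the valid moves, and return it.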
Contributions are welcome! Feel free to open issues or submit pull requests to enhance the project.
This project is licensed under the MIT License. See the LICENSE file for details.