This repository contains a comprehensive implementation and explanation of Linear Classification using the Perceptron Algorithm, developed as part of a Machine Learning lab.
The project demonstrates the fundamental concepts of linear classification, from data generation to model training and evaluation. It focuses on the Perceptron algorithm, a foundational building block for neural networks.
- Concept: Transforming "white noise" (standard normal distribution) into two distinct classes.
- Method: Uses Cholesky decomposition to apply a specific covariance ($C$) and means ($m_1$, $m_2$) to random noise, creating two Gaussian "blobs".
- Goal: Ensure the data is linearly separable so a straight line can perfectly partition the classes.
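The shaping step above can be sketched as follows. The covariance matrix `C`, the means `m1`/`m2`, and the sample count are illustrative placeholders, not the notebook's actual values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not the notebook's actual values).
C = np.array([[2.0, 0.5],
              [0.5, 1.0]])   # desired covariance; must be positive definite
m1 = np.array([-3.0, 0.0])   # mean of class 1
m2 = np.array([3.0, 0.0])    # mean of class 2

L = np.linalg.cholesky(C)    # C = L @ L.T

n = 5000
# White noise z ~ N(0, I) becomes L z + m ~ N(m, C).
X1 = rng.standard_normal((n, 2)) @ L.T + m1
X2 = rng.standard_normal((n, 2)) @ L.T + m2
```

Multiplying standard normal samples by the Cholesky factor imposes the covariance $C$; adding the mean shifts each blob, and well-separated means keep the classes linearly separable.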
- Decision Logic: The model uses a weight vector $w$ to define a boundary. It classifies points based on whether $w^T x \le 0$ or $w^T x > 0$.
- Stochastic Learning: Implements an error-correcting loop that updates the weight vector only when a misclassification occurs.
- Update Rule: $w = w + \alpha \cdot y \cdot x$, where $\alpha$ is the learning rate and $y$ is the true label.
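A minimal sketch of this error-correcting loop, assuming labels in $\{-1, +1\}$ and a bias term folded into $x$ as a constant-1 feature (function and parameter names are illustrative, not the notebook's):

```python
import numpy as np

def perceptron_train(X, y, alpha=0.1, max_epochs=100):
    """Stochastic perceptron: the weights change only on misclassified points.

    X: (n, d) inputs with a bias column of ones appended; y: labels in {-1, +1}.
    """
    w = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        errors = 0
        for xi, yi in zip(X, y):
            # Classify as +1 if w^T x > 0, else -1.
            pred = 1 if w @ xi > 0 else -1
            if pred != yi:
                w = w + alpha * yi * xi   # update rule: w <- w + alpha * y * x
                errors += 1
        if errors == 0:   # a full error-free pass means convergence
            break
    return w
```

On linearly separable data the perceptron convergence theorem guarantees this loop stops after a finite number of updates.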
- Data Plotting: Visualizes the generated classes and their combined distribution.
- Learning Curve: Tracks the global accuracy of the model over iterations, showing how it converges to 100% accuracy on linearly separable data.
- Decision Boundary: Plots the final "boundary line" that separates the two classes.
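With a bias folded into the weights as $w = (w_1, w_2, b)$, the boundary line satisfies $w_1 x_1 + w_2 x_2 + b = 0$ and can be solved for $x_2$ before plotting. A small sketch (the helper name is illustrative):

```python
import numpy as np

def boundary_line(w, x1_range):
    """Points on the boundary w1*x1 + w2*x2 + b = 0, i.e. x2 = -(w1*x1 + b)/w2.

    Assumes w = (w1, w2, b) with nonzero w2.
    """
    w1, w2, b = w
    x1 = np.asarray(x1_range, dtype=float)
    x2 = -(w1 * x1 + b) / w2
    return x1, x2
```

The returned `(x1, x2)` pairs can be passed directly to `matplotlib.pyplot.plot` to draw the separating line over the scatter of the two classes.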
- Language: Python (Jupyter Notebook)
- Libraries: `numpy`, `matplotlib`
- Open `Linear_Classification.ipynb` in a Jupyter environment.
- Run the cells sequentially to observe data generation, algorithm training, and final classification results.
The implementation successfully achieves 100% accuracy on the generated linearly separable dataset, demonstrating the effectiveness of the Perceptron algorithm for simple classification tasks.