AnamikaS2005/Tiny-Neural-Inference-Engine-QNN-Core-
Tiny Neural Inference Engine (QNN-Core)

This project implements a Quantized Neural Network (QNN) inference engine entirely in Verilog, developed during my Summer Internship at IIST (Dept. of Avionics).
The design executes inference for a 784-64-10 fully connected neural network (trained on MNIST) using an FSM-based datapath and memory-mapped weight/bias storage.
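For a sense of scale, the full parameter set of this 784-64-10 topology is small enough to fit comfortably in memory-mapped storage at 8 bits per value. A quick count (my arithmetic, not code from the repo):

```python
# Parameter count for the 784-64-10 network, stored as int8 (1 byte each).
hidden_w = 784 * 64   # input -> hidden weights
hidden_b = 64         # hidden biases
out_w = 64 * 10       # hidden -> output weights
out_b = 10            # output biases

total_bytes = hidden_w + hidden_b + out_w + out_b
print(total_bytes)    # 50890 bytes, i.e. just under 50 KiB
```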


🚀 Features

  • FSM-based pipeline covering:
    • MAC (Multiply-Accumulate) for input → hidden layer
    • Bias addition + ReLU activation
    • MAC for hidden → output layer
    • Final argmax for classification
  • Quantization: Inputs, weights, and biases stored as 8-bit signed integers
  • Simulation-only flow (no FPGA needed)
  • Modular testbench that drives memory from Python-generated .mem files
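The .mem files consumed by the testbench are plain hex text, one value per line, as read by Verilog's `$readmemh`. A minimal sketch of how such a file could be generated from int8 parameters (the function name, file name, and weight shapes here are illustrative, not taken from the repo):

```python
import numpy as np

def write_mem(path, values):
    """Write int8 values as two-hex-digit two's-complement lines,
    the format Verilog's $readmemh can load into a memory array."""
    with open(path, "w") as f:
        for v in np.asarray(values, dtype=np.int8).flatten():
            # Mask to 8 bits so negative values become their
            # two's-complement byte (e.g. -1 -> "ff").
            f.write(f"{int(v) & 0xFF:02x}\n")

# Hypothetical usage: dump hidden-layer weights for the 784-64 layer.
w1 = np.random.randint(-128, 128, size=(64, 784), dtype=np.int8)
write_mem("w1.mem", w1)
```

In the Verilog testbench, the matching load would be a `$readmemh("w1.mem", weight_mem);` call on a `reg signed [7:0]` memory array.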

🧩 Architecture

  • Input Layer: 784 neurons (28×28 MNIST image)
  • Hidden Layer: 64 neurons with ReLU activation
  • Output Layer: 10 neurons → argmax = predicted digit
  • FSM States:
    1. Load inputs + weights (first MAC)
    2. Add bias-1
    3. Apply ReLU
    4. Load weights (second MAC)
    5. Add bias-2
    6. Find max
    7. Output prediction
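The FSM sequence above can be cross-checked against a software reference model. The sketch below is my simplified Python equivalent (not code from the repo): it assumes wide int32 accumulators for the MACs and skips any requantization of the hidden activations between layers.

```python
import numpy as np

def qnn_infer(x, w1, b1, w2, b2):
    """Software reference for the FSM datapath:
    MAC (784->64), bias + ReLU, MAC (64->10), bias, argmax."""
    # States 1-2: first MAC plus bias-1, accumulated in int32
    # so 784 int8*int8 products cannot overflow.
    h = w1.astype(np.int32) @ x.astype(np.int32) + b1.astype(np.int32)
    # State 3: ReLU activation on the hidden layer.
    h = np.maximum(h, 0)
    # States 4-5: second MAC plus bias-2.
    o = w2.astype(np.int32) @ h + b2.astype(np.int32)
    # States 6-7: find the max and output the predicted digit.
    return int(np.argmax(o))
```

A model like this makes it easy to verify the Verilog waveforms: feed the same .mem contents to both and compare the predicted digit.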

About

This project integrates Python-trained QNN parameters with Verilog logic, processing pre-quantized MNIST image data through memory-mapped weight, bias, and input modules. Functional correctness was verified through waveform simulation.
