Lab 3: From Perceptrons to Neural Networks — Building Smarter Classifiers
Experimenting with perceptrons, vectorization, and multi-layer neural networks — from toy datasets to digit recognition and universal approximation.
Introduction
This lab was my first deep dive into neural networks. Starting from the basics of linear classification and perceptrons, I explored how simple models can separate linearly separable data, and why they fail on more complex patterns. The natural next step was introducing multi-layer perceptrons (MLPs), which leverage hidden layers to capture non-linear decision boundaries.
Along the way, I also revisited Python vectorization with NumPy, tested my understanding on synthetic datasets, and scaled up to a multi-class digit recognition problem — a classic benchmark for neural networks.
Key Steps Covered
- Data Preparation
- Used sklearn.datasets.make_blobs to generate synthetic data.
- Visualized clusters with Matplotlib.
- Vectorization with NumPy
- Compared loop-based vs vectorized operations for performance.
- Linear Classifiers & Perceptron
- Implemented and trained perceptron models on separable data.
- Understood the perceptron convergence guarantee for linearly separable data.
- Multi-Layer Perceptrons (MLPs)
- Applied to non-linearly separable data.
- Tuned hyperparameters with grid search.
- Handwritten Digit Recognition
- Trained MLP on digit datasets for multi-class classification.
- Function Approximation
- Illustrated the universal approximation theorem by fitting a neural network to target functions.
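The notebook itself lives on GitHub, but the steps above can be sketched in a few short snippets. For data preparation, a minimal version using scikit-learn (the exact sample counts and seeds here are my assumptions, not the lab's):

```python
from sklearn.datasets import make_blobs

# Two well-separated Gaussian clusters: a linearly separable toy dataset
X, y = make_blobs(n_samples=200, centers=2, cluster_std=1.0, random_state=42)
print(X.shape, y.shape)  # (200, 2) (200,)

# Visualization with Matplotlib, coloring points by class label:
# import matplotlib.pyplot as plt
# plt.scatter(X[:, 0], X[:, 1], c=y)
# plt.show()
```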
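The loop-vs-vectorization comparison can be reproduced with a simple timing experiment on a dot product (array size and timing method are my choices):

```python
import time
import numpy as np

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

# Loop-based dot product: one Python-level multiply-add per element
t0 = time.perf_counter()
total = 0.0
for i in range(len(a)):
    total += a[i] * b[i]
loop_time = time.perf_counter() - t0

# Vectorized dot product: a single call into NumPy's compiled code
t0 = time.perf_counter()
vec_total = np.dot(a, b)
vec_time = time.perf_counter() - t0

print(f"loop: {loop_time:.3f}s  vectorized: {vec_time:.5f}s")
```

On a typical machine the vectorized version is orders of magnitude faster, which is why everything downstream (perceptron updates, MLP forward passes) is written in terms of array operations.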
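For the perceptron on separable data, a sketch with scikit-learn's `Perceptron` (the explicit cluster centers are my assumption, chosen to guarantee separability):

```python
from sklearn.datasets import make_blobs
from sklearn.linear_model import Perceptron

# Two tight clusters far apart: clearly linearly separable
X, y = make_blobs(n_samples=200, centers=[[-3, -3], [3, 3]],
                  cluster_std=0.5, random_state=0)

clf = Perceptron(max_iter=1000, tol=1e-3, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))  # perfect accuracy on separable data
```

This is also where the convergence guarantee shows up in practice: on separable data the perceptron reaches zero training error in a finite number of updates.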
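The MLP-with-grid-search step might look roughly like this on a non-linearly separable dataset (I use `make_moons` here as a stand-in; the parameter grid is illustrative):

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

# Two interleaved half-moons: no single line separates them
X, y = make_moons(n_samples=400, noise=0.15, random_state=0)

param_grid = {
    "hidden_layer_sizes": [(8,), (16,), (16, 16)],
    "alpha": [1e-4, 1e-2],  # L2 regularization strength
}
search = GridSearchCV(
    MLPClassifier(max_iter=2000, random_state=0),
    param_grid, cv=3,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

A linear perceptron tops out near chance-plus on this data; the hidden layer is what lets the model bend the decision boundary around the moons.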
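For multi-class digit recognition, a compact version using scikit-learn's bundled 8x8 digits (the split ratio and hidden-layer size are my assumptions):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 1797 images, 8x8 grayscale, 10 classes
X_train, X_test, y_train, y_test = train_test_split(
    digits.data / 16.0,  # scale pixel values to [0, 1]
    digits.target, test_size=0.25, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)
print(f"test accuracy: {acc:.3f}")
```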
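Finally, the function-approximation experiment can be sketched with `MLPRegressor` fitting a sine wave (the target function, architecture, and error tolerance are my choices):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(500, 1))
y = np.sin(X).ravel()

# tanh hidden units are a natural fit for a smooth bounded target
reg = MLPRegressor(hidden_layer_sizes=(50, 50), activation="tanh",
                   max_iter=5000, random_state=0)
reg.fit(X, y)

X_grid = np.linspace(-np.pi, np.pi, 100).reshape(-1, 1)
err = np.max(np.abs(reg.predict(X_grid) - np.sin(X_grid).ravel()))
print(f"max abs error on grid: {err:.3f}")
```

The theorem says a single sufficiently wide hidden layer suffices; in practice a small two-layer network like this fits smooth one-dimensional targets with little tuning.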
Takeaway
Neural networks extend the power of linear models by introducing hidden layers that can learn complex, non-linear mappings. This lab showed me not only how perceptrons work, but also why MLPs are such a powerful tool — capable of everything from digit recognition to approximating mathematical functions.
🔗 View the full Lab Notebook on GitHub
▶️ Run in Google Colab
