Neural Network Visualizer
An interactive tool for understanding how neural networks learn by visualizing forward propagation, backpropagation, and decision boundaries.
Network Configuration
Network Architecture
Blue lines: Positive weights | Red lines: Negative weights
Line thickness represents weight magnitude | Neuron color intensity shows activation level
Decision Boundary
The background color shows the network's prediction for each point in space.
Blue dots: Class 1 | Red dots: Class 0
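A minimal sketch of how such a boundary can be painted onto a canvas, assuming a hypothetical `predict(x, y)` helper that runs the network's forward pass and returns a Class-1 probability in [0, 1]:

```typescript
// Sample the network's prediction on a coarse pixel grid and tint each
// cell by the predicted class probability. `predict` is an illustrative
// stand-in for the visualizer's forward pass.
function drawBoundary(
  ctx: CanvasRenderingContext2D,
  predict: (x: number, y: number) => number,
  width: number,
  height: number,
  cell = 8 // grid resolution in pixels; smaller = sharper but slower
): void {
  for (let px = 0; px < width; px += cell) {
    for (let py = 0; py < height; py += cell) {
      // Map pixel coordinates into input space [-1, 1] x [-1, 1].
      const x = (px / width) * 2 - 1;
      const y = (py / height) * 2 - 1;
      const p = predict(x, y);
      // Blend from red (Class 0) toward blue (Class 1).
      ctx.fillStyle = `rgba(${Math.round(255 * (1 - p))}, 80, ${Math.round(255 * p)}, 0.35)`;
      ctx.fillRect(px, py, cell, cell);
    }
  }
}
```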
How it works
1. Forward Propagation: Data flows from input to output through weighted connections
2. Activation Functions: Each neuron applies a non-linear transformation (ReLU, Sigmoid, Tanh)
3. Loss Calculation: Measures how far the prediction is from the target
4. Backpropagation: Adjusts weights to minimize the loss using gradient descent
5. Training: Repeats this cycle thousands of times to learn patterns (see the sketch after this list)
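Here is a minimal sketch of one such cycle on a single sigmoid neuron. All names are illustrative, not the visualizer's actual API:

```typescript
// One training step: forward pass, loss, backprop, gradient descent.
const sigmoid = (z: number) => 1 / (1 + Math.exp(-z));

function trainStep(
  weights: number[],
  bias: number,
  input: number[],
  target: number,
  learningRate = 0.1
): { weights: number[]; bias: number; loss: number } {
  // 1-2. Forward propagation: weighted sum, then sigmoid activation.
  const z = weights.reduce((sum, w, i) => sum + w * input[i], bias);
  const prediction = sigmoid(z);

  // 3. Loss calculation: squared error against the target.
  const loss = (prediction - target) ** 2;

  // 4. Backpropagation: chain rule through the loss and the sigmoid.
  // dLoss/dz = 2 * (prediction - target) * sigmoid'(z)
  const dz = 2 * (prediction - target) * prediction * (1 - prediction);

  // 5. Gradient descent: nudge each weight against its gradient.
  const newWeights = weights.map((w, i) => w - learningRate * dz * input[i]);
  return { weights: newWeights, bias: bias - learningRate * dz, loss };
}
```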
Key Features
- Real-time Training: Watch the network learn as weights and loss update live
- Multiple Datasets: XOR, Circle, and Spiral patterns to test different complexities
- Network Visualization: See weights, activations, and connections visually
- Decision Boundaries: Visualize how the network classifies different regions of space
- Configurable Architecture: Adjust hidden layer sizes to see the impact
Technical Implementation
Core Algorithms
- Forward propagation with matrix operations
- Backpropagation for gradient descent
- Multiple activation functions (Sigmoid, ReLU, Tanh)
- Mean Squared Error (MSE) loss function (see the sketch after this list)
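A sketch of these building blocks (illustrative, not the project's exact source). Each derivative is written in terms of the activation's output `a`, a common trick that reuses values already computed during the forward pass:

```typescript
// Activation functions paired with their derivatives, plus MSE.
const activations = {
  sigmoid: { f: (z: number) => 1 / (1 + Math.exp(-z)), df: (a: number) => a * (1 - a) },
  relu:    { f: (z: number) => Math.max(0, z),         df: (a: number) => (a > 0 ? 1 : 0) },
  tanh:    { f: (z: number) => Math.tanh(z),           df: (a: number) => 1 - a * a },
};

// Mean Squared Error over a batch of scalar predictions.
function mse(predictions: number[], targets: number[]): number {
  return (
    predictions.reduce((acc, p, i) => acc + (p - targets[i]) ** 2, 0) /
    predictions.length
  );
}
```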
Technology Stack
- TypeScript for type safety
- HTML5 Canvas for visualization
- React hooks for state management
- Client-side computation (no server needed)
Performance
- Real-time training at 10 epochs/second (loop sketched below)
- Efficient weight updates with gradient descent
- Responsive canvas rendering
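One way to get that pacing in a browser, sketched with hypothetical `trainEpoch` and `redraw` callbacks standing in for the visualizer's real functions:

```typescript
// Throttled training loop: capping epochs per tick keeps the UI responsive.
const EPOCHS_PER_SECOND = 10;

function startTraining(
  trainEpoch: () => number, // runs one pass over the dataset, returns loss
  redraw: () => void        // repaints the network and decision boundary
): () => void {
  const id = setInterval(() => {
    const loss = trainEpoch();
    redraw();
    console.log(`loss: ${loss.toFixed(4)}`);
  }, 1000 / EPOCHS_PER_SECOND);
  return () => clearInterval(id); // call the returned function to stop
}
```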
Understanding Neural Networks
Neurons
Individual units that receive inputs, apply weights and biases, then pass through an activation function to produce an output.
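In code, a neuron reduces to a few lines (an illustrative sketch, not the project's exact source):

```typescript
// A single neuron: weighted sum of inputs plus a bias, passed through
// an activation function.
function neuron(
  inputs: number[],
  weights: number[],
  bias: number,
  activation: (z: number) => number
): number {
  const z = inputs.reduce((sum, x, i) => sum + x * weights[i], bias);
  return activation(z);
}

// Example: neuron([0.5, -1.2], [0.8, 0.3], 0.1, Math.tanh)
```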
Weights
Connection strengths between neurons. Positive weights strengthen signals, negative weights inhibit them. Training adjusts these values.
Training
The process of adjusting weights through backpropagation to minimize the difference between predictions and actual values.
Try These Experiments:
- Start simple: Try the XOR dataset with 4 hidden neurons. Watch how the decision boundary forms.
- Increase complexity: Switch to the Spiral dataset. You'll need more hidden neurons to separate the spirals.
- Compare architectures: Try 2 vs 8 hidden neurons. More neurons = more capacity to learn complex patterns.
- Watch the weights: Notice how thick blue/red lines indicate strong positive/negative connections.
- Monitor the loss: The loss decreases as the network learns. If it plateaus, you might need more neurons or training.
Educational Value
Perfect for Learning:
- ✅ Students learning neural network fundamentals
- ✅ Teachers demonstrating backpropagation
- ✅ Data scientists prototyping architectures
- ✅ Anyone curious about how AI learns
Concepts Demonstrated:
- ✅ Forward and backward propagation
- ✅ Gradient descent optimization
- ✅ Activation functions and their effects
- ✅ Network capacity and overfitting
This implementation is written from scratch in TypeScript (no ML libraries) to help you understand the core algorithms.