
Neural Network Visualizer

An interactive tool to understand how neural networks learn through visualization of forward propagation, backpropagation, and decision boundaries.

Machine Learning | Visualization | Educational | Interactive

Network Configuration

Interactive controls set the number of layers and neurons per hidden layer; live Epoch and Loss readouts update as the network trains.

Network Architecture

Blue lines: Positive weights | Red lines: Negative weights

Line thickness represents weight magnitude | Neuron color intensity shows activation level
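A minimal sketch of how this legend could map onto canvas drawing; the helper name and color values are illustrative assumptions, not the project's actual code:

```typescript
// Hypothetical helper: style one connection line by its weight.
// Blue for positive, red for negative; line width grows with |weight|.
function drawConnection(
  ctx: CanvasRenderingContext2D,
  from: { x: number; y: number },
  to: { x: number; y: number },
  weight: number,
): void {
  ctx.strokeStyle = weight >= 0 ? "#3b82f6" : "#ef4444"; // blue / red
  ctx.lineWidth = Math.min(6, 0.5 + 2 * Math.abs(weight)); // cap the thickness
  ctx.beginPath();
  ctx.moveTo(from.x, from.y);
  ctx.lineTo(to.x, to.y);
  ctx.stroke();
}
```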

Decision Boundary

The background color shows the network's prediction for each point in space.
Blue dots: Class 1 | Red dots: Class 0
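One common way to render such a boundary, sketched under the assumption of a `predict(x, y)` function that returns a class-1 probability in [0, 1]; the names and the coarse 4 px grid are illustrative:

```typescript
// Shade each grid cell by the network's prediction at that point:
// red where the output is near 0 (class 0), blue where it is near 1 (class 1).
function drawDecisionBoundary(
  ctx: CanvasRenderingContext2D,
  predict: (x: number, y: number) => number,
  size: number,
  step = 4,
): void {
  for (let px = 0; px < size; px += step) {
    for (let py = 0; py < size; py += step) {
      const x = (px / size) * 2 - 1; // map canvas pixels to input space [-1, 1]
      const y = (py / size) * 2 - 1;
      const p = predict(x, y);
      ctx.fillStyle = `rgba(${Math.round(255 * (1 - p))}, 80, ${Math.round(255 * p)}, 0.35)`;
      ctx.fillRect(px, py, step, step);
    }
  }
}
```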

How it works

  1. Forward Propagation: Data flows from input to output through weighted connections
  2. Activation Functions: Each neuron applies a non-linear transformation (ReLU, Sigmoid, Tanh)
  3. Loss Calculation: Measures how far the prediction is from the target
  4. Backpropagation: Adjusts weights to minimize the loss using gradient descent
  5. Training: Repeats this process thousands of times to learn patterns (one full step is sketched below)
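To make the loop concrete, here is a hedged TypeScript sketch of one full pass (steps 1–4) for a tiny 2-2-1 sigmoid network with MSE loss. The `Net` shape and all names are illustrative, not the visualizer's actual code:

```typescript
type Net = { w1: number[][]; b1: number[]; w2: number[]; b2: number };

const sigmoid = (z: number): number => 1 / (1 + Math.exp(-z));

// One gradient step on a single (input, target) pair for a 2-2-1 network.
// Mutates `net` in place and returns the squared error for this sample.
function trainStep(net: Net, x: [number, number], target: number, lr = 0.5): number {
  // Step 1 — forward propagation: input → hidden → output
  const h = net.w1.map((row, j) => sigmoid(row[0] * x[0] + row[1] * x[1] + net.b1[j]));
  const yHat = sigmoid(h.reduce((sum, hj, j) => sum + net.w2[j] * hj, net.b2));

  // Steps 2–3 — sigmoid activations were applied above; MSE loss for this sample
  const loss = (yHat - target) ** 2;

  // Step 4 — backpropagation: chain rule through the output sigmoid...
  const dOut = 2 * (yHat - target) * yHat * (1 - yHat);
  h.forEach((hj, j) => {
    const dHidden = dOut * net.w2[j] * hj * (1 - hj); // ...and each hidden sigmoid
    net.w2[j] -= lr * dOut * hj;
    net.w1[j][0] -= lr * dHidden * x[0];
    net.w1[j][1] -= lr * dHidden * x[1];
    net.b1[j] -= lr * dHidden;
  });
  net.b2 -= lr * dOut;
  return loss;
}
```

Running `trainStep` over every sample in the dataset once is one epoch; step 5 is simply repeating that for thousands of epochs.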

Key Features

  • Real-time Training: Watch the network learn with live updates at every epoch
  • Multiple Datasets: XOR, Circle, and Spiral patterns to test different complexities
  • Network Visualization: See weights, activations, and connections visually
  • Decision Boundaries: Visualize how the network classifies different regions of space
  • Configurable Architecture: Adjust hidden layer sizes to see the impact

Technical Implementation

Core Algorithms

  • Forward propagation with matrix operations
  • Backpropagation to compute the gradients used by gradient descent
  • Multiple activation functions (Sigmoid, ReLU, Tanh)
  • Mean Squared Error (MSE) loss function
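Plausible implementations of the activations and loss listed above (a sketch; the visualizer's actual code may differ). Each activation is paired with its derivative, which backpropagation needs for the chain rule:

```typescript
const activations = {
  sigmoid: {
    f: (z: number) => 1 / (1 + Math.exp(-z)),
    df: (z: number) => { const s = 1 / (1 + Math.exp(-z)); return s * (1 - s); },
  },
  relu: {
    f: (z: number) => Math.max(0, z),
    df: (z: number) => (z > 0 ? 1 : 0),
  },
  tanh: {
    f: (z: number) => Math.tanh(z),
    df: (z: number) => 1 - Math.tanh(z) ** 2,
  },
};

// Mean Squared Error over a batch of predictions.
const mse = (yHat: number[], y: number[]): number =>
  yHat.reduce((sum, p, i) => sum + (p - y[i]) ** 2, 0) / yHat.length;
```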

Technology Stack

  • TypeScript for type safety
  • HTML5 Canvas for visualization
  • React hooks for state management
  • Client-side computation (no server needed)

Performance

  • Real-time training at 10 epochs/second
  • Efficient weight updates with gradient descent
  • Responsive canvas rendering
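A plausible way the stack and the 10 epochs/second figure fit together: a hook that advances training on a 100 ms timer, entirely client-side, and publishes progress through React state. The hook name and the `step` callback are assumptions for illustration:

```typescript
import { useEffect, useRef, useState } from "react";

// Hypothetical hook: run one training epoch every 100 ms (~10 epochs/second)
// and expose the progress as React state.
function useTraining(step: () => number) {
  const [epoch, setEpoch] = useState(0);
  const [loss, setLoss] = useState(0);
  const stepRef = useRef(step);
  stepRef.current = step; // always call the latest trainer without restarting the timer

  useEffect(() => {
    const id = setInterval(() => {
      const currentLoss = stepRef.current(); // one epoch of forward + backward passes
      setEpoch((e) => e + 1);                // state change triggers a canvas redraw
      setLoss(currentLoss);
    }, 100);
    return () => clearInterval(id);
  }, []);

  return { epoch, loss };
}
```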

Understanding Neural Networks

🧠

Neurons

Individual units that receive inputs, apply weights and biases, then pass through an activation function to produce an output.

🔗

Weights

Connection strengths between neurons. Positive weights strengthen signals, negative weights inhibit them. Training adjusts these values.

📈

Training

The process of adjusting weights through backpropagation to minimize the difference between predictions and actual values.
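Tying the three cards together, a single neuron is only a few lines (a sketch, with sigmoid chosen arbitrarily as the activation):

```typescript
// One neuron: weighted sum of inputs plus a bias, squashed by an activation.
function neuron(inputs: number[], weights: number[], bias: number): number {
  const z = inputs.reduce((sum, x, i) => sum + x * weights[i], bias);
  return 1 / (1 + Math.exp(-z)); // sigmoid activation
}

// Training then repeatedly nudges each weight against its loss gradient:
//   w ← w − learningRate · ∂loss/∂w
```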

Try These Experiments:

  1. Start simple: Try the XOR dataset (sketched after this list) with 4 hidden neurons. Watch how the decision boundary forms.
  2. Increase complexity: Switch to the Spiral dataset. You'll need more hidden neurons to separate the spirals.
  3. Compare architectures: Try 2 vs 8 hidden neurons. More neurons = more capacity to learn complex patterns.
  4. Watch the weights: Notice how thick blue/red lines indicate strong positive/negative connections.
  5. Monitor the loss: The loss decreases as the network learns. If it plateaus, you might need more neurons or training.
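For experiment 1, the XOR dataset is just four labeled points; no straight line separates the two classes, which is exactly why at least one hidden layer is needed. A sketch, assuming the 0/1 class labels used above:

```typescript
// XOR: class 1 when exactly one input is 1. A network with no hidden layer
// draws a single straight boundary and therefore cannot learn this pattern.
const xorData: { input: [number, number]; target: number }[] = [
  { input: [0, 0], target: 0 },
  { input: [0, 1], target: 1 },
  { input: [1, 0], target: 1 },
  { input: [1, 1], target: 0 },
];
```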

Educational Value

Perfect for Learning:

  • βœ“ Students learning neural network fundamentals
  • βœ“ Teachers demonstrating backpropagation
  • βœ“ Data scientists prototyping architectures
  • βœ“ Anyone curious about how AI learns

Concepts Demonstrated:

  • βœ“ Forward and backward propagation
  • βœ“ Gradient descent optimization
  • βœ“ Activation functions and their effects
  • βœ“ Network capacity and overfitting

This implementation is written from scratch in TypeScript, with no ML libraries, to help you understand the core algorithms.