For my first workshop for IEEE@UCI, I wanted to talk about something I found interesting, so I decided to do one on neural networks. Given how popular they have become in recent years, most engineering students have heard of neural networks, even if they don’t necessarily understand how they work.
My goal with this workshop was both to introduce people to the math behind neural networks and to give them hands-on practice composing and training a neural network.
This workshop covers the following information:
- Forward propagation as matrix multiplication (see the equation after this list)
- Activation functions as nonlinearities (ReLU, Sigmoid, tanh)
- Loss functions and model evaluation (MSE, maximum likelihood)
- Backpropagation and calculating gradients
- Convolution layers
- Constructing a custom model class in PyTorch (a minimal sketch follows this list)
- Training and inference on the MNIST dataset of handwritten digits
- Saving model weights and loading pre-trained weights
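As a quick reference for the first few bullets (using my own notation, which may differ from the workshop slides): each layer multiplies its input by a weight matrix, adds a bias, and passes the result through an elementwise nonlinearity such as ReLU, sigmoid, or tanh:

$$
a^{(l)} = \sigma\left(W^{(l)} a^{(l-1)} + b^{(l)}\right)
$$

Stacking these layers and comparing the final output to the target with a loss function (e.g. MSE or cross-entropy) gives the quantity whose gradients backpropagation computes.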
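Below is a minimal sketch of the kind of PyTorch model and training loop the later bullets refer to. The layer sizes, hyperparameters, and the `mnist_net.pt` filename are illustrative assumptions rather than the exact workshop code, but the structure (a custom `nn.Module`, a cross-entropy loss, a backpropagation step, and `torch.save`/`load_state_dict` for the weights) follows the topics listed above.

```python
# A minimal sketch, assuming a small fully connected network on MNIST.
# Layer sizes, hyperparameters, and the file name "mnist_net.pt" are
# illustrative choices, not the exact workshop code.
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

class MNISTNet(nn.Module):
    """Custom model class: two linear layers with a ReLU in between."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(28 * 28, 128)   # weight matrix + bias
        self.fc2 = nn.Linear(128, 10)        # 10 output classes (digits 0-9)

    def forward(self, x):
        x = x.view(x.size(0), -1)            # flatten 28x28 images to vectors
        x = torch.relu(self.fc1(x))          # matrix multiply + ReLU nonlinearity
        return self.fc2(x)                   # raw class scores (logits)

def train(epochs=1):
    train_set = datasets.MNIST("data", train=True, download=True,
                               transform=transforms.ToTensor())
    loader = DataLoader(train_set, batch_size=64, shuffle=True)

    model = MNISTNet()
    criterion = nn.CrossEntropyLoss()        # loss function for classification
    optimizer = optim.SGD(model.parameters(), lr=0.01)

    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()                  # backpropagation computes gradients
            optimizer.step()                 # gradient descent step

    torch.save(model.state_dict(), "mnist_net.pt")   # save trained weights
    return model

if __name__ == "__main__":
    train()
    # To load the pre-trained weights later:
    # model = MNISTNet()
    # model.load_state_dict(torch.load("mnist_net.pt"))
```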
Participants had the option of following along with the MNIST example, either in their own Python file or in a Google Colab notebook: