One Pager Cheat Sheet
- Neural Networks are models, loosely designed after the human brain, that propagate inputs through many cells/units to produce outputs when given an input.
- Nerve cells process and transfer information through synapses, dendrites, the soma, and the axon; Artificial Neural Networks (ANNs) simulate this behavior to mimic the operation of a human brain.
- A neuron in an Artificial Neural Network (ANN) computes an output (activation) by multiplying the inputs (a tensor) with weights, adding a bias, and then often applying an activation function to the sum. A Linear Layer is an efficient way of implementing neurons with matrix multiplication, which makes parallel processing with GPUs and TPUs possible (see the linear-layer sketch after this list).
- Matrix multiplication is more efficient than for-loops because its independent multiply-and-add operations can be parallelized and performed in any order.
- An activation function is used to keep the output data in a desired shape in deep learning (sketches of the activations below follow this list).
- The `IdentityActivation` function returns the given input X unchanged as its output.
- A linear activation function multiplies an input X by a constant and produces an output that is proportional to the input over the whole input domain, as in f(x) = a*x.
- The Sigmoid activation function takes in a value and produces an output between 0 and 1, useful for classifying data; it is given by f(x) = 1/(1+e^-x).
- The Tanh activation function ranges from -1 to 1 and is calculated with the `tanh()` function.
- The Binary Step activation is a threshold-based activation used for binary classification: it outputs 0 if the input is negative and 1 if it is positive.
- The ReLU activation function filters negative values to 0 and acts as a linear function for all positive values.
- Leaky ReLU is an activation function defined by f(x) = b*x when x <= 0 and a*x when x > 0, with b < a, and can be implemented in Python through the `LeakyReLUActivation` class.
- The SoftMax activation function converts a whole vector of outputs into probabilities and is mostly used for multiclass classification, as demonstrated by the Python code snippet provided.
- The sigmoid activation function produces an output between 0 and 1 for each value independently, while SoftMax converts all outputs so they sum to a total probability of 1.
- The Leaky ReLU activation function is defined by f(x) = a*x when x < 0 and f(x) = x when x >= 0, giving a range from negative to positive infinity, in contrast to bounded activation functions such as the Sigmoid.
- We created a `NeuralNet` class that provides an API for the user to `predict` by calling the `forward` methods of the other classes.
- We can calculate the loss of our neural network with the `categorical_crossentropy` function, which implements the Categorical Cross-Entropy equation (see the loss-and-gradient sketch after this list).
- We can implement backpropagation in neural networks by adding a `backward()` method that returns the gradient of the loss with respect to the layer's inputs and weights, and by using the gradient descent algorithm together with backpropagation to update the weights and biases.
- With a few lines of code in TensorFlow, you can easily create and train a deep learning model that accepts 2-dimensional input data of any shape (a minimal Keras sketch follows this list).
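
As a recap of the neuron and Linear Layer bullets, here is a minimal NumPy sketch of a dense layer's forward pass; the `LinearLayer` class name, the initialization scheme, and the shapes are illustrative assumptions, not the lesson's exact code.

```python
import numpy as np

class LinearLayer:
    """A dense layer computing y = X @ W + b with one matrix multiplication."""

    def __init__(self, in_features, out_features):
        # Small random weights and zero biases (the initialization is only illustrative).
        self.W = np.random.randn(in_features, out_features) * 0.01
        self.b = np.zeros(out_features)

    def forward(self, X):
        # X: (batch_size, in_features) -> output: (batch_size, out_features).
        # One matrix multiply replaces a per-neuron for-loop and parallelizes well.
        return X @ self.W + self.b

layer = LinearLayer(in_features=3, out_features=2)
print(layer.forward(np.array([[1.0, 2.0, 3.0]])))
```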

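The activation functions listed above can each be written in a line or two of NumPy; this sketch uses plain functions rather than the lesson's class-based `IdentityActivation`/`LeakyReLUActivation` style, and the default slope in `leaky_relu` is an assumed value.

```python
import numpy as np

def identity(x):                 # returns the input unchanged
    return x

def sigmoid(x):                  # squashes any value into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):                     # squashes any value into (-1, 1)
    return np.tanh(x)

def binary_step(x):              # 0 for negative inputs, 1 otherwise
    return np.where(x < 0, 0.0, 1.0)

def relu(x):                     # filters negatives to 0, linear for positives
    return np.maximum(0.0, x)

def leaky_relu(x, slope=0.01):   # small slope for negatives, identity for positives
    return np.where(x < 0, slope * x, x)

def softmax(x):                  # converts a vector into probabilities summing to 1
    exps = np.exp(x - np.max(x)) # subtract the max for numerical stability
    return exps / np.sum(exps)

x = np.array([-2.0, 0.0, 3.0])
print(relu(x), softmax(x))
```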

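For the loss and backpropagation bullets, the sketch below pairs a `categorical_crossentropy` function with a single softmax output layer and plain gradient-descent updates; the gradient `probs - y` is the standard result for this softmax/cross-entropy pairing, and the learning rate and toy data are assumptions rather than the lesson's exact `backward()` implementation.

```python
import numpy as np

def categorical_crossentropy(y_true, y_pred, eps=1e-12):
    # y_true: one-hot targets, y_pred: predicted probabilities, both (batch, classes).
    return -np.mean(np.sum(y_true * np.log(np.clip(y_pred, eps, 1.0)), axis=1))

def softmax(z):
    exps = np.exp(z - np.max(z, axis=1, keepdims=True))
    return exps / np.sum(exps, axis=1, keepdims=True)

# Toy data: 4 samples, 3 features, 2 classes (values are placeholders).
X = np.random.randn(4, 3)
y = np.eye(2)[[0, 1, 0, 1]]
W, b = np.random.randn(3, 2) * 0.01, np.zeros(2)

for _ in range(100):
    probs = softmax(X @ W + b)          # forward pass
    loss = categorical_crossentropy(y, probs)
    grad_logits = (probs - y) / len(X)  # backward: gradient of the loss w.r.t. the logits
    W -= 0.1 * (X.T @ grad_logits)      # gradient-descent update for the weights
    b -= 0.1 * grad_logits.sum(axis=0)  # ...and for the biases

print("final loss:", loss)
```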

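The last bullet mentions building a model in TensorFlow; this Keras snippet is a minimal sketch assuming 8x8 two-dimensional inputs and 3 classes, both chosen only for illustration.

```python
import numpy as np
import tensorflow as tf

# Toy dataset: 100 samples of 8x8 (2-dimensional) inputs and 3 classes.
X = np.random.rand(100, 8, 8).astype("float32")
y = np.random.randint(0, 3, size=100)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8, 8)),                   # 2-D input per sample
    tf.keras.layers.Flatten(),                      # flatten to a vector
    tf.keras.layers.Dense(16, activation="relu"),   # hidden linear layer + ReLU
    tf.keras.layers.Dense(3, activation="softmax"), # class probabilities
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(X, y, epochs=5, batch_size=16, verbose=0)
print(model.predict(X[:2], verbose=0))
```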