One Pager Cheat Sheet
- Neural Networks are models, loosely designed after our brains, that propagate an input through many cells/units to produce an output.
- Nerve cells process and transfer information through synapses, dendrites, the soma, and the axon; Artificial Neural Networks (ANNs) effectively simulate this behaviour to mimic the operation of a human brain.
- A neuron in an Artificial Neural Network (ANN) computes an output (activation) by multiplying its inputs (a tensor) with weights, adding a bias, and then often applying an activation function to the sum. A Linear Layer is an efficient way of implementing such neurons using matrix multiplication, which makes parallel processing on GPUs and TPUs possible (see the sketch below).
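As a rough illustration of that idea (the class name `LinearLayer` and the attribute names `W` and `b` are assumptions, not the lesson's exact code), a linear layer is just one matrix product plus a bias:

```python
import numpy as np

class LinearLayer:
    """Sketch of a linear (fully connected) layer: y = x @ W + b."""
    def __init__(self, n_in, n_out):
        self.W = np.random.randn(n_in, n_out) * 0.01  # weight matrix
        self.b = np.zeros(n_out)                      # bias vector

    def forward(self, x):
        # x has shape (batch, n_in); one matrix product evaluates every
        # neuron's weighted sum for every sample at once.
        return x @ self.W + self.b

layer = LinearLayer(4, 3)
print(layer.forward(np.ones((2, 4))).shape)  # (2, 3)
```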
- Matrix multiplication is more efficient than for-loops because its independent multiply-and-add operations can be performed in any order, which lets them be executed with parallel processing.
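To see the difference, the hypothetical snippet below computes the same product with a naive Python loop and with NumPy's vectorized matrix multiply; the exact timings depend on your machine.

```python
import time
import numpy as np

A = np.random.randn(200, 200)
B = np.random.randn(200, 200)

def matmul_loops(A, B):
    # Naive triple loop: one multiply-add at a time.
    out = np.zeros((A.shape[0], B.shape[1]))
    for i in range(A.shape[0]):
        for j in range(B.shape[1]):
            for k in range(A.shape[1]):
                out[i, j] += A[i, k] * B[k, j]
    return out

t0 = time.time(); matmul_loops(A, B); t_loop = time.time() - t0
t0 = time.time(); A @ B;              t_vec = time.time() - t0
print(f"loops: {t_loop:.3f}s, vectorized: {t_vec:.5f}s")
```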
- An activation function is used to keep a layer's output data in a desired shape or range in deep learning.
- The `IdentityActivation` function returns the given input `X` unchanged as its output.
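A minimal sketch of that activation (the `forward` method signature is an assumption about the lesson's API):

```python
import numpy as np

class IdentityActivation:
    """Identity activation: output equals input, f(x) = x."""
    def forward(self, X):
        return X

print(IdentityActivation().forward(np.array([-2.0, 0.0, 3.0])))  # [-2.  0.  3.]
```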
- A linear activation function multiplies the input `X` by a constant and produces an output proportional to the input for all values of the input domain, as captured by the equation `f(x) = a*x`.
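A sketch of such a linear activation, assuming the slope `a` is passed to the constructor (the parameter name is illustrative):

```python
import numpy as np

class LinearActivation:
    """Linear activation: f(x) = a * x."""
    def __init__(self, a=1.0):
        self.a = a

    def forward(self, X):
        return self.a * np.asarray(X)

print(LinearActivation(a=2.0).forward([1.0, -3.0]))  # [ 2. -6.]
```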
- The Sigmoid activation function takes in a value and produces an output between 0 and 1, which is useful for classifying data; it is given by the equation `f(x) = 1/(1+e^-x)`.
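A sketch of the sigmoid, written directly from that formula:

```python
import numpy as np

class SigmoidActivation:
    """Sigmoid activation: f(x) = 1 / (1 + e^-x), output in (0, 1)."""
    def forward(self, X):
        return 1.0 / (1.0 + np.exp(-np.asarray(X)))

print(SigmoidActivation().forward([-4.0, 0.0, 4.0]))  # ~[0.018 0.5 0.982]
```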
- The Tanh activation function ranges from -1 to 1 and is calculated with the hyperbolic tangent function, `tanh()`.
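A sketch using NumPy's built-in `np.tanh`:

```python
import numpy as np

class TanhActivation:
    """Tanh activation: output in (-1, 1)."""
    def forward(self, X):
        return np.tanh(np.asarray(X))

print(TanhActivation().forward([-2.0, 0.0, 2.0]))  # ~[-0.964 0. 0.964]
```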
- The Binary Step activation function is a threshold-based activation used for binary classification: it outputs 0 if the input is negative and 1 if it is positive. The `ReLUActivation` function filters negative values to 0 and acts as a linear function for all positive values.
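Sketches of both (only `ReLUActivation` is named above; the `BinaryStepActivation` class name is an assumption following the same naming pattern):

```python
import numpy as np

class BinaryStepActivation:
    """Binary step: 0 for negative inputs, 1 otherwise."""
    def forward(self, X):
        return np.where(np.asarray(X) < 0, 0, 1)

class ReLUActivation:
    """ReLU: f(x) = max(0, x), zero for negatives, linear for positives."""
    def forward(self, X):
        return np.maximum(0, np.asarray(X))

x = np.array([-1.5, 0.0, 2.0])
print(BinaryStepActivation().forward(x))  # [0 1 1]
print(ReLUActivation().forward(x))        # [0. 0. 2.]
```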
- Leaky ReLU is a type of activation function defined by the equation `f(x) = b*x` when `x <= 0` and `f(x) = a*x` when `x > 0`, with `b < a`; it can be implemented in Python through the `LeakyReLUActivation` class, sketched below.
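A sketch of that class using the slopes `a` and `b` from the definition above (the default values chosen here, `a = 1.0` and `b = 0.01`, are common but only illustrative):

```python
import numpy as np

class LeakyReLUActivation:
    """Leaky ReLU: f(x) = b*x for x <= 0, a*x for x > 0, with b < a."""
    def __init__(self, a=1.0, b=0.01):
        self.a, self.b = a, b

    def forward(self, X):
        X = np.asarray(X)
        return np.where(X > 0, self.a * X, self.b * X)

print(LeakyReLUActivation().forward([-3.0, 0.0, 2.0]))  # [-0.03  0.  2.]
```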
- The SoftMax activation function is a more involved function that helps convert a set of outputs to probabilities, and it is mostly used for multiclass classification, as the Python snippet below demonstrates.
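A numerically stable softmax sketch (subtracting the maximum before exponentiating is a standard trick, not necessarily how the lesson's snippet wrote it):

```python
import numpy as np

class SoftMaxActivation:
    """Softmax: exponentiates and normalizes so the outputs sum to 1."""
    def forward(self, X):
        X = np.asarray(X, dtype=float)
        e = np.exp(X - X.max())   # subtract max for numerical stability
        return e / e.sum()

probs = SoftMaxActivation().forward([2.0, 1.0, 0.1])
print(probs, probs.sum())  # ~[0.659 0.242 0.099] 1.0
```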
- The sigmoid activation function squashes each output independently to a value between 0 and 1, while SoftMax rescales all of the outputs so that together they form a total probability of 1.
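A tiny illustration of that contrast using the formulas above: per-class sigmoid scores need not sum to 1, while softmax outputs always do.

```python
import numpy as np

logits = np.array([2.0, 1.0, 0.1])
sigmoid = 1 / (1 + np.exp(-logits))
softmax = np.exp(logits) / np.exp(logits).sum()
print(sigmoid.sum())  # ~2.14, not a probability distribution
print(softmax.sum())  # 1.0, a proper probability distribution
```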
- In its usual parameterization, the Leaky ReLU activation function is defined by `f(x) = a*x` when `x < 0` and `f(x) = x` when `x >= 0`, giving it a range from negative to positive infinity, in contrast to bounded activation functions such as the Sigmoid.
- We have created a `NeuralNet` class which provides a `predict` API for the user, built on the `forward` methods of the layer and activation classes (a minimal sketch follows).
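A minimal sketch of such a wrapper, assuming every layer and activation exposes a `forward` method as in the sketches above (the constructor taking a list of layers is an assumption, not the lesson's exact API):

```python
import numpy as np

class NeuralNet:
    """Chains the forward() calls of its layers to make a prediction."""
    def __init__(self, layers):
        self.layers = layers

    def predict(self, X):
        out = np.asarray(X)
        for layer in self.layers:
            out = layer.forward(out)
        return out

# e.g. net = NeuralNet([LinearLayer(4, 3), SoftMaxActivation()])
#      net.predict(np.ones((1, 4)))
```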
- We can calculate the loss of our neural network with the `categorical_crossentropy` function, which implements the Categorical Cross-Entropy equation.
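A sketch of that loss for one-hot targets, `L = -mean(sum(y_true * log(y_pred)))`; the clipping constant is an assumption added to avoid `log(0)`:

```python
import numpy as np

def categorical_crossentropy(y_true, y_pred, eps=1e-12):
    """Categorical cross-entropy, averaged over the batch."""
    y_pred = np.clip(y_pred, eps, 1.0)  # avoid log(0)
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=-1))

y_true = np.array([[0, 1, 0]])
y_pred = np.array([[0.2, 0.7, 0.1]])
print(categorical_crossentropy(y_true, y_pred))  # ~0.357
```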
- We can implement backpropagation in our neural network by adding a `backward()` method that returns the gradient with respect to the layer's input and computes the gradients with respect to `W` and the bias, and by using the gradient descent algorithm in combination with backward propagation to update the weights and biases (a rough sketch follows after this list).
- With a few lines of code in TensorFlow, you can easily create and train a deep learning model that takes 2-dimensional input data of any shape.
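A rough sketch of the backward-and-update pattern for the linear layer sketched earlier, assuming a plain gradient descent step (the variable names and learning rate are illustrative):

```python
import numpy as np

class LinearLayerWithBackward:
    """Linear layer with a backward() pass and a gradient descent update."""
    def __init__(self, n_in, n_out, lr=0.1):
        self.W = np.random.randn(n_in, n_out) * 0.01
        self.b = np.zeros(n_out)
        self.lr = lr

    def forward(self, x):
        self.x = x                       # cache input for the backward pass
        return x @ self.W + self.b

    def backward(self, grad_out):
        grad_W = self.x.T @ grad_out     # gradient w.r.t. weights
        grad_b = grad_out.sum(axis=0)    # gradient w.r.t. bias
        grad_in = grad_out @ self.W.T    # gradient w.r.t. the layer's input
        self.W -= self.lr * grad_W       # gradient descent update
        self.b -= self.lr * grad_b
        return grad_in
```

And a short TensorFlow/Keras sketch of building and training a small model on 2-dimensional (samples by features) data; the layer sizes and the random toy data are purely illustrative:

```python
import numpy as np
import tensorflow as tf

X = np.random.rand(100, 4).astype("float32")   # 2-D input: (samples, features)
y = tf.keras.utils.to_categorical(np.random.randint(0, 3, 100), 3)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="sgd", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)
```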