
One Pager Cheat Sheet

  • Neural Networks are models, loosely inspired by the brain, that propagate an input through many connected cells/units to produce an output.
  • Nerve cells process and transfer information through synapses, dendrites, the soma and the axon; Artificial Neural Networks (ANNs) simulate this structure to mimic the operation of a human brain.
  • A neuron in an Artificial Neural Network (ANN) computes an output (activation) by multiplying its inputs (a tensor) by weights, adding a bias, and usually applying an activation function to the sum (see the first sketch after this list).
  • A Linear Layer is an efficient way of implementing neurons using matrix multiplication, which makes parallel processing on GPUs and TPUs possible.
  • Matrix multiplication is more efficient than explicit for-loops because its independent multiply-add operations can be computed in parallel and in any order.
  • An activation function is used to keep a neuron's output in a desired range or shape in deep learning (sketches of the common activations appear after this list).
  • The IdentityActivation function returns the given input X as its output.
  • A linear activation multiplies the input X by a constant, producing an output proportional to the input over its whole domain, as in the equation f(x) = a*x.
  • The Sigmoid activation function takes in a value and produces an output between 0 and 1, which is useful for classification; it is given by f(x) = 1/(1+e^-x).
  • The Tanh activation function ranges from -1 to 1 and is calculated using the tanh() function.
  • The Binary Step activation is a threshold-based function used for binary classification, which outputs 0 if the input is negative and 1 if it is positive.
  • The ReLUActivation function clamps negative values to 0 and acts as a linear (identity) function for all positive values.
  • Leaky ReLU is an activation function defined by f(x) = b*x when x <= 0 and a*x when x > 0, with b < a, and can be implemented in Python through the LeakyReLUActivation class.
  • The SoftMax activation function is a non-linear function that converts raw outputs into probabilities, and it is mostly used for multiclass classification.
  • The Sigmoid activation function squashes each output to a value between 0 and 1 independently, while SoftMax converts all outputs into probabilities that sum to 1.
  • The Leaky ReLU activation function is defined by f(x) = a*x when x < 0 and f(x) = x when x >= 0, giving it a range from negative to positive infinity, in contrast to bounded activation functions such as the Sigmoid.
  • We created a NeuralNet class that provides a simple API for the user to make predictions by chaining the forward methods of the layer and activation classes.
  • We can calculate the loss of our neural network with the categorical_crossentropy function, which implements the Categorical Cross-Entropy equation (see the sketch after this list).
  • We can implement backpropagation by adding a backward() method that returns the gradients with respect to the layer's input and weights W, and then use the gradient descent algorithm together with those gradients to update the weights and biases (a minimal sketch appears after this list).
  • With a few lines of TensorFlow code, you can create and train a deep learning model that accepts 2-dimensional input data of any shape (a minimal example appears after this list).
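As a quick refresher on the neuron and Linear Layer bullets above, here is a minimal NumPy sketch. The array names and shapes are illustrative assumptions, not the lesson's exact LinearLayer class.

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=(4,))      # one input with 4 features
W = rng.normal(size=(3, 4))    # 3 neurons, each with 4 weights
b = np.zeros(3)                # one bias per neuron

# A single neuron: weighted sum of the inputs plus a bias.
single_neuron_out = W[0] @ x + b[0]

# A linear layer computes all 3 neurons at once with one matrix multiplication,
# which is what lets GPUs/TPUs run the multiply-adds in parallel instead of a for-loop.
layer_out = W @ x + b

print(single_neuron_out, layer_out[0])   # the first layer output matches the single neuron
```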
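The activation functions summarized above can be sketched in a few lines of NumPy. These are standalone functions for illustration, not the lesson's IdentityActivation/LeakyReLUActivation classes, and the Leaky ReLU slope of 0.01 is an assumed example value.

```python
import numpy as np

def identity(x):       return x                           # returns the input unchanged
def linear(x, a=2.0):  return a * x                       # f(x) = a*x
def sigmoid(x):        return 1.0 / (1.0 + np.exp(-x))    # f(x) = 1/(1+e^-x), range (0, 1)
def tanh(x):           return np.tanh(x)                  # range (-1, 1)
def binary_step(x):    return np.where(x < 0, 0.0, 1.0)   # threshold at 0
def relu(x):           return np.maximum(0.0, x)          # negatives clamped to 0
def leaky_relu(x, a=0.01):
    return np.where(x > 0, x, a * x)                      # small slope a for negative inputs

def softmax(x):
    e = np.exp(x - np.max(x))                             # subtract the max for numerical stability
    return e / e.sum()                                     # outputs sum to 1 (probabilities)

x = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
for f in (identity, linear, sigmoid, tanh, binary_step, relu, leaky_relu, softmax):
    print(f.__name__, np.round(f(x), 3))
```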
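For the loss bullet, a hedged sketch of Categorical Cross-Entropy: L = -sum_i y_i * log(p_i), where y is a one-hot target and p are predicted probabilities (for example, SoftMax outputs). The function name mirrors the cheat sheet; the clipping value is an assumption to avoid log(0).

```python
import numpy as np

def categorical_crossentropy(y_true, y_pred, eps=1e-12):
    y_pred = np.clip(y_pred, eps, 1.0)          # avoid log(0)
    return -np.sum(y_true * np.log(y_pred))

y_true = np.array([0.0, 1.0, 0.0])              # one-hot: true class is index 1
y_pred = np.array([0.1, 0.7, 0.2])              # SoftMax probabilities
print(categorical_crossentropy(y_true, y_pred)) # about 0.357
```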
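The backpropagation bullet can be illustrated with a minimal backward() pass plus a gradient-descent update for one linear layer. The class name, shapes and learning rate here are assumptions for the sketch; the lesson's NeuralNet API may differ.

```python
import numpy as np

class Linear:
    def __init__(self, n_in, n_out, lr=0.1):
        self.W = np.random.randn(n_out, n_in) * 0.1
        self.b = np.zeros(n_out)
        self.lr = lr

    def forward(self, x):
        self.x = x                              # cache the input for the backward pass
        return self.W @ x + self.b

    def backward(self, grad_out):
        # grad_out is dLoss/dOutput; compute gradients w.r.t. W, b and the input.
        grad_W = np.outer(grad_out, self.x)
        grad_b = grad_out
        grad_x = self.W.T @ grad_out
        # Gradient-descent update of the parameters.
        self.W -= self.lr * grad_W
        self.b -= self.lr * grad_b
        return grad_x                           # passed back to the previous layer

layer = Linear(3, 2)
out = layer.forward(np.array([1.0, 2.0, 3.0]))
grad_in = layer.backward(np.ones(2))            # pretend dLoss/dOutput is all ones
```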
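Finally, a few-line TensorFlow/Keras model for 2-dimensional input data. The random data, layer sizes and training settings are placeholder assumptions, not the exact model from the lesson; Keras infers the input shape from the first batch it sees.

```python
import numpy as np
import tensorflow as tf

X = np.random.rand(100, 4).astype("float32")    # 100 samples, 4 features
y = np.random.randint(0, 3, size=(100,))        # 3 classes

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)
```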