mod1

Cards (40)

  • Mapping function f

    A model or algorithm that maps an input to an output
  • Characteristics of Computing

    • Hard computing
    • Soft computing
  • Examples of Hard Computing
  • Hard computing

    Conventional computing techniques
  • Soft computing

    Techniques that tolerate imprecision, uncertainty, and partial truth to achieve tractability, robustness and low solution cost
  • Characteristics of Soft Computing

    • Tolerance of imprecision, uncertainty, and partial truth
    • Achieving tractability, robustness and low solution cost
  • Hard Computing vs Soft Computing

    • Hard computing: Conventional computing techniques
    • Soft computing: Techniques that tolerate imprecision, uncertainty, and partial truth to achieve tractability, robustness and low solution cost
  • How Soft Computing works

    1. What is GA?
    2. Selection
    3. Crossover
    4. Mutation
    5. GA Pseudo-code
  • Genetic Algorithm (GA)

    A global search algorithm inspired by evolution
  • Genetic Algorithm

    1. The evolution usually starts from a population of randomly generated individuals and happens in generations
    2. In each generation, the fitness of every individual in the population is evaluated, multiple individuals are selected from the current population (based on their fitness), and modified (recombined and possibly mutated) to form a new population
    3. The new population is then used in the next iteration of the algorithm
    4. Commonly, the algorithm terminates when either a maximum number of generations has been produced, or a satisfactory fitness level has been reached for the population (both stopping rules appear in the sketch below)
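
    A minimal sketch of this generational loop in Python. The function names, parameters, and fitness-proportional selection scheme are illustrative, not prescribed by the card; `random_individual` and `mutate` are caller-supplied helpers:

    ```python
    import random

    def genetic_algorithm(fitness, random_individual, mutate,
                          pop_size=20, max_generations=100,
                          target_fitness=None, mutation_rate=0.1):
        # 1. Start from a population of randomly generated individuals
        #    (assumed to be list/tuple-like so they can be sliced).
        population = [random_individual() for _ in range(pop_size)]
        for _ in range(max_generations):
            # 2. Evaluate the fitness of every individual.
            scores = [fitness(ind) for ind in population]
            # 4. Stop early once a satisfactory fitness level is reached.
            if target_fitness is not None and max(scores) >= target_fitness:
                break
            next_population = []
            for _ in range(pop_size):
                # Selection: parents picked with probability proportional to fitness.
                p1, p2 = random.choices(population, weights=scores, k=2)
                # Crossover: recombine the two parents at a random cut point.
                cut = random.randrange(1, len(p1))
                child = p1[:cut] + p2[cut:]
                # Mutation: possibly perturb the child.
                if random.random() < mutation_rate:
                    child = mutate(child)
                next_population.append(child)
            # 3. The new population is used in the next iteration.
            population = next_population
        return max(population, key=fitness)
    ```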
  • If the GA has terminated because it reached the maximum number of generations, a satisfactory solution may or may not have been reached
  • Genetic algorithms (8-queens example)

    • Fitness function: number of non-attacking pairs of queens (at most 8 × 7 / 2 = 28)
    • Selection probability is proportional to fitness: given four boards with fitness 24, 23, 20, and 11, the first is chosen with probability 24/(24+23+20+11) ≈ 31%, the second with 23/(24+23+20+11) ≈ 29%, and so on (see the sketch below)
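
    A sketch of that fitness function, assuming each board is encoded with one queen per column, `board[i]` giving the row of the queen in column i; the four example boards are the classic ones scoring 24, 23, 20, and 11:

    ```python
    from itertools import combinations

    def fitness(board):
        # Count attacking pairs (same row or same diagonal) and subtract
        # from the total number of pairs, n*(n-1)/2 (28 for 8 queens).
        n = len(board)
        attacking = sum(1 for i, j in combinations(range(n), 2)
                        if board[i] == board[j]                # same row
                        or abs(board[i] - board[j]) == j - i)  # same diagonal
        return n * (n - 1) // 2 - attacking

    boards = [(2, 4, 7, 4, 8, 5, 5, 2), (3, 2, 7, 5, 2, 4, 1, 1),
              (2, 4, 4, 1, 5, 1, 2, 4), (3, 2, 5, 4, 3, 2, 1, 3)]
    scores = [fitness(b) for b in boards]      # [24, 23, 20, 11]
    probs = [s / sum(scores) for s in scores]  # ~[0.31, 0.29, 0.26, 0.14]
    ```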
  • Components of learning

    • Formalization
    • Example: Training and testing
  • Generalization
    Key challenge in machine learning: to perform well on unseen examples
  • Types of learning

    • Supervised (inductive) learning
    • Unsupervised learning
    • Reinforcement learning
  • Biological Inspiration
  • Human brain

    An amazing processor, comprising about 10^10 neurons, each connected to as many as 200,000 other neurons
  • The power of the human mind comes from the sheer number of neurons and their interconnections
  • What can Artificial Neural Networks (ANNs) do?

    • Classify data by recognizing patterns
    • Detect anomalies or novelties, when test data does not match the usual patterns
    • Approximate a target function, which is useful for prediction and forecasting
  • Biological Neuron vs Artificial Neuron

    • Speed
    • Processing
    • Size
    • No. of neurons
    • No. of Connections
    • Storage
    • Recall
    • Fault tolerance
    • Redundancy
  • Artificial Neuron

    • The net input to the neuron Y is y_in = b + Σ xᵢwᵢ, i.e. the bias plus the weighted sum of the inputs (computed in the sketch below)
    • The activation function is applied to the net input to compute the output
    • The weight represents the strength of synapse connecting the input and the output neurons
    • The weights may be positive or negative
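
    A minimal sketch of a single artificial neuron under these definitions; the input values, weights, and the choice of tanh as the activation are illustrative:

    ```python
    import math

    def neuron_output(inputs, weights, bias, activation=math.tanh):
        # Net input: y_in = b + sum_i x_i * w_i.  A positive weight excites
        # the connection; a negative weight inhibits it.
        y_in = bias + sum(x * w for x, w in zip(inputs, weights))
        # The activation function maps the net input to the neuron's output.
        return activation(y_in)

    y = neuron_output([0.5, -1.0, 0.25], weights=[0.8, -0.4, 1.2], bias=0.1)
    ```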
  • Basic Entities of Artificial Neural Networks

    • The model's synaptic interconnections
    • The training or learning rules adopted for updating and adjusting the connection weights
    • The activation functions
  • Connections in ANNs

    • The neurons can be visualized by their arrangement in layers
    • The arrangement of these processing elements and the geometry of their interconnections are essential
    • The points where each connection originates and terminates should be noted
    • The function of each processing element in an ANN should be specified
  • Basic Neural Network Architectures

    • SINGLE-LAYER FEED FORWARD NETWORK
    • MULTI-LAYER FEED FORWARD NETWORK
    • SINGLE NODE WITH ITS OWN FEEDBACK
    • SINGLE-LAYER RECURRENT NETWORK
    • MULTI-LAYER RECURRENT NETWORK
  • Multi Layer Feedforward Neural Network

    • It is formed by the interconnection of several layers
    • Input layer receives the input signals and biases
    • Output layer generates the output of the network
    • Hidden Layers are internal to the network and do not have any connection with the external environment
    • A higher no. of hidden layers ⇒ higher complexity of the network, but possibly a more efficient output response
    • In a completely connected NN, the output of every neuron in one layer is connected to each and every node in the next layer (as in the sketch below)
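
    A minimal sketch of a forward pass through such a network, assuming completely connected layers and a sigmoid activation throughout; the layer sizes and weight values are illustrative:

    ```python
    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def layer_forward(inputs, weights, biases):
        # Completely connected: every input feeds every neuron in the layer.
        return [sigmoid(b + sum(x * w for x, w in zip(inputs, row)))
                for row, b in zip(weights, biases)]

    def feedforward(x, layers):
        # The hidden layers are internal; only the final layer's output
        # reaches the external environment.
        for weights, biases in layers:
            x = layer_forward(x, weights, biases)
        return x

    # 2 inputs -> 3 hidden neurons -> 1 output neuron.
    hidden = ([[0.2, -0.5], [0.7, 0.1], [-0.3, 0.8]], [0.1, 0.0, -0.2])
    output = ([[0.5, -0.6, 0.9]], [0.05])
    y = feedforward([1.0, 0.5], [hidden, output])
    ```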
  • Feedback Networks
    • Feed Forward Networks: No neuron in the output layer is an input to a node in the same layer or in the preceding layer
    • Feedback Networks: Outputs can be directed back as inputs to same or preceding layer nodes
    • Lateral Feedback: When the feedback is directed as input to the nodes in the same layer
    • Lateral inhibition structure network: When each node gets two types of inputs: excitatory (from nearby processing elements) and inhibitory (from more distantly located processing elements)
  • Learning in ANNs

    • Parameter learning: It updates the connecting weights in a neural network
    • Structure Learning: It focuses on changes in the structure of the network
  • Types of Learning

    • Supervised Learning
    • Unsupervised Learning
  • Activation Functions

    • Linear Activation Function
    • Sigmoid Activation Function
    • Tanh Activation Function
  • Linear Activation Function

    • This function outputs the input value as it is. No changes are made to the input.
    • Used only in the output layer of a neural network model that solves a regression problem, not used in hidden layers.
  • Sigmoid Activation Function

    • It squashes its input to a value between 0 and 1, which can be interpreted as a probability.
    • It maps large negative values towards 0 and large positive values towards 1.
    • It returns 0.5 for the input 0, which is known as the threshold value.
    • Used in the output layer when building a binary classifier.
  • Tanh Activation Function

    • The output of the tanh function always ranges between -1 and +1.
    • It has a steeper gradient than the sigmoid function.
    • Used in the hidden layers of MLPs, CNNs and RNNs, but not in the output layer (all three activation functions are sketched below).
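
    A minimal sketch of the three activation functions described above, with their key properties checked as assertions:

    ```python
    import math

    def linear(x):
        # Identity: outputs the input value as it is.
        return x

    def sigmoid(x):
        # Squashes any input into (0, 1); sigmoid(0) = 0.5, the threshold value.
        return 1.0 / (1.0 + math.exp(-x))

    def tanh(x):
        # Squashes any input into (-1, +1), with a steeper gradient than sigmoid.
        return math.tanh(x)

    assert sigmoid(0) == 0.5
    assert 0 < sigmoid(-10) < 0.001 and 0.999 < sigmoid(10) < 1
    assert -1 < tanh(-10) < tanh(10) < 1
    ```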
  • McCulloch-Pitts (M-P) Neuron

    1. Calculate the net input to the neuron
    2. Assume initial weights to be 1 (a sketch of the model follows below)
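
    A minimal sketch of the M-P neuron under these steps: fixed weights of 1, a hard threshold, and a binary output; the AND-gate usage is illustrative:

    ```python
    def mp_neuron(inputs, threshold, weights=None):
        if weights is None:
            weights = [1] * len(inputs)   # step 2: assume initial weights of 1
        # Step 1: calculate the net input to the neuron.
        y_in = sum(x * w for x, w in zip(inputs, weights))
        # Fire (output 1) only when the net input reaches the threshold.
        return 1 if y_in >= threshold else 0

    # With threshold 2 the neuron behaves as a two-input AND gate.
    assert mp_neuron([1, 1], threshold=2) == 1
    assert mp_neuron([1, 0], threshold=2) == 0
    ```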
  • Perceptron
    A type of artificial neural network that learns to perform binary classification
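
    A minimal sketch of the perceptron learning rule, assuming bipolar targets in {-1, +1}; the learning rate, epoch count, and AND-gate training data are illustrative:

    ```python
    def train_perceptron(samples, epochs=20, lr=0.1):
        n = len(samples[0][0])
        w, b = [0.0] * n, 0.0
        for _ in range(epochs):
            for x, t in samples:
                # Predict the class label with a threshold function.
                y = 1 if b + sum(xi * wi for xi, wi in zip(x, w)) >= 0 else -1
                # Update weights only when the predicted label is wrong.
                if y != t:
                    w = [wi + lr * t * xi for wi, xi in zip(w, x)]
                    b += lr * t
        return w, b

    # Learn a linearly separable function (logical AND, bipolar targets).
    w, b = train_perceptron([([0, 0], -1), ([0, 1], -1),
                             ([1, 0], -1), ([1, 1], 1)])
    ```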
  • Adaptive Linear Neuron (Adaline)

    • A network having a single linear unit, developed by Widrow and Hoff in 1960
    • It uses a bipolar activation function
    • The net input is compared with the target value to compute the error signal, and weights are adjusted based on an adaptive training algorithm
  • Adaptive Linear Neuron Learning algorithm

    1. Step 0: Initialize weights, bias, and learning rate
    2. Steps 1-7: Iterative process to adjust weights and bias based on the error (sketched below)
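
    A minimal sketch of that iterative process using the Widrow-Hoff (delta/LMS) rule; the training data (bipolar AND), learning rate, and epoch count are illustrative:

    ```python
    def train_adaline(samples, epochs=50, lr=0.05):
        n = len(samples[0][0])
        # Step 0: initialize weights, bias, and learning rate.
        w, b = [0.0] * n, 0.0
        for _ in range(epochs):
            for x, t in samples:
                # Net input (a continuous value, not a thresholded label).
                y_in = b + sum(xi * wi for xi, wi in zip(x, w))
                # Compare the net input with the target to get the error signal.
                error = t - y_in
                # Adjust weights and bias in proportion to the error.
                w = [wi + lr * error * xi for wi, xi in zip(w, x)]
                b += lr * error
        return w, b

    # After training, classify with the bipolar activation: +1 if y_in >= 0 else -1.
    w, b = train_adaline([([-1, -1], -1), ([-1, 1], -1),
                          ([1, -1], -1), ([1, 1], 1)])
    ```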
  • What Adaline and the Perceptron have in common

    • They are classifiers for binary classification
    • Both have a linear decision boundary
    • Both can learn iteratively, sample by sample
    • Both use a threshold function
  • Difference between Adaline and Perceptron

    Perceptron uses class labels to learn model coefficients; Adaline uses continuous predicted values (from the net input) to learn the model coefficients
  • Madaline
    • A network which consists of many Adalines in parallel, with a single output unit
    • The weights and bias between the input and Adaline layers are adjustable, while the weights and bias between the Adaline and Madaline layers are fixed at 1
  • Madaline Training Algorithm
    1. Step 0: Initialize weights and bias, set learning rate
    2. Steps 1-7: Iterative process to adjust weights and bias based on error (sketched below)
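
    A minimal sketch of that process following Madaline Rule I (MRI), assuming two inputs and two Adaline units feeding a fixed output unit (here a bipolar OR with weights and bias of 1, matching the card); the initial weights, learning rate, and XOR training data are illustrative:

    ```python
    def train_madaline(samples, epochs=100, lr=0.05):
        # Step 0: initialize weights and bias, set learning rate.  The hidden
        # (Adaline) layer is adjustable; the output unit is fixed.
        W = [[0.05, 0.2], [0.1, 0.2]]   # illustrative small initial weights
        B = [0.3, 0.15]
        for _ in range(epochs):
            for x, t in samples:
                z_in = [b + sum(xi * wi for xi, wi in zip(x, w))
                        for w, b in zip(W, B)]
                z = [1 if v >= 0 else -1 for v in z_in]
                # Fixed output unit (weights and bias of 1): a bipolar OR.
                y = 1 if 1 + sum(z) >= 0 else -1
                if y == t:
                    continue
                if t == 1:
                    # Nudge the Adaline whose net input is closest to zero toward +1.
                    j = min(range(len(z_in)), key=lambda k: abs(z_in[k]))
                    delta = lr * (1 - z_in[j])
                    B[j] += delta
                    W[j] = [wi + delta * xi for wi, xi in zip(W[j], x)]
                else:
                    # Nudge every Adaline with a positive net input toward -1.
                    for j, v in enumerate(z_in):
                        if v > 0:
                            delta = lr * (-1 - v)
                            B[j] += delta
                            W[j] = [wi + delta * xi for wi, xi in zip(W[j], x)]
        return W, B

    # XOR (bipolar), the classic problem a single Adaline cannot solve.
    W, B = train_madaline([([1, 1], -1), ([1, -1], 1),
                           ([-1, 1], 1), ([-1, -1], -1)])
    ```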