Neural Networks:
- Definition: Computational models inspired by the human brain, composed of layers of interconnected nodes (neurons) that process data.
- Purpose: Used for various tasks including classification, regression, and pattern recognition.
Basic Components:
- Neuron: The fundamental unit that receives input, processes it, and passes the result to the next layer.
- Layers:
  - Input Layer: The first layer that receives input data.
  - Hidden Layers: Intermediate layers that transform input into meaningful patterns.
  - Output Layer: The final layer that produces the output.
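The flow from input layer through hidden layer to output layer can be sketched as a minimal forward pass. This is an illustrative NumPy sketch; the layer sizes, random weights, and ReLU choice are assumptions for the example, not part of the notes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 3 input features, 4 hidden neurons, 2 outputs.
W1 = rng.standard_normal((3, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.standard_normal((4, 2))   # hidden -> output weights
b2 = np.zeros(2)

def forward(x):
    # Hidden layer: weighted sum of inputs plus bias, then ReLU.
    hidden = np.maximum(0.0, x @ W1 + b1)
    # Output layer: linear combination of hidden activations.
    return hidden @ W2 + b2

out = forward(np.array([1.0, 2.0, 3.0]))
print(out.shape)  # one value per output neuron: (2,)
```

Each `@` is a matrix multiply, so every neuron in a layer sees every neuron in the previous layer, which is the "fully connected" pattern referenced under ANNs below.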
Activation Functions:
- Functions applied to neurons’ outputs to introduce non-linearity, enabling the network to learn complex patterns.
- Common activation functions: Sigmoid, Tanh, ReLU (Rectified Linear Unit).
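The three common activation functions listed above can be written directly from their definitions (a minimal pure-Python sketch):

```python
import math

def sigmoid(x):
    # Squashes any real input into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # Squashes input into (-1, 1); zero-centered, unlike sigmoid.
    return math.tanh(x)

def relu(x):
    # Passes positive values through unchanged, zeroes out negatives.
    return max(0.0, x)

print(sigmoid(0.0), tanh(0.0), relu(-2.0))  # 0.5 0.0 0.0
```

Without such non-linear functions, stacked layers would collapse into a single linear transformation, which is why they are needed to learn complex patterns.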
Types of Neural Networks:
1. Artificial Neural Networks (ANNs)
2. Convolutional Neural Networks (CNNs)
3. Recurrent Neural Networks (RNNs)
4. Long Short-Term Memory Networks (LSTMs)
5. Gated Recurrent Units (GRUs)
Summary:
- Neural Networks (NNs): A basic framework for building models that can learn from data.
- Artificial Neural Networks (ANNs): Basic NNs with fully connected layers, suitable for general tasks.
- Convolutional Neural Networks (CNNs): Specialized for spatial data, especially images, using convolutional layers to detect patterns.
- Recurrent Neural Networks (RNNs): Specialized for sequential data, maintaining context across steps.
- LSTMs and GRUs: Advanced RNNs designed to handle long-term dependencies more effectively than standard RNNs.
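The "context across steps" idea behind RNNs can be sketched as a single recurrent cell that updates a hidden state at each time step. A minimal NumPy sketch, assuming a tanh recurrence and illustrative sizes (not any specific library's API):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes: 2 input features, 3 hidden units.
W_x = rng.standard_normal((2, 3))   # input -> hidden weights
W_h = rng.standard_normal((3, 3))   # hidden -> hidden (the recurrence)
b = np.zeros(3)

def rnn_step(x_t, h_prev):
    # The new hidden state mixes the current input with the previous
    # state, which is how context is carried across time steps.
    return np.tanh(x_t @ W_x + h_prev @ W_h + b)

h = np.zeros(3)  # initial context is empty
for x_t in [np.array([1.0, 0.0]), np.array([0.0, 1.0])]:
    h = rnn_step(x_t, h)
print(h.shape)  # (3,)
```

LSTMs and GRUs replace this single tanh update with gated updates, which lets gradients survive over many steps and handles the long-term dependencies noted above.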