Feed-Forward Neural Network Architecture


A Feed-Forward Neural Network Architecture is a neural net architecture that can be mapped to a directed acyclic graph: connections between the units do not form a cycle, and signals move from the input nodes, through the hidden nodes (if any), to the output nodes.
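
Viewed as a single function, such an architecture is a composition of layer mappings applied in one direction only. As an illustrative sketch (the symbols below, weight matrices W^(l), bias vectors b^(l), activation functions f^(l), and depth L, are generic notation and not drawn from the references below):

\hat{y} = f^{(L)}\!\left( W^{(L)}\, f^{(L-1)}\!\left( \cdots f^{(1)}\!\left( W^{(1)} x + b^{(1)} \right) \cdots \right) + b^{(L)} \right)

Because each layer's output feeds only into later layers, the underlying computation graph contains no cycles.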



References

2023

  • GBard
    • Feed-forward neural networks (FFNNs) are a fundamental type of artificial neural network (ANN) characterized by information flow in one direction only: forward, from the input layer to the output layer. This means there are no feedback loops within the network. Due to their simplicity and interpretability, FFNNs are widely used as building blocks for more complex deep learning architectures.
    • Network Structure: An FFNN typically consists of three types of layers:
      • Input Layer: This layer receives the raw data that the network will process.
      • Hidden Layers: These layers are responsible for performing the actual computations and learning complex relationships within the data. The number of hidden layers and the number of neurons in each layer influence the network's capacity and learning ability.
      • Output Layer: This layer generates the final output of the network, which could be a prediction, classification, or other desired result.
    • Information Flow:
      • Data enters the input layer.
      • Each neuron in a hidden layer computes a weighted sum of the outputs of the previous layer and applies an activation function to generate its own output (see the code sketch after this list).
      • This process repeats across all hidden layers until reaching the output layer.
      • The output layer applies a final activation function and produces the network's output.
    • Activation Functions: Activation functions introduce non-linearity into the network, allowing it to learn and represent complex relationships. Common activation functions include:
      • Sigmoid: Outputs values between 0 and 1, often used for binary classification.
      • Tanh: Outputs values between -1 and 1; its zero-centered output makes it a common choice for hidden layers.
      • ReLU: Outputs the maximum of 0 and the input, popular for its computational efficiency and the sparse activations it produces.
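
The flow described above (weighted sums followed by activation functions, layer by layer) can be made concrete in a few lines of code. The snippet below is a minimal NumPy sketch, not code from any of the cited sources; the layer sizes, random parameters, and helper name dense_layer are illustrative assumptions, and only the forward pass is shown (no training).

import numpy as np

# The three activation functions named above, applied element-wise.
def sigmoid(z): return 1.0 / (1.0 + np.exp(-z))
def tanh(z):    return np.tanh(z)
def relu(z):    return np.maximum(0.0, z)

def dense_layer(a_prev, W, b, activation):
    # Weighted sum of the previous layer's outputs, then a non-linearity.
    return activation(W @ a_prev + b)

# Illustrative network: 3 input units -> 5 hidden units (ReLU) -> 1 output unit (sigmoid).
rng = np.random.default_rng(0)              # random parameters, for demonstration only
W1, b1 = rng.standard_normal((5, 3)), np.zeros(5)
W2, b2 = rng.standard_normal((1, 5)), np.zeros(1)

x = np.array([0.2, -0.7, 1.5])              # data enters at the input layer
h = dense_layer(x, W1, b1, relu)            # hidden layer: weighted sum + ReLU
y = dense_layer(h, W2, b2, sigmoid)         # output layer: e.g. a binary-classification score
print(y)                                    # information moved forward only; no loops

A sigmoid output is used here purely to echo the binary-classification example above; a regression-style output would simply omit the final non-linearity.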

2018a