Deep Neural Network (DNN) Model


A Deep Neural Network (DNN) Model is a multi-hidden-layer neural network with a layer depth greater than three that can learn hierarchical representations of its input.
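To make the definition concrete, here is a minimal sketch of such a model, assuming PyTorch; the layer sizes (a 784-dimensional input, four hidden layers, a 10-class output) are illustrative assumptions, not part of the definition.

```python
import torch.nn as nn

# A feedforward network with four hidden layers (depth > 3), so it qualifies
# as "deep" in the sense above. All sizes are arbitrary illustrations.
dnn = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),  # hidden layer 1
    nn.Linear(256, 128), nn.ReLU(),  # hidden layer 2
    nn.Linear(128, 64),  nn.ReLU(),  # hidden layer 3
    nn.Linear(64, 32),   nn.ReLU(),  # hidden layer 4
    nn.Linear(32, 10),               # output layer (e.g., 10 classes)
)
```

Each hidden layer can compose the features computed by the layer before it, which is what lets such a stack learn hierarchical representations.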



References

2018

  • (Wikipedia, 2018) ⇒ https://en.wikipedia.org/wiki/Deep_learning#Deep_neural_networks Retrieved:2018-8-12.
    • A deep neural network (DNN) is an artificial neural network (ANN) with multiple layers between the input and output layers (Bengio, 2009; Schmidhuber, 2015). The DNN finds the correct mathematical manipulation to turn the input into the output, whether it be a linear relationship or a non-linear relationship. The network moves through the layers calculating the probability of each output. For example, a DNN that is trained to recognize dog breeds will go over the given image and calculate the probability that the dog in the image is a certain breed. The user can review the results and select which probabilities the network should display (above a certain threshold, etc.) and return the proposed label. Each mathematical manipulation as such is considered a layer, and complex DNNs have many layers, hence the name "deep" networks.

      DNNs can model complex non-linear relationships. DNN architectures generate compositional models where the object is expressed as a layered composition of primitives (Szegedy, Toshev & Erhan, 2013). The extra layers enable composition of features from lower layers, potentially modeling complex data with fewer units than a similarly performing shallow network (Bengio, 2009). Deep architectures include many variants of a few basic approaches. Each architecture has found success in specific domains. It is not always possible to compare the performance of multiple architectures, unless they have been evaluated on the same data sets. DNNs are typically feedforward networks in which data flows from the input layer to the output layer without looping back. At first, the DNN creates a map of virtual neurons and assigns random numerical values, or "weights", to connections between them. Weights and inputs are multiplied to produce an output between 0 and 1. If the network didn't accurately recognize a particular pattern, an algorithm would adjust the weights. That way the algorithm can make certain parameters more influential, until it determines the correct mathematical manipulation to fully process the data. (A minimal sketch of this forward pass and weight-adjustment loop appears after this excerpt.)

       Recurrent neural networks (RNNs), in which data can flow in any direction, are used for applications such as language modeling (Gers & Schmidhuber, 2001; Sutskever, Vinyals & Le, 2014; Jozefowicz et al., 2016; Gillick et al., 2016; Mikolov et al., 2010). Long short-term memory is particularly effective for this use (Hochreiter & Schmidhuber, 1997; Gers, Schraudolph & Schmidhuber, 2002). A minimal LSTM language-model sketch follows this excerpt.

       Convolutional deep neural networks (CNNs) are used in computer vision (LeCun et al., 1998). CNNs also have been applied to acoustic modeling for automatic speech recognition (ASR; Sainath et al., 2013). A minimal CNN sketch likewise follows this excerpt.
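The dog-breed example and the weight-adjustment loop described above can be sketched as follows, assuming PyTorch; the feature dimension, breed list, threshold, and use of plain SGD are all assumptions made for the sake of the example.

```python
import torch
import torch.nn as nn

# Hypothetical breed classifier: 2048-dim image features in, 5 breed scores out.
model = nn.Sequential(
    nn.Linear(2048, 512), nn.ReLU(),
    nn.Linear(512, 128),  nn.ReLU(),
    nn.Linear(128, 5),
)
breeds = ["beagle", "boxer", "collie", "pug", "spaniel"]  # assumed label set

features = torch.randn(1, 2048)                  # stand-in for extracted image features
probs = torch.softmax(model(features), dim=1)[0] # per-breed probabilities in [0, 1]

# "Select which probabilities the network should display (above a certain
# threshold)": keep only breeds whose probability clears an assumed cutoff.
threshold = 0.15
candidates = [(breeds[i], p.item()) for i, p in enumerate(probs) if p > threshold]
print(sorted(candidates, key=lambda c: -c[1]))

# Weight adjustment: if the network misrecognizes the pattern, backpropagation
# nudges the weights so the correct label becomes more probable next time.
target = torch.tensor([2])                       # suppose the true breed is "collie"
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss = nn.CrossEntropyLoss()(model(features), target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

As the excerpt notes, the initial weights are random; the adjustment step is repeated over many labeled examples until the network's outputs match the targets well.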
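For the recurrent case, a minimal language model built on an LSTM might look like the following; the vocabulary size and the embedding and hidden dimensions are illustrative assumptions, not values from the cited papers.

```python
import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    """Predicts a distribution over the next token at every position."""
    def __init__(self, vocab_size=100, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token_ids):
        x = self.embed(token_ids)  # (batch, seq_len, embed_dim)
        out, _ = self.lstm(x)      # the hidden state carries context forward in time
        return self.head(out)      # next-token logits: (batch, seq_len, vocab_size)

model = LSTMLanguageModel()
tokens = torch.randint(0, 100, (1, 12))  # a dummy 12-token sequence
logits = model(tokens)                   # shape: (1, 12, 100)
```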
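And for the convolutional case, a minimal LeNet-style image classifier; the 28x28 grayscale input and the channel counts are assumptions chosen only to keep the shapes easy to follow.

```python
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),   # local feature maps
    nn.MaxPool2d(2),                                        # 28x28 -> 14x14
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                                        # 14x14 -> 7x7
    nn.Flatten(),
    nn.Linear(16 * 7 * 7, 10),                              # 10-way classifier
)
scores = cnn(torch.randn(1, 1, 28, 28))  # dummy image -> class scores
```

Convolutions share weights across spatial positions, which is why CNNs suit images and spectrogram-like inputs such as those used in the acoustic-modeling work cited above.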
