A recurrent neural network (RNN) is a specialized type of artificial neural network that is particularly adept at recognizing patterns within sequential data. This includes various forms of data such as time series, natural language text, and spoken language. Unlike traditional feedforward neural networks, which process inputs in a straightforward manner without any internal feedback loops, RNNs feature connections that loop back onto themselves.
This unique architecture allows RNNs to remember previous inputs, effectively maintaining an internal state or memory that carries over from one step of the sequence to the next.
This recursive structure is vital for tasks where the order and context of inputs significantly influence the output. For instance, in natural language processing, the meaning of a word can be heavily dependent on the words that precede it. Therefore, RNNs excel in applications such as:
- Language modeling: where they predict the next word in a sentence based on the previous words.
- Machine translation: which involves converting text from one language to another while preserving context.
- Sequential data analysis: which requires understanding temporal dynamics for accurate predictions or classifications.
Moreover, RNNs can handle varying input lengths, making them versatile for a range of applications that involve sequences of different sizes. However, traditional RNNs face challenges such as the vanishing and exploding gradient problems, which can hinder their ability to learn long-range dependencies.
To address these issues, more advanced variants of RNNs, such as Long Short-Term Memory (LSTM) networks and Gated Recurrent Units (GRUs), have been developed. These architectures incorporate mechanisms that better manage memory and maintain relevant information over longer sequences, further enhancing their performance across diverse tasks in machine learning and artificial intelligence.
Basic RNN architecture
In the architecture above, the activation \(a^{\langle t \rangle}\), called the hidden state, is passed from one RNN cell to the next. \(T_x\) is the number of time steps, and \(\hat{y}^{\langle T_x \rangle}\) is the prediction at the final time step.
Each RNN cell depicted above can be shown as
RNN Cell
The hidden state \(a^{\langle t \rangle}\) is calculated with a \(\tanh\) activation as
\(a^{\langle t \rangle} = \tanh(W_{aa} a^{\langle t-1 \rangle} + W_{ax} x^{\langle t \rangle} + b_a)\)
and the prediction \(\hat{y}^{\langle t \rangle}\) is calculated as
\(\hat{y}^{\langle t \rangle} = \textrm{softmax}(W_{ya} a^{\langle t \rangle} + b_y)\)
\(W_{aa}\), \(W_{ax}\), and \(W_{ya}\) are weight matrices, and \(b_a\) and \(b_y\) are biases.
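As a concrete illustration, the two equations above can be written almost verbatim in NumPy. This is a minimal sketch, not a reference implementation: the function name rnn_cell_forward, the parameter names, and the shapes (n_x inputs, n_a hidden units, n_y outputs, batch size m) are assumptions made for the example.

```python
# Minimal sketch of one RNN cell step (hypothetical names and shapes).
import numpy as np

def softmax(z):
    """Numerically stable softmax over the first axis."""
    e = np.exp(z - np.max(z, axis=0, keepdims=True))
    return e / np.sum(e, axis=0, keepdims=True)

def rnn_cell_forward(x_t, a_prev, W_aa, W_ax, W_ya, b_a, b_y):
    """a<t> = tanh(W_aa a<t-1> + W_ax x<t> + b_a); y_hat<t> = softmax(W_ya a<t> + b_y)."""
    a_t = np.tanh(W_aa @ a_prev + W_ax @ x_t + b_a)   # hidden state a<t>
    y_hat_t = softmax(W_ya @ a_t + b_y)               # prediction y_hat<t>
    return a_t, y_hat_t

# Tiny usage example with random parameters (shapes are illustrative only).
n_x, n_a, n_y, m = 3, 5, 2, 4                          # m = batch size
rng = np.random.default_rng(0)
x_t, a_prev = rng.standard_normal((n_x, m)), rng.standard_normal((n_a, m))
W_aa, W_ax = rng.standard_normal((n_a, n_a)), rng.standard_normal((n_a, n_x))
W_ya = rng.standard_normal((n_y, n_a))
b_a, b_y = np.zeros((n_a, 1)), np.zeros((n_y, 1))
a_t, y_hat_t = rnn_cell_forward(x_t, a_prev, W_aa, W_ax, W_ya, b_a, b_y)
print(a_t.shape, y_hat_t.shape)  # (5, 4) (2, 4)
```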
A recurrent neural network is built by applying a single cell repeatedly. A basic RNN processes inputs sequentially and retains information through the hidden layer activations, known as hidden states, which are passed from one time step to the next. The number of time steps \(T_x\) determines how many times the RNN cell is reused; a minimal forward pass over the sequence is sketched after the list below.
At each time step, each cell receives two inputs:
- The hidden state from the prior cell, \(a^{\langle t-1 \rangle}\).
- The input data for the current time step, \(x^{\langle t \rangle}\).
Additionally, each cell generates two outputs at every time step:
- A hidden state \(a^{\langle t \rangle}\).
- A predictive output \(\hat{y}^{\langle t \rangle}\).
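Putting these pieces together, a basic forward pass simply reuses the cell in a loop, feeding each step's hidden state into the next. The sketch below assumes the rnn_cell_forward function from the previous snippet, a parameter dictionary, and a hypothetical input tensor x of shape (n_x, m, T_x).

```python
# Sketch of unrolling the RNN cell over T_x time steps (assumed shapes/names).
import numpy as np

def rnn_forward(x, a0, params):
    """Run the same RNN cell at every time step, carrying the hidden state forward."""
    n_x, m, T_x = x.shape
    n_a = a0.shape[0]
    n_y = params["W_ya"].shape[0]
    a = np.zeros((n_a, m, T_x))
    y_hat = np.zeros((n_y, m, T_x))
    a_prev = a0
    for t in range(T_x):                          # the cell is reused T_x times
        a_prev, y_hat_t = rnn_cell_forward(
            x[:, :, t], a_prev,
            params["W_aa"], params["W_ax"], params["W_ya"],
            params["b_a"], params["b_y"])
        a[:, :, t] = a_prev                       # hidden state a<t>
        y_hat[:, :, t] = y_hat_t                  # prediction y_hat<t>
    return a, y_hat
```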
Traditional Recurrent Neural Networks (RNNs) encounter significant challenges, particularly the vanishing and exploding gradient problems. These issues can severely limit the networks' capacity to capture and retain long-range dependencies in sequential data.
To overcome these limitations, researchers have developed more sophisticated variants of RNNs, most notably Long Short-Term Memory (LSTM) networks and Gated Recurrent Units (GRUs). We will discuss LSTM below in detail.
Long Short-Term Memory Network (LSTM)
LSTM (Long Short-Term Memory) is a type of recurrent neural network (RNN) architecture designed to effectively learn from sequences of data by overcoming some limitations of traditional RNNs. Unlike standard RNNs, which struggle with long-range dependencies due to difficulties in retaining information over extended periods, LSTMs employ a unique gating mechanism.
This mechanism consists of input, output, and forget gates, allowing the network to regulate the flow of information and maintain relevant data in its memory. As a result, LSTMs are particularly well-suited for tasks involving sequential data, such as time series prediction, natural language processing, and speech recognition, where it’s crucial to remember important context from earlier inputs while discarding irrelevant information.
The diagram of one LSTM step above shows that the following calculations need to be performed:
- Forget gate
- Candidate value
- Update gate
- Output gate
- Cell state
- Hidden state
Forget gate \(\mathbf{\Gamma}_{f}\)
The forget gate is calculated as
\(\mathbf{\Gamma}_f^{\langle t \rangle} = \sigma(\mathbf{W}_f[\mathbf{a}^{\langle t-1 \rangle}, \mathbf{x}^{\langle t \rangle}] + \mathbf{b}_f)\tag{1}\)
- \(\mathbf{W}_{f}\) contains weights that govern the forget gate's behavior.
- The previous time step's hidden state \(a^{\langle t-1 \rangle}\) and the current time step's input \(x^{\langle t \rangle}\) are concatenated together and multiplied by \(\mathbf{W}_{f}\).
- A sigmoid function is used to make each of the gate tensor's values \(\mathbf{\Gamma}_f^{\langle t \rangle}\) range from 0 to 1.
- Multiplying the tensors \(\mathbf{\Gamma}_f^{\langle t \rangle} * \mathbf{c}^{\langle t-1 \rangle}\) is like applying a mask over the previous cell state (see the sketch after this list).
- If a single value in \(\mathbf{\Gamma}_f^{\langle t \rangle}\) is 0 or close to 0, then the product is close to 0.
  - This keeps the information stored in the corresponding unit of \(\mathbf{c}^{\langle t-1 \rangle}\) from being remembered at the next time step.
- Similarly, if a value is close to 1, the product is close to the original value in the previous cell state.
  - The LSTM will keep the information from the corresponding unit of \(\mathbf{c}^{\langle t-1 \rangle}\), to be used at the next time step.
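The masking behavior described above can be seen in a tiny NumPy experiment. Everything here (shapes, random values, variable names such as gamma_f) is made up purely for illustration.

```python
# Toy illustration of the forget gate masking the previous cell state.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_a, n_x = 4, 3
rng = np.random.default_rng(1)
a_prev = rng.standard_normal((n_a, 1))          # previous hidden state a<t-1>
x_t = rng.standard_normal((n_x, 1))             # current input x<t>
c_prev = rng.standard_normal((n_a, 1))          # previous cell state c<t-1>
W_f = rng.standard_normal((n_a, n_a + n_x))     # forget gate weights
b_f = np.zeros((n_a, 1))

concat = np.concatenate([a_prev, x_t], axis=0)  # [a<t-1>, x<t>]
gamma_f = sigmoid(W_f @ concat + b_f)           # eq. (1): values in (0, 1)

masked = gamma_f * c_prev                       # element-wise "mask" over c<t-1>
# Entries where gamma_f is near 0 are almost erased; entries near 1 survive.
print(np.round(gamma_f, 2).ravel())
print(np.round(c_prev, 2).ravel())
print(np.round(masked, 2).ravel())
```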
Candidate value \(\tilde{\mathbf{c}}^{\langle t \rangle}\)
The candidate value is calculated as
\(\mathbf{\tilde{c}}^{\langle t \rangle} = \tanh\left( \mathbf{W}_{c} [\mathbf{a}^{\langle t - 1 \rangle}, \mathbf{x}^{\langle t \rangle}] + \mathbf{b}_{c} \right) \tag{3}\)
- The candidate value is a tensor containing information from the current time step that may be stored in the current cell state \(\mathbf{c}^{\langle t \rangle}\).
- The parts of the candidate value that get passed on depend on the update gate.
- The candidate value is a tensor containing values that range from -1 to 1 (see the sketch below).
- The tilde "~" is used to differentiate the candidate \(\tilde{\mathbf{c}}^{\langle t \rangle}\) from the cell state \(\mathbf{c}^{\langle t \rangle}\).
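A quick sketch of equation (3), again with made-up shapes and names, shows that every entry of the candidate lies inside (-1, 1) because of the \(\tanh\).

```python
# Sketch of the candidate value c_tilde<t> (hypothetical shapes and names).
import numpy as np

n_a, n_x = 4, 3
rng = np.random.default_rng(2)
a_prev = rng.standard_normal((n_a, 1))          # previous hidden state a<t-1>
x_t = rng.standard_normal((n_x, 1))             # current input x<t>
W_c = rng.standard_normal((n_a, n_a + n_x))     # candidate weights
b_c = np.zeros((n_a, 1))

concat = np.concatenate([a_prev, x_t], axis=0)  # [a<t-1>, x<t>]
c_tilde = np.tanh(W_c @ concat + b_c)           # eq. (3): candidate value

print(c_tilde.min(), c_tilde.max())             # both strictly inside (-1, 1)
```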
Update gate \(\mathbf{\Gamma}_{i}\)
The update gate is calculated as
\(\mathbf{\Gamma}_i^{\langle t \rangle} = \sigma(\mathbf{W}_i[\mathbf{a}^{\langle t-1 \rangle}, \mathbf{x}^{\langle t \rangle}] + \mathbf{b}_i)\tag{2}\)
- The update gate decides which parts of the candidate \(\tilde{\mathbf{c}}^{\langle t \rangle}\) are added to the cell state \(\mathbf{c}^{\langle t \rangle}\).
- Similar to the forget gate, the sigmoid makes each value of \(\mathbf{\Gamma}_i^{\langle t \rangle}\) range from 0 to 1.
- The update gate is multiplied element-wise with the candidate, and this product \(\mathbf{\Gamma}_{i}^{\langle t \rangle} * \tilde{\mathbf{c}}^{\langle t \rangle}\) is used in determining the cell state \(\mathbf{c}^{\langle t \rangle}\).
Output gate \(\mathbf{\Gamma}_{o}\)
The output gate is calculated as
\(\mathbf{\Gamma}_o^{\langle t \rangle}= \sigma(\mathbf{W}_o[\mathbf{a}^{\langle t-1 \rangle}, \mathbf{x}^{\langle t \rangle}] + \mathbf{b}_{o})\tag{5}\)
- The output gate decides what gets sent as the prediction (output) of the time step.
- The output gate is like the other gates, in that the sigmoid makes its values range from 0 to 1.
- The output gate is determined by the previous hidden state \(\mathbf{a}^{\langle t-1 \rangle}\) and the current input \(\mathbf{x}^{\langle t \rangle}\).
Cell state \(\mathbf{c}^{\langle t \rangle}\)
The cell state is calculated as
\(\mathbf{c}^{\langle t \rangle} = \mathbf{\Gamma}_f^{\langle t \rangle}* \mathbf{c}^{\langle t-1 \rangle} + \mathbf{\Gamma}_{i}^{\langle t \rangle} *\mathbf{\tilde{c}}^{\langle t \rangle} \tag{4}\)
- The cell state is the "memory" that gets passed on to future time steps.
- The new cell state \(\mathbf{c}^{\langle t \rangle}\) is a combination of the previous cell state and the candidate value.
- The previous cell state \(\mathbf{c}^{\langle t-1 \rangle}\) is adjusted (weighted) by the forget gate \(\mathbf{\Gamma}_{f}^{\langle t \rangle}\), and the candidate value \(\tilde{\mathbf{c}}^{\langle t \rangle}\) is adjusted (weighted) by the update gate \(\mathbf{\Gamma}_{i}^{\langle t \rangle}\), as combined in the sketch below.
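The following sketch ties together the update gate (2), the candidate value (3), and the cell-state update (4) just described. Shapes, names, and values are illustrative assumptions only.

```python
# Sketch of the cell-state update: keep part of the old memory, add part of the candidate.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_a, n_x = 4, 3
rng = np.random.default_rng(3)
a_prev = rng.standard_normal((n_a, 1))          # previous hidden state a<t-1>
x_t = rng.standard_normal((n_x, 1))             # current input x<t>
c_prev = rng.standard_normal((n_a, 1))          # previous cell state c<t-1>
concat = np.concatenate([a_prev, x_t], axis=0)  # [a<t-1>, x<t>]

W_f, W_i, W_c = (rng.standard_normal((n_a, n_a + n_x)) for _ in range(3))
b_f = b_i = b_c = np.zeros((n_a, 1))

gamma_f = sigmoid(W_f @ concat + b_f)           # forget gate    (1)
gamma_i = sigmoid(W_i @ concat + b_i)           # update gate    (2)
c_tilde = np.tanh(W_c @ concat + b_c)           # candidate      (3)

c_next = gamma_f * c_prev + gamma_i * c_tilde   # cell state     (4)
print(np.round(c_next, 2).ravel())
```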
Hidden state \(\mathbf{a}^{\langle t \rangle}\)
The hidden state is calculated as
\(\mathbf{a}^{\langle t \rangle} = \mathbf{\Gamma}_o^{\langle t \rangle} * \tanh(\mathbf{c}^{\langle t \rangle})\tag{6}\)
- The hidden state gets passed to the LSTM cell's next time step.
- It is used to determine the three gates \(\mathbf{\Gamma}_{f}\), \(\mathbf{\Gamma}_{i}\), and \(\mathbf{\Gamma}_{o}\) of the next time step.
- The hidden state is also used for the prediction \(y^{\langle t \rangle}\).
- The hidden state \(\mathbf{a}^{\langle t \rangle}\) is determined by the cell state \(\mathbf{c}^{\langle t \rangle}\) in combination with the output gate \(\mathbf{\Gamma}_{o}\).
- The cell state is passed through the \(\tanh\) function to rescale values between -1 and 1.
- The output gate acts like a "mask" that either preserves the values of \(\tanh(\mathbf{c}^{\langle t \rangle})\) or keeps them from being included in the hidden state \(\mathbf{a}^{\langle t \rangle}\) (see the sketch below).
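A short sketch of equations (5) and (6), with made-up values standing in for the cell state, shows how the output gate masks \(\tanh(\mathbf{c}^{\langle t \rangle})\) to form the hidden state.

```python
# Sketch of the output gate and hidden state (hypothetical shapes and values).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_a, n_x = 4, 3
rng = np.random.default_rng(4)
a_prev = rng.standard_normal((n_a, 1))          # previous hidden state a<t-1>
x_t = rng.standard_normal((n_x, 1))             # current input x<t>
c_next = rng.standard_normal((n_a, 1))          # stand-in for cell state c<t> from eq. (4)
W_o = rng.standard_normal((n_a, n_a + n_x))
b_o = np.zeros((n_a, 1))

concat = np.concatenate([a_prev, x_t], axis=0)  # [a<t-1>, x<t>]
gamma_o = sigmoid(W_o @ concat + b_o)           # output gate    (5)
a_next = gamma_o * np.tanh(c_next)              # hidden state   (6), entries in (-1, 1)
print(np.round(a_next, 2).ravel())
```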
Prediction \(\mathbf{y}^{\langle t \rangle}_{pred}\)
\(\mathbf{y}^{\langle t \rangle}_{pred} = \textrm{softmax}(\mathbf{W}_{y} \mathbf{a}^{\langle t \rangle} + \mathbf{b}_{y})\)
Here, a softmax is applied to the hidden state to produce the prediction, just as in the basic RNN cell.
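Collecting equations (1) through (6) and the softmax prediction gives one complete LSTM step. The sketch below is a minimal NumPy version under the same assumptions as the earlier snippets (concatenated weights, hypothetical names such as lstm_cell_forward); in practice this would come from a deep-learning framework.

```python
# Minimal sketch of one full LSTM step, combining equations (1)-(6) and the prediction.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - np.max(z, axis=0, keepdims=True))
    return e / np.sum(e, axis=0, keepdims=True)

def lstm_cell_forward(x_t, a_prev, c_prev, p):
    """One LSTM time step; p is a dict of weights W_f, W_i, W_c, W_o, W_y
    and biases b_f, b_i, b_c, b_o, b_y."""
    concat = np.concatenate([a_prev, x_t], axis=0)        # [a<t-1>, x<t>]
    gamma_f = sigmoid(p["W_f"] @ concat + p["b_f"])       # forget gate   (1)
    gamma_i = sigmoid(p["W_i"] @ concat + p["b_i"])       # update gate   (2)
    c_tilde = np.tanh(p["W_c"] @ concat + p["b_c"])       # candidate     (3)
    c_next = gamma_f * c_prev + gamma_i * c_tilde         # cell state    (4)
    gamma_o = sigmoid(p["W_o"] @ concat + p["b_o"])       # output gate   (5)
    a_next = gamma_o * np.tanh(c_next)                    # hidden state  (6)
    y_pred = softmax(p["W_y"] @ a_next + p["b_y"])        # prediction
    return a_next, c_next, y_pred
```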
One step of the LSTM, as described above, is now iterated with a for loop to process a sequence of \(T_x\) inputs.
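A minimal version of that loop might look as follows, reusing the lstm_cell_forward function sketched above and assuming an input tensor x of hypothetical shape (n_x, m, T_x).

```python
# Sketch of iterating the LSTM cell over a sequence of T_x inputs.
import numpy as np

def lstm_forward(x, a0, p):
    """Unroll the LSTM cell over time; a0 is the initial hidden state."""
    n_x, m, T_x = x.shape
    n_a = a0.shape[0]
    n_y = p["W_y"].shape[0]
    a = np.zeros((n_a, m, T_x))
    c = np.zeros((n_a, m, T_x))
    y_pred = np.zeros((n_y, m, T_x))
    a_next, c_next = a0, np.zeros_like(a0)      # initial hidden and cell states
    for t in range(T_x):                        # one LSTM cell step per time step
        a_next, c_next, y_t = lstm_cell_forward(x[:, :, t], a_next, c_next, p)
        a[:, :, t] = a_next                     # hidden state a<t>
        c[:, :, t] = c_next                     # cell state c<t>
        y_pred[:, :, t] = y_t                   # prediction y<t>
    return a, y_pred, c
```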