Attention Model
As previously discussed, machine translation can be performed with an encoder-decoder RNN: the encoder reads the entire input sentence and compresses it into a single fixed-length encoding, which the decoder part of the network then unrolls into the translated sentence.
However, this architecture performs poorly on long sentences (say, more than 30-40 words), since a single fixed-length vector struggles to hold all the information in the input.
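To make this bottleneck concrete, here is a minimal sketch in PyTorch, with made-up sizes and random token ids standing in for a real sentence; it only shows that the whole input gets squeezed into one fixed-size vector:

```python
import torch
import torch.nn as nn

# Hypothetical sizes, chosen only for illustration.
vocab_size, embed_dim, hidden_dim = 1000, 64, 128

embedding = nn.Embedding(vocab_size, embed_dim)
encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)

# A 40-word input sentence (random token ids stand in for real words).
tokens = torch.randint(0, vocab_size, (1, 40))

# The encoder must compress all 40 words into one fixed-size vector;
# the decoder sees only this summary, which is why translation quality
# degrades on long sentences.
_, final_hidden = encoder(embedding(tokens))
print(final_hidden.shape)  # torch.Size([1, 1, 128])
```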
In such cases, the attention model is more accurate. It is built on the idea of translating a sentence part by part, much as the human brain does: for each output word it generates, the model computes attention weights that determine how much attention to give to each part of the input while generating the translation.
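As a rough illustration (not the full trained model), the NumPy sketch below uses random placeholder vectors to show how the attention weights and the resulting context vector could be computed for a single output word; the dot-product scoring is an assumption here, as in practice a small learned network scores each pair of decoder state and encoder state:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical encoder activations for a 5-word input sentence,
# one hidden vector per input word (random values for illustration).
hidden_dim = 8
encoder_states = np.random.randn(5, hidden_dim)  # one row per input word
decoder_state = np.random.randn(hidden_dim)      # previous decoder state

# Alignment scores: how relevant is each input word to the output word
# being generated right now (simple dot product as a stand-in).
scores = encoder_states @ decoder_state

# Attention weights: sum to 1 over the input words and say how much
# attention each input word receives for the current output word.
alpha = softmax(scores)

# Context vector: attention-weighted sum of encoder states, which is
# fed to the decoder when it generates the current output word.
context = alpha @ encoder_states
print(alpha.round(3), context.shape)  # weights sum to 1; shape (8,)
```

Because the weights are recomputed for every output word, the decoder can focus on a different part of the input at each step instead of relying on one fixed summary of the whole sentence.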
Apart from machine translation, the attention mechanism can also be used for applications such as image captioning, i.e., generating a caption by attending to one part of the image at a time.
It can also be used for speech recognition.