Bleu Score

Bleu stands for Bilingual Evaluation Understudy.

Bleu Score measures how closely a machine-generated translation matches one or more human reference translations. It is especially useful because a sentence usually has multiple acceptable translations rather than a single correct one.

$$Precision = \frac{\sum_{n\text{-}gram} count_{clip}(n\text{-}gram)}{\sum_{n\text{-}gram} count(n\text{-}gram)}$$

The above formula calculates the (modified) precision of a translation by comparing the n-grams in the machine-generated result with the n-grams in the available acceptable translations.

$count(n\text{-}gram)$ is the number of times the n-gram occurs in the machine-generated translation. $count_{clip}(n\text{-}gram)$ is that same count, clipped at the maximum number of times the n-gram occurs in any single one of the acceptable translations.
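A minimal Python sketch of this computation (whitespace tokenization and the helper names `ngrams` and `modified_precision` are illustrative choices, not from the course):

```python
from collections import Counter

def ngrams(tokens, n):
    """List of n-grams (as tuples) in a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def modified_precision(candidate, references, n):
    """Clipped n-gram precision of a candidate against reference translations."""
    cand_counts = Counter(ngrams(candidate, n))
    # For each n-gram, find its maximum count in any single reference
    max_ref_counts = Counter()
    for ref in references:
        for gram, cnt in Counter(ngrams(ref, n)).items():
            max_ref_counts[gram] = max(max_ref_counts[gram], cnt)
    # count_clip: cap each candidate count at the reference maximum
    clipped = sum(min(cnt, max_ref_counts[gram]) for gram, cnt in cand_counts.items())
    total = sum(cand_counts.values())
    return clipped / total if total else 0.0
```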

For example, consider the following two acceptable translations:

T1: The cat is on the mat

T2: There is a cat on the mat

The following is the machine translation:

MT: The cat the cat on the mat

| n-gram (bigram) | count(n-gram) | count_clip(n-gram) |
| --------------- | ------------- | ------------------ |
| the cat         | 2             | 1                  |
| cat the         | 1             | 0                  |
| cat on          | 1             | 1                  |
| on the          | 1             | 1                  |
| the mat         | 1             | 1                  |

$$precision = \frac{1 + 0 + 1 + 1 + 1}{2 + 1 + 1 + 1 + 1} = \frac{4}{6} \approx 0.67$$
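The `modified_precision` sketch above reproduces this table for bigrams (text lowercased so that "The" and "the" match):

```python
references = ["the cat is on the mat".split(),
              "there is a cat on the mat".split()]
mt = "the cat the cat on the mat".split()

print(modified_precision(mt, references, n=2))  # 0.666... (= 4/6)
```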

So, $p_n$ is the Bleu Score computed on n-grams only. A combined Bleu Score (across different n-gram sizes) is calculated as follows:

$$Combined\ Bleu\ Score = BP \cdot \exp\left(\frac{1}{4}\sum_{n=1}^{4} \log p_n\right)$$

where the exponential term is the geometric mean of the n-gram precisions $p_1, \dots, p_4$, and $BP$ is the Brevity Penalty, which penalizes translations that are shorter than the reference (a very short output could otherwise achieve artificially high precision).

BP = 1, if machine_translation_length > reference_translation_length

BP = exp(1 - reference_translation_length / machine_translation_length), otherwise
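Putting the pieces together, a sketch of the combined score under the same assumptions (uniform 1/4 weights, as in standard BLEU-4; production implementations such as NLTK's `corpus_bleu` additionally offer smoothing for zero counts):

```python
import math

def bleu_score(candidate, references, max_n=4):
    """Combined Bleu Score: brevity penalty times the geometric mean of p_1..p_4."""
    precisions = [modified_precision(candidate, references, n)
                  for n in range(1, max_n + 1)]
    if min(precisions) == 0:
        return 0.0  # log(0) is undefined; real implementations smooth instead

    # Geometric mean of the n-gram precisions, via exp of the mean log
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)

    c = len(candidate)
    # Reference length: the reference closest in length to the candidate
    # (the Bleu paper's "best match length"; conventions vary)
    r = min((abs(len(ref) - c), len(ref)) for ref in references)[1]
    bp = 1.0 if c > r else math.exp(1 - r / c)
    return bp * geo_mean
```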