Deep Learning Specialization - Coursera
Learning from Multiple Tasks

Transfer Learning

Transfer learning reuses a network already trained on one task as the starting point for learning a new, related task.

To perform transfer learning, we replace the last layer with a new, randomly initialized one and train on the new task's dataset. With little new data, we typically freeze the earlier layers and update only the new layer's weights; with more data, we can fine-tune some or all of the network. Either way, this is much faster than training from scratch.

Transfer learning is most suitable when the network was originally trained on a large dataset and we have comparatively little data for the new task, and when both tasks share the same input type (e.g. images or audio). The low-level features the network learned during its original training, such as edges and shapes in images, can then be reused for the new task.
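As a concrete illustration, here is a minimal PyTorch sketch of the swap-the-last-layer recipe. It assumes torchvision's ImageNet-pretrained ResNet-18 as the previously trained network; the class count and learning rate are hypothetical.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a network pretrained on a large dataset (ImageNet here).
# On torchvision < 0.13, use models.resnet18(pretrained=True) instead.
model = models.resnet18(weights="IMAGENET1K_V1")

# Freeze the existing weights: with little new data, train only the new head.
for param in model.parameters():
    param.requires_grad = False

# Swap the last (fully connected) layer for one sized to the new task.
num_new_classes = 10  # hypothetical class count for the new task
model.fc = nn.Linear(model.fc.in_features, num_new_classes)

# Optimize only the new layer; with more data, one could instead unfreeze
# everything and fine-tune the whole network at a lower learning rate.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```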

Multi-Task Learning

Multi-task learning trains a single network to predict several labels at once, for example detecting which of several object types appear in an image. The output layer has C neurons, one for each of the C labels, each with its own sigmoid activation. Unlike softmax classification, several labels can be 1 for the same image, and the loss is the logistic loss summed over all C outputs.

Multi-task learning is usually productive when the tasks can share lower-level features, when we have similar amounts of data for each task, and when we can train a network big enough to do well on all tasks at once.
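The sketch below shows what such an output layer looks like in PyTorch. The trunk architecture, feature dimension, and label count are all hypothetical; the point is one sigmoid/logistic loss per label rather than a single softmax.

```python
import torch
import torch.nn as nn

C = 4  # hypothetical number of labels (e.g. car, pedestrian, sign, light)

# A shared trunk feeding C output neurons; the input dimension is assumed.
model = nn.Sequential(
    nn.Linear(1024, 256),
    nn.ReLU(),
    nn.Linear(256, C),  # one logit per label
)

# BCEWithLogitsLoss applies a sigmoid to each output and sums the
# logistic losses, so several labels can be 1 for the same example.
criterion = nn.BCEWithLogitsLoss()

x = torch.randn(8, 1024)                 # hypothetical batch of 8 examples
y = torch.randint(0, 2, (8, C)).float()  # multi-hot labels
loss = criterion(model(x), y)
loss.backward()
```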
