Transfer Learning: Boosting AI Performance with Pre-trained Models

Transfer Learning

Transfer Learning is a machine learning technique that enables a model trained on a source task to leverage its learned knowledge to solve a target task—often with limited labeled data or computational resources. Instead of training a model from scratch for the target task, transfer learning reuses features or parameters from a pre-trained model, drastically reducing training time and improving performance, especially for small datasets.

This approach mimics human learning: humans apply knowledge from past experiences (e.g., learning to ride a bike helps learn to ride a scooter) instead of starting from zero for every new task.


Core Motivation

Training deep neural networks from scratch requires three critical resources:

  1. Large labeled datasets: For example, the ImageNet classification benchmark provides over 1.2M labeled training images.
  2. Massive computational power: GPUs/TPUs to train models for days or weeks.
  3. Expertise in hyperparameter tuning: To avoid overfitting and ensure convergence.

Most real-world tasks lack these resources (e.g., classifying rare medical images, detecting specific defects in manufacturing). Transfer learning solves this by:

  • Reusing features learned from a data-rich source task (e.g., ImageNet classification).
  • Fine-tuning the pre-trained model on the data-poor target task.

Key Concepts

  • Source Task: A task with abundant labeled data, where the model is pre-trained (e.g., ImageNet image classification, English text sentiment analysis).
  • Target Task: The task we want to solve, often with limited data (e.g., X-ray pneumonia detection, product review sentiment analysis in Spanish).
  • Pre-trained Model: A model trained on the source task (e.g., ResNet, BERT, GPT). It contains learned features (e.g., edges and textures for images; syntax and semantics for text).
  • Feature Extraction: Freezing the pre-trained model’s layers and using its output as features for a new classifier/regressor trained on the target task.
  • Fine-Tuning: Unfreezing some or all layers of the pre-trained model and training the entire model (or a subset of layers) on the target task with a small learning rate.
  • Domain Adaptation: A subset of transfer learning where the source and target tasks are the same, but the data distributions differ (e.g., classifying cats in photos (source) vs. sketches (target)).

How Transfer Learning Works

The workflow depends on two factors:

  • Similarity between source and target tasks: More similar tasks require less fine-tuning.
  • Size of the target dataset: Larger target datasets allow more layers to be fine-tuned.

1. Two Main Approaches

A. Feature Extraction (Fixed Feature Extractor)

This approach is ideal when the target dataset is small and the source task is similar to the target task.

  1. Use a pre-trained model: Select a model trained on a source task (e.g., ResNet50 trained on ImageNet).
  2. Freeze the pre-trained layers: Prevent their weights from being updated during training—this preserves the learned features (e.g., edges, shapes for images).
  3. Add a custom head: Append a new classifier/regressor (e.g., fully connected layers) to the pre-trained model’s output. This head is trained from scratch on the target task.
  4. Train the custom head: Only the new layers are updated; the pre-trained model acts as a fixed feature extractor.

Example: Use ResNet50 (pre-trained on ImageNet) to extract features from X-ray images, then train a small classifier to detect pneumonia.

B. Fine-Tuning

This approach is ideal when the target dataset is large and the source task is somewhat similar to the target task.

  1. Start with a pre-trained model: Same as feature extraction.
  2. Unfreeze some layers: Unfreeze the top N layers of the pre-trained model (e.g., the last 2 blocks of ResNet50). Lower layers capture generic features (e.g., edges), while upper layers capture task-specific features (e.g., dog breeds for ImageNet).
  3. Add a custom head: Same as feature extraction.
  4. Train with a small learning rate: Update the weights of the unfrozen pre-trained layers and the custom head. A small learning rate ensures we don’t overwrite the useful features learned from the source task.

Example: Fine-tune BERT (pre-trained on English text) on a large dataset of medical text to classify patient diagnoses.
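
To make the NLP case concrete, here is a minimal sketch of fine-tuning a pre-trained BERT encoder for binary text classification with the Hugging Face Transformers library (PyTorch backend). The toy texts, labels, and single optimization step are illustrative placeholders, not part of the Keras example later in this article.

python

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Pre-trained BERT encoder + a freshly initialized 2-class classification head
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Toy target-task data (illustrative only)
texts = ["Patient shows signs of pneumonia.", "Routine check-up, no abnormal findings."]
labels = torch.tensor([1, 0])

# Tokenize and run one fine-tuning step with a small learning rate
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
loss = model(**batch, labels=labels).loss  # cross-entropy computed internally
loss.backward()
optimizer.step()

In practice you would loop over mini-batches for a few epochs, optionally freezing the lower encoder layers when the target dataset is small.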

2. Critical Design Choices

  • Small target dataset + similar source/target tasks → Feature extraction
  • Large target dataset + similar source/target tasks → Fine-tuning (unfreeze top layers)
  • Large target dataset + dissimilar source/target tasks → Fine-tuning (unfreeze all layers)
  • Different data distributions (same task) → Domain adaptation (e.g., adversarial training)

Transfer Learning Implementation (Python with TensorFlow/Keras: Image Classification)

We’ll use ResNet50 (pre-trained on ImageNet) to classify cats vs. dogs with a small dataset (subset of Kaggle’s Cats vs. Dogs dataset). This is a classic example of transfer learning for image tasks.

Step 1: Import Dependencies

python

import tensorflow as tf
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt
import numpy as np

Step 2: Set Up Data Generators

We use a small dataset (e.g., 1,000 training images: 500 cats, 500 dogs; 200 validation images). The ImageDataGenerator handles data loading and preprocessing so the inputs match ResNet50’s requirements. (A tf.data-based alternative is sketched after this block.)

python

# Preprocessing parameters (matches ResNet50's expected input)
IMG_SIZE = (224, 224)
BATCH_SIZE = 32
PREPROCESS_INPUT = tf.keras.applications.resnet50.preprocess_input

# Data generators with augmentation (for training) and no augmentation (for validation)
train_datagen = ImageDataGenerator(
    preprocessing_function=PREPROCESS_INPUT,
    rotation_range=20,
    width_shift_range=0.2,
    height_shift_range=0.2,
    horizontal_flip=True
)

val_datagen = ImageDataGenerator(preprocessing_function=PREPROCESS_INPUT)

# Load data from directories (assumes data is in ./data/train and ./data/val with cat/dog subfolders)
train_generator = train_datagen.flow_from_directory(
    "./data/train",
    target_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
    class_mode="binary"
)

val_generator = val_datagen.flow_from_directory(
    "./data/val",
    target_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
    class_mode="binary"
)
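
Note: in recent TensorFlow releases, ImageDataGenerator is deprecated in favor of tf.data pipelines. A roughly equivalent setup is sketched below; it assumes the same directory layout, reuses the constants defined above, and relies on the Step 1 imports (augmentation, if needed, can be added with layers such as tf.keras.layers.RandomFlip).

python

# Alternative input pipeline with tf.data (sketch; same ./data layout assumed)
train_ds = tf.keras.utils.image_dataset_from_directory(
    "./data/train", image_size=IMG_SIZE, batch_size=BATCH_SIZE, label_mode="binary"
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "./data/val", image_size=IMG_SIZE, batch_size=BATCH_SIZE, label_mode="binary"
)

# Apply ResNet50 preprocessing inside the pipeline
train_ds = train_ds.map(lambda x, y: (PREPROCESS_INPUT(x), y)).prefetch(tf.data.AUTOTUNE)
val_ds = val_ds.map(lambda x, y: (PREPROCESS_INPUT(x), y)).prefetch(tf.data.AUTOTUNE)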

Step 3: Build Model with Transfer Learning

We use two approaches: feature extraction and fine-tuning.

A. Feature Extraction (Fixed ResNet50)

python

# Load pre-trained ResNet50 without the top classification layers
base_model = ResNet50(
    weights="imagenet",
    include_top=False,  # Exclude ImageNet classifier
    input_shape=(224, 224, 3)
)

# Freeze the base model (no weight updates)
base_model.trainable = False

# Add custom classification head
inputs = tf.keras.Input(shape=(224, 224, 3))
x = base_model(inputs, training=False)  # training=False → BatchNorm layers use their moving statistics
x = GlobalAveragePooling2D()(x)  # Reduce spatial dimensions to 1D vector
outputs = Dense(1, activation="sigmoid")(x)  # Binary classification (cat/dog)

# Build and compile the model
model_feature_extraction = Model(inputs, outputs)
model_feature_extraction.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=["accuracy"]
)

# Summary of the model
model_feature_extraction.summary()

B. Fine-Tuning (Unfreeze Top Layers of ResNet50)

After training the feature extraction model, we fine-tune the top layers of ResNet50:

python

# Note: run this block after training the feature-extraction model in Step 4,
# so that fine-tuning continues from those trained weights.

# Unfreeze the base model
base_model.trainable = True

# Freeze all layers except the last blocks of ResNet50
fine_tune_at = 140  # index of the first layer to fine-tune (varies by model/version)
for layer in base_model.layers[:fine_tune_at]:
    layer.trainable = False

# Recompile with a small learning rate (critical for fine-tuning).
# This Model wraps the same layers as model_feature_extraction, so it shares
# (and continues from) the weights trained during feature extraction.
model_fine_tuning = Model(inputs, outputs)
model_fine_tuning.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),  # 100x smaller than Adam's default of 1e-3
    loss="binary_crossentropy",
    metrics=["accuracy"]
)

# Summary (only the top ResNet50 layers and the custom head are trainable)
model_fine_tuning.summary()

Step 4: Train the Models

python

# Train feature extraction model
history_feature = model_feature_extraction.fit(
    train_generator,
    epochs=10,
    validation_data=val_generator
)

# Fine-tune: run the Step 3B block (unfreeze + recompile) between these two fit() calls.
# Training continues from the feature-extraction weights.
history_fine = model_fine_tuning.fit(
    train_generator,
    epochs=20,  # continue up to epoch 20 (roughly 10 more epochs of fine-tuning)
    initial_epoch=history_feature.epoch[-1],
    validation_data=val_generator
)

Step 5: Visualize Results

python

# Plot accuracy curves
plt.figure(figsize=(12, 4))

# Feature extraction accuracy
plt.subplot(1, 2, 1)
plt.plot(history_feature.history["accuracy"], label="Train Accuracy (Feature Extraction)")
plt.plot(history_feature.history["val_accuracy"], label="Val Accuracy (Feature Extraction)")
plt.xlabel("Epoch")
plt.ylabel("Accuracy")
plt.legend()
plt.title("Feature Extraction Performance")

# Fine-tuning accuracy (plotted against actual epoch numbers so the curves line up)
plt.subplot(1, 2, 2)
plt.plot(history_feature.epoch, history_feature.history["accuracy"], label="Train Accuracy (Feature Extraction)")
plt.plot(history_fine.epoch, history_fine.history["accuracy"], label="Train Accuracy (Fine-Tuning)")
plt.plot(history_feature.epoch, history_feature.history["val_accuracy"], label="Val Accuracy (Feature Extraction)")
plt.plot(history_fine.epoch, history_fine.history["val_accuracy"], label="Val Accuracy (Fine-Tuning)")
plt.xlabel("Epoch")
plt.ylabel("Accuracy")
plt.legend()
plt.title("Fine-Tuning Performance")

plt.tight_layout()
plt.show()

Key Outputs

  • Feature Extraction: Achieves ~85–90% validation accuracy with minimal training time.
  • Fine-Tuning: Boosts validation accuracy to ~95% by adapting the pre-trained model to the target task.

Transfer Learning in Different Domains

Transfer learning is applicable to all major machine learning domains, with domain-specific pre-trained models:

1. Computer Vision

  • Pre-trained Models: ResNet, EfficientNet, VGG16, MobileNet, Vision Transformer (ViT).
  • Source Tasks: ImageNet classification, object detection.
  • Target Tasks: Medical image analysis (X-ray, MRI), facial recognition, autonomous driving object detection.

2. Natural Language Processing (NLP)

  • Pre-trained Models: BERT, GPT, RoBERTa, T5.
  • Source Tasks: Masked language modeling (BERT), next-token prediction (GPT).
  • Target Tasks: Sentiment analysis, named entity recognition (NER), machine translation, question answering.
  • Key Technique: Prompt tuning (for large models like GPT-3). Instead of fine-tuning, design prompts that guide the pre-trained model to solve the target task (see the sketch below).
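
Prompt tuning itself learns small "soft prompt" embeddings while the model stays frozen. A simpler, related idea is to reuse a frozen pre-trained model with no training at all, via a prompt-style query. Below is a minimal zero-shot classification sketch using the transformers pipeline; the model choice and example text are illustrative.

python

from transformers import pipeline

# Reuse a frozen pre-trained NLI model for zero-shot sentiment classification
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
result = classifier(
    "The battery died after two days of light use.",
    candidate_labels=["positive review", "negative review"],
)
print(result["labels"][0])  # highest-scoring label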

3. Speech Recognition

  • Pre-trained Models: Wav2Vec2, HuBERT.
  • Source Tasks: Speech-to-text on large audio datasets.
  • Target Tasks: Accent classification, speaker verification, voice command recognition (a feature-extraction sketch follows below).
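
As an illustration, a pre-trained Wav2Vec2 model can act as a frozen feature extractor for downstream audio tasks. The sketch below uses the transformers library; the random waveform is a stand-in for real 16 kHz audio, and the checkpoint choice is illustrative.

python

import torch
from transformers import Wav2Vec2Processor, Wav2Vec2Model

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h")
model.eval()

waveform = torch.randn(16000).numpy()  # stand-in for 1 second of 16 kHz audio

inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    features = model(**inputs).last_hidden_state  # shape: (1, time_steps, 768)

# `features` can now feed a small downstream head (e.g., a speaker-verification classifier)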

4. Reinforcement Learning (RL)

  • Source Tasks: Training an RL agent on a simple game (e.g., CartPole).
  • Target Tasks: Transferring skills to a more complex game (e.g., Atari Breakout).
  • Key Technique: Policy transfer. Reuse the pre-trained policy network and fine-tune it on the target RL task (a toy sketch follows below).
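
A toy Keras sketch of policy transfer is shown below. It assumes the source and target tasks share the same state-vector size and that a trained source checkpoint exists (the file name is hypothetical); the hidden layers are reused, a new action head is attached, and the network is then fine-tuned with a small learning rate (the actual RL training loop is omitted).

python

import tensorflow as tf

N_STATE = 8         # assumed shared state-vector size (hypothetical)
SOURCE_ACTIONS = 2  # e.g., the simple source game
TARGET_ACTIONS = 4  # e.g., the more complex target game

# Policy network trained on the source task
source_policy = tf.keras.Sequential([
    tf.keras.Input(shape=(N_STATE,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(SOURCE_ACTIONS, activation="softmax"),
])
# source_policy.load_weights("source_policy.weights.h5")  # hypothetical checkpoint

# Policy transfer: reuse the hidden layers, attach a new action head for the target task
target_policy = tf.keras.Sequential(
    source_policy.layers[:-1] + [tf.keras.layers.Dense(TARGET_ACTIONS, activation="softmax")]
)
target_policy.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),  # small learning rate to preserve transferred weights
    loss="categorical_crossentropy",           # stand-in for a policy-gradient loss
)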

Pros and Cons of Transfer Learning

Pros

  1. Reduced Training Time: Avoids training large models from scratch—cuts training time from weeks to hours/days.
  2. Better Performance with Small Data: Pre-trained features generalize well, reducing overfitting on small target datasets.
  3. Lower Computational Cost: Requires fewer GPUs/TPUs compared to training from scratch.
  4. Generalization: Pre-trained models capture universal features (e.g., edges in images, syntax in text) that are useful across tasks.

Cons

  1. Domain Mismatch: If the source and target tasks are too dissimilar, transfer learning can hurt performance (e.g., using a text model for image tasks).
  2. Overfitting Risk: Fine-tuning with a large learning rate can overwrite useful source task features.
  3. Model Size: Pre-trained models are often large (e.g., BERT-base has 110M parameters), making deployment on edge devices challenging; this is mitigated by model pruning and quantization (see the sketch after this list).
  4. Licensing and Bias: Pre-trained models may inherit biases from the source dataset (e.g., gender/racial bias in ImageNet), which can transfer to the target task.
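
As one mitigation for the model-size issue, a fine-tuned Keras model (such as model_fine_tuning from the implementation above, which this sketch assumes is available) can be shrunk for edge deployment with TensorFlow Lite post-training quantization:

python

import tensorflow as tf

# Post-training quantization with TensorFlow Lite (assumes `model_fine_tuning` from earlier)
converter = tf.lite.TFLiteConverter.from_keras_model(model_fine_tuning)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables weight quantization
tflite_model = converter.convert()

with open("cats_vs_dogs_quantized.tflite", "wb") as f:
    f.write(tflite_model)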

Summary

Transfer Learning reuses knowledge from a pre-trained model (source task) to solve a target task, reducing training time and improving performance on small datasets.

The two main approaches are feature extraction (fixed pre-trained layers) and fine-tuning (updating top pre-trained layers with a small learning rate).

It is widely used across computer vision, NLP, speech recognition, and reinforcement learning, with domain-specific pre-trained models.

Key considerations: task similarity, target dataset size, and learning rate selection for fine-tuning.


