Understanding Parameter-Efficient LLM Finetuning: Prompt Tuning And Prefix Tuning

Sebastian Raschka, PhD
Apr 30, 2023

Last week, I shared an overview of different Large Language Model (LLM) finetuning techniques. In a series of short articles, I plan to discuss the most relevant techniques one by one.

(See the previous article: Finetuning Large Language Models, Sebastian Raschka, April 22, 2023.)

Let's start with a selection of parameter-efficient finetuning techniques based on prompt modifications.

A selection of parameter-efficient finetuning techniques covered in this article.

Prompt Tuning

The original concept of prompt tuning refers to techniques that vary the input prompt to achieve better modeling results. For example, suppose we are interested in translating an English sentence into German. We can ask the model in various ways, as illustrated below.

An example of hard prompt tuning, that is, rearranging the input to get better outputs.

The concept illustrated above is referred to as hard prompt tuning, since we directly change the discrete input tokens, which are not differentiable.
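To make this concrete, here is a minimal sketch (not from the article) of what hard prompt tuning amounts to in practice: we try several discrete prompt templates for the English-to-German translation task and keep whichever scores best on a small evaluation set. The `llm_generate` and `quality_score` functions below are hypothetical stand-ins for a real model call and a real metric such as BLEU.

```python
# Hard prompt tuning as a discrete search over prompt templates.
# llm_generate() and quality_score() are hypothetical placeholders.

prompt_templates = [
    "Translate the English sentence '{x}' into German: ",
    "English: '{x}' | German: ",
    "From English to German: '{x}' -> ",
]

def llm_generate(prompt: str) -> str:
    # Stand-in for a real LLM call (e.g., an API request or a local model).
    return "Wie geht es dir?"

def quality_score(output: str, reference: str) -> float:
    # Simple token-overlap score as a stand-in for a metric such as BLEU.
    out_tokens = set(output.lower().split())
    ref_tokens = set(reference.lower().split())
    return len(out_tokens & ref_tokens) / max(len(ref_tokens), 1)

# Tiny example evaluation set of (English, German reference) pairs.
eval_pairs = [("How are you?", "Wie geht es dir?")]

def evaluate_template(template: str) -> float:
    # Average quality of a template's outputs over the evaluation set.
    scores = [
        quality_score(llm_generate(template.format(x=english)), german)
        for english, german in eval_pairs
    ]
    return sum(scores) / len(scores)

# Pick the best-performing discrete prompt. Note that no gradients are
# involved: the prompt tokens themselves are not differentiable, so we can
# only compare candidate prompts empirically.
best_template = max(prompt_templates, key=evaluate_template)
print(best_template)
```

Because the search happens over discrete text, there is nothing to backpropagate through; candidate prompts can only be compared empirically, which is exactly the limitation that motivates the differentiable alternatives discussed next.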
