What is imitation learning?

Imitation learning is the process of learning to perform a task by observing the actions of an expert and then copying their behavior. It is also sometimes called apprenticeship learning.

Unlike reinforcement learning (RL), in which a system learns a policy for how to act from the results of its own interactions with the environment (i.e., how well it scores on the reward function), imitation learning tries to learn a policy by observing another agent interacting with the environment.
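
To make the difference concrete, here is a minimal PyTorch sketch, assuming a toy two-action policy with placeholder data: the RL update needs a reward returned by the environment, while the imitation update needs only the action the expert was observed to take.

```python
import torch
import torch.nn as nn

# Toy two-action policy; the state, reward, and expert action below are placeholders.
policy = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

state = torch.randn(1, 4)                      # an observation of the environment
logits = policy(state)
dist = torch.distributions.Categorical(logits=logits)

# Reinforcement learning: the training signal is a reward from the environment.
sampled_action = dist.sample()
reward = 1.0                                   # would come from the reward function
rl_loss = -(dist.log_prob(sampled_action) * reward).sum()  # REINFORCE-style update

# Imitation learning: the training signal is the expert's observed action.
expert_action = torch.tensor([1])              # would come from a recorded demonstration
il_loss = nn.functional.cross_entropy(logits, expert_action)

optimizer.zero_grad()
il_loss.backward()                             # use rl_loss.backward() for the RL variant
optimizer.step()
```

In both cases the network and optimizer are identical; only the source of the learning signal differs.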

Imitation learning is used in training modern large language models (LLMs). After LLMs have been trained as general-purpose text generators, they are often fine-tuned with imitation learning on examples of a human expert following instructions, provided in the form of text prompts and completions.
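
As a sketch of how this looks in practice, the fine-tuning step is ordinary supervised next-token prediction on the expert's completion. This example uses Hugging Face's transformers library, with "gpt2" standing in for whichever pretrained base model is being fine-tuned and a made-up prompt/completion pair as the demonstration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "gpt2" is only a stand-in for the pretrained base model being fine-tuned.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# One demonstration: an instruction and the completion a human expert wrote for it.
prompt = "Instruction: Give a one-sentence definition of photosynthesis.\nResponse:"
completion = " Photosynthesis is the process by which plants turn sunlight into chemical energy."

inputs = tokenizer(prompt + completion, return_tensors="pt")
labels = inputs["input_ids"].clone()

# Compute the loss only on the completion tokens, i.e. the behavior being imitated.
# (The prompt length is estimated by tokenizing the prompt on its own, which is
# approximate but good enough for a sketch.)
prompt_len = tokenizer(prompt, return_tensors="pt")["input_ids"].shape[1]
labels[:, :prompt_len] = -100                  # tokens labeled -100 are ignored by the loss

outputs = model(**inputs, labels=labels)       # standard next-token cross-entropy
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```

A real fine-tuning run would loop this step over many such prompt/completion pairs, but the learning signal is the same: imitate what the expert wrote.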

One reason to use imitation learning rather than reinforcement learning to train an AI is to mitigate the problem of specification gaming, which arises in environments where there are edge cases or unforeseen ways of achieving the AI’s task. The idea is that demonstrating the desired behavior would be safer than using RL because the model would not only be taught to achieve the objective, but also to achieve it in the way that the expert demonstrator intended. This is not a foolproof solution, though, and some of its shortcomings are discussed in the answers on behavioral cloning and specification gaming.

There are a number of different approaches to imitation learning. One of the most popular is behavioral cloning (BC). Others include inverse reinforcement learning (IRL), cooperative inverse reinforcement learning (CIRL), and generative adversarial imitation learning (GAIL).
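
As an illustration of the simplest of these, behavioral cloning reduces to supervised learning on recorded expert (observation, action) pairs. Below is a minimal, self-contained sketch that expands the single update step shown earlier into a training loop; the "expert" here is just a hand-written rule that produces synthetic demonstrations, not a real demonstrator.

```python
import torch
import torch.nn as nn

# Synthetic "expert" demonstrations: the expert observes a 2-D state and picks
# action 1 when the first coordinate is positive, action 0 otherwise.
states = torch.randn(1000, 2)
expert_actions = (states[:, 0] > 0).long()

# Behavioral cloning: fit a policy to the expert's choices with supervised learning.
policy = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)

for epoch in range(50):
    logits = policy(states)
    loss = nn.functional.cross_entropy(logits, expert_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The cloned policy should now mostly agree with the expert on unseen states.
test_states = torch.randn(200, 2)
predicted = policy(test_states).argmax(dim=1)
agreement = (predicted == (test_states[:, 0] > 0).long()).float().mean()
print(f"agreement with expert on held-out states: {agreement:.2f}")
```

The other approaches differ in what they learn from the demonstrations: IRL and CIRL infer the reward function the expert appears to be optimizing, while GAIL trains the policy to produce behavior that a discriminator cannot tell apart from the expert's.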