What is least-to-most prompting?
Least-to-most prompting is a technique to elicit complex reasoning from large language models (LLMs). This is done in two stages. First, the model is prompted to break down a complex problem into simpler subproblems. Then, the model is prompted to solve the subproblems starting with the least complex subproblem. After each subproblem is solved, its solution is appended to the prompt and the next subproblem is posed. This is repeated for increasingly complex subproblems until a solution to the original problem has been reached. This process is illustrated in the following figure:
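The two-stage loop described above can be sketched in Python. Here `llm` stands in for any text-completion call, and the prompt formats (semicolon-separated subproblems, `Q:`/`A:` exemplars) are illustrative assumptions rather than the paper's exact templates:

```python
def least_to_most(problem, decompose_examples, solve_examples, llm):
    """Two-stage least-to-most prompting loop.

    `llm` is any callable mapping a prompt string to a completion
    string (e.g. a thin wrapper around a model API).
    """
    # Stage 1: prompt the model to break the problem into subproblems.
    # Assumed output format: subproblems separated by semicolons.
    decomposition = llm(decompose_examples + f"\nQ: {problem}\nA:")
    subproblems = [s.strip() for s in decomposition.split(";") if s.strip()]

    # Stage 2: solve the subproblems in order, simplest first.
    # Each answer is appended to the growing prompt before the next
    # subproblem is posed, so later steps can build on earlier ones.
    context = solve_examples + f"\n{problem}\n"
    answer = ""
    for sub in subproblems:
        context += f"Q: {sub}\nA:"
        answer = llm(context)
        context += f" {answer}\n"
    return answer  # the answer to the final subproblem, i.e. the original problem
```

Because the solved subproblems accumulate in `context`, the model sees every earlier solution when it tackles the next, harder subproblem.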
Source: Zhou et al., Least-to-most prompting enables complex reasoning in large language models (2023)
Both of these stages are implemented with few-shot prompting rather than additional training or fine-tuning.
Least-to-most prompting was proposed as an alternative to chain-of-thought prompting. In chain-of-thought prompting, a single prompt asks the model to explain the intermediate steps in its reasoning based on a few examples. Chain-of-thought prompting tends to struggle with easy-to-hard generalization, i.e., solving problems that are more complex than the examples included in the prompt.
Least-to-most prompting mitigates this problem by using multiple prompts recursively. Subproblems are solved sequentially based on the results of simpler subproblems. The recursive approach helps the model progressively work up to solutions of more complex problems than the examples, addressing the problem of easy-to-hard generalization.
Zhou et al. found that least-to-most prompting surpasses standard prompting and chain-of-thought prompting in the following tasks:

Symbolic manipulation: Take the last letters of each word from a list and concatenate them, e.g., “robot, love, stamp” becomes “tep”.

Compositional generalization: Translate natural language into sequential action commands, e.g., “look thrice after jump” becomes “JUMP, LOOK, LOOK, LOOK”.

Mathematical reasoning: Math word problems, e.g., “Elsa has 5 apples. Anna has 2 more apples than Elsa. How many apples do they have together?”
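For the symbolic manipulation task, the correct answer is easy to compute directly, which is how a model's output can be checked. A minimal reference function (this computes the ground truth; the prompting technique itself is what lets the model produce it):

```python
def last_letter_concat(words: str) -> str:
    """Ground truth for the last-letter task: take the final letter
    of each comma-separated word and concatenate them."""
    return "".join(w.strip()[-1] for w in words.split(","))

# "robot, love, stamp" -> "tep"
```

In the least-to-most setup, each subproblem handles one more word than the last, so the model only ever extends a previously solved prefix by a single letter.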
Least-to-most prompting works best when tasks are effectively decomposed. This means that the individual subtasks must be simple enough for the model to solve given the solutions of prerequisite subtasks, and that the subtasks must collectively work up to a solution of the original problem. For this to occur, the examples given in the decomposition prompt must properly illustrate the subtask structure. Since the decomposed structure can vary for tasks across domains, new decomposition prompts with appropriate examples must be created for each type of task being solved.
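As an illustration, a decomposition exemplar for the math word problem above might look like the following. The wording is a hypothetical example of the subtask structure, not taken verbatim from the paper:

```python
# A hypothetical few-shot decomposition exemplar: it shows the model
# how to rewrite a problem as a sequence of simpler subproblems.
decompose_exemplar = (
    'Q: Elsa has 5 apples. Anna has 2 more apples than Elsa. '
    'How many apples do they have together?\n'
    'A: To answer "How many apples do they have together?", '
    'we first need to answer: "How many apples does Anna have?"'
)
```

A solve-stage prompt would then pose "How many apples does Anna have?" first (5 + 2 = 7), append that answer, and finally pose the original question (5 + 7 = 12). Exemplars like this must be rewritten for each new task domain, since the right subtask structure differs between, say, letter concatenation and word problems.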