Prompt

Articles

  1. Brex on prompt engineering - also walks through the history of language models, which is excellent

  2. Prompt engineering examples - a good summary of the main techniques

  3. Lilian Weng on prompt engineering - a very thorough review of the topic

Papers

Prompt Engineering

  1. Large Language Models Are Human-Level Prompt Engineers - searches over a set of candidate instructions proposed by an LLM and selects the one that maximizes a score function, which improves response quality (see the first sketch after this list).

  2. Demystifying Prompts in Language Models via Perplexity Estimation - given how much result quality varies from prompt to prompt, how do we pick the best prompts automatically? The paper expands a seed set of prompts with GPT-3 paraphrasing and back-translation, then chooses the lowest-perplexity prompts, which tend to give the largest performance gains (see the second sketch after this list).
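
A minimal sketch of the candidate-selection loop described in the first paper, assuming a hypothetical `llm(prompt)` helper that returns a completion; the score function here is plain exact-match accuracy on a small held-out set.

```python
def llm(prompt: str) -> str:
    """Placeholder for a call to whatever language model you use (hypothetical)."""
    raise NotImplementedError

def propose_instructions(demos: list[tuple[str, str]], n: int) -> list[str]:
    """Ask the LLM to propose n candidate instructions from a few input/output pairs."""
    shown = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in demos)
    meta_prompt = (
        "I gave a friend an instruction. Based on the instruction they produced "
        f"the following input/output pairs:\n{shown}\nThe instruction was:"
    )
    return [llm(meta_prompt) for _ in range(n)]

def score(instruction: str, dev_set: list[tuple[str, str]]) -> float:
    """Exact-match accuracy of an instruction on a held-out dev set."""
    hits = sum(
        llm(f"{instruction}\n\nInput: {x}\nOutput:").strip() == y for x, y in dev_set
    )
    return hits / len(dev_set)

def best_instruction(demos, dev_set, n_candidates: int = 20) -> str:
    """Propose candidates with the LLM, keep the highest-scoring one."""
    candidates = propose_instructions(demos, n_candidates)
    return max(candidates, key=lambda c: score(c, dev_set))
```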

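A matching sketch for the second paper's selection rule, assuming a hypothetical `token_logprobs(text)` helper that returns the per-token log-probabilities a language model assigns to the text, and candidate prompts that contain an `{input}` placeholder: compute each candidate's perplexity and keep the lowest.

```python
import math

def token_logprobs(text: str) -> list[float]:
    """Placeholder: per-token log-probabilities of text under a language model (hypothetical)."""
    raise NotImplementedError

def perplexity(text: str) -> float:
    """Perplexity is exp of the negative mean token log-probability."""
    lps = token_logprobs(text)
    return math.exp(-sum(lps) / len(lps))

def pick_prompt(candidate_prompts: list[str], example_input: str) -> str:
    """Keep the candidate prompt with the lowest perplexity once a typical input is filled in."""
    return min(candidate_prompts, key=lambda p: perplexity(p.format(input=example_input)))
```
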
Prompt Tuning

  1. The Power of Scale for Parameter-Efficient Prompt Tuning - learns a small set of soft prompt embeddings while keeping the model frozen; prompt tuning becomes competitive with full fine-tuning as model scale increases (a minimal sketch follows).
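
A minimal sketch of the idea, assuming a PyTorch setup: the only trainable parameters are a small matrix of soft prompt embeddings that gets prepended to the frozen model's input embeddings.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Learnable prompt embeddings prepended to the input embeddings of a frozen model."""

    def __init__(self, n_tokens: int = 20, embed_dim: int = 768):
        super().__init__()
        # The only trainable parameters: one embedding vector per soft prompt token.
        self.prompt = nn.Parameter(torch.randn(n_tokens, embed_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, embed_dim) from the frozen model's embedding layer
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)
```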

Chain Of Thought

  1. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models - chain of thought (CoT) is a series of intermediate reasoning steps that significantly improves the ability of large language models to perform complex reasoning. By Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou (Google Research, Brain Team). See the first sketch after this list.

  2. Self-Consistency Improves Chain of Thought Reasoning in Language Models - "samples a diverse set of reasoning paths instead of only taking the greedy one, and then selects the most consistent answer by marginalizing out the sampled reasoning paths" (see the second sketch after this list).
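
To make the first item concrete, this is the shape of a chain-of-thought few-shot exemplar: the demonstration spells out the intermediate reasoning before the final answer (the arithmetic exemplar below is illustrative).

```python
# A chain-of-thought exemplar: the demonstration includes the intermediate
# reasoning steps, not just the final answer.
COT_PROMPT = """\
Q: A cafeteria had 23 apples. They used 20 for lunch and bought 6 more.
How many apples do they have now?
A: The cafeteria started with 23 apples. They used 20, leaving 23 - 20 = 3.
They bought 6 more, so 3 + 6 = 9. The answer is 9.

Q: {question}
A:"""

print(COT_PROMPT.format(question="A farmer had 15 sheep and bought 4 more. How many sheep now?"))
```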

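And a sketch of self-consistency for the second item: sample several reasoning paths at a nonzero temperature, extract each final answer, and keep the majority vote (the `sample_llm` helper is a hypothetical placeholder, and the answer extraction is deliberately naive).

```python
from collections import Counter

def sample_llm(prompt: str, temperature: float = 0.7) -> str:
    """Placeholder: one sampled completion from a language model (hypothetical)."""
    raise NotImplementedError

def extract_answer(completion: str) -> str:
    """Naive final-answer extraction: take whatever follows the last 'The answer is'."""
    return completion.rsplit("The answer is", 1)[-1].strip(" .")

def self_consistency(prompt: str, n_samples: int = 10) -> str:
    """Sample several reasoning paths and return the most common final answer."""
    answers = [extract_answer(sample_llm(prompt)) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]
```
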
Prompt Hacking Examples

Step-Back Prompting

  1. STP - Step-Back Prompting (STP) is a prompting approach in which we teach the model to first answer a more general question: the original question is transformed into a step-back question, and the answer to the step-back question is used to formulate the final response (a sketch follows).
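
A minimal sketch of that two-stage flow, again assuming a hypothetical `llm(prompt)` helper: first abstract the question, answer the abstraction, then answer the original question grounded in that answer.

```python
def llm(prompt: str) -> str:
    """Placeholder for a call to a language model (hypothetical)."""
    raise NotImplementedError

def step_back_answer(question: str) -> str:
    # Stage 1: abstract the original question into a more general step-back question.
    step_back_q = llm(
        "Rewrite the following question as a more general question about the "
        f"underlying principle or concept:\n{question}"
    )
    # Stage 2: answer the step-back question to retrieve the relevant principle.
    principle = llm(step_back_q)
    # Stage 3: answer the original question, grounded in the step-back answer.
    return llm(
        f"Background: {principle}\n\n"
        f"Using the background above, answer the original question:\n{question}"
    )
```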
