- 1. Large Language Models Are Human-Level Prompt Engineers - optimizes over a set of candidate prompts proposed by an LLM in order to maximize a score function, improving the quality of responses.
- 2. Demystifying Prompts in Language Models via Perplexity Estimation - given the variability in the quality of results, how do we pick the best prompts automatically? Uses GPT-3 and back-translation to generate candidate prompts, then selects the lowest-perplexity ones, which give the largest gains in performance.
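The selection step in the perplexity paper can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the per-token log-probabilities here are made-up placeholders standing in for what a causal LM (e.g. GPT-3) would return for each candidate prompt.

```python
import math

# Hypothetical per-token log-probabilities for two candidate prompts.
# In practice these would come from scoring each prompt with a causal LM.
EXAMPLE_LOGPROBS = {
    "Translate English to French:": [-2.1, -1.3, -0.9, -1.7],
    "English: {x} French:": [-4.2, -3.8, -5.1, -2.9],
}

def perplexity(token_logprobs):
    """Perplexity = exp of the negative mean token log-probability."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def pick_lowest_perplexity(prompt_logprobs):
    """Rank candidate prompts; return the one the LM finds most natural."""
    return min(prompt_logprobs, key=lambda p: perplexity(prompt_logprobs[p]))

best = pick_lowest_perplexity(EXAMPLE_LOGPROBS)
```

The intuition is that prompts the model assigns lower perplexity to tend to be phrased in a way the model "expects", and those correlate with better downstream task performance.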
- 1. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models - CoT is a series of intermediate reasoning steps that significantly improves the ability of large language models to perform complex reasoning. By Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou (Google Research, Brain Team).
- 2. Self-Consistency Improves Chain of Thought Reasoning in Language Models - "samples a diverse set of reasoning paths instead of only taking the greedy one, and then selects the most consistent answer by marginalizing out the sampled reasoning paths"
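The "marginalizing out" step in self-consistency reduces to a majority vote over the final answers. A minimal sketch, assuming the answer strings below stand in for the final answers extracted from chain-of-thought samples decoded at temperature > 0:

```python
from collections import Counter

# Hypothetical final answers extracted from six sampled reasoning paths.
SAMPLED_ANSWERS = ["18", "26", "18", "18", "9", "18"]

def self_consistent_answer(answers):
    """Marginalize out the reasoning paths by taking the most frequent
    final answer across samples (majority vote)."""
    return Counter(answers).most_common(1)[0][0]

final = self_consistent_answer(SAMPLED_ANSWERS)
```

The diverse paths may reach the correct answer by different routes, so agreement across samples is a useful signal of correctness, whereas greedy decoding commits to a single path.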
Prompt Hacking Examples