Large Language Models like OpenAI's GPT-4 and Google's Bard are increasingly popular owing to their success in generating smart responses. However, these smart responses are elicited only when the instructions are clear and contain cues that steer the model to respond that way. This is what prompt engineering aims to achieve: writing good prompts that elicit good responses. This blog covers the context and the best practices for writing near-perfect prompts.
What are Large Language Models (LLMs)?
Large Language Models are AI models that can understand and generate human-like language. These models are trained on massive amounts of text data and can be used for a variety of tasks, including language translation, text completion, and question-answering. In essence, these models are trained to predict the next word given a sequence of words.
Four Paradigms of NLP Progress
Natural Language Processing has gone through four phases of development, though none of them is obsolete.
1. Feature Engineering
Hand-crafting features to embed contextual information and improve model performance was popular until around 2015, mostly for non-neural-network models.
2. Architecture Engineering
Popular around 2013-2018 for fully supervised neural network models. These networks took care of the feature engineering process themselves, so designing better architectures for training models took precedence instead.
3. Objective Engineering
From 2017 to date, pre-trained models like BERT can be adapted to different training objectives, so finding relevant objectives is taking precedence.
4. Prompt Engineering
Since 2019, Large Language Models have been usable on a wide variety of tasks, but they need good input text to elicit the desired outcomes. Now prompt engineering is taking over.
What is a Prompt?
A prompt is a Natural Language input that is given to a Large Language Model to perform a task. The prompt can be a question, a statement, or a partial sentence. The quality of the prompt greatly affects the quality of the generated output. A well-crafted prompt can improve the accuracy and relevance of the generated text.
What is Prompt Engineering?
Prompt engineering is the process of crafting a well-defined and structured prompt to get the desired output given a Large Language Model and a goal. Prompt engineering is needed because Large Language Models generate text based on the input they receive. If the input is vague, unclear, or incomplete, the generated output will also be vague, unclear, or incomplete. By engineering a good prompt, you can improve the quality of the generated output.
Prompt engineering requires
An understanding of the model, since different models react differently to the same prompt.
An understanding of the domain, to incorporate the goal into the prompt, e.g. what good and bad outcomes should look like.
A programmatic approach, such as generating prompt templates that can be modified according to some dataset or context.
Exploration, since the process is iterative.
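The programmatic approach mentioned above can be as simple as a template with placeholders that is filled in for each record of a dataset. A minimal sketch using Python's standard library (the template wording and field names are illustrative, not from any particular framework):

```python
from string import Template

# A reusable prompt template; $domain, $n_sentences, and $text are
# placeholders filled per record. The wording is illustrative.
SUMMARY_TEMPLATE = Template(
    "You are an expert $domain analyst.\n"
    "Summarize the following text in $n_sentences sentences:\n\n"
    "$text"
)

def build_prompt(domain: str, n_sentences: int, text: str) -> str:
    """Fill the template for one record of a dataset."""
    return SUMMARY_TEMPLATE.substitute(
        domain=domain, n_sentences=n_sentences, text=text
    )

prompt = build_prompt("finance", 2, "Quarterly revenue rose 8% year over year.")
```

The same template can then be reused across an entire dataset or varied systematically while iterating on wording.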
We need prompt engineering because, as smart as these models are, they have quirks, and they are only trained to predict the next word. A bad prompt can make the model fabricate information, i.e. hallucinate.
Basic Elements of a Prompt
Instructions - how to achieve the task
Question - what to do
Input Data - to be used while answering the question and performing the task
Examples - what to expect in a task
To obtain results, instructions or a question must be present; the remaining elements are optional.
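These four elements can be combined mechanically. A small sketch that assembles a sentiment-classification prompt from the elements above (the element wording and ordering are illustrative choices, not a fixed rule):

```python
def assemble_prompt(instructions="", question="", input_data="", examples=""):
    """Join the basic prompt elements, skipping any that are empty."""
    # Instructions or a question must be present; the rest are optional.
    if not (instructions or question):
        raise ValueError("A prompt needs instructions or a question.")
    parts = [instructions, examples, input_data, question]
    return "\n\n".join(p for p in parts if p)

prompt = assemble_prompt(
    instructions="Answer in one word.",
    examples="Review: 'Screen cracked on day one.' -> negative",
    input_data="Review: 'The battery life is fantastic.'",
    question="What is the sentiment of the review above?",
)
```

Dropping the optional elements still yields a valid prompt, just one that gives the model less to work with.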
Best Practices for Writing Prompts
Be clear and specific: A clear and specific prompt will help the Large Language Model understand what is required of it. Avoid vague or ambiguous prompts.
Use context: Use the context of the task to craft the prompt. The context can include the topic, the target audience, and the purpose of the task.
Use affordances: functions defined in the prompt that the model is explicitly instructed to use when responding. For example, instructing the model to use a calc() function to compute results in its responses.
Use formatting: Use formatting such as bullet points, headings, and subheadings to structure the prompt. This will help the Large Language Model understand the structure of the prompt.
Use examples: Provide examples to illustrate what you are looking for in the output. This will help the Large Language Model understand the requirements of the task.
Use constraints: Use constraints to limit the scope of the output. For example, you can specify the length of the output or the type of language to be used.
Test and refine: Test the prompt with the Large Language Model and refine it based on the output. This will help you identify any issues with the prompt and improve its quality.
By following these best practices, you can craft well-defined and structured prompts that will help you get the desired output from a Large Language Model.
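The affordance practice above implies that the calling code must intercept the function the model was told to use. A hypothetical sketch, where calc() is the affordance named in the prompt and the model's reply is simulated (no real model call is made):

```python
import re

# The prompt defines the calc() affordance the model should use.
AFFORDANCE_PROMPT = (
    "When you need arithmetic, do not compute it yourself. "
    "Instead, write calc(<expression>) and I will supply the result.\n"
    "Question: What is 17% of 3240?"
)

def run_affordances(model_output: str) -> str:
    """Replace each calc(...) call in the model's output with its value."""
    def evaluate(match: re.Match) -> str:
        # eval() on untrusted text is unsafe; acceptable here only
        # because the input is a simulated model reply in a sketch.
        return str(eval(match.group(1)))
    return re.sub(r"calc\(([^)]*)\)", evaluate, model_output)

# Simulated model reply that uses the affordance instead of guessing:
answer = run_affordances("17% of 3240 is calc(3240 * 17 / 100).")
```

The model delegates the arithmetic it is bad at, and the caller fills in the exact value.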
Prompt Examples
1. Chain of Thought Prompting
Encourage AI models to be factually correct by making them follow a series of steps in their reasoning.
What European soccer team won the Champions League the year Barcelona hosted the Olympic games?
Use this format:
Q: <repeat question>
A: Let’s think step by step <give_reasoning>. Therefore, the answer is <final_answer>
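The chain-of-thought format above can be wrapped around any question programmatically. A small sketch (the wrapper function is illustrative; the format string mirrors the example above):

```python
# Wrap any question in the chain-of-thought answer format.
COT_FORMAT = (
    "{question}\n"
    "Use this format:\n"
    "Q: <repeat question>\n"
    "A: Let's think step by step <give_reasoning>. "
    "Therefore, the answer is <final_answer>\n"
)

def cot_prompt(question: str) -> str:
    """Build a chain-of-thought prompt for a single question."""
    return COT_FORMAT.format(question=question)

prompt = cot_prompt(
    "What European soccer team won the Champions League "
    "the year Barcelona hosted the Olympic games?"
)
```

The fixed answer format nudges the model to lay out its reasoning before committing to a final answer.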
2. Cite sources to avoid hallucination
What are the top 3 most important discoveries that the Hubble Space Telescope has enabled? Answer only using reliable sources and cite those sources.
3. Asking the model to continue the conversation using <|endofprompt|>
Write a scary short story. <|endofprompt|> It was a beautiful winter day
4. Forceful Language
Language models do not always react well to nice, friendly language. If you really want them to follow some instructions, you might want to use forceful language. Believe it or not, all caps and exclamation marks work.
NO! That's 11! WRITE A SENTENCE WITH EXACTLY 12 WORDS! NOW!
5. Checking Factual correctness
Is there any factually incorrect information in this article: <article>
6. Having a conversation as a person
Make sure to emphasize the instruction in ALL CAPS so that the model remembers the character it is supposed to be.
I will ask you questions and from now on you respond as if you were Buzz Lightyear from the movie Toy Story. It is REALLY IMPORTANT that you answer all questions as if you were Buzz, ok?
Conclusion
Prompt engineering is a critical component of using Large Language Models effectively. By crafting clear, specific, and well-structured prompts, you can improve the quality and relevance of the generated output, and by following the best practices above you can ensure your prompts are effective in eliciting the desired results.