Prompt Engineering 101
This guide breaks down six crucial areas for crafting effective prompts for AI models, from using clear language to establishing a feedback loop, helping ensure models understand and accurately respond to user requests across a range of complexities.
When it comes to using artificial intelligence models effectively, the most important part of the equation is the prompt. Without an effective prompt, the model cannot understand what the user is asking of it. In this document, we will break down six key areas to keep in mind when prompting. These six principles apply across many different types of models, from the most complex LLMs all the way down to the simplest chatbots and assistants.
- Clear and Concise Language: Use straightforward language and avoid ambiguity. Clearly define the task or question in simple terms to ensure the intended meaning is conveyed without confusion.
- Specificity and Detail: Provide detailed information relevant to the task. The more specific the prompt, the more tailored and accurate the response will be. Include key details like context, purpose, and any necessary constraints or preferences.
- Realistic Expectations: Understand the capabilities and limitations of the system being prompted. Ensure that the prompts are realistic and within the scope of what the system can achieve. Avoid overcomplicated or unattainable requests.
- Structured Format: Organize the prompt in a logical, structured manner. Consider breaking a large task into step-by-step prompts, since packing too many steps into a single prompt can confuse the model.
- Consistency in Instructions: Maintain consistency in your instructions throughout the prompt. If there are multiple parts or requirements, ensure they are coherent and do not contradict each other.
- Feedback Loop: Incorporate feedback mechanisms into your prompting process. If initial results are not satisfactory, refine the prompt by adjusting the language, adding more details, or clarifying the objectives. This iterative approach can significantly improve the outcomes; a minimal sketch of this workflow follows this list.
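To make these principles concrete, here is a minimal sketch, in Python, of the send, evaluate, and refine workflow. The `send_prompt` and `is_satisfactory` functions, the response format, and the refined wording are all hypothetical placeholders for illustration; they are not part of Adesso AI's actual interface.

```python
# A minimal sketch of the prompt-and-refine workflow described above.
# NOTE: send_prompt() and is_satisfactory() are hypothetical placeholders,
# not part of any real Adesso AI API.

def send_prompt(prompt: str) -> str:
    """Stand-in for whatever call actually delivers a prompt to the assistant."""
    print(f"Prompt sent: {prompt}")
    return ""  # pretend the assistant returned nothing useful


def is_satisfactory(response: str) -> bool:
    """Placeholder check; in practice this is your own judgment of the result."""
    return bool(response.strip())


# Start with a clear, concise, specific prompt ...
prompt = "Find all open deals with the customer of Amazon."
response = send_prompt(prompt)

# ... and if the result is not satisfactory, refine the wording or add detail
# (the feedback loop) rather than repeating the same prompt.
if not is_satisfactory(response):
    refined = "List every deal with status 'Open' where the customer is Amazon."
    response = send_prompt(refined)
```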
Examples with Adesso AI
Task: You are attempting to find all of the open deals in the system that are associated with the customer Amazon.
Prompt: Find all open deals with the customer of Amazon.
Although this is a simple example, notice how the prompt is clear, concise, straight to the point, and free of filler language. Based on the response the system returns, I can either accept the results or slightly adjust the prompt phrasing. This tactic is most useful with more complex requests.
Task: You are attempting to update the rate for a deal item within the trade assistant to a different value.
Prompt: 1st: Update rate for deal item #1
2nd: .50
This is a good example of where to use a structured format and break the prompt into two separate pieces. First, I give the assistant the simple task I want to complete: update the rate. I use specificity to tell it which deal item I want to update the rate for: deal item #1.
After this, the assistant responds with something along the lines of “What do you want to update the rate to?”, to which I reply with a simple “.50”.
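The sketch below shows how that two-step exchange could look if driven programmatically: the first message names the task and the exact deal item, and the second supplies only the value the assistant asked for. The `ask_assistant` function and its canned replies are hypothetical placeholders, not Adesso AI's real interface.

```python
# Sketch of the two-step, structured exchange above.
# NOTE: ask_assistant() and its replies are hypothetical placeholders.

def ask_assistant(message: str) -> str:
    """Stand-in that echoes the assistant's side of the conversation."""
    if message.startswith("Update rate"):
        return "What do you want to update the rate to?"
    return "Rate updated."


# Step 1: a short, specific task naming the exact deal item.
reply = ask_assistant("Update rate for deal item #1")
print(reply)  # -> "What do you want to update the rate to?"

# Step 2: answer the follow-up with only the value it asked for.
reply = ask_assistant(".50")
print(reply)  # -> "Rate updated."
```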
Task: You want to return all the deals currently in the system.
Prompt: Get all deals.
Bad Prompt: Please return me all the deals that are currently in the system.
This is an example of when to use clear and concise language when prompting. Although the “bad prompt” would sound more polite if you were speaking to a human, an assistant is not a human. It is best to use short, very clear sentences: state exactly what you want it to do, and nothing else.
Task: You are trying to update the estimated cases for Deal Item #1, but you are unhappy with the response or the system did not understand you correctly.
1st Prompt: Switch the Estimated cases for Deal Item #1.
System Response: I’m sorry, I do not understand what you are asking for.
2nd Prompt: Update the Estimated cases for Deal Item #1
This is a great example of using the feedback loop when prompting AI. You will often find that a system does not understand you the first time and generates an output that does not meet your standards. The best thing to do here is to think of another way to ask your question. There are countless ways to phrase the same request, and it is impossible to train an LLM to catch every single phrasing. When in doubt, rephrase the question!
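If you wanted to automate that kind of rephrasing, it could look roughly like the sketch below: keep a few alternate phrasings of the same request and move on to the next one whenever the assistant reports that it did not understand. The `send_prompt` function and the exact failure message are assumptions for illustration, not actual system behavior.

```python
# Sketch of the rephrase-on-failure feedback loop.
# NOTE: send_prompt() and the failure message it returns are assumptions for
# illustration, not the actual Adesso AI behavior.

def send_prompt(prompt: str) -> str:
    """Stand-in for the assistant; only understands 'Update', not 'Switch'."""
    if prompt.startswith("Update"):
        return "Estimated cases updated for Deal Item #1."
    return "I'm sorry, I do not understand what you are asking for."


# The same request, phrased a few different ways.
phrasings = [
    "Switch the Estimated cases for Deal Item #1.",
    "Update the Estimated cases for Deal Item #1",
]

for prompt in phrasings:
    response = send_prompt(prompt)
    if "do not understand" not in response:
        print(f"Succeeded with: {prompt!r}")
        break
else:
    print("None of the phrasings worked; try adding more detail.")
```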
Task: You are attempting to create a deal in one line in the assistant.
Bad Prompt: Create Deal for Wakefern that runs in December with description 'BestPracticesTest1'
Good Prompt: Create Deal for Customer Wakefern that runs in December with description 'BestPracticesTest1'
This is a good example of having realistic expectations when prompting the system. The system follows a set path when creating a deal. Although a human may know that Wakefern is the customer, the system needs “Customer” stated explicitly in the prompt before it recognizes Wakefern as one.