What Is Prompt Engineering?

Prompt engineering is the process of optimizing the performance of generative AI models by adjusting written text, code and other inputs. Here’s a deep dive into prompt engineering techniques, best practices and applications.

Written by Ryan Elmore
Updated by Matthew Urwin | Oct 17, 2024

Prompt engineering is the process of optimizing the performance of generative AI by tailoring the questions and processes to your specific needs.

What Is Prompt Engineering?

Prompt engineering is the process of optimizing the performance of generative AI through crafting tailored text, code or image-based inputs. Effective prompt engineering boosts the capabilities of generative AI and returns better results.

As generative AI has captured the attention of the public and businesses everywhere, prompt engineering has become an increasingly important skill. Generative AI can assist in nearly every facet of our personal and professional lives — from giving dinner recommendations to crafting annual reviews to creating proposals to even streamlining complex clinical trial data for major pharmaceutical companies.

But it’s not totally fail-proof. The quality of the AI’s output depends on the input it receives, be it text, images or code. That’s where prompt engineering comes in.


 

Key Elements of a Prompt

Before diving into the creation of prompts, it’s crucial to understand the building blocks of an effective prompt:

  • Instructions: Directions and details provided to a language model that explain the task or request in clear terms. 
  • Context: Background information provided to an AI model to give it a better understanding of how it can tailor its response to the needs of the user. 
  • Input data: Information submitted to the language model. While this can be a written question, it can also be a line of code or a paragraph taken from a book. 
  • Output indicator: Tells the AI model what format its response should take. For example, a user may ask for a written response in two paragraphs, a bulleted list or a five-paragraph essay.
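The four elements above can be sketched as a simple template function. This is a minimal illustration, not a fixed format: the field labels and the example content are assumptions made for the sketch.

```python
# Assemble a prompt from the four building blocks: instructions, context,
# input data and an output indicator. All example text is invented.

def build_prompt(instructions: str, context: str, input_data: str, output_indicator: str) -> str:
    """Combine the four prompt elements into a single prompt string."""
    return "\n\n".join([
        f"Instructions: {instructions}",
        f"Context: {context}",
        f"Input: {input_data}",
        f"Output format: {output_indicator}",
    ])

prompt = build_prompt(
    instructions="Summarize the customer review.",
    context="The review is for a budget wireless keyboard.",
    input_data="The keys feel mushy, but the battery lasts for months.",
    output_indicator="One sentence, neutral tone.",
)
print(prompt)
```

Separating the elements this way makes it easy to vary one (say, the output indicator) while holding the rest constant and comparing results.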

 

Prompt Engineering Techniques

There are many ways to develop a prompt. Below are some of the more common prompt engineering techniques used for a range of scenarios:  

  • Zero-shot prompting: Submitting a request or task without examples or added context, relying on the model’s prior training to generate a relevant answer.  
  • Few-shot prompting: Feeding a language model several examples of the desired output, with the goal of eliciting a more specific response. 
  • Chain-of-thought prompting: Making an AI model break a task down into steps, so it relies on reasoning before delivering the final answer. 
  • Self-consistency: Sampling multiple chain-of-thought reasoning paths and selecting the answer that appears most consistently across them. 
  • Generated knowledge prompting: Supplementing a prompt with additional information from an external source to aid the model. A language model may also generate relevant knowledge itself and incorporate it into the prompt. 
  • ReAct: A framework that enables models to adjust their reasoning when interacting with external sources, leading to improved responses. 
  • Tree of thoughts: When an AI model creates a range of thoughts that it considers as part of its reasoning process, essentially brainstorming the best course of action to take.   
  • Retrieval-augmented generation (RAG): Having a language model gather information from external sources and produce an answer grounded in that data. 
  • Automatic reasoning and tool-use (ART): Using a frozen language model to generate intermediate reasoning steps, pausing generation to call external tools and resuming once their results are folded back in.  
  • Automatic prompt engineer (APE): Feeding a language model example input-output pairs so it can generate candidate instructions, score them and select the best-performing prompt. 
  • Directional stimulus prompting: When an AI model is asked to complete a task while being given a few hints to guide its answer.   
  • Graph prompting: Taking information from a graph, converting it into a written format and submitting the newly written input for a language model to analyze. 
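Few-shot prompting, for instance, can be sketched as a chat-style message list in which example input-output pairs precede the real query. The message format mirrors common chat APIs, but no real API is called here, and the sentiment examples are invented for illustration.

```python
# Few-shot prompting sketch: prepend labeled examples so the model can infer
# the desired pattern before answering the final query.

examples = [
    ("The movie was a waste of time.", "negative"),
    ("I can't stop recommending this book.", "positive"),
]

def few_shot_messages(task: str, pairs, query: str) -> list:
    """Build a chat-style message list: task, example pairs, then the query."""
    messages = [{"role": "system", "content": task}]
    for text, label in pairs:
        messages.append({"role": "user", "content": text})
        messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": query})
    return messages

msgs = few_shot_messages(
    "Classify the sentiment of each message as positive or negative.",
    examples,
    "Service was slow and the food was cold.",
)
```

The same scaffold works for zero-shot prompting by passing an empty list of examples, which makes it easy to compare the two techniques on the same query.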

 

Types of Prompt Engineering

Here are some of the main types of prompt engineering you’ll most likely employ when using generative AI tools:

  • Text-completion prompts: Help language models finish sentences, such as, “The boy didn’t come to school because…”
  • Instruction-based prompts: Deliver commands to AI models to generate specific responses. 
  • Multiple-choice prompts: Provide a model with multiple potential responses, and the model chooses the best response for the situation.  
  • Contextual prompts: Give a language model background hints that build on one another to nudge its decision-making in a certain direction. 
  • Bias mitigation prompts: Check for biases in responses and enable users to adjust the output if needed.
  • Fine-tuning and interactive prompts: Allow users to adjust responses to improve outputs and train models to produce more accurate responses during each iteration.

 

Prompt Engineering Best Practices 

Let’s go over some tips for writing a good prompt.

1. Evaluate the Model’s Capabilities

Before entering prompts, take time to assess the model and its abilities. Note any limitations like an inability to process real-time data or retrieve information. Double-check whether the model contains biases as well. Remembering these details can inform how you craft prompts to avoid disruptions and inaccurate outputs.    

2. Create a Persona for the AI to Emulate 

If you are looking for generative AI to come up with direct marketing email subject lines and email copy, guide your prompt with a persona. For example, you might write:

“You have your own marketing agency with an expansive history of creating effective direct marketing campaigns via email. Today, I am looking to generate recommendations for an upcoming email marketing campaign.” 

Changing the persona of the language model changes the lens of how it will generate a response. For this particular marketing example, you could change it to focus on a particular industry or be more casual or formal based on the ask, and the results will adjust accordingly.  
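In practice, a persona is often attached as a system message, assuming a chat-style message format like that of common LLM APIs; the persona text below mirrors the marketing example above.

```python
# Persona sketch: the system message sets the lens through which the model
# responds; the user message carries the actual request.

PERSONA = (
    "You have your own marketing agency with an expansive history of creating "
    "effective direct marketing campaigns via email."
)

def with_persona(persona: str, request: str) -> list:
    """Pair a persona (system message) with a user request."""
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": request},
    ]

msgs = with_persona(
    PERSONA,
    "Generate recommendations for an upcoming email marketing campaign.",
)
```

Swapping the `PERSONA` string, to target a different industry or a more formal voice, changes the lens without touching the request itself.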

3. Break Up the Prompt Into Smaller Tasks 

Instead of inquiring, “Write email subject lines,” break the prompts up into more specific questions: 

  • “Write three email subject lines for a direct marketing campaign for our latest financial services offering.”
  • “Develop three separate emails to go along with each subject line.”
  • “Create variations for recipients who already know our company and those who don’t.”

Feeding the model broad tasks tends to produce generic responses that only get you partway there. Breaking the prompts into smaller sections, however, promotes better interpretation and allows for greater specificity.  
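This break-it-up approach can be sketched as prompt chaining: each sub-task is sent separately, and earlier answers are carried forward as context for the next. The `call_model` function is a stand-in, a placeholder for a real LLM call, and its canned output is purely illustrative.

```python
# Prompt-chaining sketch: run sub-tasks in order, feeding each answer back
# into the context for the next prompt.

def call_model(prompt: str) -> str:
    # Placeholder: a real implementation would call an LLM API here.
    return f"[model response to: {prompt[:40]}...]"

subtasks = [
    "Write three email subject lines for a direct marketing campaign "
    "for our latest financial services offering.",
    "Develop three separate emails to go along with each subject line.",
    "Create variations for recipients who already know our company and those who don't.",
]

context = ""
responses = []
for task in subtasks:
    prompt = f"{context}\n\n{task}".strip()  # include prior answers as context
    answer = call_model(prompt)
    responses.append(answer)
    context += f"\n\nPrevious answer: {answer}"
```

Chaining keeps each request narrow while still letting later steps build on earlier output.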

4. Enter Thorough Prompts

You may think that an AI model will understand a simple command, but sometimes it’s OK to be more thorough. 

Consider the command, “Make my article more reader-friendly.” This is a very open-ended statement that contains multiple interpretations. To help out the language model, you could say, “Make my article easier to read by shortening it from 1,500 words to 800 words and replacing all words longer than four syllables with shorter words.” 

You may be looking for a specific answer or approach, so be sure to communicate this to the model. Otherwise, it may misinterpret your request or come up with a solution that you didn’t have in mind.  

5. Provide Detailed Scenarios

Coming up with scenarios is an easy way to infuse your prompt with more details while giving an AI model more context. 

Take the question, “What are top tourist destinations in Chicago?” You can get more specific by painting a clearer picture: “Imagine you’ve just arrived in Chicago and know nothing about the city. What are some places you would visit to learn more about Chicago?”

This technique is similar to creating personas since it asks the AI model to place itself in a unique situation and tailor its responses to a set of circumstances. So, if you’re looking to narrow down a model’s response, crafting a scenario is a good next step.   

6. Use Plain Language

Like most humans, language models don’t respond well to industry-specific jargon and can be confused by overly academic phrasing. 

For example, don’t say, “Juxtapose HTML and CSS, then elaborate on their most important differences.” Simplify this sentence to: “Explain the main differences between HTML and CSS.” 

Being as straightforward as possible makes it easier for the model to process the request. 

7. Clarify What to Do Instead of What Not to Do 

Another way to make prompts more digestible is to use only positive phrases. Inserting negative phrases and commands into the prompt makes it more ambiguous and forces an AI model to work harder when processing the request. 

Consider the phrase, “Don’t use academic terms.” This can be difficult for an AI model to understand. Here’s a more direct version: “Please use simple language.” 

8. Ask the Model to Cite Sources

AI models are known to sometimes produce faulty answers, and a major reason for this is their tendency to hallucinate. But an effective method for countering this habit is to ask a model to list its sources. 

A request that asks for sources could look like this: “List the benefits of getting seven hours of sleep per night, and cite your sources.” 

This way, you can easily view where the model is getting its information from and verify whether its response is accurate.  

9. Experiment With Different Prompts

Explore various approaches to find the right combination that brings out the output you’re seeking and enhances your experience with generative AI. Get creative and adapt the prompts to suit your specific needs. Sometimes even the order in which you ask questions matters; information at the beginning or end of a prompt can carry more weight in the model’s output. 

For example, you can add background information, expand on the problem at hand or list categories of expected outcomes. The best prompts are rarely a single sentence with a simple request; some of the best run three pages long.  

10. Make the Output Your Own 

Take a break from relying solely on generative AI and add your personal touch to the output. While it’s always necessary to have a human fact-check the results, it’s equally important to remember that generative AI simply reproduces a consensus of what has been seen before. It can provide a helpful starting point, but it cannot replace a human when it comes to capturing nuances, infusing personality or providing insightful perspectives (at least not yet).

Another tip: take the output from your generative AI tool, then put it back in as part of a prompt, and then ask it to summarize your prompt (or change the tone or make it in list format). We’ve only got so much room here, but the options are truly endless.
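That feed-it-back tip can be sketched as a small refinement helper: the previous output becomes part of the next prompt, together with a follow-up instruction such as "summarize" or a tone change. The draft text and wording are invented for the example.

```python
# Iterative-refinement sketch: wrap an earlier output in a new prompt that
# asks the model to transform it.

def refine_prompt(previous_output: str, follow_up: str) -> str:
    """Embed an earlier model output in a follow-up request."""
    return (
        f"Here is a draft you produced earlier:\n\n{previous_output}\n\n"
        f"Now {follow_up}"
    )

draft = "Our new keyboard combines long battery life with a compact layout..."
next_prompt = refine_prompt(draft, "summarize it as a bulleted list in a casual tone.")
```

Each round of this loop gives you another chance to steer tone, length or format without starting from scratch.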


 

Applications of Prompt Engineering

There are many use cases and applications of prompt engineering that can make our lives easier, more efficient and more productive. 

Programming

With prompt engineering, you can harness AI to assist you in crafting code, troubleshooting errors and enhancing your programming. Imagine effortlessly creating code suggestions, catching bugs and even having virtual code companions that brainstorm solutions with you. Prompt engineering allows you to level up your coding skills by tapping into a wellspring of AI-powered insights and guidance. 

Customer Service

Prompt engineering also enables you to analyze text and extract valuable information. Want to know what customers really think about your product? Simply prompt your AI and it will conjure sentiment analysis that deciphers the emotional undertones of their feedback. With natural language processing, you can also uncover hidden patterns, detect anomalies, and unlock the power of language in various applications from chatbots to customer support. 

Content Writing

Whether you’re a writer, marketer or content strategist, prompt engineering can also be your secret weapon. Need a captivating headline? A compelling introduction? Provide a detailed prompt and receive crafted, eloquent and attention-grabbing prose. With the power of prompt engineering, you can streamline your content creation process and spark fresh ideas. 

Data Analysis

Prompt engineering empowers you to transform raw data into meaningful insights. You can explore vast data sets, uncover correlations, make predictions and drive data-informed decisions. Generative AI isn’t a replacement for traditional data science and machine learning techniques, but it surely is supplemental for feature development and initial exploratory data analysis. From market research to forecasting, prompt engineering equips you to unlock the hidden insights buried within your data. 

Healthcare

Language models can assist healthcare professionals by creating reports based on patient records and other health data. Teams can also train AI models by feeding them different scenarios in which patients have various illnesses. Models can then combine this knowledge with a patient’s symptoms to determine a diagnosis and accelerate the treatment process.

 

What Does a Prompt Engineer Do?

A prompt engineer creates and refines the questions, statements and processes, whether written text, code or API calls, that improve the output of generative AI tools like ChatGPT. The outputs of generative AI are only as strong as the inputs, which is why companies are looking to hire, train or upskill prompt engineers to get the most business value out of AI technologies. 

Prompt engineers aren’t the traditional engineers we think of; they wear three hats: part coder, part psychologist and part writer. The skills needed aren’t the same as those of a software engineer or developer. The role requires the ability to think like a human and coax the best results out of a large language model (LLM), since LLMs are trained on human-made material and content. Understanding the business problem and the types of personas needed to produce the desired outputs is essential to being a good prompt engineer.

When you’re an effective prompt engineer, you can significantly boost the capabilities of generative AI and return better outcomes. That means more accuracy and focus on the specific task at hand. Skilled prompt engineers consider factors such as the context, tone of voice, target audience and relevant examples. They even provide specific formatting guidance and clear directions, ensuring you get the best possible results.

You do not have to officially hold the title of “prompt engineer” to practice prompt engineering and do it effectively.

Frequently Asked Questions

What is prompt engineering?

Prompt engineering is the practice of creating and tailoring input prompts or instructions to guide a language model to produce a desired response. For example, feeding a model more data or contextual information can elicit a response that is more relevant to a specific situation.

What are the key elements of a prompt?

The instructions, context, input data and output indicator are the key elements of a prompt.

What does a prompt engineer do?

A prompt engineer uses written text or code to develop and refine questions, statements and other inputs that are fed to generative AI models. The goal is to guide these models to produce more desirable outputs, improving their performance over time.
