Step 1: Prompting Generative Models

Generative models are trained on massive data sets, and their job is to predict text. The more specific guidance you provide in the prompt, the more tailored the output will be to your expectations.

Sometimes you may want the model to leverage its broader knowledge with fewer guidelines; at other times you may want to constrain it. It’s not possible to entirely constrain outputs or to guarantee a desired format, but there are strategies you can use, including:

  1. Providing sufficient context
  2. Providing specific prose instructions

Provide Context

Consider the following two use cases when generating content:

  1. Write a short biography about “Mary Miller - a Yext Employee”
  2. Write a short biography about Madonna

Given the large training set of the model, it’s likely that a fairly accurate biography could be written about Madonna, without providing much context. Though the content might be generic, that might be reasonable for the use case at hand.

However, Mary Miller is not a well-known figure. The model’s training data likely does not contain enough information about her to write a complete biography. It’s also possible that multiple Mary Millers appear in the training data, and the model cannot distinguish between them.

Especially when not enough context is provided, generative models are prone to what’s known as “hallucinating,” or making up content. These models cannot reliably pull content from trusted sources or include specific details, especially about less widely-known information.

Given this, generating Mary Miller’s biography with NO additional information will likely not produce accurate content.

Revisiting the two use cases: generating content about well-known, popular topics (such as a blog on “Healthy Living” or “Fun Activities in NYC”) may not require context for accuracy. The content may be generic, but this approach requires less effort in prompt engineering. However, to generate a biography, product description, or other content that relates to a specific set of information, it’s best to include as many contextual details as possible directly in the prompt.

It’s important to note that if not enough information is provided AND the model is told to NOT hallucinate, the generated responses will likely be repetitive and generic. See the example in Step 2 below for more information on what this means and how to improve your prompts for the best results.

How to Provide Context using Computed Field Values

Utilizing existing information on an entity’s profile is a great way to provide context when computing field values via Content Generation methods. Whether you are adding instructions to a pre-built prompt or crafting a completely custom prompt, you can dynamically pass profile field data to be included in each computation. Embedding the entity’s other field values into the prompt is a great strategy.

For example, say you are computing the value for the “description” field on your Location entities via the “Generate a Business Description” Computation Method. You can add more context by instructing each generated business description to mention the year of establishment, as stored in the field c_yearOfEstablishment. This might look like the following:

Additional Information:
In the opening paragraph of my business description, mention the year of establishment: [[c_yearOfEstablishment]]

Now, the model would receive the sentence above included in the prompt, with the year of establishment dynamically populated for the given entity.
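As a rough illustration (the platform performs this substitution for you; the helper function and entity structure below are hypothetical), the embedded-field pattern behaves like a simple template substitution:

```python
import re

def fill_placeholders(template: str, entity: dict) -> str:
    """Replace each [[fieldName]] placeholder with the matching value
    from the entity profile (empty string if the field is missing)."""
    return re.sub(
        r"\[\[(\w+)\]\]",
        lambda m: str(entity.get(m.group(1), "")),
        template,
    )

# Hypothetical Location entity profile; field names are assumptions
entity = {"name": "Main Street Cafe", "c_yearOfEstablishment": 1998}

prompt_line = (
    "In the opening paragraph of my business description, "
    "mention the year of establishment: [[c_yearOfEstablishment]]"
)

print(fill_placeholders(prompt_line, entity))
```

The same mechanism generalizes to any field on the entity: each placeholder resolves to that entity’s stored value at computation time.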

Provide Prose Instruction

Not only is it important to provide the right set of information to the model; it’s equally important to include any formatting and stylistic guidelines within the prompt. Although it might seem obvious that a biography should not include exclamation points, for example, these models are unpredictable. Guiding them with explicit instructions is the best way to curate high-quality content.

Some examples of formatting guidelines to include are:

  • Setting the desired length
    • Total number of words/characters
    • Number of paragraphs
    • Number of section headers
    • Number of sentences
  • Setting the tone
    • Formal vs. informal
    • Using a positive connotation
  • Including Oxford commas
  • Addressing people only by their last name
  • Including a call to action at the end of the content
  • Never using swear words
  • Where to pull knowledge from (e.g., only use the provided details to write the product description)

The list goes on. It’s likely that your entire list is not obvious at first, but as you start to generate content and see the results, you can continue to modify your instructions to include more and more guidelines.
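Since the guideline list tends to grow over time, one lightweight way to manage it (a sketch, not a platform feature; the guidelines below are examples drawn from the list above) is to keep the rules in a list and render them into the prompt as explicit, numbered instructions:

```python
guidelines = [
    "Write 2 paragraphs, 150-200 words total.",
    "Use a formal tone with a positive connotation.",
    "Include Oxford commas.",
    "Address people only by their last name.",
    "End with a call to action.",
    "Only use the provided details to write the description.",
]

def render_instructions(rules: list) -> str:
    """Number each rule so the model sees separated, explicit instructions
    rather than one run-on sentence."""
    return "\n".join(f"{i}. {rule}" for i, rule in enumerate(rules, start=1))

print(render_instructions(guidelines))
```

Keeping the rules as a list makes it easy to add, remove, or reorder guidelines as you iterate on the generated results.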

Tips on Constructing the Prompt

As mentioned, the more contextual information and detailed instructions you can provide the model, the better. Even so, there is no guarantee that the model will follow every instruction. However, we’ve found the following strategies help.

1. Use Structural and Organizational Patterns

Rather than sending the AI a blob of information, use the following structural and organizational patterns to best prompt the model.

  • Separate Prose Instructions from the Context (specifically, using ### or “”” to signify a separation has shown positive results)
  • Use bullets, numbers, and other formatting mechanisms to explicitly provide the instructions
  • To prompt a specific output format, you can “set up” the AI to complete the prompt.
    • You can even prompt the output by writing the first couple of words or sentences you are expecting, and allow the model to complete the response.


Write a Biography about “Mary Miller - Yext Employee”

# Instructions:
[[Set of Instructions]]

# Resume:
[[Facts about Mary]]

# Output (biography in 200 words):
Stellar employee

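The structure above can be assembled programmatically. This is an illustrative sketch (the function name and section headers are assumptions, not a platform API) showing delimited sections plus a seeded partial output for the model to complete:

```python
def build_prompt(instructions: str, context: str,
                 output_header: str, seed: str = "") -> str:
    """Assemble a prompt with clearly delimited sections, ending with a
    partial output so the model completes it in the expected format."""
    return (
        f"# Instructions:\n{instructions}\n\n"
        f"# Resume:\n{context}\n\n"
        f"# Output ({output_header}):\n{seed}"
    )

prompt = build_prompt(
    instructions="Write a short, formal biography. Do not invent facts.",
    context="Mary Miller - Yext Employee. Joined in 2019. Works on the Content team.",
    output_header="biography in 200 words",
    seed="Stellar employee",
)
print(prompt)
```

Because the prompt ends mid-output, the model’s most natural continuation is the biography itself, in the format the headers imply.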

2. Provide an Example

If you have a good example of the type of content you want the model to generate (e.g. a sample biography or a sample product description), you can include that in the prompt.

Not only can this impact the actual content (e.g., influencing the tone of the outputs), but it can also guide the model to the correct output format (e.g., the number of paragraphs, or whether to provide long-form text vs. a bulleted list).

However, it is important to be wary that the model could copy the example too closely - there is a fine line.
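One way to include an example while guarding against verbatim copying (a sketch under the same assumptions as above; the ### delimiter follows the separation pattern from strategy 1) is to delimit the sample and pair it with an explicit anti-copy instruction:

```python
def add_example(prompt: str, example: str) -> str:
    """Append a delimited sample output plus an explicit instruction
    not to reproduce it verbatim."""
    return (
        f"{prompt}\n\n"
        f"### Example biography:\n{example}\n###\n"
        "Match the example's tone and length, but do not copy it."
    )

base = "Write a short biography using only the details provided above."
sample = "Jones has led the analytics team since 2015, earning a reputation for rigor."
print(add_example(base, sample))
```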

3. Use Explicit, Descriptive Instructions

The more you can use natural language to explain what you want the model to output, the better.

It is also better to be as specific as possible. For example, instructing the AI to write a “3-5 sentence paragraph” is better than “a short paragraph”.

Continuing from the strategy above to include an example, you might want to follow the example with the phrase “Do not copy the example biography.” Even though that may seem obvious, it’s important to explicitly tell the model what NOT to do.

Attempt to Mitigate Hallucinations

Additionally, you can attempt to instruct the model to “not hallucinate” by altering your word choice. For example, if you do not want the model to use information about “Mary Miller - Yext Employee” that was found on the internet, and only information that exists within the confines of the prompt, you might add the following phrase: “Do not use any information that was not provided to you in this prompt”.

However, it’s important to note that if little information is provided in these cases, the resulting content will likely be generic and/or short.

Word Choice

Changing wording even slightly can significantly impact generated content. For example, when asking the model to write a biography, the phrase “Generate a biography” vs. “Rewrite the provided information into a biography” guides the model to NOT hallucinate and explicitly use the given information, rather than pull from its less-reliable knowledge base.

Say “What to do” instead of just “What not to do”

To reiterate, the more instructions provided, the better the model can perform. Although instructing the model “do not use the Author’s first name” is a great detail to add, further instructing what the model should do instead will likely lead to a better output. For example, “Do not use the Author’s first name. Only refer to the author by her last name” can increase the chances that the model will follow the aforementioned instructions.

4. Repeat instructions

Saying the same instruction multiple times can improve the chances the model will follow it. Stating the desired length both in the “Instructions” section and when “prompting” the output has improved the chances of the model following the guidelines.
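Putting the repetition strategy together with the sectioned structure from strategy 1 (again, an illustrative sketch with hypothetical names, not a platform API), the length requirement appears once in the instructions and again at the output header:

```python
def prompt_with_repeated_length(instructions: str, context: str, length: str) -> str:
    """State the length requirement twice: once in the instructions
    section and again when prompting the output."""
    return (
        f"# Instructions:\n{instructions}\nLength: {length}.\n\n"
        f"# Details:\n{context}\n\n"
        f"# Output ({length}):\n"
    )

p = prompt_with_repeated_length(
    instructions="Write a business description in a formal tone.",
    context="Main Street Cafe, established 1998.",
    length="3-5 sentences",
)
print(p)
```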