Introduction | Yext Hitchhikers Platform
Generative AI Models (aka Large Language Models) are powerful tools used to generate content; in the context of Yext, these models can be used to fill information gaps and streamline workflows. However, harnessing the full power of these models for business use cases requires an understanding of how to interact with them to obtain desirable results.
These generative models ingest natural language instructions, which are referred to as prompts, to output content. A well-crafted prompt is key to generating content that’s as close as possible to your expectations. The craft of writing these prompts is referred to in the industry as “prompt engineering”.
Prompts are used to “program” the model to complete a designated task. You can think of these prompts as a set of instructions you’d provide any human such that they could successfully fulfill the request.
Though the terminology can be intimidating, learning to “engineer” good prompts is relatively straightforward. This guide will walk through some strategies for crafting great prompts, focused specifically on the use cases that are relevant when leveraging Content Generation within Yext.
At a high level, the more guidance and information provided in the prompt, the more tailored the output will be. Additionally, we have identified some techniques, such as the inclusion of certain phrases and formats, which we recommend implementing to improve your results.
However, even with clear instructions, there is still no guarantee that the model will reliably “listen.” These models are complex and unpredictable. The recommended strategies outlined below are intended to help control the output, but there is no way to completely mitigate unwanted results. For this reason, we recommend assessing how much content quality matters for your use case, and including humans in the loop whenever it is important enough. It’s also important to note that these models are non-deterministic, meaning the generated content can change each time the model is run, even when the prompt remains the same.
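That non-determinism comes from how these models generate text: at each step, the next word is sampled from a probability distribution rather than always chosen as the single most likely option. The toy sketch below (not Yext or GPT-3 code; the candidate words and scores are made up for illustration) shows how the same “prompt” can yield different words on different runs:

```python
import math
import random

def sample_next_word(scores, temperature=1.0, rng=random):
    """Sample a word from model scores; higher temperature = more randomness."""
    scaled = [s / temperature for s in scores.values()]
    m = max(scaled)                                # subtract max for numeric stability
    exps = [math.exp(s - m) for s in scaled]
    probs = [e / sum(exps) for e in exps]          # softmax over scaled scores
    return rng.choices(list(scores), weights=probs, k=1)[0]

# Hypothetical scores a model might assign to candidate words after "The store is"
scores = {"open": 2.0, "closed": 1.0, "nearby": 0.5}

# Two runs with the same prompt and scores can produce different continuations
print(sample_next_word(scores))
print(sample_next_word(scores))
```

Because the output is sampled, rerunning a generation with an identical prompt can legitimately produce different content each time.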
When is Prompt Engineering Required?
Content Generation is available as a Yext Content feature through Computed Field Values. When leveraging Content Generation through Computed Field Values, there are two scenarios in which prompt engineering is important:
- Task-specific Computations (Pre-built Prompts): leverage pre-built prompts for common tasks, optionally extending them with custom additional instructions.
- Custom Generative Computation: write your own entirely custom prompts to populate field values. The following guide will walk you through best practices for crafting these prompts to achieve the best results.
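As a sketch of what a custom prompt might look like, the snippet below assembles one from hypothetical entity data (the field names and wording are illustrative only, not a Yext API). It follows the principle above: spell out the task, the facts to use, and the desired format.

```python
# Hypothetical entity data; these field names are illustrative, not real Yext fields
entity = {
    "name": "Joe's Pizza",
    "city": "Brooklyn",
    "specialties": ["wood-fired pizza", "homemade pasta"],
}

# The prompt states the task, the facts to draw on, and the expected format --
# the more guidance it contains, the more tailored the output tends to be
prompt = (
    f"Write a two-sentence business description for {entity['name']}, "
    f"a restaurant in {entity['city']}. "
    f"Mention these specialties: {', '.join(entity['specialties'])}. "
    "Use a friendly, professional tone and do not invent details."
)
print(prompt)
```

Note the explicit length, tone, and “do not invent details” instructions: constraints like these are the kinds of phrases that help steer the model toward usable output, though, as noted above, they cannot guarantee it.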
The following definitions were actually generated using GPT-3!
| Term | Definition |
| --- | --- |
| Language Model | A probabilistic model used to predict the probability of the next word in a sequence of words. |
| GPT-3 | A large language model (developed by OpenAI) that can generate human-like text. |
| Prompt Engineering | The process of creating prompts to be used with language models in order to generate desired outputs. |
| Hallucination | The generation of content that appears to be realistic but is actually created by the AI with no real-world basis. It is often nonsensical and frequently occurs when the AI has been prompted with inadequate information. |
Click Next to understand the recommended steps to prompt generative models.