Prompt Engineering for CX

Posted Tuesday, September 19, 2023 by Jesse Breuer in AI, AI in CX

If your CX technical support staff is getting up to speed on using generative AI to improve the tools available to your customer support professionals, here's a short list of resources we recommend for them. In this article we explore the online resources available and examine approaches to effective generative AI prompt writing.

Useful Resources for Mastering Prompt Engineering

Here's a short, annotated list of resources and tools.

OpenAI Playground

Brought to you by the makers of ChatGPT, this is probably the most robust resource for learning about various AI models and practicing prompt writing, using models optimized for different purposes. It is "sort of" free to use initially: there is a cost associated with using each model, but a credit is applied when you first sign up. We will explore this one in depth later.

Learn Prompting

This is a free course. Some of the modules include:

Applied Prompting: Comprehensive Prompt Engineering process walkthroughs contributed by community members

Reliability: Enhancing the reliability of large language models (LLMs)

Image Prompting: Prompt engineering for text-to-image models, such as DALL-E and Stable Diffusion

Prompt Hacking: Hacking, but for prompt engineering

Tooling: A review of various prompt engineering tools and IDEs

Prompt Tuning: Refining prompts using gradient-based techniques

GitHub's Prompt Engineering Guide

Developers already know GitHub as the place to host code repositories using the leading version control system, Git. However, GitHub is also home to documentation, and this guide is a great jumping-off point.

GitHub's "Awesome ChatGPT Prompts"

Basically, it's a resource for sharing prompt-writing experiences, examples, and results.

ShareGPT: a Google Chrome extension

This extension is designed to allow quick sharing of ChatGPT prompts and their output with other users, so you can learn from their successes and mistakes. It can be found in the Chrome Web Store, under extensions.


This is a collection of prompts and apps. Prompts are sorted by category, such as Education or Creative Writing, and also by profession, such as Designers, Developers, or Musicians.

Emergent Mind

A collection of AI-related news and trends, updated daily.

Techniques of Prompt Engineering

These are the techniques your team should know how to use.

Role prompting

Specifying a persona for the model to emulate, such as "you are a customer service representative at our company" or "you are an inside sales representative talking with prospective new customers for a product in the fast food industry."

This sets a tone, not only for grammar but possibly also for which facts the model considers necessary to include in its output.
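In practice, role prompting usually means placing the persona in a "system" message at the top of a chat-style request. Here's a minimal sketch; the persona wording and the helper function name are our own illustration, not a fixed API:

```python
# Minimal sketch of role prompting: the persona goes in a "system"
# message, followed by the actual customer question. The role text
# below is a hypothetical example.

def build_role_prompt(role_description, user_question):
    """Assemble a chat-style message list that sets a persona."""
    return [
        {"role": "system", "content": role_description},
        {"role": "user", "content": user_question},
    ]

messages = build_role_prompt(
    "You are a customer service representative at our company. "
    "Answer politely and only about our products and services.",
    "Where do you have service centers in North America?",
)
```

The same message list can then be passed to whichever chat model your team is experimenting with; only the system message changes as you try different personas.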

Zero-shot prompting

A question posed with no example of the expected output. Responses may lack detail or structure unless a great deal of context is provided in the question. It is essentially using the model as an autocomplete engine.

N-shot prompting

For this, the number of "shots" refers to how many examples are provided; it is sometimes called "one-shot" or "few-shot" prompting. The examples give the model a reference for tone and length.

Here is an example of a zero-shot, no-role prompt: Where do you have service centers in North America?

On submit: We have service centers in Newark, New Jersey; Denver, Colorado; and Oakland, California.

One-shot, with role prompt (text-davinci-003 model)

You are an answering bot, giving the most concise answers possible

Q: What is the average life expectancy, in the United States, of a 2012 Ford Taurus?

A: 10 years

Q: What are the hours for phone support?

A: [left blank]

On submit: A: 7am to 8pm EST
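The one-shot prompt above can be assembled programmatically, which makes it easy to swap in more example pairs as your team collects good Q/A demonstrations. A minimal sketch (the helper name is our own):

```python
# Sketch of assembling an N-shot completion prompt: an instruction,
# N example Q/A pairs, and a final question whose "A:" is left blank
# for the model to complete. The example pair mirrors the one-shot
# prompt shown above.

def build_n_shot_prompt(instruction, examples, question):
    lines = [instruction, ""]
    for q, a in examples:
        lines.append(f"Q: {q}")
        lines.append(f"A: {a}")
    lines.append(f"Q: {question}")
    lines.append("A:")  # left blank for the model to fill in
    return "\n".join(lines)

prompt = build_n_shot_prompt(
    "You are an answering bot, giving the most concise answers possible",
    [
        (
            "What is the average life expectancy, in the United States, "
            "of a 2012 Ford Taurus?",
            "10 years",
        ),
    ],
    "What are the hours for phone support?",
)
```

Adding a second or third example pair to the list turns this into a few-shot prompt with no other changes.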

Settings in the OpenAI Playground

Model selection: It is not always necessary to use the most advanced generative AI for a task. text-curie-001 is cheaper to use than text-davinci-003, for example, and some models are better suited to generating code than to conversation.

Temperature: controls the "creativity," or randomness, of the output. A higher temperature yields more diverse output; a lower temperature gives more deterministic, focused results.

Max length: sets an upper bound on the length of the response, controlling how concise or detailed it can be.
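These Playground settings map directly onto request parameters in the OpenAI completions API. Below is a sketch of what that request looks like; building the parameter dictionary needs no API key, while actually sending it would require the openai client library and an account. The specific values chosen are illustrative, not recommendations:

```python
# Sketch of the request parameters the Playground settings correspond
# to. Model, temperature, and max_tokens values here are examples.

params = {
    "model": "text-curie-001",  # cheaper than text-davinci-003
    "prompt": "Q: What are the hours for phone support?\nA:",
    "temperature": 0.2,         # low temperature -> more deterministic output
    "max_tokens": 64,           # caps the length of the response
}

# With the openai library, this would be sent along the lines of:
#   import openai
#   response = openai.Completion.create(**params)
```

Raising temperature toward 1.0 and max_tokens upward is how you would push the same request toward longer, more varied answers.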

Overall, the settings and models available in the OpenAI Playground provide an excellent slate for experimenting with how to modify output, select an LLM, and set the tone and length of responses.