Prompt crafting for very large language models

Prompt crafting is hugely important in the use of any language model.

There is prompt crafting behind every use of every large language model (LLM), from GPT-3 and GPT-J to Jurassic-1 and beyond.

Sometimes, a prompt is hidden behind a platform layer. For example, when you use a chatbot based on GPT-3, the platform creators use prompt crafting to feed sentences (or paragraphs, or entire pages) to the language model, ‘priming’ it before each of your inputs.

View some example prompts by Dr Alan D. Thompson for the Australian Computer Society (Google shared sheet)

Notes:

  • Punctuation matters, particularly surrounding text with quotes, brackets, curly braces, and so on. See the related paper on prompt crafting for LaMDA by Google (September 2021): https://arxiv.org/abs/2109.03910
  • Treat prompts as complete document entries. What you feed it becomes priming for a continuation, using the same tone, grammar, and writing style. For this reason, prompt crafting is a bit of an art, and even slight tweaks will affect the output significantly.
  • Order matters. The last/most recent statement in the prompt is prioritised.
  • Learn prompt crafting… for about a year. Prompt crafting is the latest ‘programming language’, BUT it will only be around for as long as AI needs to catch up and be able to craft its own prompts! Expect prompt crafting as a concept to flourish between 2021 and 2022, and then slowly fade out.

View the full list of prompts used for Aurora AI.

Here is the prompt I used for Jurassic-1, with the resulting conversation in Leta Episode 17.

This is a conversation between Leta (an AI based on the 2020 GPT-3 language model), and Julian (an AI based on the 2021 Jurassic-1 language model). They are exploring each other's capabilities, and trying to ask interesting, complex, and 'ungoogleable' questions of one another, to test the limits of the AI...
Leta: Good morning, Julian!
Julian:
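Programmatically, a priming prompt like the one above is just a string: the scene-setting paragraph, the conversation so far, and a trailing ‘Julian:’ cue for the model to complete. A minimal sketch in plain Python (no particular API assumed; the `make_prompt` helper is illustrative, not from the episode):

```python
# Assemble the Leta/Julian priming prompt as a single string.
# The model is asked to continue after the final "Julian:" cue;
# generation would typically be cut off at the next "Leta:" turn
# (a so-called stop sequence).

PRIMER = (
    "This is a conversation between Leta (an AI based on the 2020 GPT-3 "
    "language model), and Julian (an AI based on the 2021 Jurassic-1 "
    "language model). They are exploring each other's capabilities, and "
    "trying to ask interesting, complex, and 'ungoogleable' questions of "
    "one another, to test the limits of the AI..."
)

def make_prompt(turns):
    """turns: list of (speaker, text) pairs; the prompt ends with 'Julian:'."""
    lines = [PRIMER, ""]
    lines += [f"{speaker}: {text}" for speaker, text in turns]
    lines.append("Julian:")  # the model completes from here
    return "\n".join(lines)

prompt = make_prompt([("Leta", "Good morning, Julian!")])
print(prompt)
```

Because the model continues the document in the same tone and format, ending on the speaker cue is what makes it answer as Julian rather than narrate.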


Related reading:
https://blog.andrewcantino.com/blog/2021/04/21/prompt-engineering-tips-and-tricks/


Most Transformer/GPT models have a cap on their memory: the context window, which limits the tokens available for input and output. 2,048 tokens is about…

  • 1,430 words (a token is about 0.7 words).
  • 82 sentences (a sentence is about 17.5 words).
  • 9 paragraphs (a paragraph is about 150 words).
  • 2.8 pages of text (a page is about 500 words).
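The conversions above are simple arithmetic on the stated ratios; a quick sketch (the ratios are the article’s approximations, not exact values, and the results differ from the list only by rounding):

```python
# Rough conversions for a 2,048-token context window, using the
# article's approximate ratios (not exact tokenizer figures).

TOKENS = 2048
WORDS_PER_TOKEN = 0.7
WORDS_PER_SENTENCE = 17.5
WORDS_PER_PARAGRAPH = 150
WORDS_PER_PAGE = 500

words = TOKENS * WORDS_PER_TOKEN           # ~1,434 (article rounds to 1,430)
sentences = words / WORDS_PER_SENTENCE     # ~81.9
paragraphs = words / WORDS_PER_PARAGRAPH   # ~9.6
pages = words / WORDS_PER_PAGE             # ~2.9

print(f"{words:.0f} words, {sentences:.1f} sentences, "
      f"{paragraphs:.1f} paragraphs, {pages:.1f} pages")
```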

Assume that a chatbot has a memory of only the last 10–20 questions in the current conversation. There are clever ways to extend this effective memory by feeding the last, or most important, outputs back into a new prompt.


Dr Alan D. Thompson is an AI expert and consultant. With Leta (an AI powered by GPT-3), Alan co-presented a seminar called ‘The new irrelevance of intelligence’ at the World Gifted Conference in August 2021. He has held positions as chairman for Mensa International, consultant to GE and Warner Bros, and memberships with the IEEE and IET. He is open to major AI projects with intergovernmental organisations and impactful companies. Contact.

This page last updated: 26/Sep/2021. https://lifearchitect.ai/prompt-crafting/