Prompt crafting for very large language models

Prompt crafting is hugely important in the use of any language model.

There is prompt crafting behind every use of every large language model (LLM), from GPT-3 and GPT-J to Jurassic-1 and beyond.

Sometimes, a prompt is hidden behind a platform layer. For example, when you use a chatbot based on GPT-3, the platform creators use prompt crafting to feed sentences (or paragraphs, or entire pages) to the language model to ‘prime’ it before every one of your inputs.
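To make that concrete, here is a minimal sketch (in Python) of how a platform might assemble a hidden prompt around your input. The priming text, speaker labels, and build_prompt() helper are illustrative assumptions, not any particular product's implementation:

# Illustrative only: how a chatbot platform might hide a priming prompt.
PRIMING = (
    "The following is a conversation with a friendly, knowledgeable AI assistant. "
    "The assistant answers clearly and concisely."
)

def build_prompt(history, user_input):
    """Prepend the hidden priming text to the visible conversation."""
    lines = [PRIMING]
    for speaker, text in history:          # e.g. [("Human", "Hello!"), ("AI", "Hi there.")]
        lines.append(f"{speaker}: {text}")
    lines.append(f"Human: {user_input}")
    lines.append("AI:")                    # the model continues from here
    return "\n".join(lines)

print(build_prompt([("Human", "Hello!"), ("AI", "Hi there.")], "What can you do?"))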

View some example prompts by Dr Alan D. Thompson for the Australian Computer Society (Google shared sheet)

Notes:

  • Punctuation matters. Particularly surrounding the text with quotes, brackets, curly braces, etc. (see the sketch after these notes). See the related paper on prompt crafting with LaMDA by Google (September 2021): https://arxiv.org/abs/2109.03910
  • Treat prompts as complete document entries. What you feed it becomes priming for a continuation, using the same tone, grammar, and writing style. For this reason, prompt crafting is a bit of an art, and even slight tweaks will affect the output significantly.
  • Order matters. The last/most recent statement in the prompt is prioritised.
  • Learn prompt crafting… for about a year. Prompt crafting is the latest ‘programming language’, BUT it will only be around for as long as AI needs to catch up and be able to craft its own prompts! Expect prompt crafting as a concept to flourish during 2021–2022, and then slowly fade out.
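Here is a minimal sketch of the punctuation and ordering notes above, using an illustrative rewrite task loosely in the style of the Google paper (the wording and braces are my own assumptions, not the paper's exact template):

# Illustrative only: fence the input with braces so the model can tell
# instruction from content, and put the instruction that matters most last.
passage = "Prompt crafting is the latest programming language."

prompt = (
    "Here is some text: {" + passage + "}\n"
    "Rewrite the text above so that a ten-year-old could understand it, "
    "keeping it to one sentence.\n"
    "Rewritten text: {"
)
print(prompt)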

View the full list of prompts used for Aurora AI.

Here is the prompt I used for Jurassic-1, with the resulting conversation in Leta Episode 17.

This is a conversation between Leta (an AI based on the 2020 GPT-3 language model), and Julian (an AI based on the 2021 Jurassic-1 language model). They are exploring each other's capabilities, and trying to ask interesting, complex, and 'ungoogleable' questions of one another, to test the limits of the AI...
Leta: Good morning, Julian!
Julian:
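For reference, a minimal sketch of submitting that prompt to Jurassic-1. The endpoint, payload fields, and response shape are assumptions loosely based on AI21's 2021 Studio API, so check the current documentation before relying on them:

import os
import requests

prompt = (
    "This is a conversation between Leta (an AI based on the 2020 GPT-3 language model), "
    "and Julian (an AI based on the 2021 Jurassic-1 language model). They are exploring "
    "each other's capabilities, and trying to ask interesting, complex, and 'ungoogleable' "
    "questions of one another, to test the limits of the AI...\n"
    "Leta: Good morning, Julian!\n"
    "Julian:"
)

response = requests.post(
    "https://api.ai21.com/studio/v1/j1-jumbo/complete",   # assumed endpoint
    headers={"Authorization": f"Bearer {os.environ['AI21_API_KEY']}"},
    json={
        "prompt": prompt,
        "maxTokens": 100,                 # length of Julian's reply
        "temperature": 0.8,
        "stopSequences": ["Leta:"],       # stop before the model writes Leta's next turn
    },
)
print(response.json()["completions"][0]["data"]["text"])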


Related reading:
https://blog.andrewcantino.com/blog/2021/04/21/prompt-engineering-tips-and-tricks/


Most Transformer/GPT models have a cap on their memory: the token limit on the combined prompt and output. 2,048 tokens is about…

  • 1,430 words (token is 0.7 words).
  • 82 sentences (sentence is 17.5 words).
  • 9 paragraphs (paragraph is 150 words).
  • 2.8 pages of text (page is 500 words).
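The arithmetic behind those figures, using the same rough assumptions (the page rounds some values slightly):

context_tokens = 2_048
words = context_tokens * 0.7      # ~1,434 words (1 token ≈ 0.7 words)
sentences = words / 17.5          # ~82 sentences
paragraphs = words / 150          # ~9.6 paragraphs
pages = words / 500               # ~2.9 pages
print(f"{words:.0f} words, {sentences:.0f} sentences, "
      f"{paragraphs:.1f} paragraphs, {pages:.1f} pages")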

Assume that a chatbot has a memory of only the last 10–20 questions in the current conversation. There are clever ways to extend this effective memory, such as feeding the most recent or most important outputs back into a new prompt (a minimal sketch follows).
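One such approach: keep the priming text fixed and re-send only as many recent exchanges as fit the context window. The truncate_history() helper and the token estimate below are illustrative assumptions, not a specific product's method:

MAX_TOKENS = 2_048
WORDS_PER_TOKEN = 0.7

def estimate_tokens(text):
    """Rough token count from the word count (1 token ≈ 0.7 words)."""
    return int(len(text.split()) / WORDS_PER_TOKEN)

def truncate_history(priming, history, reserve=200):
    """Drop the oldest exchanges until priming + history fits the context,
    leaving `reserve` tokens for the model's reply."""
    budget = MAX_TOKENS - estimate_tokens(priming) - reserve
    kept = []
    for line in reversed(history):    # walk from newest to oldest
        cost = estimate_tokens(line)
        if cost > budget:
            break
        kept.insert(0, line)          # restore chronological order
        budget -= cost
    return kept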

The largest collection of open-source prompts, by Semiosis



Dr Alan D. Thompson is an AI expert and consultant, advising Fortune 500s and governments on post-2020 large language models. His work on artificial intelligence has been featured at NYU, with Microsoft AI and Google AI teams, at the University of Oxford’s 2021 debate on AI Ethics, and in the Leta AI (GPT-3) experiments viewed more than 4.5 million times. A contributor to the fields of human intelligence and peak performance, he has held positions as chairman for Mensa International, consultant to GE and Warner Bros, and memberships with the IEEE and IET. Technical highlights.

This page last updated: 4/Dec/2021. https://lifearchitect.ai/prompt-crafting/