Prompt crafting for very large language models

Prompt crafting is hugely important when using any language model.

There is prompt crafting behind every use of every large language model (LLM), from GPT-3 and GPT-J to Jurassic-1 and beyond.

Sometimes, a prompt is hidden behind a platform layer. For example, when you use a chatbot based on GPT-3, the platform creators use prompt crafting behind the scenes, feeding sentences (or paragraphs, or entire pages) to the language model to ‘prime’ it before every one of your inputs.
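As a minimal sketch of that idea (the primer text, function name, and conversation format here are invented for illustration), a platform layer might assemble the hidden prompt around each user message like this, in Python:

# Illustrative sketch only: a hypothetical chatbot layer that hides a primer from the user.
HIDDEN_PRIMER = (
    "The following is a conversation with a friendly, knowledgeable AI assistant.\n"
    "The assistant answers concisely and politely.\n\n"
)

def build_chat_prompt(history: list[str], user_input: str) -> str:
    """Prepend the hidden primer and prior turns to the user's latest message."""
    prior = "\n".join(history)
    return f"{HIDDEN_PRIMER}{prior}\nHuman: {user_input}\nAI:"

# The platform sends the assembled prompt to the language model;
# the user only ever sees their own message and the reply.
print(build_chat_prompt(["Human: Hi!", "AI: Hello! How can I help?"], "What is GPT-3?"))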

View some example prompts by Dr Alan D. Thompson for the Australian Computer Society (Google shared sheet)

Notes:

  • Punctuation matters, particularly surrounding the text with quotes, brackets, curly braces, and so on. See the related paper on prompt crafting for LaMDA by Google (September 2021): https://arxiv.org/abs/2109.03910
  • Treat prompts as complete document entries. Whatever you feed the model becomes priming for a continuation in the same tone, grammar, and writing style. For this reason, prompt crafting is a bit of an art, and even slight tweaks will affect the output significantly.
  • Order matters. The last/most recent statement in the prompt is prioritised (see the sketch after this list).
  • Learn prompt crafting… for about a year. Prompt crafting is the latest ‘programming language’, BUT it will only be around until AI catches up and can craft its own prompts! Expect prompt crafting as a concept to flourish in 2021–2022, and then slowly fade out.
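Here is a small sketch tying these notes together (the instruction and passage are invented examples): quotes delimit the text to act on, the prompt reads like a complete document, and the line we want the model to continue from comes last.

# Illustrative only: punctuation marks the boundaries of the input,
# and the final line is what the model continues from.
def rewrite_prompt(passage: str) -> str:
    return (
        "Rewrite the passage below in plain English.\n\n"
        f'Passage: "{passage}"\n\n'
        "Plain English rewrite:"
    )

print(rewrite_prompt("The aforementioned remuneration shall be disbursed forthwith."))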

View the full list of prompts used for Aurora AI.

Here is the prompt I used for Jurassic-1, with the resulting conversation in Leta Episode 17.

This is a conversation between Leta (an AI based on the 2020 GPT-3 language model), and Julian (an AI based on the 2021 Jurassic-1 language model). They are exploring each other's capabilities, and trying to ask interesting, complex, and 'ungoogleable' questions of one another, to test the limits of the AI...
Leta: Good morning, Julian!
Julian:
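For completeness, here is a sketch of how a prompt like this could be sent to a completion endpoint. It is shown against the 2021-era OpenAI Python library purely as an illustration; the actual episode used AI21's Jurassic-1, whose client differs, and the parameter values are indicative only.

import openai  # 2021-era OpenAI Python library; requires OPENAI_API_KEY to be set

prompt = (
    "This is a conversation between Leta (an AI based on the 2020 GPT-3 language model), "
    "and Julian (an AI based on the 2021 Jurassic-1 language model). They are exploring "
    "each other's capabilities, and trying to ask interesting, complex, and 'ungoogleable' "
    "questions of one another, to test the limits of the AI...\n"
    "Leta: Good morning, Julian!\n"
    "Julian:"
)

response = openai.Completion.create(
    engine="davinci",     # base GPT-3 model available in 2021
    prompt=prompt,
    max_tokens=60,
    temperature=0.8,
    stop=["Leta:"],       # stop before the model starts writing Leta's next line
)
print("Julian:" + response.choices[0].text)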


Related reading:
https://blog.andrewcantino.com/blog/2021/04/21/prompt-engineering-tips-and-tricks/


Most Transformer/GPT models have a cap on their memory: a token limit that bounds the prompt and the output combined. 2,048 tokens is about… (a quick arithmetic check follows the list)

  • 1,430 words (a token is about 0.7 words).
  • 82 sentences (a sentence is about 17.5 words).
  • 9 paragraphs (a paragraph is about 150 words).
  • 2.8 pages of text (a page is about 500 words).
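The arithmetic behind these bullets is simple; here is the back-of-envelope version in Python. The conversion factors are the same rough estimates used above, and the bullet figures round the results loosely.

TOKEN_LIMIT = 2048
WORDS_PER_TOKEN = 0.7        # rough average for English text
WORDS_PER_SENTENCE = 17.5
WORDS_PER_PARAGRAPH = 150
WORDS_PER_PAGE = 500

words = TOKEN_LIMIT * WORDS_PER_TOKEN
print(f"{words:,.0f} words")                            # ~1,434 ("about 1,430")
print(f"{words / WORDS_PER_SENTENCE:.1f} sentences")    # ~81.9 ("about 82")
print(f"{words / WORDS_PER_PARAGRAPH:.1f} paragraphs")  # ~9.6 ("about 9")
print(f"{words / WORDS_PER_PAGE:.1f} pages")            # ~2.9 ("about 2.8")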

Assume that a chatbot has a memory of only the last 10–20 questions in the current conversation. There are clever ways to extend this memory, for example by feeding the most recent or most important outputs back into a new prompt (a sketch follows).
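Here is a minimal sketch of that rolling-memory idea. The function names, the reserved-reply budget, and the crude words-per-token estimate are invented for illustration; a real implementation would count tokens with the model's own tokenizer.

TOKEN_LIMIT = 2048
RESERVED_FOR_REPLY = 300      # leave room for the model's answer

def estimate_tokens(text: str) -> int:
    # Rough estimate: about 0.7 words per token, so tokens ≈ words / 0.7.
    return int(len(text.split()) / 0.7)

def build_rolling_prompt(primer: str, turns: list[str]) -> str:
    """Keep the primer, then add the most recent turns that still fit the budget."""
    budget = TOKEN_LIMIT - RESERVED_FOR_REPLY
    kept = []
    used = estimate_tokens(primer)
    for turn in reversed(turns):          # walk from newest to oldest
        cost = estimate_tokens(turn)
        if used + cost > budget:
            break                         # older turns are dropped from memory
        kept.append(turn)
        used += cost
    return primer + "\n" + "\n".join(reversed(kept))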

The largest collection of open-source prompts, by Semiosis



This page last updated: 4/Dec/2021. https://lifearchitect.ai/prompt-crafting/