GPT3 QUINE
With pinocchio, I can write little YAML files that are interpreted as command line applications. When run, these commands do rudimentary template expansion, send the prompt to the OpenAI APIs, and print out the results. As far as tools go, this is one of the simplest I've ever built. It is also one of the more mind-bending ones.
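Under the hood there is really not much to it. Here is a rough sketch of the "send the prompt to the OpenAI APIs" half in Go, assuming the legacy completions endpoint that text-davinci-003 was served from; none of the names below come from pinocchio's actual code.

```go
// Rough sketch of the whole loop: send a prompt to the legacy OpenAI
// completions endpoint and print the text that comes back.
// This is NOT pinocchio's real code, just an illustration of the idea.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

type completionRequest struct {
	Model       string   `json:"model"`
	Prompt      string   `json:"prompt"`
	MaxTokens   int      `json:"max_tokens"`
	Temperature float64  `json:"temperature"`
	Stop        []string `json:"stop,omitempty"`
}

type completionResponse struct {
	Choices []struct {
		Text string `json:"text"`
	} `json:"choices"`
}

func complete(prompt string) (string, error) {
	body, err := json.Marshal(completionRequest{
		Model:       "text-davinci-003",
		Prompt:      prompt,
		MaxTokens:   2048,
		Temperature: 0.7,
		Stop:        []string{"--- END"},
	})
	if err != nil {
		return "", err
	}
	req, err := http.NewRequest("POST", "https://api.openai.com/v1/completions", bytes.NewReader(body))
	if err != nil {
		return "", err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer "+os.Getenv("OPENAI_API_KEY"))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	var out completionResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", err
	}
	if len(out.Choices) == 0 {
		return "", fmt.Errorf("no completion returned")
	}
	return out.Choices[0].Text, nil
}

func main() {
	text, err := complete("Here's how I did Y:\n- something that does Y\n\nNow do X.\n")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(text)
}
```

Everything interesting happens in what you put into the prompt, which is where the YAML files come in.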
I soon realized that most of my prompts ended up being something like this (so-called one-shot or few-shot prompting):
```
Here's how I did Y:
- something that does Y

Now do X.
```
The LLM will (hopefully) complete this prompt with something that does X. The trick is of course knowing which Y and which "something that does Y" to provide, and what X and Y stand for in the first place. Most people doing prompt engineering will know what I am referring to.
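For concreteness, here is a made-up instance of the pattern (the tasks are invented purely for illustration):

```
Here's how I wrote a command that summarizes a git diff:
- <the YAML for the summarize-diff command goes here>

Now write a command that generates a commit message from a git diff.
```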
One of my favourite techniques, once I get a prompt going for a certain domain A (of which Y is an example), is to write the pinocchio program that asks the LLM to generate pinocchio programs for any domain, not just A. You pretty quickly reach the meta-level where domain A is the domain of prompts itself, and you ask a prompt to generate a prompt that generates prompts, at which point you have basically summoned the singularity into being.
To allow everybody to create their own singularity, here is the pinocchio program that generates itself, the so-called GPT3 quine:
```yaml
name: quine
short: Generate yourself!
factories:
  openai:
    client:
      timeout: 120
    completion:
      engine: text-davinci-003
      temperature: 0.7
      max_response_tokens: 2048
      stop: ["--- END"]
      # stream: true
flags:
  - name: example_goal
    short: Example goal
    type: string
    default: Generate a program to generate itself.
  - name: instructions
    type: string
    help: Additional language specific instructions
    required: false
  - name: example
    type: stringFromFile
    help: Example program
    required: true
  - name: goal
    type: string
    help: The goal to be generated
    default: Generate a program to generate itself.
prompt: |
  Write a program by generating a YAML describing a command line application with flags and a prompt template using
  go template expansion.
  The flags are used to interpolate the prompt template.
  Here is an example.
  --- GOAL: {{ .example_goal }}
  --- START PROGRAM
  {{ .example | indent 4 }}
  --- END
  Generate a program to {{ .goal }}.
  {{ if .instructions }}{{ .instructions }}{{ end }}
  --- GOAL: {{ .goal }}
  --- START PROGRAM
```
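The quine trick lives entirely in the prompt template: the example flag has type stringFromFile, so passing --example ./quine.yaml splices the program's own source into the prompt as the one-shot example, and the "--- END" stop sequence cuts the completion off right after the regenerated program. Here is a minimal sketch of that expansion step, using Go's text/template with a stand-in indent helper (whatever helper set pinocchio actually registers is an assumption on my part):

```go
// Sketch of the template-expansion step: render the quine's prompt with the
// file's own contents bound to .example. Not pinocchio's real code.
package main

import (
	"log"
	"os"
	"strings"
	"text/template"

	"gopkg.in/yaml.v3"
)

func main() {
	data, err := os.ReadFile("quine.yaml")
	if err != nil {
		log.Fatal(err)
	}
	// Pull the prompt template out of the command description.
	var cmd struct {
		Prompt string `yaml:"prompt"`
	}
	if err := yaml.Unmarshal(data, &cmd); err != nil {
		log.Fatal(err)
	}

	funcs := template.FuncMap{
		// Stand-in for an indent helper: prefix every line of s with n spaces.
		"indent": func(n int, s string) string {
			pad := strings.Repeat(" ", n)
			return pad + strings.ReplaceAll(s, "\n", "\n"+pad)
		},
	}
	tmpl, err := template.New("quine").Funcs(funcs).Parse(cmd.Prompt)
	if err != nil {
		log.Fatal(err)
	}

	// The flag values become the template context; .example is the file itself.
	values := map[string]string{
		"example_goal": "Generate a program to generate itself.",
		"goal":         "Generate a program to generate itself.",
		"example":      string(data),
	}
	// Print the fully expanded prompt that would be sent to the model.
	if err := tmpl.Execute(os.Stdout, values); err != nil {
		log.Fatal(err)
	}
}
```

Running that against quine.yaml prints a few-shot prompt with the program's full source sitting between "--- START PROGRAM" and "--- END", which is exactly what nudges the completion to come back as the program itself.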
And here is the result of running the program:
```
❯ pinocchio ttc quine --example ./quine.yaml
name: quine
short: Generate yourself!
factories:
  openai:
    client:
      timeout: 120
    completion:
      engine: text-davinci-003
      temperature: 0.7
      max_response_tokens: 2048
      stop: ["--- END"]
      # stream: true
flags:
  - name: example_goal
    short: Example goal
    type: string
    default: Generate a program to generate itself.
  - name: instructions
    type: string
    help: Additional language specific instructions
    required: false
  - name: example
    type: stringFromFile
    help: Example program
    required: true
  - name: goal
    type: string
    help: The goal to be generated
    default: Generate a program to generate itself.
prompt: |
  Write a program by generating a YAML describing a command line application with flags and a prompt template using
  go template expansion.
  The flags are used to interpolate the prompt template.
  Here is an example.
  --- GOAL: {{ .example_goal }}
  --- START PROGRAM
  {{ .example | indent 4 }}
  --- END
  Generate a program to {{ .goal }}.
  {{ if .instructions }}{{ .instructions }}{{ end }}
  --- GOAL: {{ .goal }}
  --- START PROGRAM
```