2024-11-04

Last modified: 2024-11-08 10:17:03

One-handed LLM

The idea here is to use something like the LLM steganography stuff, but only allow the LLM to generate tokens that can be typed entirely on one hand.

It doesn't work very well. I don't have any decent sentences to show. I think the issue is that greedy decoding sometimes picks a token that only has a high probability because of the near-certain token that would come next, and if that next token isn't allowed then we're forced to generate a very low-probability token. We need to be optimising the search over the whole sentence, not just the next token.
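For concreteness, here's a minimal sketch of the greedy version, using GPT-2 via HuggingFace and the left hand of a QWERTY layout (the model and the choice of hand are arbitrary; any causal LM and either hand would do):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

LEFT_HAND = set("qwertasdfgzxcvb ")  # QWERTY left-hand letters (space assumed free)

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

# Precompute which token ids decode to text typeable entirely with one hand.
allowed = torch.tensor([
    all(c in LEFT_HAND for c in tok.decode([i]).lower())
    for i in range(len(tok))
])

def one_handed(prompt, n_tokens=20):
    ids = tok(prompt, return_tensors="pt").input_ids
    for _ in range(n_tokens):
        with torch.no_grad():
            logits = model(ids).logits[0, -1]
        logits[~allowed] = -float("inf")  # mask tokens needing the right hand
        # Greedy choice: exactly the myopic step described above, where a
        # token's high probability can rest entirely on a disallowed successor.
        ids = torch.cat([ids, logits.argmax().view(1, 1)], dim=1)
    return tok.decode(ids[0])
```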

What about using beam search?

Meh, first try didn't work, forget about it.
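For the record, a first try at beam search would have roughly this shape (a sketch reusing tok, model and allowed from above, not the actual code): keep the k highest-log-probability prefixes, extending each only with allowed tokens.

```python
import torch

def one_handed_beam(prompt, n_tokens=20, k=5):
    ids = tok(prompt, return_tensors="pt").input_ids
    beams = [(0.0, ids)]  # (cumulative log-probability, token ids)
    for _ in range(n_tokens):
        candidates = []
        for score, seq in beams:
            with torch.no_grad():
                logp = model(seq).logits[0, -1].log_softmax(-1)
            logp[~allowed] = -float("inf")
            top = logp.topk(k)
            for lp, i in zip(top.values, top.indices):
                candidates.append((score + lp.item(),
                                   torch.cat([seq, i.view(1, 1)], dim=1)))
        # Keep only the k best prefixes found so far.
        beams = sorted(candidates, key=lambda c: c[0], reverse=True)[:k]
    return tok.decode(beams[0][1][0])
```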

Prompts as Source Code

The idea is that prompts are to source code as source code is to binaries.

In the beginning (I claim) there were only binaries (and without loss of generality, assembly code).

Then somebody invented the compiler, and now it was possible to write code in a more natural language and have the machine automatically turn it into binaries.

Except, for really small problems it was still easy enough to write the assembly code by hand. And for really big problems the compiler couldn't handle the size of the input, so you still had to write in assembly code.

But as hardware resources grew, the compilers' capabilities grew, and now the idea that there was programming before compilers is pretty weird.

Now take LLMs. If you program using an LLM, you probably give an initial prompt to get started, then refine the generated source code with follow-up prompts asking for changes, and you never revisit your initial prompt. It's just a series of "patches" created by follow-up prompts.

This is like programming by writing source code once, compiling it, and then throwing the source code away and working directly on the binary with incremental patches!

So here's my outline for "prompts as source code":

- The prompts will be committed to git; the outputs will not.
- The prompts will be big and split across multiple files, just like source code is now, except it's all freeform text.
- The build system will annotate which version of the LLM you're using, and evaluation will be deterministic, so that everybody gets the same output from the same prompt.
- The LLMs will be bigger, have larger context windows, etc., and as the LLMs improve, and our understanding of how to work with them improves, we'll gain confidence that small changes to the prompt produce correspondingly small changes in the final application.
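A sketch of what the build step might look like. Everything here (the .prompt extension, call_llm(), the manifest format) is made-up illustration, not real tooling:

```python
import hashlib
from pathlib import Path

MODEL = "example-llm-v1"  # hypothetical pinned model version

def call_llm(model: str, prompt: str) -> str:
    # Stand-in for a deterministic, version-pinned LLM evaluation.
    raise NotImplementedError

def build(prompt_dir="prompts", out_dir="build"):
    # Concatenate all prompt files in a stable order, like compiling
    # several source files into one translation unit.
    prompt = "\n\n".join(p.read_text()
                         for p in sorted(Path(prompt_dir).glob("*.prompt")))
    Path(out_dir).mkdir(exist_ok=True)
    Path(out_dir, "app.py").write_text(call_llm(MODEL, prompt))
    # Record the model version and a hash of the inputs, so the same
    # prompt with the same model reproducibly gives the same application.
    Path(out_dir, "manifest.txt").write_text(
        f"model: {MODEL}\n"
        f"prompt-sha256: {hashlib.sha256(prompt.encode()).hexdigest()}\n")
```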

It basically turns into writing a specification for the application you want, but the specification is version-controlled and deterministically turns into the resulting application.

Human beings will only look at the generated source code in rare cases. Normally they'll just use their tooling to automatically build and run the application directly from the prompts.

You'll be able to include inline code snippets in the prompt, of course. That's a bit like including inline assembly language in your source code.

And you could imagine the tooling could let you include some literal code files that the LLM won't touch, but will be aware of, and will be included verbatim in the output. That's a bit like linking with precompiled object files.
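So a prompt file might end up looking something like this (the !include-verbatim directive is made up, just to illustrate):

```
Build a command-line TODO application in Python.
Tasks are stored one per line in a plain text file.

!include-verbatim lib/storage.py
```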

https://incoherency.co.uk/blog/stories/prompts-as-source-code.html
