
Commit cc81176

Enhance README with overview and history sections
Added overview and history sections to provide context on LLMVCS and its inspiration.
1 parent 188f09f

1 file changed

Lines changed: 4 additions & 0 deletions

File tree

README.md

@@ -2,10 +2,14 @@

> ⚠️ **EXPERIMENTAL** – This project is theoretical and pending benchmarks; the JavaScript interpreter is functional but not widely tested. Use at your own risk.

## Overview
LLMVCS reduces LLM agent token costs by turning prompts into tiny, stable instruction references that a deterministic interpreter can execute.

Instead of having the LLM repeatedly re-describe common operations in natural language, you define those operations once in human-readable `.txt` catalogs, index them for semantic search, and have the LLM output compact `.vcs` programs that reference operations by numeric IDs. A `.vcs` (vectorized code stack) file can then be interpreted via a plugin for your software environment: a small interpreter plus static code modules corresponding to the human-readable operations. This saves tokens twice: reasoning about which operation to use is offloaded to a client-side vector search, and the response itself is emitted in a compact output format.

The interpreter plugin is Turing complete and mimics a CPU architecture. This keeps it fast and requires only static function calls, so operations can be implemented however the user likes for their environment. If an LLM gets confused while generating a `.vcs` file, or the vector search returns insufficient results because of a poor human description, it can always refer to the human-readable catalog or the corresponding function library directly. Early benchmarks suggest a 90% token reduction when initial prompts contain sufficient keywords.
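
To make the catalog half of this concrete, here is a minimal sketch of resolving a natural-language request to an operation ID. Everything in it is invented for illustration: the entry format, the `findOp` helper, and the keyword-overlap scoring, which stands in for a real embedding-based vector search.

```js
// Hypothetical catalog: one human-readable entry per operation.
// A real deployment would load these from .txt files and index them
// with a proper embedding model; keyword overlap stands in for that here.
const catalog = [
  { id: 1, desc: "read a file from disk and return its text" },
  { id: 2, desc: "write text to a file on disk" },
  { id: 3, desc: "append a line of text to a log file" },
];

// Score catalog entries by how many keywords they share with the query.
function findOp(query) {
  const words = new Set(query.toLowerCase().split(/\W+/));
  let best = null;
  let bestScore = 0;
  for (const entry of catalog) {
    const score = entry.desc
      .toLowerCase()
      .split(/\W+/)
      .filter((w) => words.has(w)).length;
    if (score > bestScore) {
      bestScore = score;
      best = entry;
    }
  }
  return best;
}

console.log(findOp("append text to the log")); // → { id: 3, ... }
```

A real index would embed the `.txt` descriptions with a vector model; the lookup contract (query in, numeric ID out) is the part that matters.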

## History
This algorithm is inspired by Smart GameObjects, my real-time code interpreter written in C# for the Unity game engine. By combining principles of utility-theory AI with opcodes in a switch case or hashmap, scalable simulation loops could be designed through a UI abstraction. Developers only needed to write static function libraries, while designers used a visual frontend to configure logic while the application was running. The original algorithm suffered from being hard for humans to wrap their heads around at times. Today's LLMs lift that burden of abstraction and of weight-balancing stacked instructions, and could further be used as a bridge between LLM experts.
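
As a rough sketch of that dispatch mechanism (numeric opcodes resolved through a hashmap of static functions), consider the following. It is illustrative only: the `[opId, ...args]` program encoding and the example ops are assumptions, not the actual `.vcs` format.

```js
// Static function library: opcode id → implementation.
// Users would implement these however fits their environment.
const ops = new Map([
  [1, (state, key, value) => { state[key] = value; }],         // set
  [2, (state, key, delta) => { state[key] += delta; }],        // add
  [3, (state, key) => { console.log(key, "=", state[key]); }], // print
]);

// Interpret a compact program: each instruction is [opId, ...args].
function run(program) {
  const state = {};
  for (const [id, ...args] of program) {
    const op = ops.get(id);
    if (!op) throw new Error(`Unknown op ${id}; check the catalog`);
    op(state, ...args);
  }
  return state;
}

// What an LLM-emitted program might look like after ID resolution.
run([
  [1, "count", 0],
  [2, "count", 5],
  [3, "count"], // prints "count = 5"
]);
```

A real interpreter would add control flow (jumps, conditionals, a program counter) to reach the Turing completeness the overview claims; the point here is just that execution reduces to static function calls.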
## The algorithm (in simple terms)

1. **Write operations once (human-readable)**
