
Commit c04d7a0

Revise README to enhance clarity on algorithm inspiration
Clarify the description of the Smart GameObjects algorithm and its relation to LLMs.
1 parent e115e40 commit c04d7a0

1 file changed: README.md

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ LLMVCS reduces LLM agent token costs by turning prompts into tiny, stable instru
 Instead of having the LLM repeatedly re-describe common operations in natural language, you define those operations once in human-readable `.txt` catalogs, index them for semantic search, and then have the LLM output compact `.vcs` programs that reference operations by numeric IDs. The `.vcs` (vectorized code stack) can then be interpreted via a plugin for your software environment by including a small interpreter and static code modules corresponding to the human-readable operations. Overall, this saves thinking tokens by offloading that work to a client-side vector search, and then saves output tokens via the compact response format. The interpreter plugin is Turing complete and mimics CPU architecture. This makes it fast and requires only static function calls, which lets operations be implemented however the user likes for their environment. If an LLM gets confused while generating a `.vcs` file, it can always refer to the human-readable catalog or the corresponding function library directly if the vector search produced insufficient results due to a poor human description. Early benchmarks suggest a 90% token reduction when initial prompts contain sufficient keywords.

 ## History:
-This algorithm is inspired by my real-time code interpreter written in C# for the Unity game engine called Smart GameObjects. By combining principles of utility theory AI and opcodes in a switch case or hashmap scaleable simulation loops could be designed through a UI abstraction. This required developers to only need to write static function libraries where designers could use a visual frontend to configure logic while the application was running. The original algorithm suffered from being hard for humans to wrap their heads around at times. Today's LLMs solve the burden of abstraction and weight balancing of stacked instructions and could be further used as a bridge between LLM experts.
+This algorithm is inspired by Smart GameObjects, my real-time code interpreter written in C# for the Unity game engine. By combining principles of utility-theory AI with opcodes in a switch case or hashmap, scalable simulation loops could be designed through a UI abstraction. Developers only needed to write static function libraries, while designers used a visual frontend to configure logic while the application was running. The UI abstraction of Smart GameObjects could be hard for non-engineers to wrap their heads around at times. Today's LLMs solve the abstraction burden and the weight balancing of stacked instructions. I also theorize similar methods could be a good bridge between LLM experts, or between an LLM and existing complex software like Unity.

 ## The algorithm (in simple terms)
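The context paragraph above describes the core mechanism: compact programs reference operations by numeric IDs, which a small interpreter dispatches through a hashmap (or switch case) to static functions. A minimal sketch in Python, assuming hypothetical opcode numbers and a stack-machine layout (the real `.vcs` format and its catalog IDs are not specified here):

```python
# Hypothetical sketch of opcode dispatch as described in the README:
# numeric IDs map to static functions; the interpreter walks a compact
# program and executes each instruction against a shared stack.
# Opcode numbers and operation names are illustrative, not from any catalog.

def op_add(stack):
    b, a = stack.pop(), stack.pop()
    stack.append(a + b)

def op_mul(stack):
    b, a = stack.pop(), stack.pop()
    stack.append(a * b)

# The hashmap equivalent of a switch case: numeric ID -> static function.
OPS = {1: op_add, 2: op_mul}

def run_vcs(program):
    """Interpret a compact program: ('push', n) literals and numeric opcode IDs."""
    stack = []
    for instr in program:
        if isinstance(instr, tuple) and instr[0] == "push":
            stack.append(instr[1])
        else:
            OPS[instr](stack)  # dispatch by numeric ID
    return stack

# (2 + 3) * 4 expressed as a compact instruction stack.
result = run_vcs([("push", 2), ("push", 3), 1, ("push", 4), 2])
```

Because dispatch is just a dictionary lookup to a static function, the operation bodies can be implemented however fits the host environment, which matches the plugin design the README describes.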
