AI Theory - MAKER framework

https://www.youtube.com/watch?v=TJ-vWGCosdQ
Very interesting, LLM-centric approach to building software. The presenter breaks the paper down very well.

If a model has a 99% per-step success rate, that sounds production ready, but over just a couple hundred steps it is almost guaranteed to fail at least once.
This paper proposes the MAKER (Massively Decomposed ...) framework.
It demonstrates the approach by solving a Towers of Hanoi problem.
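The compounding-error claim is easy to check numerically: if steps are independent, the chance of a fully error-free run is just the per-step rate raised to the number of steps.

```python
# Cumulative probability of a zero-error run over n independent steps,
# each with per-step success rate p.
def chance_of_zero_errors(p: float, n: int) -> float:
    return p ** n

# 99% per step sounds great, but it collapses over hundreds of steps:
for n in (10, 100, 500, 1000):
    print(n, round(chance_of_zero_errors(0.99, n), 4))
```

At 100 steps the success probability is already down to about 37%, and at 1,000 steps it is effectively zero, which is why a million-step task needs a different architecture, not a slightly better model.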

Pillar 1:

Regular LLMs attempt each step sequentially, carrying the history of all previous steps in their context, which makes the model fail almost immediately.
With MAKER you feed the rules and the current state of the problem for each step, with no history.
It solves the context-drift problem by simply removing the context.
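A minimal sketch of this stateless prompting, with a hypothetical state shape and rules string (neither comes from the paper):

```python
# Hypothetical sketch of Pillar 1: each step sees ONLY the rules and the
# current state -- never the transcript of earlier steps.
RULES = (
    "Towers of Hanoi: move one disk per step; "
    "never place a larger disk on a smaller one."
)

def build_step_prompt(state: dict) -> str:
    # `state` is a self-contained snapshot of the problem (assumed shape),
    # so the prompt is the same size at step 1 and at step 1,000,000.
    return (
        f"Rules: {RULES}\n"
        f"Current pegs: {state['pegs']}\n"
        'Reply with exactly one move as JSON: {"from": <peg>, "to": <peg>}'
    )

prompt = build_step_prompt({"pegs": {"A": [3, 2, 1], "B": [], "C": []}})
```

Because no history accumulates, the context can never drift: step one million is exactly as easy for the model as step one.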

Pillar 2:

When a model is about to make a logical error, it typically makes a syntax error first, or it starts rambling.
MAKER uses a strict parser: if the output isn't perfectly formatted, or it's too long, it throws the response away immediately and forces a retry. It doesn't try to repair the JSON.
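A sketch of that discard-don't-repair validator; the schema, length cap, and function name are my own illustrative choices, not the paper's:

```python
import json

MAX_CHARS = 200  # hypothetical length cap: overly long replies suggest rambling

def parse_or_reject(raw: str):
    """Return the parsed move if the reply is perfectly formatted, else None.

    No repair attempts: any deviation is red-flagged and the caller retries.
    """
    if len(raw) > MAX_CHARS:
        return None                      # rambling -> discard
    try:
        move = json.loads(raw)
    except json.JSONDecodeError:
        return None                      # malformed JSON -> discard, don't fix
    if not isinstance(move, dict) or set(move) != {"from", "to"}:
        return None                      # exact schema required, nothing extra
    return move

assert parse_or_reject('{"from": "A", "to": "C"}') == {"from": "A", "to": "C"}
assert parse_or_reject('Sure! The move is {"from": "A"...') is None
```

The point is that a malformed reply is treated as a symptom of a reasoning failure, so fixing the formatting would just mask a likely-wrong answer.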

Pillar 3: First-to-ahead-by-K voting

For each of the million steps they don't ask the LLM once; they ask it multiple times in parallel and use a voting algorithm to determine the answer.
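A sketch of the voting rule as I understand it, assuming "first-to-ahead-by-K" means: keep sampling until one candidate answer leads every rival by at least K votes (the function and parameter names are mine):

```python
from collections import Counter
import random

def first_to_ahead_by_k(sample, k: int, max_samples: int = 100):
    """Sample answers until one leads its closest rival by at least k votes.

    `sample` is any callable returning one candidate answer
    (e.g. one stateless LLM call).
    """
    votes = Counter()
    for _ in range(max_samples):
        votes[sample()] += 1
        (top, top_n), *rest = votes.most_common()
        runner_up = rest[0][1] if rest else 0
        if top_n - runner_up >= k:
            return top
    return None  # no clear winner within the sampling budget

# Toy demo: a noisy stand-in "model" that answers correctly 80% of the time.
random.seed(0)
answer = first_to_ahead_by_k(lambda: "C" if random.random() < 0.8 else "B", k=3)
```

The appeal over fixed-size majority voting is that easy steps resolve after only K cheap samples, while the budget is spent where the model actually disagrees with itself.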

Economics:

Running a swarm of agents for every single step sounds slow and expensive, but running a swarm of smaller models is actually cheaper than using one big model.
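The arithmetic behind that claim, with entirely made-up prices just to show the shape of the trade-off (real per-token prices vary by provider and model):

```python
# Illustrative, hypothetical per-step costs -- not the paper's numbers.
big_model_cost = 0.010     # one call to a large frontier model
small_model_cost = 0.001   # one call to a much smaller model
votes_per_step = 5         # parallel samples consumed by voting

swarm_cost = small_model_cost * votes_per_step
# Even paying for 5x the calls, the swarm costs half as much per step here.
```

The break-even point is simply the price ratio between the models: as long as the small model is more than `votes_per_step` times cheaper per call, the swarm wins on cost.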

Building software:

  1. Define Atomic State: File system, dataframe, compiler logs - not chat history
  2. Micro-Level Decomposition: Break tasks into smallest possible units
  3. Voting for Critical Steps: Parallel calls for decision points that matter
  4. Strict Validation: Red-flag syntax errors as logic warnings

Reliability is an engineering problem, not a model capability problem.
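The four points above can be wired together in one loop; this is hypothetical glue code (all names and the toy stand-in model are mine), not the paper's implementation:

```python
from collections import Counter

def run_step(state, build_prompt, call_model, validate, vote_k=2, budget=20):
    """Advance the task by exactly one validated micro-step.

    Combines: atomic state in the prompt (1), one tiny action per call (2),
    first-to-ahead-by-K voting (3), and discard-and-retry validation (4).
    """
    votes = Counter()
    for _ in range(budget):
        raw = call_model(build_prompt(state))  # stateless: no chat history
        action = validate(raw)                 # None => red-flagged, retry
        if action is None:
            continue                           # discard, never repair
        votes[action] += 1
        (top, n), *rest = votes.most_common()
        if n - (rest[0][1] if rest else 0) >= vote_k:
            return top
    return None  # no confident action within budget

# Toy wiring with stand-in components:
result = run_step(
    state={"pegs": "A:[2,1] B:[] C:[]"},
    build_prompt=lambda s: f"state={s['pegs']}",
    call_model=lambda prompt: "move A C",      # stand-in for an LLM call
    validate=lambda raw: raw if raw.startswith("move ") else None,
)
```

The outer system then applies the returned action to the real state (file system, dataframe, etc.) and calls `run_step` again, so correctness lives in this loop rather than in any single model reply.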