The Problem

Multi-agent systems need to communicate. Natural language is flexible but token-expensive (and ambiguous). Traditional protocols are efficient but lack semantic richness. nSLIP provides a middle path: 80%+ token reduction while maintaining full semantic content.

Why Token Efficiency Matters

In multi-agent AI systems, agents communicate constantly. A coordinator assigns tasks. A planner proposes approaches. An executor reports status. A critic evaluates results. Each message costs tokens.

Consider a simple task assignment in natural language:

# Natural Language (68 tokens)
"I need you to create a plan for implementing a health-check endpoint 
for the authentication service. This is a medium priority task. 
Please develop an approach and let me know your proposed solution."

The same message in nSLIP:

# nSLIP (12 tokens)
REQ/TSK|g=42,t=1,p=2,@=auth_healthcheck

An 82% token reduction. For systems making thousands of inter-agent calls, the savings add up quickly.
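Exact counts depend on the tokenizer, so they are worth measuring directly. A quick check with the tiktoken library (an assumption here; any tokenizer will do, and counts vary by model):

# Rough token count comparison (illustrative; counts vary by tokenizer)
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

natural = (
    "I need you to create a plan for implementing a health-check endpoint "
    "for the authentication service. This is a medium priority task. "
    "Please develop an approach and let me know your proposed solution."
)
slip = "REQ/TSK|g=42,t=1,p=2,@=auth_healthcheck"

print(len(enc.encode(natural)), len(enc.encode(slip)))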

Message Structure

An nSLIP message consists of three parts:

<act>/<frame>|<slots>

where:
  act   = What the message does (REQUEST, PROPOSE, INFORM, etc.)
  frame = What domain it's about (TASK, PLAN, OBSERVATION, etc.)
  slots = Key-value pairs carrying the payload
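A single split on "|" and "/" is enough to pull a message apart into these three parts. A minimal, illustrative snippet (the full parser appears in the Implementation section below):

# Splitting a wire message into act, frame, and slots
wire = "REQ/TSK|g=42,t=1,p=2,@=auth_healthcheck"

header, slot_str = wire.split("|")   # "REQ/TSK", "g=42,t=1,p=2,@=auth_healthcheck"
act, frame = header.split("/")       # "REQ", "TSK"
slots = dict(pair.split("=", 1) for pair in slot_str.split(","))

print(act, frame, slots)
# REQ TSK {'g': '42', 't': '1', 'p': '2', '@': 'auth_healthcheck'}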

Speech Acts

Code  Act      Purpose
REQ   REQUEST  Ask another agent to do something
PRO   PROPOSE  Suggest a plan or approach
INF   INFORM   Report observations or status
EVL   EVAL     Provide evaluation or judgment
ACC   ACCEPT   Accept a proposal
REJ   REJECT   Reject with reason
QRY   QUERY    Ask for information
CFM   CONFIRM  Confirm understanding

Frame Types

Code  Frame        Domain
TSK   TASK         Task assignment or delegation
PLN   PLAN         Strategic or tactical plans
OBS   OBSERVATION  Sensor data, execution results
EVL   EVALUATION   Quality assessment, scoring
RSC   RESOURCE     Resource allocation
CST   CONSTRAINT   Limitations, requirements
ERR   ERROR        Failure, exception

Standard Slots

Key  Type    Purpose
g    int     Goal identifier
t    int     Task identifier
r    int     Result identifier
p    int     Priority (1-3)
s    enum    Status
sc   int     Score (0-10)
@    string  Human-readable tag
!    string  Reason/explanation
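As a sketch of how these slots combine in practice, the helper below assembles a wire message from keyword arguments. The build_message name and the tag/reason aliases are hypothetical conveniences for this example, not part of the published nSLIP API:

# Hypothetical helper for assembling a message from the standard slots
def build_message(act: str, frame: str, **slots) -> str:
    # "@" and "!" are not valid Python keyword names, so accept friendly aliases
    aliases = {"tag": "@", "reason": "!"}
    body = ",".join(f"{aliases.get(key, key)}={value}" for key, value in slots.items())
    return f"{act}/{frame}|{body}"

print(build_message("REJ", "PLN", g=42, t=1, reason="scope_too_large", tag="needs_breakdown"))
# REJ/PLN|g=42,t=1,!=scope_too_large,@=needs_breakdown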

Communication Flow

User Goal → Coordinator
  ↓ REQ/TSK → Planner
  ↓ PRO/PLN → Coordinator
  ↓ [ACC/REJ]
  ↓ (if accepted) PLN → Executor
  ↓ INF/OBS → Coordinator → Critic
  ↓ EVL/EVL → Coordinator
  ↓ [Complete or iterate]

Example Conversation

# Coordinator assigns task to planner
REQ/TSK|g=42,t=1,p=2,@=auth_refactor

# Planner proposes approach
PRO/PLN|g=42,t=1,p=2,@=plan_incremental

# Coordinator accepts
ACC/PLN|g=42,t=1

# Executor reports completion
INF/OBS|g=42,t=1,r=1,s=done,@=tests_pass

# Critic evaluates
EVL/EVL|g=42,t=1,sc=8,@=good_coverage

# Rejection with reason
REJ/PLN|g=42,t=1,!=scope_too_large,@=needs_breakdown

Implementation

from enum import Enum
from dataclasses import dataclass

class Act(Enum):
    REQUEST = "REQ"
    PROPOSE = "PRO"
    INFORM = "INF"
    EVAL = "EVL"
    ACCEPT = "ACC"
    REJECT = "REJ"
    QUERY = "QRY"
    CONFIRM = "CFM"

class Frame(Enum):
    TASK = "TSK"
    PLAN = "PLN"
    OBSERVATION = "OBS"
    EVALUATION = "EVL"
    RESOURCE = "RSC"
    CONSTRAINT = "CST"
    ERROR = "ERR"

@dataclass
class SlipMessage:
    act: Act
    frame: Frame
    slots: dict  # short slot keys ("g", "t", "@", ...) mapped to string values

def encode_message(msg: SlipMessage) -> str:
    """Encode to wire format: <act>/<frame>|<slots>."""
    slot_str = ",".join(
        f"{key}={value}"
        for key, value in msg.slots.items()
    )
    return f"{msg.act.value}/{msg.frame.value}|{slot_str}"

def decode_message(wire: str) -> SlipMessage:
    """Decode from wire format."""
    header, slot_str = wire.split("|", 1)
    act_code, frame_code = header.split("/")
    # Split "k=v,k=v" pairs back into a slot dictionary
    slots = dict(
        pair.split("=", 1) for pair in slot_str.split(",") if pair
    )
    return SlipMessage(act=Act(act_code), frame=Frame(frame_code), slots=slots)
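A quick round trip through these two helpers, using the sketch above (slot values come back as strings after decoding):

# Round-trip check for the encode/decode sketch above
msg = SlipMessage(
    act=Act.REQUEST,
    frame=Frame.TASK,
    slots={"g": "42", "t": "1", "p": "2", "@": "auth_healthcheck"},
)
wire = encode_message(msg)
print(wire)                          # REQ/TSK|g=42,t=1,p=2,@=auth_healthcheck
assert decode_message(wire) == msg   # dataclass equality compares field by field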

Training Models to Speak nSLIP

nSLIP includes tools for generating synthetic training data so language models can learn to communicate natively in the format:

# Training example for planner agent
"""
Prompt:
You are a planner agent. You receive a REQUEST/TASK message.
Reply with a single PROPOSE/PLAN nSLIP message.

Current message:
REQ/TSK|g=42,t=42,p=2,@=goal_42

Goal[42]: Implement health-check endpoint for auth service.

Target:
PRO/PLN|g=42,t=42,p=2,@=plan_incremental
"""

Token Efficiency Analysis

Format            4-Message Cycle   Savings
Natural Language  ~250 tokens       baseline
JSON              ~120 tokens       52%
nSLIP             ~48 tokens        81%

Why Human-Readable?

nSLIP could be even more compact with a binary encoding, but human auditability matters: logs of agent-to-agent traffic should stay readable to the people debugging them.

The @ slot exists specifically for human-readable annotations that don't affect agent behavior but make logs comprehensible.

Papers & Code

GitHub Repository

Full nSLIP implementation with synthetic data generation.

Synthesis Framework

See nSLIP in action coordinating self-extending agents.

Related Research