Modern AI dazzles with fluent answers—but too often it asks us to trust a story we can’t verify. We get results that can’t be replayed, provenance we can’t inspect, and “consent” that never really traveled with the data. That isn’t a technical nuisance; it’s a loss of human agency.

Shared Object Networking (SON) 2.0 is a different path: start with rights, then encode them as rules every system must honor. The paper argues that identity should be a deliberate act, consent must be specific and bounded, explanations must rest on artifacts (not anecdotes), revision should add new commitments (not rewrite history), and disagreements should remain visible until evidence resolves them.

What Shared Object Networking is

Shared Object Networking isn’t a new model, a database, or a vendor stack. It’s a thin protocol for knowledge—the minimum set of guarantees that make answers portable, accountable, and replayable across systems. In practice, SON treats knowledge as objects with identity, schema, provenance, and obligation envelopes; it preserves contexts as distinct “z-axis” layers so competing claims don’t get silently averaged away; it records durable actions through gates; and it emits Query Result Objects (QROs)—compact ledgers that tie an answer to its inputs, thresholds, policies, and costs.
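To make that concrete, here is a rough sketch of the shape such an object might take. It is not the paper’s schema; every field name below is an illustrative assumption about how identity, schema, provenance, and obligations could travel together:

```typescript
// Illustrative only: these field names are assumptions, not the paper's schema.
interface Provenance {
  sourceContractId: string;  // the ServiceContract under which the claim entered
  evidenceChain: string[];   // content hashes linking back to the original inputs
  admittedAt: string;        // ISO-8601 timestamp of the signed admission decision
}

interface ObligationEnvelope {
  consentScope: string;      // what the data subject actually authorized
  expiresAt?: string;        // bounded consent: obligations can lapse
  prohibitedUses: string[];  // explicit refusals travel with the object
}

interface SonObject {
  id: string;                // stable identity
  schemaRef: string;         // the schema this object claims to satisfy
  layer: string;             // the z-axis stratum the object lives in
  payload: unknown;          // the claim itself
  provenance: Provenance;
  obligations: ObligationEnvelope;
}
```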

Why it matters

  • You can see the grounds of an answer (what was consulted, which thresholds applied).
  • You can replay it “as-of” its moment (same policies, same evidence, same outcome).
  • You can revise without amnesia (new commitments don’t overwrite yesterday’s record).
  • You can contest and recover (disagreements aren’t erased—they’re bounded and auditable).

In short: where today’s stacks ask for trust, SON earns it—by design.
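That “as-of” replay is meant to be checkable, not rhetorical. A minimal sketch of the idea, under assumed names (in SON this state travels in the QRO introduced above): pin the policy and evidence state an answer was produced under, and verify that the same state reproduces the same bound outcome.

```typescript
import { createHash } from "node:crypto";

// Hypothetical shapes: the names here are assumptions, not the paper's.
interface AsOfState {
  policyVersion: string;     // thresholds and policies in force at answer time
  evidenceHandles: string[]; // the exact inputs consulted
}

// Bind an answer to the policy/evidence state it was produced under.
function digest(answer: string, state: AsOfState): string {
  return createHash("sha256")
    .update(JSON.stringify({ answer, state }))
    .digest("hex");
}

// Replay succeeds only if the same state yields the same bound outcome.
function replayMatches(recorded: string, answer: string, state: AsOfState): boolean {
  return digest(answer, state) === recorded;
}
```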

The paper at a glance

A manifesto, first

The work begins with a rights-forward stance: place Sovereignty of Intent at the center—every subsequent commitment stems from the foundational truth that a person’s intent must remain sovereign over how systems use data. That principle becomes four pillars:

  • Autonomy & Consent — identity is deliberate; consent is explicit; refusal is valid.
  • Transparency & Accountability — show work; make it replayable “as-of”; disclose trade-offs.
  • Control, Revision & Portability — change by new commitment; revocation without erasing history; carry terms with results.
  • Contestability & Resilience — challenge, qualify, or oppose results; keep tension visible; use error as fuel.

From rights to protocol

SON maps each pillar to concrete artifacts: objects with obligations, layers that keep contexts separate, gates that turn actions into signed decisions, QROs that explain answers, and feedback records that turn critique into structured input—not retroactive edits.
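Of these artifacts, the feedback record may be the least familiar, so a minimal sketch helps, with all field names assumed: it references the result it contests and never mutates it.

```typescript
// Sketch under assumed names: critique is appended as a new artifact,
// never written back into the result it challenges.
type Stance = "challenge" | "qualify" | "oppose";

interface FeedbackRecord {
  id: string;
  targetQroId: string;  // the answer being contested (referenced, never edited)
  stance: Stance;
  grounds: string;      // the evidence or argument offered
  submittedAt: string;  // ISO-8601
  resolved: boolean;    // tension stays visible until evidence resolves it
}
```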


The protocol

SON’s architecture is intentionally thin. It fixes only what must be shared so independent systems can interoperate safely: what enters a layer, what leaves, and what the gate records. Everything else remains an implementation choice.

Figure 1 in the paper shows the layout; here’s the quick tour:

Tier I — Data Layers (make knowledge durable)

  • L0 • Protocol Boundary Management — Every ingress/egress is bound to a ServiceContract with cryptographic provenance; decisions are recorded as ingress/egress records. Think: truth at the edge.
  • L1 • Object Persistence & Objectification — Contract-bound inputs become SON objects with identity, schema, evidence chains, and obligations; admission is a signed decision (separation of duties).
  • L2 • Z-Axis Layer Management — Objects live in separable strata; sessions define which layers are in scope “as-of” a moment; results carry origin tags.
  • L3 • Intra-Layer Optimization — Improve quality within a stratum (gap detection, dedup boundaries, link validation) and record OptimizationDecision artifacts; errors become fuel, not silent edits.

Tier I promise: boundary truth → object truth → layer truth → quality truth. Higher tiers never guess what the substrate “must have meant”; they reason over audited structure.
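As a rough sketch of what “truth at the edge” might record, with field names that are assumptions rather than the paper’s wire format:

```typescript
// Illustrative ingress record: every boundary crossing is bound to a contract
// and leaves a signed, durable decision behind.
interface IngressRecord {
  contractId: string;   // the ServiceContract this crossing is bound to
  payloadHash: string;  // cryptographic provenance of what crossed the boundary
  decision: "admitted" | "rejected";
  decidedBy: string;    // gate identity: separation of duties means the
                        // requester never signs its own admission
  signature: string;    // signature over the whole record, not just the payload
  recordedAt: string;   // ISO-8601
}
```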

Tier II — Reasoning Layers (make answers replayable)

  • L4 • Cross-Layer Goal-Seeking — Bring multiple strata into scope without collapsing them; capture an AlignmentTrace and directives (what to include/exclude, thresholds, diversity).
  • L5 • LLM Interface & Mediation — Build a KnowledgePatch (bounded, provenance-rich context); run pre/post guards; issue the QRO that binds goal, layers, evidence handles, policies, degradations, and cost.
  • L6 • Orchestration — Turn a QRO into a plan with Task Contracts, Execution Checkpoints, and a Post-Execution Report; keep epistemic vs. operational accountability clean.

Tier II promise: every cross-layer answer yields a QRO—your portable “why/how/at-what-cost” ledger.
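The paper defines the actual artifact; reading its fields off the description above yields a sketch like this, with assumed names:

```typescript
// Illustrative QRO: a compact ledger binding an answer to how it was produced.
interface QueryResultObject {
  id: string;
  goal: string;                      // what was asked
  layersInScope: string[];           // which z-axis strata the session consulted
  evidenceHandles: string[];         // references to the exact objects used
  policies: Record<string, string>;  // thresholds and policies in force "as-of"
  degradations: string[];            // requested-vs-achieved gaps, disclosed
  cost: { authorized: number; spent: number }; // trade-offs made visible
  issuedAt: string;                  // ISO-8601
}
```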

Tier III — User Layers (make outcomes governable)

  • L7 • Agentic AI (Delegates) — Agents enact plans under fixed envelopes; results are accompanied by TERs (Task Execution Reports) and Feedback Records—not after-the-fact stories.
  • L8 • Applications / UI — Present narrative/evidence/performance views without mutating artifacts; bind new user intent into fresh contracts; show policy and obligations as first-class elements.
  • L9 • The User — The boundary where legibility meets enforceability: identity and consent are deliberate; explanations are rights; revision is additive; revocation is prospective and precise.

Tier III promise: intent goes in as contracts; explanations come out as auditable records—no silent rescoping, no hidden defaults.
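The additive-revision rule deserves a concrete illustration. In the sketch below, with all names assumed, revoking consent appends a new commitment with an effective date; nothing is deleted, so past decisions remain replayable “as-of” their moment.

```typescript
// Sketch: consent history is append-only; revocation is prospective.
interface ConsentCommitment {
  subject: string;
  scope: string;
  granted: boolean;      // false = refusal or revocation, equally valid
  effectiveFrom: string; // ISO-8601; earlier records are never rewritten
}

// Revoking appends a commitment; it does not erase yesterday's record.
function revoke(
  history: ConsentCommitment[],
  subject: string,
  scope: string,
  asOf: string
): ConsentCommitment[] {
  return [...history, { subject, scope, granted: false, effectiveFrom: asOf }];
}

// What was authorized "as-of" a moment is the latest commitment effective at
// that time, so past decisions stay replayable. (ISO-8601 strings in the same
// timezone compare correctly as plain strings.)
function authorizedAt(
  history: ConsentCommitment[],
  subject: string,
  scope: string,
  at: string
): boolean {
  const relevant = history
    .filter(c => c.subject === subject && c.scope === scope && c.effectiveFrom <= at)
    .sort((a, b) => a.effectiveFrom.localeCompare(b.effectiveFrom));
  return relevant.length > 0 && relevant[relevant.length - 1].granted;
}
```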


What you gain with SON

  • Provenance that travels — claims carry evidence and obligations wherever they go.
  • Replayability “as-of” — answers are reproducible under the same policy/evidence state.
  • Visible trade-offs — requested vs. achieved scope; authorized vs. spent resources.
  • Structured revision & remedy — new commitments, not silent edits; contestation becomes a first-class signal.
  • Engine-agnostic accountability — thin, durable invariants let heterogeneous systems cooperate without collapsing into a black box.

Bottom line: If today’s AI asks you to “just trust it,” SON provides the contract for trust—a protocol that turns knowledge into accountable agreements you can inspect, replay, challenge, and revise.

Resources