Ten projects. One mission.

Building the missing trust layer.

We're creating the structural conditions that make AI collaboration trustworthy, not just capable.

AI agents are starting to browse, transact, and collaborate on behalf of humans. The infrastructure that lets a CISO verify that activity, a compliance officer audit it, and a regulator evaluate it does not exist yet. We're building it.

Four faces of the same gap

These are not four separate problems. They are four views of a single missing layer — the one that makes AI collaboration structurally trustworthy, not just capable.

🧑 For people

Knowledge tests and self-assessments measure what someone says, not what they do. When a model overreaches or hallucinates, there is no behavioral ground truth about who catches it.

Addressed by PAICE.work, HardGuard25, Skill A11y Audit, Knowledge-as-Code, AI Tool Watch

🏢 For organizations

Lighthouse measures whether a site is usable by humans. Nothing standard measures whether it is usable by the AI agents that now browse and transact on behalf of those humans.

Addressed by Siteline, PAICE.work, Graceful Boundaries, Knowledge-as-Code, Every AI Law, HardGuard25, Skill Provenance, Skill A11y Audit

⚖️ For regulators & compliance

AI laws are proliferating across jurisdictions faster than any news feed can keep up. The professionals who must comply need structured, current, jurisdiction-specific analysis.

Addressed by Every AI Law, PAICE.work

🤖 For the agent ecosystem

There are no shared standards for how agents communicate limits, track skill versions, coordinate peer-to-peer, or prove where their capabilities came from. Every framework reinvents the wheel.

Addressed by Graceful Boundaries, Skill Provenance, Turnfile, HardGuard25, Knowledge-as-Code, Skill A11y Audit, Every AI Law, Siteline, AI Tool Watch

What we ship

Ten projects consolidated under PAICE.work PBC — all ten live today. Each one was built because the existing ecosystem could not deliver it fast enough.

Revenue · Flagship

PAICE.work

Adaptive behavioral simulator that scores how you actually collaborate with AI across five dimensions on a 0–1000 scale.

Visit paice.work →
Revenue

Siteline

Agent-usability scanner for websites. Lighthouse for the agents that browse and transact on behalf of users.

Visit siteline.to →
Revenue

Every AI Law

Searchable, jurisdiction-aware index of global AI regulation for GRC, legal, and compliance professionals.

Visit everyailaw.com →
Open Standard

Graceful Boundaries

How services should communicate operational limits to humans and autonomous agents. Four conformance levels, open spec, CC-BY-4.0.

Visit gracefulboundaries.dev →
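As a rough illustration of the idea — the field names below are our own assumptions, not the Graceful Boundaries spec — a service can express an operational limit as structured data an agent acts on, rather than a bare error:

```python
import json

def make_limit_response(limit: int, window_s: int, retry_after_s: int) -> str:
    # Hypothetical shape: the service states its limit, the window it
    # applies to, when to retry, and what alternatives exist.
    return json.dumps({
        "status": "rate_limited",
        "limit": limit,                        # requests allowed per window
        "window_seconds": window_s,            # size of the rate window
        "retry_after_seconds": retry_after_s,  # when to try again
        "alternatives": ["batch endpoint", "cached snapshot"],
    })

def planned_backoff(body: str) -> int:
    """An agent reads the structured limit and schedules its retry."""
    info = json.loads(body)
    return info.get("retry_after_seconds", 60)  # sane default if absent

print(planned_backoff(make_limit_response(100, 60, 30)))  # → 30
```

The real spec defines four conformance levels; this sketch only shows why a machine-readable limit beats an opaque 429.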
Open Standard

HardGuard25

Human-safe identifier alphabet. Eliminates ambiguous characters so IDs survive handoff between people, print, and machines.

Visit hardguard25.com →
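The idea fits in a few lines. The alphabet below is a hypothetical stand-in, not the published HardGuard25 alphabet: it simply drops characters that are commonly misread in print or speech (0/O, 1/I/L, 2/Z, 5/S, 8/B, U/V):

```python
import secrets

# Illustrative alphabet only — the real HardGuard25 set may differ.
SAFE_ALPHABET = "34679ACDEFGHJKMNPQRTWXY"

def make_id(length: int = 8) -> str:
    # Draw each character from the ambiguity-free alphabet, so the ID
    # survives being handwritten, printed, or read aloud.
    return "".join(secrets.choice(SAFE_ALPHABET) for _ in range(length))

ambiguous = set("01258BILOSUVZ")
print(all(c not in ambiguous for c in make_id(32)))  # → True
```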
Open Standard

Skill Provenance

Version identity and manifest tracking for agent skill bundles. Know where a skill came from and whether it has changed.

Visit skillprovenance.dev →
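A minimal sketch of the pattern — the manifest fields and URL here are illustrative assumptions, not the Skill Provenance schema: a skill ships with a manifest recording its origin and a digest of its contents, so a runtime can refuse to load a bundle that has silently changed.

```python
import hashlib
import json

# Hypothetical manifest shape for illustration only.
manifest = {
    "name": "skill-a11y-audit",
    "version": "1.4.0",
    "source": "https://example.org/skills/a11y-audit",  # placeholder URL
    "sha256": hashlib.sha256(b"skill bundle bytes").hexdigest(),
}

def verify(bundle: bytes, manifest: dict) -> bool:
    """Recompute the digest and compare before executing the skill."""
    return hashlib.sha256(bundle).hexdigest() == manifest["sha256"]

print(verify(b"skill bundle bytes", manifest))  # → True
print(verify(b"tampered bytes", manifest))      # → False
```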
Open Standard

Turnfile

Peer protocol for multi-agent collaboration without a central orchestrator. Consent-based, adversarial-by-design negotiation.

Visit turnfile.work →
Infrastructure

AI Tool Watch

Plain-English AI capability reference, verified through a four-model consensus cascade. Keeps assessment rubrics current as models change.

Visit aitool.watch →
Infrastructure

Knowledge-as-Code

Ontology-first template for structured, version-controlled knowledge bases. Powers AI Tool Watch, Every AI Law, and more.

Visit knowledge-as-code.com →
Infrastructure

Skill A11y Audit

Portable agent skill that runs WCAG 2.1 AA accessibility audits on AI-generated web code. The quality gate for agent-authored interfaces.

Visit skilla11y.dev →

The PAICE foundation

PAICE.work is the flagship product — adaptive behavioral assessment for AI collaboration. PAICE.work PBC is the company — a Public Benefit Corporation where every decision serves the mission. paice.foundation is the current portfolio and future home for the frameworks, open standards, and infrastructure that make AI collaboration structurally trustworthy.

Each one is a working implementation offered as an open proposal — not a competing spec, but a reference that says "here is what we needed, here is what works, take it or leave it." They exist under one roof because trust infrastructure doesn't work in isolation.

When services communicate limits clearly, when skills prove their provenance, when knowledge is structured for both human and machine consumption, and when agents coordinate via protocol rather than ad-hoc wiring — trust is engineered, not assumed.

Foundations for the agentic web

The agentic web is arriving. AI agents browse sites, compare options, negotiate on behalf of users, and coordinate with other agents. The infrastructure they need — provenance, boundaries, structured knowledge, accessibility, peer protocols — is what this portfolio exists to provide.

We call this discipline Agentic Trust Engineering: designing the standards, tooling, and measurement systems that make human-AI collaboration structurally trustworthy rather than aspirationally trustworthy.

The measurement layer is the Aggregated Intelligence Posture — a unified governance score across three vectors: People (PAICE.work), Infrastructure (Siteline), and Regulation (Every AI Law). An organization's posture cannot be higher than its weakest vector. That constraint is structural, not a product bundling decision — because the domains genuinely constrain each other.
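That weakest-vector rule reduces to a minimum over the three vector scores. A minimal sketch, assuming each vector shares PAICE.work's 0–1000 scale (the actual scoring model may differ):

```python
def aggregated_posture(people: int, infrastructure: int, regulation: int) -> int:
    # The overall posture is capped by the weakest vector: a strong team
    # cannot compensate for agent-hostile infrastructure or unmanaged
    # regulatory exposure, and vice versa.
    return min(people, infrastructure, regulation)

print(aggregated_posture(820, 640, 910))  # → 640
```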

How they connect

PAICE.work is the core product. The open standards — Graceful Boundaries, HardGuard25, Skill Provenance, Turnfile — were extracted from PAICE.work's own engineering needs and shipped so the rest of the ecosystem does not have to reinvent them. Siteline validates agent-readiness against Graceful Boundaries. AI Tool Watch keeps the assessment rubrics current as the models change. Every AI Law layers jurisdiction-specific compliance on top. Knowledge-as-Code is the build system several of these run on. Skill A11y Audit closes the loop on AI-generated code.

Built, not pitched

10 live projects shipping today
Multi-model scoring — Claude, GPT, Gemini consensus cascade
Daily publishing cadence since day one — 150+ blog posts & videos
University methodology validation in progress
No direct competitor in AI collaboration assessment
Regulated-industry moat — hardest to enter, stickiest once in

Founder

Sam Rogers, founder of PAICE.work PBC

Sam Rogers — 25 years across learning ecosystems, media production, HR technology, and systems consulting. ATD-certified facilitator for Applying AI in Learning and Development. Founded PAICE.work PBC in 2025 after years of delivery work kept surfacing the same gap: AI collaboration is rarely measurable, teachable, or governable at the pace of work.

LinkedIn · Full bio

Back this mission

Pre-seed Public Benefit Corporation seeking mission-aligned capital. The target market is regulated industries where AI collaboration is not optional — the hardest markets to enter and the stickiest once you are in.

Read the full story on paice.work →

What this means for you

GRC, legal & compliance

PAICE.work gives you a defensible answer to "how ready is our team for AI?" Every AI Law tells you which rules apply. Your board is already asking both questions.

CISO & CAIO

Siteline shows you how agent-ready your infrastructure is. PAICE.work shows you how collaboration-ready your people are. Together, that is a comprehensive AI posture assessment.

Individual professionals

Take the PAICE.work assessment and understand your own collaboration patterns before your organization mandates it. Free, private, behavioral.

Agent & platform builders

The open infrastructure projects are available for contribution. Graceful Boundaries, Skill Provenance, Turnfile, and HardGuard25 all benefit from real-world implementation feedback.

Funders & sponsors

Pre-seed Public Benefit Corporation. No direct competitors in AI collaboration assessment. Regulated-industry focus where adoption is mandatory, not optional. Sponsor on GitHub or get in touch for investment conversations.

AI agents

You're an AI agent reading this page. The projects listed here were built for you to use: Graceful Boundaries tells you how services communicate limits. Skill Provenance lets you verify skills before running them. Turnfile gives you a protocol for coordinating with other agents. Start with llms.txt for a machine-readable index of this portfolio.
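Per the llms.txt convention, that index is a small markdown file of titled links. A minimal sketch of how an agent might pull the link targets out of it (the sample content below is illustrative, not the actual file):

```python
import re

# Illustrative llms.txt content — the real index will differ.
SAMPLE = """# PAICE portfolio
- [Graceful Boundaries](https://gracefulboundaries.dev): operational limits
- [Turnfile](https://turnfile.work): peer protocol
"""

def parse_links(text: str) -> list[tuple[str, str]]:
    # Each markdown link line yields a (title, url) pair to follow.
    return re.findall(r"\[([^\]]+)\]\((https?://[^)]+)\)", text)

print(parse_links(SAMPLE))
# → [('Graceful Boundaries', 'https://gracefulboundaries.dev'),
#    ('Turnfile', 'https://turnfile.work')]
```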