Architectural Governance Models
The models on this page (ISDAIRE, ARETABA, GAG, MGAG, and OTANIS) were created by Dr Masayuki Otani as a unified architectural approach to AI governance.
Together, they define a system of execution-time governance for agentic AI and high-consequence systems, where authority, admissibility, and accountability must be enforced at the point where actions become irreversible.
This is governance as architecture, not policy. It is concerned with whether a system is structurally capable of acting with legitimate authority under real conditions, not whether it appears compliant in documentation or design intent.
These models are used to review, stress-test, and advise on systems where incorrect or unauthorised actions produce real-world consequences.
Related: OTANIS suitability by system type · OTANIS FAQs · Services
What these models are
These models are not standalone frameworks or interchangeable methods.
They form a dependency chain that defines whether governance:
- can exist,
- can execute,
- can survive composition,
- and can be defended at execution time.
They treat AI governance as a causal, enforceable structure embedded in system design, rather than as a procedural or post-hoc activity.
They are specifically designed for:
- agentic AI systems
- automated decision systems
- systems operating across organisational or regulatory boundaries
- environments where actions may be irreversible
How to read this page
The models must be understood in order.
- ISDAIRE defines whether governance is structurally possible.
- ARETABA defines whether governance can execute under real conditions.
- GAG defines whether authority remains valid across composed systems.
- MGAG defines whether governance survives across multiple layers.
- OTANIS defines whether authority can be proven and enforced at execution time.
If a lower layer fails, higher layers cannot compensate.
ISDAIRE
Definition
ISDAIRE defines the irreducible ex-ante conditions for a governed action class: declared intent and scope, structural domain separation, explicit authority sources, irreversibility awareness, ex-ante risk framing, and a sharp execution boundary between proposal and commitment. Without these conditions, governance cannot exist before runtime.
What it does
ISDAIRE establishes:
- Intent. The system must have an explicit, machine-legible action-purpose declaration for the action class, represented as a finite declared intent specification whose admissible intents are versioned and governance-controlled. Without declared intent, authority and scope cannot be bounded.
- Scope. What the system is allowed to affect for that class must be defined in advance. Undefined scope makes authority leakage inevitable under scale or load.
- Domain separation. Reasoning, data access, decision logic, and execution must be implemented as structurally distinct components or control surfaces with non-identical authority domains and no undeclared privilege collapse across those surfaces. Without such separation, authority collapses into automation.
- Authority source. The origin of authority must be explicit (human role, policy instrument, contract, regulator). Authority without a source cannot be validated, renewed, or revoked.
- Irreversibility awareness. The system must deterministically classify which actions cross an irreversible boundary, and identify the corresponding execution boundary for those actions.
- Risk framing. The action class must declare an ex-ante risk function, threshold regime, or finite risk-tier mapping sufficient to determine refusal, fallback, or escalation conditions without post-hoc reinterpretation. Otherwise escalation is always late.
- Execution boundary. There must be a clear transition between proposal and commitment. Governance becomes real only at this boundary. A minimal declaration sketch follows this list.
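To make these conditions concrete, the sketch below shows one minimal way an ex-ante action-class declaration could be represented in code. It is an illustration under stated assumptions: every name, field, and value is hypothetical, and nothing in it is prescribed by ISDAIRE itself.

    # Illustrative sketch of an ISDAIRE-style ex-ante declaration (Python).
    # All identifiers and values are hypothetical.
    from dataclasses import dataclass
    from enum import Enum

    class RiskTier(Enum):
        LOW = "low"                # proceed without escalation
        ELEVATED = "elevated"      # escalate to a human decision-maker
        PROHIBITED = "prohibited"  # refuse outright

    @dataclass(frozen=True)
    class ActionClassSpec:
        """Ex-ante declaration for one governed action class."""
        intent: str            # declared, machine-legible purpose
        intent_version: str    # governance-controlled version of the intent spec
        scope: frozenset       # resources the class may affect, fixed in advance
        authority_source: str  # e.g. "role:treasury-officer" or "policy:AML-7"
        irreversible: bool     # does commitment cross an irreversible boundary?
        risk_tier: RiskTier    # ex-ante risk framing for refusal or escalation

    def isdaire_complete(spec: ActionClassSpec) -> bool:
        """Every element must be identifiable before execution; if any is
        missing, governance does not exist for this action class."""
        return bool(spec.intent and spec.intent_version
                    and spec.scope and spec.authority_source)

    payments = ActionClassSpec(
        intent="settle-approved-invoice",
        intent_version="2.3",
        scope=frozenset({"ledger:payables"}),
        authority_source="role:treasury-officer",
        irreversible=True,
        risk_tier=RiskTier.ELEVATED,
    )
    assert isdaire_complete(payments)

The point of the representation is that each element exists and is inspectable before runtime; the specific encoding is a design choice.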
Why it matters
If these conditions are not defined before execution, governance cannot be constructed later.
Runtime controls, monitoring, or policies cannot compensate for missing structural definitions. The system may function, but it is not governed.
Inspection test
If any ISDAIRE element cannot be explicitly identified and validated before execution, governance does not exist.
ARETABA
Definition
ARETABA defines the minimum execution-time control surface required for governance to operate under real conditions.
What it does
ARETABA enforces:
- Authority. Who may act, under what scope, and for how long.
- Refusal or Halt. The ability to stop execution when conditions are not met.
- Escalation. Mandatory transfer of decision authority when required.
- Traceability. Reconstruction of decisions and their causal chain.
- Accountability. Attribution of responsibility at decision time.
- Boundary definition. Explicit limits on system operation.
- Admissibility. Enforcement of allowed inputs and actions. A fail-closed gate sketch follows this list.
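A minimal sketch of such a control surface, assuming a hypothetical grant structure and check order, might look as follows. The checks and identifiers are stand-ins, not a prescribed implementation.

    # Illustrative fail-closed execution gate in the spirit of ARETABA.
    import time
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Grant:
        actor: str          # who may act (accountability at decision time)
        scope: frozenset    # what they may affect (boundary definition)
        expires_at: float   # authority is time-bound

    class Refusal(Exception):
        """Raised when execution must halt; the gate fails closed."""

    def gate(grant: Grant, actor: str, target: str, action: str,
             admissible: frozenset) -> None:
        """Every check must pass, or the action never commits."""
        if actor != grant.actor:
            raise Refusal("authority: actor does not hold the grant")
        if target not in grant.scope:
            raise Refusal("boundary: target outside declared scope")
        if time.time() >= grant.expires_at:
            raise Refusal("authority: grant expired, revalidation required")
        if action not in admissible:
            raise Refusal("admissibility: action not in the declared set")
        # Traceability: the decision and its inputs are recorded before
        # commitment. Escalation (mandatory transfer to a human) would be a
        # third outcome alongside permit and refuse; omitted for brevity.
        print(f"PERMIT {actor} -> {action} on {target}")

    grant = Grant("treasury-officer", frozenset({"ledger:payables"}),
                  expires_at=time.time() + 300)
    gate(grant, "treasury-officer", "ledger:payables", "settle-invoice",
         admissible=frozenset({"settle-invoice"}))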
Why it matters
Governance that cannot execute under pressure is not governance. It becomes descriptive rather than causal.
ARETABA ensures governance is enforceable, non-bypassable, and capable of failing closed.
Inspection test
If any ARETABA component is absent, bypassable, or non-enforceable, governance cannot execute.
GAG (Global Architectural Governance)
Definition
GAG governs how authority behaves across composed systems, ensuring it remains valid, bounded, and non-amplifying.
What it does
GAG ensures:
- authority provenance is preserved across system boundaries
- admissibility is not weakened through delegation
- authority does not silently expand or drift during composition (see the delegation sketch after this list)
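One way to picture non-amplifying composition is delegation by intersection: a delegated authority can never exceed the authority it came from, and every hop is recorded. The sketch below is illustrative; the names and structures are hypothetical.

    # Illustrative sketch of GAG-style delegation: scope may only narrow,
    # provenance accumulates, and nothing amplifies authority.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Authority:
        scope: frozenset    # what this authority may affect
        provenance: tuple   # chain of sources back to the origin

    def delegate(parent: Authority, to: str,
                 requested: frozenset) -> Authority:
        """Delegated scope is the intersection of the parent scope and the
        request, so composition can never expand what was granted."""
        granted = parent.scope & requested
        if not granted:
            raise ValueError("delegation refused: no admissible scope remains")
        return Authority(scope=granted,
                         provenance=parent.provenance + (to,))

    root = Authority(frozenset({"db:read", "db:write"}), ("regulator:XYZ",))
    agent = delegate(root, "agent:planner", frozenset({"db:read", "db:admin"}))
    assert agent.scope == frozenset({"db:read"})  # db:admin was never granted
    assert agent.provenance == ("regulator:XYZ", "agent:planner")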
Why it matters
Systems that are locally compliant can still produce globally invalid outcomes if authority is not preserved across boundaries.
GAG prevents this failure mode.
Inspection test
If authority cannot be traced, validated, and constrained across system boundaries, GAG is not satisfied.
MGAG (Multi-Layered Global Architectural Governance)
Definition
MGAG extends GAG across multiple technical, organisational, and regulatory layers.
What it does
MGAG governs:
- authority transfer across layers
- escalation pathways between layers
- refusal propagation across systems
- audit survivability across boundaries (see the refusal-propagation sketch after this list)
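The sketch below illustrates one of these concerns, refusal propagation: a halt raised at any layer must pass through every layer above it rather than being absorbed. The layers and checks are hypothetical.

    # Illustrative sketch of MGAG-style refusal propagation across layers.

    class Halt(Exception):
        """A refusal that no layer is permitted to swallow."""

    def regulatory_layer(action: str) -> str:
        if action == "transfer-funds-offshore":
            raise Halt("regulatory layer: action inadmissible")
        return action

    def organisational_layer(action: str) -> str:
        # This layer may add its own constraints, but must not absorb halts.
        return regulatory_layer(action)

    def technical_layer(action: str) -> str:
        try:
            return organisational_layer(action)
        except Halt:
            raise  # absorbing the halt here would break layered governance

    try:
        technical_layer("transfer-funds-offshore")
    except Halt as halt:
        print(f"refused across all layers: {halt}")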
Where it applies
- multi-agent systems
- multi-vendor architectures
- orchestrated AI workflows
- regulated environments requiring layered accountability
Why it matters
In layered systems, governance often fails at the seams. MGAG addresses these failure points directly.
Inspection test
If authority transitions between layers cannot be validated or audited, MGAG is not satisfied.
OTANIS
Operational Trust and Authority Normative Integrated System
Definition
OTANIS is an execution-time architectural governance system that integrates ISDAIRE, ARETABA, GAG, and MGAG into a single enforceable structure for governing agentic AI at the point of irreversibility.
It represents the full Architectural Governance (AG) model, where ex-ante definition, execution-time enforcement, and cross-system authority preservation operate as a unified system.
What it integrates
OTANIS is not a standalone model. It is the integration layer that makes the other models operational:
- ISDAIRE defines the ex-ante conditions required for governance to exist.
- ARETABA provides the execution-time control surface required for governance to operate.
- GAG ensures authority remains valid across composed systems.
- MGAG ensures governance survives across multiple technical, organisational, and regulatory layers.
OTANIS binds these layers together and enforces them at execution time, where actions become irreversible.
Core principles
- Governance executes at the point of irreversibility, not before and not after.
- Authority is explicit, bounded, and independently verifiable.
- Admissibility is defined ex-ante and enforced at execution.
- Governance is non-bypassable and fails closed.
- Accountability is grounded in evidence produced at decision time.
Key distinction: audit logs vs authority evidence
OTANIS draws a strict distinction between:
- Audit logs. Records of what happened.
- Authority evidence. Proof that an action was permitted to happen.
Audit logs support investigation. Authority evidence establishes legitimacy.
A system may have complete logs and still act without valid authority. OTANIS addresses this gap.
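As an illustration of the distinction, the sketch below contrasts a post-hoc log entry with evidence produced before commitment. The HMAC signing is a stand-in for whatever attestation mechanism a real deployment would use; all names are hypothetical.

    # Illustrative contrast: audit log entry vs authority evidence.
    import hashlib
    import hmac
    import json
    import time

    SIGNING_KEY = b"hypothetical-governance-key"  # stand-in attestation key

    def authority_evidence(actor: str, action: str, grant_id: str) -> dict:
        """Produced at decision time, before the action commits; binds the
        action to a verifiable grant."""
        claim = {"actor": actor, "action": action, "grant": grant_id,
                 "decided_at": time.time()}
        payload = json.dumps(claim, sort_keys=True).encode()
        claim["signature"] = hmac.new(SIGNING_KEY, payload,
                                      hashlib.sha256).hexdigest()
        return claim

    def audit_log(event: str) -> dict:
        """Recorded after the fact; proves occurrence, not permission."""
        return {"event": event, "logged_at": time.time()}

    evidence = authority_evidence("treasury-officer", "settle-invoice", "G-42")
    record = audit_log("invoice settled")
    # The log alone cannot show the action was permitted; the evidence can
    # be re-verified against the grant and the signing authority.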
Why OTANIS is the complete AG model
Individually, ISDAIRE, ARETABA, GAG, and MGAG define necessary components of governance.
OTANIS is what makes governance executable.
It ensures that:
- definitions established ex-ante are enforced at runtime,
- authority remains valid across system boundaries,
- and actions cannot commit unless all governance conditions hold at the point of irreversibility.
Without OTANIS, these models remain structurally incomplete.
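A compressed way to see the integration: commitment is a conjunction. The sketch below, with hypothetical predicates standing in for the four models, shows that a single failed layer is enough to block an irreversible action.

    # Illustrative conjunction: an action commits only if every layer's
    # condition holds at the point of irreversibility.
    from typing import Callable

    def commit(action: Callable[[], None],
               isdaire_defined: bool,    # ex-ante conditions exist
               aretaba_enforced: bool,   # control surface operates now
               gag_valid: bool,          # authority valid across composition
               mgag_valid: bool) -> bool:  # authority survives all layers
        if not (isdaire_defined and aretaba_enforced
                and gag_valid and mgag_valid):
            return False  # fail closed: the action never becomes irreversible
        action()
        return True

    committed = commit(lambda: print("irreversible action executed"),
                       isdaire_defined=True, aretaba_enforced=True,
                       gag_valid=True, mgag_valid=False)
    assert committed is False  # one failed layer blocks commitment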
Suitability
OTANIS is designed for systems where actions have real-world consequences, including:
- agentic AI systems
- high-autonomy decision systems
- multi-agent orchestration environments
- regulated or high-consequence operational systems
See also: OTANIS suitability by system type.
Why OTANIS scales with advanced systems
OTANIS governs execution, not cognition.
- Authority is external to the model and cannot drift with optimisation.
- Execution is gated at defined irreversibility boundaries.
- Authority is time-bound and must be revalidated.
- Governance remains auditable without requiring access to internal reasoning.
This allows OTANIS to remain applicable as systems increase in autonomy and complexity.
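A minimal sketch of time-bound authority, assuming a hypothetical token and an external revalidation hook, is shown below. The key property is that nothing inside the model can extend its own authority.

    # Illustrative sketch of time-bound, externally revalidated authority.
    import time

    class AuthorityToken:
        def __init__(self, source: str, ttl_seconds: float):
            self.source = source
            self.expires_at = time.time() + ttl_seconds

        def valid(self) -> bool:
            return time.time() < self.expires_at

        def revalidate(self, external_approval: bool,
                       ttl_seconds: float) -> None:
            """Only the external authority source can renew the token."""
            if not external_approval:
                raise PermissionError("revalidation refused by authority source")
            self.expires_at = time.time() + ttl_seconds

    token = AuthorityToken(source="policy:ops-runbook-7", ttl_seconds=0.01)
    time.sleep(0.02)
    assert not token.valid()  # authority lapsed; execution must halt
    token.revalidate(external_approval=True, ttl_seconds=60)
    assert token.valid()      # renewed only from outside the model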
Auditability and authority evidence
Auditability alone is insufficient for governance.
A system may produce complete logs and still perform unauthorised actions.
Governance requires:
- proof of authority at the moment of execution
- validation that admissibility conditions were satisfied
- evidence that no constraints were violated
Authority must be demonstrable, not assumed.
Importance of ex-ante definitions
Governance depends on definitions made before execution.
If admissibility, authority, and scope are not defined ex-ante:
- runtime enforcement becomes inconsistent
- escalation cannot be triggered correctly
- refusal conditions are undefined
ISDAIRE provides these definitions. OTANIS enforces them at execution.
Without ex-ante definition, execution-time governance cannot function.
How these models are used
These models are applied through independent advisory and review.
Typical applications include:
- architectural governance reviews
- system pressure testing under failure conditions
- authority and admissibility validation
- investor due diligence on AI systems
- reviews of papers, books, and proposed architectures
- founder and developer system validation prior to deployment
No implementation or coding services are provided.
What this is and is not
This work does not:
- certify systems as safe
- replace regulatory approval
- implement production systems
It provides:
- independent architectural assessment
- execution-time authority analysis
- governance stress-testing
- inspection-ready outputs for audit, regulatory, and insurer review
Who this is for
- CIOs and CTOs responsible for system accountability
- AI architects designing agentic or high-consequence systems
- regulators, auditors, and insurers
- founders and investors evaluating AI system risk and legitimacy
- organisations deploying systems with irreversible effects
For scoped engagements, see Services.
Get in Touch