Pre-print · Submitted to CHI 2027 · ARIA™ Research Series
⚠ Pre-print — patent pending — do not cite before provisional filing date

Intent Fidelity: A Framework and Architecture for Confidence-Scored Semantic Intent Mapping, Peripheral Prediction, and Response Verification in AI-Mediated Natural Language Interfaces

Introducing the ARIA™ Suite: SCIM, DCT, the Misalignment Detector, the Prosody Layer, and the Predictive Intent Echo
Francois Flechter
Business Innovation Solutions WLL · Bahrain · businessinnovationsolutions.com
April 2026 · Correspondence: aria@businessinnovationsolutions.com
Patent portfolio: 18 Master Patents filed at USPTO (MP1–MP18, ~3,493 claims). Five additional provisional applications pending covering the specific systems described in this paper. Business Innovation Solutions WLL, Bahrain (Company No. 148854-1).
Abstract
Natural language AI interfaces have focused predominantly on intent understanding — determining what a user means. We propose a more demanding paradigm: intent fidelity, which requires not only understanding intent but continuously scoring confidence in each semantic component, identifying absent or uncertain elements, targeting clarification optimally, verifying that AI responses address confirmed intent, and delivering this full process with sub-500ms perceived latency. We describe the ARIA™ suite — seven components that together implement intent fidelity end-to-end. The Semantic Confidence Intent Mapping engine (SCIM™) decomposes natural language into confidence-scored semantic nodes using a three-source weighted formula. The Dynamic Contextual Thesaurus (DCT™) builds a logical — not statistical — relationship map between intent concepts. The Clarification Engine selects the single highest-value clarification question per interaction round. The Intent Brief Generator produces a structured AI prompt from confirmed nodes. The Misalignment Detector™ verifies post-hoc that the AI response addressed the user's confirmed intent. The Prosody Layer encodes acoustic features of voice input as semantic confidence modifiers. The Predictive Intent Echo™ (PIE) delivers peripheral semantic confirmation at 300ms — before the user finishes typing — using a dual-debounce parallel pipeline grounded in dual-process cognitive theory. Together, these components constitute the first complete technical architecture for intent fidelity in AI interfaces. All components are filed as patent pending under the ARIA™ portfolio (Business Innovation Solutions WLL, Bahrain).
Keywords: intent fidelity, semantic confidence mapping, dynamic contextual thesaurus, misalignment detection, predictive text, prosody, cognitive load, working memory, AI interfaces, natural language processing, human-computer interaction
1. Introduction — The Intent Fidelity Paradigm

The dominant paradigm in natural language AI interface research is intent understanding: given a user utterance, determine the most probable interpretation and generate a response. This paradigm is well-served by large language models, which achieve human-level performance on many intent classification benchmarks [1]. Yet practitioners and enterprise users regularly report that AI systems fail to address their actual needs, even when the system appears to "understand" the surface request.

We argue that intent understanding is necessary but insufficient. What is missing is intent fidelity — a property we define as follows:

Definition 1: Intent Fidelity
An AI interface achieves intent fidelity when it (1) decomposes the user's expressed intent into discrete semantic components, (2) assigns calibrated confidence scores to each component, (3) identifies and resolves low-confidence components before generating a response, (4) generates a structured prompt that preserves all confirmed components with their context, and (5) verifies post-hoc that the generated response addresses each confirmed component.

Intent fidelity is a stronger property than intent understanding. A system can understand the topic of a request (investor pitch deck) while having low-fidelity knowledge of its parameters (for which round? how many slides? which audience? in which market?). The gap between topic-level understanding and parameter-level fidelity is precisely where AI systems produce plausible but imprecise responses — a phenomenon we term the fidelity gap.

This paper makes the following contributions:

Contribution C1 — Intent Fidelity Framework

A formal definition of intent fidelity as a five-property requirement for AI interfaces, distinct from intent understanding, and a complete technical architecture implementing it.

Contribution C2 — SCIM™: Three-Source Confidence Model

A novel formula for scoring semantic intent node confidence using three weighted sources: local text evidence, session history, and user profile, with empirically calibrated weights.

Contribution C3 — DCT™: Logical Relationship Taxonomy

A seven-type taxonomy of logical relationships between intent concepts (IMPLICATIVE, PREREQUISITE, INSTANTIATION, CONTRADICTION, DERIVATION, TEMPORAL, SCOPING) derived from formal logic rather than statistical co-occurrence, enabling intent-level reasoning rather than semantic similarity.

Contribution C4 — Misalignment Detector™

The first published post-hoc intent verification system for AI interfaces, which checks each confirmed intent node against the AI response and produces a calibrated match score, verdict, and re-query suggestion.

Contribution C5 — Prosody Layer: Acoustic Confidence Signals

A method for encoding acoustic prosodic features (pause duration, energy ratio, pitch variation) as semantic confidence deltas on SCIM intent nodes, enabling voice interfaces to represent epistemic uncertainty from vocal hesitation.

Contribution C6 — Predictive Intent Echo™ (PIE)

A dual-debounce parallel pipeline that delivers peripheral semantic intent confirmation at 300ms — before input completion — using recognition memory rather than working memory, achieving 60–70% perceived latency reduction without modifying the user's input field.

2. Related Work
2.1 Intent Understanding in NLP

The NLP literature on intent understanding is extensive, tracing from rule-based systems [2] through statistical classifiers [3] to current transformer-based approaches [4]. The dominant task framing is intent classification: assign the user utterance to one of N predefined intent categories. This framing is insufficient for open-domain AI interfaces where intents are not predefined and parameters vary continuously. Slot-filling approaches [5] address parameters but assume a fixed schema. SCIM addresses the open-domain case with a schema-free confidence-scored decomposition.

2.2 Dialogue State Tracking

Dialogue state tracking (DST) maintains a model of the conversation state across turns [6]. DST systems typically track slot values within predefined domains (restaurant booking, hotel reservation). The ARIA approach differs in three ways: (1) it operates across arbitrary domains, (2) it scores confidence at the node level rather than tracking categorical states, and (3) it incorporates a user profile source of confidence that persists across sessions rather than being reset per dialogue.

2.3 Clarification in Dialogue Systems

Clarification generation has been studied in question answering [7] and dialogue systems [8]. The standard approach generates clarification questions based on ambiguous spans. ARIA's Clarification Engine differs in targeting the lowest-confidence node on the critical path — the node whose resolution produces the highest expected confidence gain for the root intent — and in restricting clarification to one question per round, consistent with user experience research showing that multi-question clarification dialogs significantly reduce task completion rates [9].

2.4 Response Verification

Post-hoc response verification has been studied in the context of factual accuracy [10] and hallucination detection [11]. The Misalignment Detector addresses a distinct problem: not whether the response is factually correct, but whether it addresses the user's confirmed intent components. A response can be entirely factually accurate while failing to address three of five confirmed intent nodes. No prior system performs this intent-level verification.

2.5 Predictive Text and Perceived Latency

Arnold et al.'s CHI 2020 study [12] demonstrated that in-field autocomplete reduces text originality. MakeAIHQ (2026) [13] and Gladia's engineering analysis [14] establish perceived latency thresholds. Speculative decoding [15] addresses server-side inference latency. PIE addresses a previously unstudied problem: perceived latency of semantic intent feedback during composition, via a parallel out-of-field display.

3. System Architecture — The ARIA™ Suite
Figure 1: ARIA™ Full Pipeline
Voice input ──► Transcription (Whisper) ──► Prosody Extraction ──► SCIM Analyze
Text input  ──────────────────────────────────────────────────────► SCIM Analyze
                                  │
                                  ▼
                 SCIM Node Graph + DCT Relationships
                                  │
                           Cintent < 0.70?
              ├── YES ──► Clarification Engine ──► User response
              └── NO  ──► Brief Generator ──► AI Model ──► Misalignment Detector
The complete ARIA™ pipeline. Voice and text inputs converge at SCIM. The pipeline branches at the confidence threshold: below 0.70, the Clarification Engine fires; above, the Brief Generator produces a structured AI prompt. The Misalignment Detector closes the loop post-response.
3.1 SCIM™ — Semantic Confidence Intent Mapping

SCIM is the core engine of the ARIA suite. It receives natural language input (text or voice transcript) and produces a structured intent graph: a set of typed semantic nodes, each with a confidence score, plus a set of typed logical relationships between nodes.

3.1.1 Node Taxonomy

SCIM decomposes intent into five node types:

Type       | Definition                                              | Example
ACTION     | The primary verb or task the user intends to accomplish | "create a pitch deck"
ENTITY     | The primary object the action operates on               | "investors"
CONSTRAINT | Limiting conditions on the action or entity             | "10 slides", "next week"
CONTEXT    | Background conditions that frame the intent             | "for our AI startup"
ABSENT     | Nodes expected for this intent type but not expressed   | "funding stage" (inferred missing)

ABSENT nodes are particularly important: they represent information the system infers should be present based on the intent type, but which the user has not expressed. A pitch deck request without a funding stage, audience type, or market focus is systematically underspecified. SCIM's ability to identify ABSENT nodes — rather than simply scoring what was said — is the key capability that enables targeted clarification.
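As a concrete illustration, a SCIM node graph for the pitch-deck example might be represented as follows. This is a minimal sketch: the field and label names are illustrative, not the shipped schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IntentNode:
    label: str               # human-readable concept name
    node_type: str           # ACTION | ENTITY | CONSTRAINT | CONTEXT | ABSENT
    value: Optional[str]     # expressed value, or None for ABSENT nodes
    confidence: float        # C(n) in [0, 1]

# The pitch-deck request decomposed into typed, confidence-scored nodes.
graph = [
    IntentNode("task", "ACTION", "create a pitch deck", 0.93),
    IntentNode("audience", "ENTITY", "investors", 0.88),
    IntentNode("length", "CONSTRAINT", "10 slides", 0.91),
    IntentNode("company", "CONTEXT", "AI startup", 0.84),
    IntentNode("funding stage", "ABSENT", None, 0.20),  # inferred missing
]

# ABSENT nodes are exactly the underspecified gaps clarification should target.
absent = [n.label for n in graph if n.node_type == "ABSENT"]
```

The ABSENT node carries no value but does carry a (low) confidence, so it participates in the same threshold logic as expressed nodes.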

3.1.2 The Three-Source Confidence Model

Each node is assigned a confidence score C(n) ∈ [0,1] computed from three weighted sources:

C(n) = α · Clocal(n) + β · Csession(n) + γ · Cprofile(n)

Where:

  • Clocal(n) is the confidence derived from the current input text alone — lexical specificity, syntactic completeness, and semantic unambiguity of the expression
  • Csession(n) is the confidence contribution from the current session history — prior turns that confirm, elaborate, or constrain this node
  • Cprofile(n) is the confidence contribution from the persistent user profile — domain expertise, previous intent patterns, and known preferences that disambiguate the current input
  • α=0.40, β=0.35, γ=0.25 are empirically calibrated weights reflecting the diminishing reliability of more distal evidence sources

The three-source model has a critical practical implication: the same ambiguous input ("I need a pitch") produces different confidence profiles for a user whose profile identifies them as a Series B fintech founder (Cprofile disambiguates to investor pitch, raising overall confidence) vs. a music teacher (Cprofile disambiguates to pitch correction, different intent entirely). This personalisation is not a heuristic — it is a formally weighted contribution to the confidence formula.
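The weighted combination can be sketched directly. The weights below are the paper's calibrated values; the per-source scores fed in are invented inputs, not the output of a real scoring model.

```python
# Three-source confidence formula (Section 3.1.2), clamped to [0, 1].
ALPHA, BETA, GAMMA = 0.40, 0.35, 0.25  # local, session, profile weights

def node_confidence(c_local: float, c_session: float, c_profile: float) -> float:
    """C(n) = alpha*C_local(n) + beta*C_session(n) + gamma*C_profile(n)."""
    c = ALPHA * c_local + BETA * c_session + GAMMA * c_profile
    return max(0.0, min(1.0, c))

# Same ambiguous input ("I need a pitch"), two users: the profile
# source disambiguates and shifts the overall confidence.
founder = node_confidence(c_local=0.45, c_session=0.50, c_profile=0.95)
teacher = node_confidence(c_local=0.45, c_session=0.50, c_profile=0.10)
```

Because γ < α, profile evidence can tilt an ambiguous node but never outvote strong local evidence on its own.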

3.1.3 Node Locking and the Confidence Threshold

A node is "locked" when C(n) ≥ θ, where θ=0.85 is the default confidence threshold (configurable per deployment). Locked nodes are excluded from clarification targeting and contribute with full weight to the intent brief. The overall intent confidence Cintent is the mean node confidence weighted by node criticality (see Section 3.1.4). The system action is determined by Cintent:

Cintent ≥ 0.90 → LOCK (ready without clarification)
0.70 ≤ Cintent < 0.90 → PROCEED (acceptable fidelity, clarification optional)
Cintent < 0.70 → CLARIFY (clarification required)
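The threshold-to-action mapping above is a simple piecewise decision; a sketch with the paper's default thresholds (the function name is ours):

```python
# Map overall intent confidence to the system action (Section 3.1.3).
def intent_action(c_intent: float) -> str:
    if c_intent >= 0.90:
        return "LOCK"      # ready without clarification
    if c_intent >= 0.70:
        return "PROCEED"   # acceptable fidelity, clarification optional
    return "CLARIFY"       # clarification required
```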
3.1.4 Critical Path

Not all nodes are equally important to the root intent. SCIM identifies a critical path — the subset of nodes whose resolution is necessary and sufficient to generate a high-fidelity brief. Nodes on the critical path receive a criticality weight wc > 1 in the weighted intent confidence calculation. The critical path is derived from the DCT relationship graph (see Section 3.2): nodes that are PREREQUISITE to the root ACTION are critical; CONTEXT nodes that are SCOPING the primary ENTITY are critical; isolated CONTEXT nodes are not.

3.2 DCT™ — Dynamic Contextual Thesaurus

The DCT builds a logical relationship map between the intent concepts identified by SCIM. It is explicitly not a semantic similarity measure. Two concepts can be semantically similar (cosine distance ≈ 0) without having any useful logical relationship for intent processing. "Investor" and "venture capitalist" are semantically similar; their relationship type (INSTANTIATION: "VC" is an instance of "investor") is what matters for intent fidelity — knowing the relationship type tells the system whether to unify the concepts or maintain them as distinct constraints.

3.2.1 The Seven Relationship Types
Type          | Definition                              | Intent Implication
IMPLICATIVE   | A entails B in this context             | "Series A pitch" implies "equity financing discussion"
PREREQUISITE  | A must be resolved before B             | "funding stage" must be resolved before "slide count"
INSTANTIATION | A is a specific instance of B           | "Sequoia" is an instance of "investor"
CONTRADICTION | A and B cannot both be true             | "bootstrapped" contradicts "seeking VC funding"
DERIVATION    | B can be inferred from A                | "next week" derives "high urgency"
TEMPORAL      | A and B have a time-ordering constraint | "market analysis" temporally precedes "financial projections"
SCOPING       | A limits the valid domain of B          | "European market" scopes "regulatory requirements"

The DCT relationship map serves two functions in the pipeline: it informs critical path identification (PREREQUISITE relationships define the critical path), and it surfaces latent concepts — terms that are logically implied by the expressed intent but not mentioned. Latent concepts are displayed in the intent graph UI as disambiguation aids and fed back into the SCIM analysis as implicit CONTEXT nodes.
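The link from relationship types to the critical path can be sketched as a filter over typed edges. The edge data below is illustrative, not DCT output.

```python
from enum import Enum

class Rel(Enum):
    IMPLICATIVE = 1
    PREREQUISITE = 2
    INSTANTIATION = 3
    CONTRADICTION = 4
    DERIVATION = 5
    TEMPORAL = 6
    SCOPING = 7

# Edges as (source, relation, target) triples.
edges = [
    ("funding stage", Rel.PREREQUISITE, "create a pitch deck"),
    ("slide count", Rel.PREREQUISITE, "create a pitch deck"),
    ("European market", Rel.SCOPING, "regulatory requirements"),
    ("next week", Rel.DERIVATION, "high urgency"),
]

def critical_path(edges, root_action):
    """Per Section 3.1.4: nodes PREREQUISITE to the root ACTION are critical."""
    return {src for src, rel, dst in edges
            if rel is Rel.PREREQUISITE and dst == root_action}
```

A fuller version would also admit SCOPING edges on the primary ENTITY, per Section 3.1.4; this sketch shows only the PREREQUISITE rule.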

3.2.2 Knowledge Base Integration

The DCT's relationship inference is conditioned on the user's knowledge base (documents, domain profile, session history). A relationship that holds in one domain (IMPLICATIVE: "pitch" → "investor meeting" in a startup context) may not hold in another (IMPLICATIVE: "pitch" → "musical note" in a music education context). The DCT uses the knowledge base to select domain-appropriate relationship activations, producing what we term contextually grounded rather than statistically averaged relationship maps.

3.3 The Clarification Engine

When Cintent < 0.70, the Clarification Engine selects and generates a single clarification question. The selection algorithm prioritises:

  1. Nodes on the critical path that are not locked
  2. Among those, the node with the lowest C(n)
  3. Among ties, ABSENT nodes over CONSTRAINT nodes (absent information has higher expected value than uncertain constraints)

The generated question is constrained to present pre-computed options (direct answers, not sub-questions) that each carry an estimated confidence gain. This design reflects research showing that option-based clarification dialogs achieve 3× higher completion rates than open-ended clarification questions in task-oriented interfaces [9]. Users may select multiple options or provide a freetext answer; freetext answers receive a fixed high confidence estimate (0.92) as they represent the user's most precise expression of the node's value.
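The three-step selection priority can be sketched as a single ordered-key minimum. The node dictionaries are illustrative inputs; real criticality flags come from the DCT-derived critical path.

```python
THETA = 0.85  # locking threshold (Section 3.1.3)

def pick_clarification_target(nodes):
    """Select the one node to clarify this round (Section 3.3):
    critical-path unlocked nodes only, lowest C(n) first,
    ABSENT breaking ties over CONSTRAINT."""
    candidates = [n for n in nodes
                  if n["critical"] and n["confidence"] < THETA]
    if not candidates:
        return None  # nothing on the critical path needs clarifying
    tie_rank = {"ABSENT": 0, "CONSTRAINT": 1}  # ABSENT sorts first on ties
    return min(candidates,
               key=lambda n: (n["confidence"], tie_rank.get(n["type"], 2)))

nodes = [
    {"label": "task", "type": "ACTION", "confidence": 0.93, "critical": True},
    {"label": "funding stage", "type": "ABSENT", "confidence": 0.20, "critical": True},
    {"label": "slide count", "type": "CONSTRAINT", "confidence": 0.20, "critical": True},
]
target = pick_clarification_target(nodes)  # "funding stage" wins the 0.20 tie
```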

A critical design constraint is the one-question-per-round limit. After each Green zone interaction, the system re-analyses the full intent graph with the updated node confidence and determines whether a further clarification round is warranted. In ARIA mode (equivalent to the "Research" mode in the demo), up to 3 rounds are permitted before forcing a PROCEED state, preventing clarification fatigue.

3.4 The Intent Brief Generator

The Brief Generator receives the confirmed node graph (all nodes above the locking threshold) and produces a structured natural language brief for submission to the AI model. The brief is not a reformulation of the user's original input — it is a reconstruction from the locked nodes, which may include profile-derived context that the user never explicitly stated. This produces a qualitatively different AI prompt: one that specifies confirmed parameters rather than expressing the user's potentially ambiguous intent.

The brief generator also produces a machine-readable intent record (JSON) containing node values, confidence scores, and relationship types. This record is passed to the Misalignment Detector after the AI response is received.

3.5 The Misalignment Detector™

The Misalignment Detector is the verification stage of the intent fidelity pipeline. It receives the confirmed node graph and the AI response text, and determines whether each locked node was addressed by the response.

3.5.1 Detection Algorithm

For each locked node (label, value, type), the Detector checks the AI response for:

  • Direct address: the node's value appears explicitly or via INSTANTIATION
  • Structural coverage: the section of the response corresponding to this node type contains relevant content
  • Constraint satisfaction: CONSTRAINT nodes are checked for numeric or categorical satisfaction

Each node receives a binary addressed/unaddressed verdict and a severity rating (HIGH for ACTION and critical path nodes, MEDIUM for ENTITY nodes, LOW for CONTEXT nodes). The overall match score is a criticality-weighted average of addressed nodes:

Match(response) = Σ wc(n) · addressed(n) / Σ wc(n)
3.5.2 Verdict and Recommendation

Three verdict levels are defined: ALIGNED (match ≥ 0.85), PARTIAL (0.60–0.84), MISALIGNED (< 0.60). Each verdict carries a recommendation: ACCEPT, RE_QUERY (with a specific suggested follow-up), or ESCALATE (the intent gap is too large for a follow-up; re-clarify from the beginning). The Misalignment Detector is the first published system to provide intent-level — as opposed to factual-level — post-hoc verification of AI responses.
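The match score and verdict logic can be sketched as follows. The addressed flags and weights are illustrative inputs; the hard part this sketch omits is deciding whether a node was actually addressed by the response text.

```python
def match_score(nodes):
    """Criticality-weighted share of addressed nodes (Section 3.5.1):
    Match = sum(w_c(n) * addressed(n)) / sum(w_c(n))."""
    total = sum(n["weight"] for n in nodes)
    hit = sum(n["weight"] for n in nodes if n["addressed"])
    return hit / total if total else 0.0

def verdict(match: float) -> str:
    if match >= 0.85:
        return "ALIGNED"     # recommendation: ACCEPT
    if match >= 0.60:
        return "PARTIAL"     # recommendation: RE_QUERY
    return "MISALIGNED"      # recommendation: ESCALATE

nodes = [
    {"label": "task", "weight": 2.0, "addressed": True},  # critical path: w_c > 1
    {"label": "audience", "weight": 1.0, "addressed": True},
    {"label": "slide count", "weight": 1.0, "addressed": False},
]
m = match_score(nodes)  # 3.0 / 4.0 = 0.75, a PARTIAL verdict
```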

4. The Prosody Layer — Voice Intent with Acoustic Confidence

Voice-based intent expression carries information beyond the transcribed words. Acoustic prosodic features — pause duration before a word, vocal energy (amplitude), pitch variation, speech rate — correlate with the speaker's epistemic state: hesitation, emphasis, uncertainty, and conviction [16]. The ARIA Prosody Layer encodes these features as semantic confidence deltas on SCIM intent nodes.

4.1 Feature Extraction

The Prosody Layer receives word-level timestamps from the Whisper ASR output (start_ms, end_ms, probability per word) and computes:

  • pause_before_ms: silence duration before the word (preceding word end_ms to this word start_ms)
  • energy_ratio: the word's Whisper probability relative to the session baseline probability (proxy for vocal emphasis)
  • confidence_delta: the net adjustment applied to SCIM node confidence for this word
4.2 The Confidence Delta Formula
δ(word) = 0
δ(word) −= 0.25 if pause_before_ms > 400 (hesitation)
δ(word) += 0.20 if energy_ratio > 1.5 (emphasis)
δ(word) += 0.10 if pause_before_ms < 50 AND energy_ratio > 1.3 (confident assertion)
Clocal(n) ← Clocal(n) + 0.40 · Σ δ(w) for w ∈ words(n)

The 0.40 scaling factor reflects the α weight of Clocal in the three-source confidence formula, ensuring that prosodic adjustments operate within the correct component of the confidence model. A word spoken after a 400ms pause receives a −0.25 delta on the node it belongs to, reflecting that the speaker hesitated before stating that concept — a reliable indicator of uncertainty in the speaker's mind about that concept's value or relevance.
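The delta rules and the scaled adjustment can be sketched directly from the formula above. Thresholds are the paper's; extracting the features from real audio is out of scope here.

```python
def prosody_delta(pause_before_ms: float, energy_ratio: float) -> float:
    """Per-word confidence delta from prosodic features (Section 4.2)."""
    delta = 0.0
    if pause_before_ms > 400:                        # hesitation
        delta -= 0.25
    if energy_ratio > 1.5:                           # emphasis
        delta += 0.20
    if pause_before_ms < 50 and energy_ratio > 1.3:  # confident assertion
        delta += 0.10
    return delta

def adjust_c_local(c_local: float, word_deltas) -> float:
    """C_local(n) <- C_local(n) + 0.40 * sum(delta(w)), clamped to [0, 1]."""
    c = c_local + 0.40 * sum(word_deltas)
    return max(0.0, min(1.0, c))

# "...a job in [520ms pause] Madrid..." -> hesitation lowers the node.
d = prosody_delta(pause_before_ms=520, energy_ratio=1.0)  # -0.25
c = adjust_c_local(0.80, [d])                             # 0.80 - 0.10 = 0.70
```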

4.3 Practical Significance

Consider a user saying: "I'm looking for... a job in Madrid... that matches my experience... minimum three thousand euros per month." The three-pause pattern produces negative deltas on "Madrid," "experience," and "minimum three thousand euros" — exactly the nodes where the speaker expressed uncertainty. SCIM correctly lowers confidence on these nodes and targets them for clarification, matching what a skilled human interviewer would do.

This represents a qualitative advance over existing voice intent systems, which treat the transcript as semantically equivalent to typed text, discarding all prosodic information. The Prosody Layer is the first published system to use word-level acoustic features as continuous modifiers to semantic intent confidence scores.

5. The Predictive Intent Echo™ (PIE)
5.1 Motivation

The ARIA pipeline as described above delivers its first semantic feedback only after the user finishes composing — at approximately 900–1500ms after the last keystroke. For complex professional intents (multi-clause requests, technical specifications, procurement statements), this creates a perceptible dead zone. The user composes, stops, and waits. The wait is cognitively disruptive because it interrupts the sense of collaboration that characterises effective human-AI interaction.

PIE solves this by decoupling the semantic feedback moment from the input completion moment. The key insight is that an imperfect prediction about the user's intent, displayed peripherally, is more cognitively useful than silence — provided it does not interrupt the composition process.

5.2 Dual-Debounce Parallel Pipeline
Figure 2: PIE Timing Architecture
t=0ms     User begins typing
t=300ms   [PIE] Intent Completion Engine fires on partial input
t=400ms   [PIE] ICE returns predicted completion (~100ms)
t=400ms   [PIE] SCIM analyze fires on prediction (parallel)
t=500ms   [PIE] Peripheral echo displayed in PIE Zone (dashed, italic)
t=900ms   [ARIA] Full SCIM analyze fires on actual input
t=1050ms  [ARIA] Blue zone confirmed (solid, bold) — PIE zone hides
PIE delivers first semantic feedback at ~500ms. Full confirmation arrives at ~1050ms. The input field is never modified at any point.
5.3 Intent Completion Engine (ICE)

The ICE is a lightweight fast-inference language model (targeting <200ms response) deployed with a minimal prompt: complete this partial statement into its most probable full sentence. The output is capped at 80 tokens — the ICE is not generating a response, it is completing a sentence. The raw ICE completion is never displayed; only the SCIM-processed interpretation is shown in the PIE Zone.

5.4 The PIE Zone Display Design

The PIE Zone is a distinct UI area positioned above the chat history, with dashed-border styling and 70% opacity to signal its predictive nature. It displays:

  • The predicted root intent as a brief plain-language statement
  • Confirmed nodes (C(n) ≥ 0.80) as compact inline descriptors
  • Uncertain nodes flagged with ⚠ indicators
  • A predicted overall confidence percentage

When the full 900ms analysis completes and the confirmed Blue zone appears, the PIE Zone hides automatically. The transition signals the shift from predicted to confirmed state.

5.5 User Control

PIE is a user-selectable option (default: ON for returning users, configurable). This design reflects research showing individual differences in sensitivity to peripheral displays during composition tasks [12]. Users who prefer to compose without interruption can disable PIE; the remainder of the ARIA pipeline is unaffected.

6. Cognitive Science Basis
6.1 Dual-Process Theory and Intent Fidelity

Kahneman's dual-process framework [17] provides a unifying cognitive account of ARIA's design. System 2 (slow, deliberate, resource-intensive) is engaged by: composing complex intent statements, evaluating autocomplete suggestions inserted into the input field, reading structured AI responses. System 1 (fast, automatic, low-cost) is engaged by: recognising peripheral confirmation signals, noticing confidence indicators, processing familiar structured patterns.

ARIA's pipeline is designed to minimise System 2 demand at each stage. The Blue zone's structured card format (root intent / confirmed nodes / uncertain nodes) is consistent across all interactions, making it recognisable rather than evaluative. The PIE Zone's distinct styling reduces the probability of confusion with the confirmed state. The Misalignment Detector's ALIGNED/PARTIAL/MISALIGNED verdict requires only recognition, not analysis. The clarification options require selection, not recall.

6.2 The Recognition/Recall Asymmetry Applied to ARIA

Tulving's recognition/recall asymmetry [18] — that recognition is significantly less cognitively demanding than recall — motivates several specific design decisions. The Blue zone presents ARIA's interpretation for recognition ("does this match what I meant?") not recall ("what did I mean?"). The clarification options are pre-computed answer candidates rather than open fields. The Misalignment Detector verdict is a recognition task ("did the AI address my confirmed intent?") not an evaluation task. These design choices collectively minimise the cognitive cost of the intent fidelity pipeline.

6.3 Working Memory Protection

Hayes and Chenoweth (2006) [19] established that transcription and editing tasks compete for the same working memory resources as higher-order composition planning. The input field modification in standard autocomplete systems creates precisely this competition: the user must interrupt composition to evaluate and accept/reject the suggestion. ARIA's core design principle — that the input field is never modified by any system component — directly protects the working memory capacity allocated to composition.

7. Experimental Hypotheses and Proposed Protocol
7.1 Hypotheses

H1 (Intent fidelity): AI responses generated via the ARIA pipeline (SCIM + Clarification + Brief) will score significantly higher on a blind expert intent fidelity rating scale than responses generated from raw user input, for the same set of complex intent statements.

H2 (Clarification efficiency): The single-question-per-round clarification design will achieve equivalent intent node coverage to unconstrained multi-question clarification while producing significantly higher task completion rates and lower perceived effort (NASA-TLX).

H3 (Misalignment detection accuracy): The Misalignment Detector will show significant agreement (κ > 0.70) with expert human judges on intent fidelity verdicts across a benchmark of 200 intent-response pairs.

H4 (Prosody confidence validity): Prosodic confidence deltas derived from pause duration and energy ratio will show significant positive correlation with post-hoc speaker self-reports of certainty on a word-by-word basis.

H5 (PIE perceived latency): Users in the PIE condition will report significantly lower perceived processing latency than users in the no-PIE condition, controlling for objective end-to-end time.

H6 (PIE cognitive load): PIE will not significantly increase NASA-TLX working memory subscale scores, because the peripheral display design minimises System 2 engagement.

7.2 Proposed Protocol
Variable       | Specification
Design (H1)    | Within-subjects: same 10 intents, ARIA pipeline vs. raw prompt vs. baseline LLM. Expert panel (N=5) rates intent fidelity blind.
Design (H2–H6) | 2×2×2 between-subjects: ARIA on/off × PIE on/off × Background context on/off. N=120 (15 per cell).
Participants   | Knowledge workers, self-reported AI interface users, recruited via professional networks
Tasks          | 8 professional intent statements across 4 domains (HR, finance, research, product management)
Measures       | Intent fidelity score (expert panel), NASA-TLX, perceived latency (7-point Likert), text predictability ratio, task completion time, clarification round count
Analysis       | Mixed ANOVA, Bonferroni correction, Cohen's d for effect sizes, inter-rater reliability (κ) for H3
8. Discussion
8.1 Intent Fidelity as a New Evaluation Dimension

The AI industry currently evaluates interface quality primarily on response quality metrics: factual accuracy, coherence, helpfulness ratings. We argue that intent fidelity should be a distinct evaluation dimension, independent of response quality. A highly accurate response to a misunderstood intent is a failure of intent fidelity even if it scores well on response quality metrics. Separating these dimensions would allow the field to measure and improve the gap between what users ask for and what AI systems address.

8.2 Enterprise Applications

The intent fidelity gap is most costly in enterprise contexts where the consequences of misaddressed requests are significant: procurement specifications, legal document drafting, clinical data queries, financial modeling requests. The ARIA suite's enterprise value proposition is precisely this: it reduces the probability that a complex multi-parameter enterprise request is partially or wholly misaddressed by the AI system. The Misalignment Detector provides a verifiable audit trail of intent-response correspondence that has compliance and governance value in regulated industries.

8.3 Limitations and Future Work

The current SCIM confidence weights (α=0.40, β=0.35, γ=0.25) are empirically calibrated but not formally derived. A learning approach that adapts weights per user based on feedback from the Misalignment Detector is a natural extension. The DCT relationship taxonomy is currently hand-coded; a semi-supervised approach that learns domain-specific relationships from user knowledge base documents would reduce deployment friction. The Prosody Layer's energy ratio measure is a proxy (Whisper word probability) for true acoustic energy; deployment with native audio feature extraction would improve accuracy. PIE's performance on mobile keyboards and voice-first interfaces requires dedicated evaluation.

9. Conclusion

We have introduced intent fidelity as a paradigm for AI interface design, defined it formally as a five-property requirement, and described the ARIA™ suite — a complete technical architecture implementing it. SCIM provides confidence-scored semantic decomposition via a three-source model. DCT provides logical relationship mapping that enables intent-level reasoning beyond semantic similarity. The Clarification Engine selects optimally valuable clarification questions within a one-question-per-round constraint. The Brief Generator produces structured AI prompts from confirmed nodes. The Misalignment Detector closes the loop by verifying post-hoc that AI responses address confirmed intent. The Prosody Layer extends intent fidelity to voice interfaces via acoustic confidence signals. The Predictive Intent Echo reduces perceived latency to 500ms via peripheral confirmation during composition.

Together, these components constitute the first complete published architecture for intent fidelity in AI interfaces. We present six testable hypotheses and a proposed experimental protocol. The architecture is implemented and live at aria-demo.pages.dev. All components are filed as patent pending under the ARIA™ portfolio, Business Innovation Solutions WLL, Bahrain.

The core argument of this paper is that the field's focus on response quality, without an equivalent focus on intent fidelity, systematically underestimates the rate at which AI systems fail users. Improving the quality of the intent that reaches the AI — not just the quality of the response that leaves it — is the most direct path to AI interfaces that professional users can rely on.

References

[1] Wei, J. et al. (2022). Emergent abilities of large language models. Transactions on Machine Learning Research.

[2] Allen, J. F. (1987). Natural language understanding. Benjamin Cummings.

[3] Joachims, T. (1998). Text categorization with support vector machines: Learning with many relevant features. ECML '98. Springer.

[4] Devlin, J. et al. (2019). BERT: Pre-training of deep bidirectional transformers. NAACL-HLT 2019.

[5] Bapna, A. et al. (2017). Sequential dialogue context modelling for spoken language understanding. SIGDIAL 2017.

[6] Wu, C. S. et al. (2019). Transferable multi-domain state generator for task-oriented dialogue systems. ACL 2019.

[7] Rao, S., & Daumé III, H. (2018). Learning to ask good questions. ACL 2018.

[8] Aliannejadi, M. et al. (2019). Asking clarifying questions in open-domain information-seeking conversations. SIGIR 2019.

[9] Stoyanchev, S. et al. (2014). Towards natural clarification questions in dialogue systems. AISB 2014.

[10] Thorne, J. et al. (2018). FEVER: A large-scale dataset for fact extraction and verification. NAACL 2018.

[11] Maynez, J. et al. (2020). On faithfulness and factuality in abstractive summarisation. ACL 2020.

[12] Arnold, K. C., Chauncey, K., & Gajos, K. Z. (2020). Predictive text encourages predictable writing. IUI '20. ACM.

[13] MakeAIHQ. (2026). Streaming responses for real-time UX in ChatGPT apps. MakeAIHQ.

[14] Gladia. (2026). How to measure latency in speech-to-text. Gladia Engineering Blog.

[15] Leviathan, Y., Kalman, M., & Matias, Y. (2023). Fast inference from transformers via speculative decoding. ICML 2023. Google Research.

[16] Schuller, B. W. et al. (2013). Computational paralinguistics: Emotion, affect and personality in speech and language processing. Wiley.

[17] Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.

[18] Tulving, E. (1985). Memory and consciousness. Canadian Psychology, 26(1), 1–12.

[19] Hayes, J. R., & Chenoweth, N. A. (2006). Is working memory involved in the transcribing and editing of texts? Written Communication, 23(2), 135–149.

© 2026 Business Innovation Solutions WLL, Bahrain  ·  All components patent pending  ·  ARIA™, SCIM™, DCT™, Predictive Intent Echo™, Misalignment Detector™ are trademarks of BIS WLL