Research
Research underpins us. We ship.
Parabable's cognitive architecture emerged from several years of research into sparse modular intelligence. Every engine is in production use. Every claim is testable.
Cognitive architecture origins
The architectural insight that became Parabable emerged from observing the gap between how organisations made consequential decisions and the tools available to help them. Committee meetings, spreadsheet models, one-off studies, and post-deployment monitoring produced decisions that were defensible in review but unreliable when they met reality.
The research question was whether a cognitive architecture could represent the behaviours of populations with sufficient realism to rehearse a decision. Over time, the answer to that question became Parabable's platform.
The research draws on multiple fields: multi-agent simulation, causal inference, graph-structured knowledge representation, large language model behaviour, and the biology of collective behaviour. Integrating these fields into a single coherent architecture is what makes cognitive simulation possible.
Technical leadership
Tom Bennett led the cognitive architecture design. He previously built the Bankers Trust risk and trading system that was deployed across Merrill Lynch, Credit Suisse, and the four largest Japanese banks. He is an AI consultant to the United Nations and global military organisations.
Tom's background spans the intersection of complex systems, institutional risk, and artificial intelligence. The cognitive architecture reflects this: it is built for institutional-scale decisions, not academic demonstrations.
Technical approach
Parabable's architecture addresses three requirements simultaneously:
- Behavioural realism. Synthetic agents must behave in ways that are realistic enough to surface consequences that matter. This requires cognitive modelling, not just statistical modelling. Each agent reasons about its context, forms beliefs, and acts on them.
- Structural fidelity. Simulations must operate on the actual topology of the organisation — real relationships, real transaction patterns, real geographic and demographic structure. Without structural fidelity, simulation outputs are generic rather than specific.
- Causal transparency. Every outcome must be explainable. The Causation engine produces audit-grade traces that show the chain from input conditions to emergent outcomes. This is not a post-hoc rationalisation; it is an integral part of the simulation process.
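The three requirements above can be pictured as a minimal agent loop. The sketch below is purely illustrative, not Parabable's actual engine code: every name, threshold, and data structure here is an assumption. Each agent reasons over its local neighbourhood (structural fidelity), forms a belief and acts on it (behavioural realism), and every belief-action pair is logged to a trace as the simulation runs rather than reconstructed afterwards (causal transparency).

```python
from dataclasses import dataclass, field

@dataclass
class CausalTrace:
    """Illustrative audit trail linking input conditions to outcomes."""
    events: list = field(default_factory=list)

    def record(self, agent_id, belief, action):
        # Logged during the tick, not rationalised post hoc.
        self.events.append({"agent": agent_id, "belief": belief, "action": action})

@dataclass
class Agent:
    """Hypothetical cognitive agent: reasons about context, forms a belief, acts."""
    agent_id: str
    neighbours: list  # structural fidelity: the real topology, not a random graph

    def form_belief(self, context):
        # Behavioural realism: belief depends on local context,
        # not on a population-level statistic. Threshold is arbitrary.
        exposure = sum(context.get(n, 0.0) for n in self.neighbours)
        return "adopt" if exposure > 0.5 else "wait"

    def act(self, belief):
        return {"adopt": "switch_product", "wait": "no_change"}[belief]

def step(agents, context, trace):
    """One simulation tick: every agent reasons, acts, and is traced."""
    actions = {}
    for agent in agents:
        belief = agent.form_belief(context)
        action = agent.act(belief)
        trace.record(agent.agent_id, belief, action)
        actions[agent.agent_id] = action
    return actions

# Toy two-agent topology: "a" exerts strong influence on "b".
trace = CausalTrace()
agents = [Agent("a", neighbours=["b"]), Agent("b", neighbours=["a"])]
context = {"a": 0.9, "b": 0.1}
actions = step(agents, context, trace)
```

The point of the sketch is the shape, not the numbers: cognition, topology, and tracing are all first-class in the same loop, which is what distinguishes this approach from fitting a statistical model to aggregate outcomes.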
The six cognitive engines, the bidirectional semantics layer, and the global affective signalling layer are the architectural components that deliver these three requirements. Each component is described in detail on the platform page.
OASIS acknowledgement
The Parabable platform builds on OASIS, the open-source multi-agent simulation framework (Apache-2.0 licence). We acknowledge this provenance and contribute upstream.
OASIS provides the foundational multi-agent execution layer. Parabable's proprietary cognitive engines, semantics layer, and affective signalling layer are built on top of this foundation. The commercial model is platform licensing with enterprise support.
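The layering described above can be sketched as a simple plug-in pattern. To be clear, this is a hypothetical stand-in, not OASIS's real API or Parabable's engine interface: the open execution layer owns agent scheduling, while proprietary cognition is supplied from above as a callback.

```python
from typing import Callable, Dict, List

class ExecutionLayer:
    """Stand-in for an open-source multi-agent execution layer (illustrative only)."""
    def __init__(self, agent_ids: List[str]):
        self.agent_ids = agent_ids

    def run_tick(self, cognition: Callable[[str], str]) -> Dict[str, str]:
        # The execution layer handles scheduling; how each agent
        # decides is injected by whoever builds on top of it.
        return {agent_id: cognition(agent_id) for agent_id in self.agent_ids}

def proprietary_cognition(agent_id: str) -> str:
    """Placeholder for a cognitive engine plugged into the execution layer."""
    return f"decision-for-{agent_id}"

layer = ExecutionLayer(["a", "b", "c"])
decisions = layer.run_tick(proprietary_cognition)
```

This separation is what makes the commercial model coherent: the scheduling substrate stays open and receives upstream contributions, while the cognition supplied to it remains proprietary.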
Publication roadmap
Parabable intends to publish architectural notes, technical papers, and patent filings as appropriate. The publication strategy balances open contribution to the field with protection of the proprietary components that constitute the platform's commercial differentiation.
Enquiries about research collaboration, academic partnerships, or technical deep-dives should be directed to our team.
Built on research. Used in production.
Our flagship customers are financial institutions, regulators, and governments — not academic labs. See the platform in context.