01
1950s – 1990s

Cognitive Science & Human Problem Solving

Allen Newell & Herbert A. Simon
"The mind is a physical symbol system capable of having and manipulating symbolic representations of the world."

Allen Newell and Herbert Simon founded the scientific study of human problem solving. Their work on the General Problem Solver and the Physical Symbol System Hypothesis established that intelligence — whether human or artificial — operates through the manipulation of structured symbolic representations.

Simon's Nobel Prize-winning work on bounded rationality showed that human decision-making is not rigorous optimization but structured search within cognitive constraints. Humans don't compute the globally optimal answer — we reason through a problem space, applying heuristics, pruning branches, and arriving at solutions that are good enough given our knowledge and time.

This insight is foundational to Polycog. Our agents don't brute-force solutions — they reason through a structured problem space defined by domain knowledge, just as Newell and Simon described human experts operating in their fields.

How it shapes Polycog

Our orchestration layer is a direct descendant of problem-space theory. Agents decompose tasks into subproblems, apply decision-making theories to focus attention, and reason within subproblems — mirroring the cognitive architecture Newell and Simon mapped in human experts.
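The decomposition loop described above can be sketched as a heuristic search over a problem space. Everything here is illustrative: the `Subproblem` type, the relevance scores, and the pruning threshold are assumptions made for the sketch, not Polycog's actual API.

```python
from dataclasses import dataclass, field

# Illustrative sketch of problem-space search: a task is split into
# subproblems, a heuristic relevance score focuses attention, and
# low-relevance branches are pruned ("good enough" rather than
# globally optimal, in the spirit of bounded rationality).

@dataclass
class Subproblem:
    name: str
    relevance: float                      # heuristic score in [0, 1]
    children: list = field(default_factory=list)

def solve(problem: Subproblem, threshold: float = 0.5) -> list[str]:
    """Depth-first search that prunes low-relevance branches."""
    if problem.relevance < threshold:
        return []                         # prune: not worth the effort
    if not problem.children:
        return [problem.name]             # leaf: a directly solvable subproblem
    solved = []
    # attend to the most promising subproblems first
    for child in sorted(problem.children, key=lambda c: -c.relevance):
        solved.extend(solve(child, threshold))
    return solved

task = Subproblem("ship feature", 1.0, [
    Subproblem("write code", 0.9, [
        Subproblem("draft implementation", 0.9),
        Subproblem("bikeshed naming", 0.2),
    ]),
    Subproblem("update docs", 0.6),
])
print(solve(task))  # a pruned, relevance-ordered plan
```

The point of the sketch is the shape of the computation, not the specifics: satisficing search over a structured space, rather than exhaustive optimization.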

02
1950s – 1980s

Symbolic Reasoning & the Architecture of Mind

Marvin Minsky & John McCarthy
"The power of a symbol is that it allows us to treat a very complex thing as though it were a simple one."

John McCarthy coined the term artificial intelligence and pioneered the use of formal logic as the language of machine reasoning. His work on situation calculus and non-monotonic reasoning established that intelligent systems could represent and reason about the world using structured symbolic formalisms — a rigorous alternative to pattern-matching alone.

Marvin Minsky's theory of frames and the Society of Mind proposed that intelligence emerges not from a single monolithic process, but from the interaction of many smaller, specialized reasoning systems, each with its own structured knowledge and objectives. Intelligence, Minsky argued, is inherently compositional.

Together, their work gave us the vocabulary for thinking about knowledge representation, inference, and the structure of intelligent systems. The field moved on — but the core insight that explicit symbolic structure enables reliable, explainable reasoning never went away. It went underground, and now it's back.

How it shapes Polycog

Our neuro-symbolic reasoning approach is grounded in insights developed by McCarthy and Minsky. We represent domain knowledge as structured graphs of concepts, relations, and constraints — giving agents the symbolic scaffolding they need to reason reliably, not just predict fluently.
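A structured graph of concepts, relations, and constraints can be sketched minimally as a triple store with checkable rules. The schema below, including the triple shape and the example constraint, is an assumption made for illustration, not Polycog's actual representation.

```python
# Minimal sketch of a symbolic knowledge graph: labeled relations stored
# as (subject, predicate, object) triples, plus named constraints an
# agent can check before acting.

class KnowledgeGraph:
    def __init__(self):
        self.relations = set()        # (subject, predicate, object) triples
        self.constraints = []         # (name, rule) pairs; rule maps graph -> bool

    def add(self, subj, pred, obj):
        self.relations.add((subj, pred, obj))

    def query(self, subj=None, pred=None, obj=None):
        """Pattern-match over triples; None acts as a wildcard."""
        return [t for t in self.relations
                if (subj is None or t[0] == subj)
                and (pred is None or t[1] == pred)
                and (obj is None or t[2] == obj)]

    def check(self):
        """Return names of violated constraints (empty means consistent)."""
        return [name for name, rule in self.constraints if not rule(self)]

kg = KnowledgeGraph()
kg.add("invoice", "requires", "approval")
kg.add("approval", "granted_by", "manager")
kg.constraints.append((
    "every requirement has a grantor",
    lambda g: all(g.query(subj=req[2], pred="granted_by")
                  for req in g.query(pred="requires")),
))
assert kg.check() == []   # the constraint holds over this graph
```

The design choice worth noting is that constraints are explicit, named objects: an agent can report which rule a proposed action would violate, rather than just refusing.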

03
1970s – Present

Human Behavior & Decision Making

Daniel Kahneman
"The division of labor between System 1 and System 2 is highly efficient: it minimizes effort and optimizes performance."

Daniel Kahneman's landmark research on dual-process theory revealed a fundamental truth about intelligence: fast, intuitive thinking (System 1) and slow, deliberate reasoning (System 2) are not competing modes — they are complementary, and the most reliable judgment comes from knowing which to engage and when.

Kahneman demonstrated that human decision-making is systematically shaped by heuristics, framing, and context — and that the same structural forces apply whenever an intelligent system must act under uncertainty. Trustworthy decisions require not just information, but a principled reasoning process that is transparent and auditable.

His framework challenges AI builders directly: a system that produces outputs without exposing its reasoning process cannot be trusted, debugged, or improved. The black box is not a feature — it is a liability.

How it shapes Polycog

Polycog's observability and explainability layer is Kahneman's insight made operational. Every agent decision is traceable — not just what was decided, but the reasoning chain that produced it. Our causal manifest surfaces the "System 2" reasoning that justifies each action, making agent behavior auditable and improvable.
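The idea of a reasoning chain that records why, not just what, can be sketched as a list of structured decision records. The "causal manifest" is modeled here as plain JSON; the field names and the `DecisionTrace` class are assumptions for the sketch, not Polycog's real format.

```python
import json
from datetime import datetime, timezone

# Illustrative sketch of decision traceability: each step records the
# action taken, the justification behind it, and the evidence that
# grounded it, forming a chain that can be audited or replayed.

class DecisionTrace:
    def __init__(self, agent: str):
        self.agent = agent
        self.steps = []

    def record(self, action: str, because: str, evidence: list):
        self.steps.append({
            "agent": self.agent,
            "action": action,
            "because": because,       # the deliberate, System-2 justification
            "evidence": evidence,     # inputs that grounded the step
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def manifest(self) -> str:
        """Serialize the full reasoning chain for audit."""
        return json.dumps(self.steps, indent=2)

trace = DecisionTrace("billing-agent")
trace.record(
    action="flag invoice",
    because="amount exceeds historical mean by 3 sigma",
    evidence=["invoice#1041", "12-month billing history"],
)
print(trace.manifest())
```

With a record like this, a failed decision can be debugged by reading its justification and evidence, instead of re-running an opaque model and guessing.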

The Thread

Intelligence is not one model.
It is a system.

Newell and Simon, Minsky and McCarthy, Kahneman — each arrived at the same conclusion from a different direction: reliable intelligence emerges from structured, interacting components, not from a single process running alone. Fifty years of cognitive architecture research — Soar, ACT-R, CLARION — proved it could be built.

Polycog takes its name from polyphony — music where independent voices combine into something no single note could produce. The beauty is not in the soloist. It is in the system.

We are bringing those ideas to market — as composable primitives, declarative orchestration, and full causal traceability. Not a research prototype. A system built, from the ground up, on the principle that intelligence is what happens when structured components reason together.