February 2026
IRL: Entropy-Constrained Hypothesis Control via Superposition and Recursive Refinement
We present IRL, a neural architecture that combines frozen codebook representations inspired by superposition phenomena with entropy-gated recursive refinement for hypothesis-driven decision-making. On the Credit Card Fraud Detection dataset (284,807 transactions), IRL achieves 94.08% AUC-ROC with roughly 15,000 parameters, an order of magnitude fewer than competing approaches, while providing explicit uncertainty quantification and interpretable reasoning paths.
H(P') ≤ H(P) + ε — the entropy gate accepts a refined hypothesis distribution P' only if its entropy does not exceed that of the current distribution P by more than a tolerance ε, enforcing near-monotonic entropy reduction across refinement steps.
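The gating condition above can be sketched directly. This is a minimal illustration, not the paper's implementation; the function names (`entropy_gate`) and the tolerance value ε = 0.05 are assumptions chosen for the example.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy (in nats) of a discrete probability distribution."""
    p = np.asarray(p, dtype=float)
    return float(-np.sum(p * np.log(p + eps)))

def entropy_gate(p_current, p_refined, epsilon=0.05):
    """Accept the refined hypothesis distribution P' only if
    H(P') <= H(P) + epsilon, per the gating condition above.
    epsilon is a hypothetical tolerance, not taken from the paper."""
    return entropy(p_refined) <= entropy(p_current) + epsilon

# A refinement that concentrates probability mass lowers entropy
# and passes the gate:
p = [0.5, 0.3, 0.2]
p_sharper = [0.7, 0.2, 0.1]
print(entropy_gate(p, p_sharper))  # True

# A refinement that spreads mass toward uniform raises entropy
# beyond the tolerance and is rejected:
p_flatter = [0.34, 0.33, 0.33]
print(entropy_gate(p, p_flatter))  # False
```

Iterating refinement steps under this gate yields a sequence of hypothesis distributions whose entropy can rise by at most ε per step, which is the sense in which the refinement loop converges near-monotonically.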