Dimensional Scaling

What happens to structured information when it crosses a dimensional boundary?

The Dimensional Scaling program addresses a foundational question in information theory: how much structure survives when a pattern is projected, embedded, or measured across a change in representational dimension? Initial empirical work documented a consistent 86% total information loss at dimensional boundaries, with structural information specifically collapsing by 99.6%. This loss proved content-independent, appearing across synthetic, natural, and adversarially constructed inputs with no significant variation by content type.
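The flavor of such a measurement can be sketched with a toy proxy: project data onto a random lower-dimensional subspace, lift it back, and record the fraction of variance that does not survive the round trip. This is an illustrative stand-in, not the program's actual metric, and the function name and test data are hypothetical.

```python
import numpy as np

def projection_loss(X, k, seed=0):
    """Fraction of variance lost when rows of X (n x d) are projected
    onto a random k-dimensional subspace and lifted back. A toy proxy
    for information loss at a dimensional boundary, not the program's
    actual measure."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Orthonormal basis for a random k-dimensional subspace of R^d.
    Q, _ = np.linalg.qr(rng.standard_normal((d, k)))
    Xc = X - X.mean(axis=0)        # center so variance is well defined
    X_round = Xc @ Q @ Q.T         # project down, then lift back
    return 1.0 - np.sum(X_round ** 2) / np.sum(Xc ** 2)

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 64))   # isotropic synthetic data
loss = projection_loss(X, k=8)
# For isotropic data the expected loss is about 1 - k/d = 0.875.
```

For isotropic inputs the loss depends only on the geometry of the projection (the ratio k/d), which is the content-independence property the paragraph above describes, in miniature.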

The empirical regularity was subsequently formalized as the Dimensional Loss Theorem, which proves that information loss at dimensional boundaries is governed by geometric constraints of the projection rather than by content properties. Validation in transformer architectures, including GPT-2 and Gemma-2, confirmed that the same loss dynamics manifest in the attention layers of large language models, where high-dimensional representations are repeatedly projected through lower-dimensional bottlenecks.
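The bottleneck structure referred to here can be made concrete with a minimal single-head attention pass: each head projects the model-dimensional residual stream through a much narrower value space, so the head's output, whatever its nominal width, has rank at most the head dimension. The weights below are random placeholders, assumed only for illustration.

```python
import numpy as np

def single_head_output(X, d_head, seed=0):
    """One attention head over inputs X (n x d_model). Queries, keys,
    and values are low-rank projections, so the output lies in a
    subspace of dimension at most d_head. Hypothetical random weights."""
    rng = np.random.default_rng(seed)
    n, d_model = X.shape
    Wq = rng.standard_normal((d_model, d_head)) / np.sqrt(d_model)
    Wk = rng.standard_normal((d_model, d_head)) / np.sqrt(d_model)
    Wv = rng.standard_normal((d_model, d_head)) / np.sqrt(d_model)
    Wo = rng.standard_normal((d_head, d_model)) / np.sqrt(d_head)
    scores = (X @ Wq) @ (X @ Wk).T / np.sqrt(d_head)
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ (X @ Wv) @ Wo   # n x d_model, but rank <= d_head

X = np.random.default_rng(1).standard_normal((32, 128))
out = single_head_output(X, d_head=16)
# out is 32 x 128, yet its rank is capped at 16 by the bottleneck.
```

In GPT-2 small, for instance, 768-dimensional states pass through 64-dimensional heads, so this rank cap is applied repeatedly at every layer.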

Current work in this program focuses on the inverse problem: given a measured loss profile, what can be inferred about the dimensional structure of the originating system? This line of inquiry connects dimensional scaling to questions of measurement theory and observational limits in complex systems, with applications to sensor design, lossy compression bounds, and the interpretability of neural network internals.
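One simple version of this inverse problem can be sketched under a strong assumption: if the source is isotropic, the retained fraction at projection width k is k/d, so a loss profile measured at several widths determines the source dimension d by a one-parameter fit. The functional form and the fitting routine below are illustrative assumptions, not the program's actual inference method.

```python
import numpy as np

def infer_dimension(ks, losses):
    """Toy inverse problem: assuming retained fraction (1 - loss)
    grows as k/d (true for isotropic sources), a least-squares fit
    of retained fraction against projection width k recovers d."""
    ks = np.asarray(ks, dtype=float)
    retained = 1.0 - np.asarray(losses, dtype=float)
    slope = np.sum(ks * retained) / np.sum(ks ** 2)  # fit retained = k/d
    return 1.0 / slope

# Synthetic loss profile generated from a 64-dimensional isotropic source.
ks = [4, 8, 16, 32]
losses = [1 - k / 64 for k in ks]
d_hat = infer_dimension(ks, losses)   # recovers d = 64
```

Real systems deviate from the isotropic form, and the deviation itself is informative: curvature in the loss profile reflects anisotropy or low intrinsic dimension in the source, which is what makes the inverse reading useful for sensor design and interpretability.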

Program Output