An ode to isomorphisms
Similarity between systems, arbitrage, Munger's Worldly Wisdom, and building competitive advantage.
When I encounter an isomorphism or a transformation between two systems in any problem set, I am quick to grin. There is a deeply satisfying pleasure in solving a problem through a conversion to another space, modification via a novel set of tools, and a return back to the original space. Like most humans, I have a deep-rooted enjoyment of and psychological tendency toward similarity, so solutions of this sort scratch my brain in just the right way.
Finding an interesting isomorphism is almost like finding an arbitrage. If you have a system where moving from one space to another unlocks a novel set of tools while maintaining invertibility, you have a working isomorphism. That is a powerful tool, and a mental model that matters for abstraction far beyond any technical field. Anyone who can look at two systems, see how they're similar, and then exploit that symmetry has an immediate edge.
Stepping back from the math view of an isomorphism, there seems to be extreme power in using the isomorphism as an abstraction for building mental models at large. This has been covered tangentially by seminal thinkers (Hofstadter, Munger), but I want to provide my own perspective on thinking through and building one's own abstract isomorphisms in real time, for the sake of a competitive advantage.
Math & isomorphisms
When I look back on my hours of staring at proofs on a whiteboard, most of my favorite aha moments were isomorphism-related. I’ll give some interesting examples that still live in my head.
Side note: not all of these are true isomorphisms — some are just interesting ways of viewing the same system. Pure math people, please don’t get too mad at me.
The first and most obvious is integral transformations. Anytime you’re moving from one domain to another, such as time → frequency or space → momentum, you maintain all information while gaining novel utility. Consider the Fourier transform specifically, which takes a function in the time domain and maps it to a function in the frequency domain via:
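In the standard angular-frequency convention (sign and normalization vary by field), the forward and inverse transforms are:

$$\hat{f}(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, dt, \qquad f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \hat{f}(\omega)\, e^{i\omega t}\, d\omega$$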
That transformation is linear and invertible, and it’s actually an isometry, so it preserves energy. The intuitive explanation here is that there is no loss of energy or information. The novel space allows for some operations such as convolution to become simple (multiplication), and periodicity or filtering to become visual. This is a new frame of reference to view the existing state, which applies for all integral transforms.
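As a minimal sketch of "convolution becomes multiplication" (an illustrative numpy example, not from the original post): a linear convolution computed by multiplying spectra agrees with the direct time-domain computation.

```python
import numpy as np

def conv_via_fft(a, b):
    """Linear convolution of a and b, computed by multiplying spectra."""
    n = len(a) + len(b) - 1            # full linear-convolution length
    A = np.fft.fft(a, n)               # zero-pad (avoids circular wraparound)
    B = np.fft.fft(b, n)
    return np.fft.ifft(A * B).real     # pointwise multiply, then invert

a = np.array([1.0, 2.0, 3.0])
b = np.array([0.0, 1.0, 0.5])

# Matches direct convolution in the time domain
assert np.allclose(conv_via_fft(a, b), np.convolve(a, b))
```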
But beyond that, there’s another example I find really intriguing: the isomorphism that appears when you orthogonally decompose a function in a vector space of functions into a trigonometric basis of cosines and sines. In the space L²[−π, π], any square-integrable function can be written as:
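In the standard convention:

$$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \big( a_n \cos(nx) + b_n \sin(nx) \big), \qquad a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos(nx)\, dx, \quad b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin(nx)\, dx$$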
Here, we treat cos(nx) and sin(nx) as orthogonal vectors in an infinite-dimensional vector space. The coefficients live in ℓ², the space of square-summable sequences. So we have created a mapping between a function and a sequence of numbers. That transformation is powerful because it lets one compute, estimate, and compare functions as if they were vectors. And as a change of basis implies, we have only shifted the coordinate system, not the underlying "information".
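The function-to-sequence mapping can be sketched numerically (an illustrative example, not from the original post): for f(x) = x on [−π, π], the sine coefficients are known in closed form, b_n = 2(−1)^{n+1}/n, and projecting onto the sin(nx) basis recovers them.

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 200_001)
dx = x[1] - x[0]
f = x                                  # the function we decompose

def b(n):
    """n-th sine coefficient: (1/pi) * integral of f(x) sin(nx) dx,
    via trapezoidal quadrature."""
    y = f * np.sin(n * x)
    return (np.sum(y) - 0.5 * (y[0] + y[-1])) * dx / np.pi

coeffs = [b(n) for n in range(1, 4)]
# Closed form for f(x) = x gives [2, -1, 2/3]
assert np.allclose(coeffs, [2.0, -1.0, 2.0 / 3.0], atol=1e-4)
```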
Change of basis, in general, is one of the most elegant, fundamental tools in mathematics. It is the same object and environment, but a rotated perspective with clarity of structure and simplicity in solvability. It becomes much easier to attack the problem.
Another example I think about a lot is the log transform, especially in the context of programming and computation. Computers aren’t good at handling extremely large or extremely small numbers, a problem I ran into while trying to hand-code the most common probability distributions. Try multiplying a thousand small probabilities or computing a huge factorial, and floating-point issues appear quickly. But take logs, and suddenly everything is stable:
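The familiar identities do the work:

$$\log(ab) = \log a + \log b, \qquad \log\frac{a}{b} = \log a - \log b, \qquad \log(a^{k}) = k \log a$$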
Now, multiplication becomes addition, division becomes subtraction, and exponentiation becomes scaling. It is the same operation in a new coordinate system. It’s not a true isomorphism (you lose the sign and can’t handle zero), but for positive reals it’s a bijective, smooth map that preserves order and ratios. The computational landscape changes while the content is constant. This one method powers softmax, HMM decoding, Bayesian inference, binomial coefficient estimation, and more.
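The thousand-small-probabilities case above can be demonstrated in a few lines (illustrative example): the direct product underflows to exactly zero, while the log-space sum stays perfectly stable.

```python
import math

probs = [1e-5] * 1000            # a thousand tiny probabilities

direct = 1.0
for p in probs:
    direct *= p                  # true value is 1e-5000: underflows to 0.0

log_total = sum(math.log(p) for p in probs)   # stable: -5000 * ln(10)

assert direct == 0.0                           # floating point gave up
assert math.isclose(log_total, -5000 * math.log(10))
```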
And finally, my friend Bruce explained to me a concept called homotopy continuation (follow him, he's a beast). It’s a way to solve difficult systems of polynomial equations by deforming them into something simpler. Suppose you have some system F(x) = 0 that’s hard to solve. Pick a simpler system G(x) = 0 with known solutions. Then construct a family of systems:
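A standard choice is the straight-line homotopy:

$$H(x, t) = (1 - t)\, G(x) + t\, F(x), \qquad t \in [0, 1]$$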
At t = 0, H is just the easy system G; at t = 1, it is the hard system F. The idea is to track the solution paths as t moves from 0 → 1. The roots may not behave nicely and solutions may be hard to find, but under the right conditions, one can ride the "flow" of the system and reach the answers. It’s not a strict isomorphism, since the dichotomy of two spaces does not apply, but it has that same flavor: deform one thing into another and preserve just enough structure to carry answers across.
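A one-variable toy version of path tracking can be sketched as follows (a hypothetical example of mine, not Bruce's formulation): take the hard system F(x) = x³ − 2 = 0 and the easy system G(x) = x³ − 1 = 0 with known root x = 1, so the blend is H(x, t) = x³ − 1 − t. Stepping t from 0 to 1 with a few Newton corrections at each step rides the root across.

```python
def newton(h, dh, x, iters=10):
    """A few Newton corrections for h(x) = 0 starting near x."""
    for _ in range(iters):
        x -= h(x) / dh(x)
    return x

x = 1.0                          # known root of the easy system (t = 0)
steps = 100
for k in range(1, steps + 1):
    t = k / steps
    # H(x, t) = x^3 - 1 - t, with derivative dH/dx = 3 x^2
    x = newton(lambda x: x**3 - 1 - t, lambda x: 3 * x**2, x)

# The tracked path lands on the root of the hard system, x = 2^(1/3)
assert abs(x - 2 ** (1 / 3)) < 1e-9
```

Real solvers handle systems of polynomials, path crossings, and diverging paths; the point here is only the deformation itself.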
Each example here can be abstracted to swapping the coordinates while maintaining the context. But in order to take this step first, one has to be confident that the two systems are related (isomorphic in some cases). This is easier in pure math, but in an abstract sense, how does similarity of systems apply beyond technical fields?
Munger's lattice
Transformations, decompositions, and domain shifts are broader than the niche of mathematical tools. They reflect a way of reasoning that privileges internal structure over surface form. Charlie Munger’s idea of a “latticework of mental models” captures this method of thinking, abstracted to the level of judgment.
In his 1994 USC speech A Lesson on Elementary Worldly Wisdom, Munger urged students to “step outside the narrow domain you trained in” and build a repertoire of 80 to 90 foundational models. The idea is not specialization, but structure. It is a clear recognition that each discipline contains patterns of relationship that can be applied elsewhere, if the logic aligns. In a later talk, The Psychology of Human Misjudgment, he sharpened this point by cataloguing the systematic errors that arise when the lattice is sparse or underdeveloped.
Munger outlined a few key properties of a model:
1. It must be multidisciplinary. Models must come from a broad array of disciplines. This diversity ensures that any new problem has a chance of rhyming with something familiar. Each model becomes an isomorphism candidate. The job is to search for structural correspondence instead of superficial similarity.
2. It must include redundancy. Munger noted that using many models forces you to “check, and check again.” In mathematical terms, this reduces the risk of overfitting a single frame. A good lattice holds competing interpretations in tension. If multiple models align, the fit is more likely to reflect the structure of the problem rather than the biases of the solver.
3. It must favor simplicity. Models should be “elementary, but powerful.” This clearly echoes the use of sparse or orthogonal bases in function spaces. Simple models generalize well. They can be rotated, composed, and projected onto new problems without excessive distortion. A clean structure allows cleaner mappings.
I personally believe that none of this is metaphor. This is isomorphic thinking applied to real-world decision-making. The internal logic of one system is carried across to another where a solution space becomes more tractable. A saturation curve in biology becomes a diminishing returns model in economics. A feedback loop in thermodynamics informs a psychological reinforcement cycle. Coordinates change while maintaining the context.
It is important to realize that Munger’s lattice is not a slogan, nor a philosophy of eclecticism. It is a commitment to abstraction through structure. The same move seen in integral transforms or homotopy continuation reappears here, not in notation, but in mental motion.
Ultimately, structural thinking becomes the quiet engine behind good judgment, sharp abstraction, and the ability to act decisively in unfamiliar territory. Isomorphisms preserve essential behavior while allowing ideas to travel. And what emerges from this way of seeing is a different relationship to complexity. Novel problems can be solved with transformations of previously solved solutions. The solver isn’t relying on brute force or intuition alone, but on a library of mappings that make the invisible visible.
The people and systems that invest in building this lattice will spot structure where others see only chaos; will notice when two disjointed problems share the same spine; and will solve elegantly, not accidentally.