In 1994, John Meriwether started a hedge fund named Long-Term Capital Management (LTCM). In 1998, it had to be bailed out for $3.6 billion. By 2000, the fund had been liquidated and dissolved.

But it’s the story that took place in between these years that is interesting. As a hedge fund, LTCM’s goal was to provide strong market returns to its clients while diversifying away as much risk as possible. And that it did, at least for a while. In the first year, after fees, the return was 21%. In the next year, it was 43%. In the year after that, it was 41%.

Without getting too deep into the complexities of financial derivatives, the way they did this was to diversify their bets across a hundred trades, with thousands of positions, and then apply a huge amount of leverage and debt to squeeze a meaningful return out of tiny spreads. Their mathematical model of risk was built by two famed economists, Myron Scholes and Robert Merton, who went on to win the Nobel Prize in Economics in 1997. Although debt and leverage naturally entail huge amounts of risk, they believed their number-crunching had done away with it.
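To make the mechanics concrete, here is a toy sketch of the arithmetic of leverage. The numbers are hypothetical, and the single spread return stands in for thousands of real positions, but the principle is the same: borrowing multiplies small gains into large ones, and small losses into ruin.

```python
# Toy illustration of how leverage magnifies a small return on a spread trade.
# All numbers are hypothetical; this is not LTCM's actual book.

equity = 1.0        # the fund's own capital, normalized to 1
leverage = 25       # LTCM's reported leverage was roughly in this range
positions = equity * leverage

# A 1% gain on the positions becomes a 25% return on equity...
gain = 0.01
print(equity + positions * gain)   # 1.25

# ...but a 4% adverse move on the positions wipes the equity out entirely.
loss = -0.04
print(equity + positions * loss)   # 0.0
```

At 25-to-1, everything hinges on the model that says a 4% move against you cannot happen.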

Before their losses began in 1998, their model predicted that the odds of an event like that occurring were 1 in 10²⁴. That’s essentially zero percent, hence the justification for all of their leverage. Of course, hindsight is twenty-twenty, and it’s easy to shake your head at something like that now, but the more interesting question is: What went wrong? These were smart people who had apparently done their due diligence.
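We don’t know the exact internals of LTCM’s risk model, but a small sketch shows how a model can produce odds like that. The ten-standard-deviation loss and the Student-t comparison below are illustrative assumptions, not LTCM’s actual figures:

```python
# Under a thin-tailed (Gaussian) assumption, a 10-sigma loss is
# astronomically unlikely. Under a fat-tailed model, it is merely rare.
from scipy.stats import norm, t

sigmas = 10  # a hypothetical move far outside anything in a short data history

p_thin = norm.sf(sigmas)      # ~7.6e-24: odds on the order of 1 in 10^23
p_fat = t.sf(sigmas, df=3)    # ~1.1e-3: odds of roughly 1 in 1,000

print(f"Gaussian model:   1 in {1 / p_thin:.1e}")
print(f"Student-t, df=3:  1 in {1 / p_fat:,.0f}")
```

The event itself doesn’t change; only the distribution assumed around it does. One set of assumptions calls the same loss a once-in-many-universes impossibility; the other calls it a bad but ordinary year.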

Trader and philosopher Nassim Taleb might say that they suffered from a Black Swan event: anything that is unpredictable in the present and obvious only in hindsight. The success of Google was a Black Swan event. The Great Depression was a Black Swan event. These events can’t be predicted at the time, but they end up having a disproportionately large effect on our lives. Or he might point out that these traders and economists had built their models on only five years of data, data that completely ignored the longer sweep of financial history.

Either way, the core problem here lies in the distinction between the map and the territory — between abstraction and reality, between thinking deeply and thinking clearly. And the element of risk that the game of finance naturally entails is a good lens through which to see this difference, because for organisms playing their own game of survival, pawns in the larger evolutionary dance, risk can be a question of life and death.

The ability to think gives us the ability to judge. The ability to judge means that we can learn to make decisions over time, accounting for the first-, second-, and third-order effects of our choices, rather than just reactively responding to stimuli in our environment. This is the birth of abstraction. In order to problem-solve across time, we detach the process of thinking from the here and now, and we abstract it away into our imagination.

The imagination can be a great place. It allows us to look ahead, to manage uncertainty there, and then to use that thought process to better interact with reality as it comes along. But what the imagination can’t do is perfectly map or predict reality, because, well, it’s not reality itself. When we think deeply, whether through philosophical or mathematical language, it’s a purely creative act. But reality is messy, and the more the meanings behind words and formulae become detached from direct experience — from the real risk of living — the more we get lost in the mess.

The formulae that produced LTCM’s initial returns, and that won their creators a Nobel Prize, were obviously right enough in some sense. The partners had thought well enough to make sound judgments based on their knowledge of the world. And perhaps if they hadn’t been so confident as to press their luck with so much leverage, they might even have succeeded in some other, similar way. In fact, many hedge funds that use an entirely quantitative approach do well over many years.

But people who do well quantifying the world, or philosophers whose words get them through life in a valuable and meaningful way, or anyone else who thinks deeply and then uses that knowledge to further their agenda in the world: all of these people only do well when they are, first and foremost, attached to the flesh-and-bones reality of day-to-day life, with thoughts and imagination that constantly change in response to the risks of the real world as they come up. They live first and then they abstract, because that abstraction is then able to respond and change in the face of uncertainty.

If we observe our thinking patterns as a complex system, the act of thinking deeply is like moving up and down a tower of emergence: at each level, the whole of our thoughts becomes greater than the sum of its parts. As this linguistic abstraction grows and contracts in our mind, we end up at a different level, each one linked to the next not by linear cause and effect but like floors in a tower with no stairs between them. The jumps aren’t obvious until direct experience reminds us that we are living in the wrong world.

The only solution to this never-ending mess is valuing clear thinking. What connects abstraction to reality is risk and opportunity as it relates to our body and its environment, and in the face of risk and opportunity, the question is less “What does this mean, and what is the perfect answer across all dimensions of time?” and more “What works best, right now, in this context, without forcing me into a corner I don’t want to be in in the future?”

Thinking clearly is a present-focused activity — which means that, sometimes, it won’t make the kind of coherent logical sense that deep thinking can, because it keeps itself open to the uncertainty of life. It sees what is as it is, just then, just there. And in doing that, it captures a pearl of larger, intuitive wisdom that is often not visible right away but that nonetheless has elements of depth. This meshes well with what the inventor Nikola Tesla once said about how some scientists and other thinkers tend to approach life in modernity:

“The scientists of today think deeply instead of clearly. One must be sane to think clearly, but one can think deeply and be quite insane.”

The foundation for life and decision-making isn’t some infallible first principle, found in thought and language, on which we can build a house of cards to guide our lives without any further flexibility. Rather, that foundation is simply the risks and opportunities that present themselves as our body moves through the challenges of space and time.

The deeper we go, the more clearly anchored to reality we have to be. Otherwise, the words we speak when we reference, say, the fifth or sixth level of abstraction will come to mean something very different from what they were originally intended to mean.

This is just as true of day-to-day discourse. People will sometimes insist on overly specific and technical words when talking about certain things, playing and creating their own language games in the name of some deeper rationality or accuracy. But the most commonly used words we share are often clearer in a way those deeper, more justified abstractions fail to capture. There is hidden wisdom in how our collective use of language has evolved, and the clarity common words provide serves a very important purpose that otherwise gets lost in nuance.

Of course, this isn’t to say that there is no place for nuance or accuracy, or that a deeper thing is never truer than a more clearly articulated thing. In fact, when it comes to philosophy or science or some other specialized discipline, these abstractions and the precise definitions they demand are what allow us to create theories and hypotheses we can then test to form a better representation of reality. That is important, and that does have a place.

The point is only that just as thinking deeply can lead to deep and original insights, in the wrong context it also poses a higher risk of leading us to deep but unproductive ones. Without clarity as a foundation, depth can venture towards delusion just as easily as towards truth. Clarity looks outward, whereas depth looks inward, and though the ideal would be to harmonize the two, outward-facing clarity has to come first.

LTCM created a wonderful mathematical model, using complex financial derivatives, to try to box reality into their own dataset, hoping that this would be enough to predict it. But in the process, they forgot the most important thing: reality isn’t concerned with their abstractions.
