Sensemaking is the act of making sense of reality from your own first-person experience. It is related to both objectivism and science. Rationality can be considered a subdomain of sensemaking, since it is a method (though an imperfect one) for seeing reality thoroughly and accurately.
We can never reach a non-model state of reality. All perception is shaped by conditioning and by the noise our innate systems introduce, both inseparable from first-person experience and from the subject. Sensemaking is asymptotic: we can get ever closer to what reality fundamentally is, but we can never actually touch it.
I posit that good sensemaking is a morally righteous virtue because without it, you will make worse decisions that harm others. There is a gradient of immorality here. The gradient depends on both the sensemaking and the decision itself (how many people does the decision affect? In what ways?). If one makes a decision that is downstream of something like prideful ego, epistemic arrogance, or great fear, and that decision affects others in harmful ways, it lies far down the spectrum of immorality. But what if desperation drives you to eat at whatever restaurant is nearest? I doubt that decision is immoral (or maybe it is, marginally, because you could have chosen a better-tasting restaurant and given yourself more pleasure?).
So, because we can never touch reality as it is, doesn't this mean we are in some way acting immorally at all times? Probably yes. But remember, these things lie on a gradient, and someone who holds good sensemaking as a virtue and tries in good faith to act from that place sits far lower on it than, say, a deliberate bad-faith actor.
We also act within systems that we cannot be separated from. These include political systems, economic systems, our own biological systems, our brains, and so on. These systems place binding constraints on the decisions we can make, such as regulations preventing one from building something that might have positive externalities. Also, if sensemaking is a moral foundation, does this mean people with better innate capacities or luckier circumstances are more moral? No. When I say good sensemaking, I am accounting for the binding constraints imposed on a person in their current state. The moral imperative is to maximize sensemaking relative to your starting conditions.
Because of this, in any decision space we find ourselves in, there are pragmatically only a limited number of decisions we can make, bounded by our sensemaking system. What heuristics are available to you? What information and facts do you hold to support your decision? How many ways have you looked at the problem? Have you asked for good-faith feedback from others? These are the kinds of questions one might ask oneself before making a morally sound decision that affects others, such as in politics, rules, and regulations.
If we're inseparable from our conditioning and systems, how do we evaluate whether our sensemaking is improving? We'd need a perspective outside the system to judge this, which seems impossible. There are quasi-objective ways of assessing one's sensemaking, like questionnaires and "sensemaking tests," but the problem still lies in the subjective, axiological task of defining good sensemaking and reality from the first person. This is a problem I don't know the answer to.
Back to the restaurant example: there are infinite Nth-order effects to the decisions we make, and things get complicated quickly. What if the worse restaurant employs its workers more ethically? What if the better restaurant donates some of its earnings to an effective charity? Even small decisions involve competing values and complexity that sensemaking alone doesn't resolve. Again, we can never touch reality as it truly is.
Another complication pops up here: the goal of good sensemaking is seeing reality as it is, but what is that seeing in service of? Is it reducing suffering for the greatest number of people? Is it prioritizing long-term progress even if we have to make sacrifices in the short term? These differences in values create entropy within the agent-to-agent space of making decisions. There are perhaps some fundamentals most people can agree on, such as the reduction of suffering, but even then, suffering looks different for different people.
A motivational component is needed, separate from the epistemic one. It's not enough to see reality clearly. One also needs to care about reducing harm, prioritize others' wellbeing, or feel accountable to the collective. While many species show signs of empathy and theory of mind, humans seem uniquely capable of scaling these capacities beyond immediate kin and tribe, extending moral concern to distant strangers and even future generations we'll never meet. The question is: what do individual agents value, and what is good sensemaking contributing toward? I don't have a great answer to this problem of competing values other than empathy, compassion, and more good sensemaking.
Because our sensemaking can never truly touch reality, it seems systems of anti-fragility and robustness are needed to counteract the default entropy our decisions create. Each decision has infinite cascading effects within bounded systems, creating chaos and complexity, but can we account for this and ensure our systems are robust enough to withstand bad sensemaking? Ironically, the answer comes back to good sensemaking at the fundamental level. Systems of agents need to come together to ensure robustness, with order and effective diversity offering different sensemaking perspectives (which itself contributes to good sensemaking). This is, in and of itself, circular: we need good sensemaking to build systems that protect us from bad sensemaking. But if sensemaking is asymptotic and we're always embedded in imperfect models, how do we bootstrap this? Who decides which sensemaking is "good enough" to design the robust systems? To put it bluntly: my model is imperfect.
Pragmatically, individual solutions exist: holding epistemic humility as a virtue, gaining as much knowledge as you can in the decision space, accounting for complex Nth-order effects, talking and working with other good-faith actors on the problem, improving your metacognition, and improving your emotional regulation. However, these individual solutions are not enough, because we are bound to systems; we must account for and work within their constraints, which adds another layer of abstraction on top of sensemaking. To counter this, I recommend learning about systems thinking and what it entails.