Monday, April 22, 2019

The Case for One Global Health

When capitalism works well, participation in each transaction is voluntary. Where it breaks down, participants feel the transactions aren't voluntary, or aren't fully voluntary. Someone paying and someone taking payment as compensation should always have a choice; what makes a market coherent is that traders can choose among a reasonable number of informed options, including the option not to exchange now.

There are times when choices are spurious. For example, the choice between one health insurer and another is not constructive. What matters to a patient more than anything is the choice of doctor and treatment (along with their availability and quality), not the choice of insurer. Indeed, a diverse market of insurers fragments the pools of doctors, treatments, and patients. Even when looking to the immediate goal of insurance, the more fragmented the pool of the insured, the less the costs can be smoothed out across the population. The worst health insurer is a small, local one with few doctors and dictated treatments. Such an insurer has little choice but to pay out as little as legally possible.
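The pooling claim above can be sketched numerically. Here is a toy simulation (the Pareto claim model and all the numbers are invented for illustration, not drawn from real insurance data): most members cost little, a few cost a lot, and the average cost per member swings far more wildly in a small pool than in a large shared one.

```python
import random

random.seed(0)

def mean_cost_spread(pool_size, trials=300):
    """Range of the average claim cost across many simulated pools."""
    means = []
    for _ in range(trials):
        # Toy claim model: most members cost little, a few cost a lot.
        claims = [random.paretovariate(3.0) * 1000 for _ in range(pool_size)]
        means.append(sum(claims) / pool_size)
    return max(means) - min(means)

small = mean_cost_spread(100)     # a small, local insurer
large = mean_cost_spread(10000)   # one big shared pool
print(f"spread of average cost, pool of 100:   {small:8.0f}")
print(f"spread of average cost, pool of 10000: {large:8.0f}")
```

The small pool's average cost is far less predictable, which is one reason a small insurer has to hoard margin and pay out as little as possible.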

Do you see the simple idea? Some options increase freedom. Others secretly decrease it. We need to focus on finding and maintaining options that increase freedom. Sets of options that ultimately reduce choices and coerce traders should be eliminated.

I have no particular prescription for how many health insurers would be the right number, except that, based on the above, one seems the best number to start with; if one isn't enough, the number could be increased until it is.

We don't have forty thousand internets. We have one. (The local instances of the internet (intranets) that are not connected to the global one may work the same way, but they are not easily confused with the real internet. If I navigate to Wikipedia and Google and Amazon and get nothing, but I seem to have an IP connection active, it's fair to assume I'm not connected to the internet.) Having forty thousand internets would not be better than having one. It would be a dramatic downgrade. It would reduce options for navigating and sharing information to a sliver of what we enjoy now.

So why not have one health insurer? If one isn't enough, try two. If two isn't enough, try three. Having one insurer, or very few, would seem to maximize the number of doctors available to everyone. Then we can focus on paying medical staff for eventual outcomes and quality of care, rather than for the technological and capital intensity of the treatments.

There's nothing magic in what I'm saying. It seems mathematical, but surely it's abstract and short on detail, and I could be deceived. Is the abstraction true to life?

The Faux

Any time someone lectures you about "the real world," they're making things up. What's immovable is a question you should be answering yourself. "The real world" is not what exists materially before us, without humanity. It's what we collectively create. As a map of history would show, you are here. It's your turn.

The WWI-era philosopher Ludwig Wittgenstein wrote:

"Es ist offenbar, dass auch eine von der wirklichen noch so verschieden gedachte Welt Etwas—eine Form—mit der wirklichen gemein haben muss."

Translation: "It is clear that however different from the real one an imagined world may be, it must have something—a form—in common with the real world."

And also:

"Um zu erkennen, ob das Bild wahr oder falsch ist, müssen wir es mit der Wirklichkeit vergleichen. Aus dem Bild allein ist nicht zu erkennen, ob es wahr oder falsch ist."

Translation: "In order to discover whether the picture is true or false we must compare it with reality. It cannot be discovered from the picture alone whether it is true or false."

We are here because we change what's here. Even our seeing is a bit of change.

Reading the above, my brother asked, "Without reality, what are you calling truth or fact?"

And Wittgenstein asked the same thing. But before I return to him, I want to talk about relativism. Post-modernism struggles immensely—and by ensnaring itself in post-modernism, current feminism also struggles in one way I hope we can easily alleviate—with what was expressed so well in the Japanese classic movie Rashomon: our accounts of life all differ, even our experiences of the same event. Yet we still do best to recognize, to admit as a more-than-provisional assumption, a postulate, a precept, an axiom—to take as a given—that external, material reality is also there. We are often surprised by it, by the external, because we did not create it. That is a kind of proof, an empirical one, or as close as we're likely to get, that we cannot be solipsists: surprise is. Surprise is proof there's an outside.

Just surprise.

Surprise is information that doesn't come from us. Not the conscious us, and so not the core us in that moment. It's what we listen for and attend to. It tells us reality is out there, not all in here. Maybe that reality resides in our own brain tissue, tissue whose activity isn't part of our consciousness. Often it goes deeper into the distant and unknown and unfamiliar. Others introduce entire unseen realms to us. Many of those exist even without the others guiding us. Objective reality is the common denominator of all our subjectivity and experiences, the reality that precedes their reality and is in turn influenced by their reality. Modern physics contests this in very small but universal ways that are, I believe, fashionably misconstrued by the humanities into all kinds of false relativities. Subjective and objective are both "real" and of spectacular importance, but in different ways. We care about the subjective through empathy and compassion and art and expression. And when we say the subjective is "real," we are aware that it informationally and energetically exists. How you feel is precisely how you feel; that, like you yourself, exists in the universe.

Feminism's emphasis on turn-taking, which is beautifully and disturbingly illustrated in the conflicting testimonies of witnesses in Rashomon, allows us not only to illuminate our inner worlds and see each other's, but also, we should admit, to triangulate in on what's actually happened and examine together what's likely to happen. Wittgenstein asked the same thing about reality—the same thing as feminism and post-modernism, and my brother and the parable of the elephant and Rashomon and every courtroom, and the Persian story of a trusted adviser who tells everyone in a dispute that they're right. The radical philosopher wanted to know how to call something "truth" or "fact" without reference to the outside. Is that possible? Is it just subjective?

For a while, he was obsessed with tautologies and contradictions, statements that logically cannot be false (the number 357 is the number 357) or cannot be fully true (the Titanic is unsinkable; it has sunk). These examples show that for two particular logical categories, you can tell whether a statement is true just by looking at it. The structure of the statement, to some extent, rather than its content, tells you whether it works. No one gets a lot of traction disputing conclusions about statements like these. 357 is 357, and it's also anything else 357 is, like 101 + 256. The Titanic can't have been unsinkable. Maybe an insight or pattern could be extracted from them.
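The structural test he was circling can even be mechanized. A minimal sketch (the function and labels are mine, not Wittgenstein's): classify a propositional formula by brute-force enumeration of every truth assignment, so that tautologies and contradictions reveal themselves from structure alone, while everything else must be compared with reality.

```python
from itertools import product

def classify(formula, variables):
    """Label a propositional formula by checking every truth assignment."""
    results = [formula(*values)
               for values in product([True, False], repeat=len(variables))]
    if all(results):
        return "tautology"      # cannot be false, whatever the facts
    if not any(results):
        return "contradiction"  # cannot be true, whatever the facts
    return "contingent"         # must be compared with reality

# "p or not p" is true by structure alone.
print(classify(lambda p: p or not p, ["p"]))       # tautology
# "p and not p" is false by structure alone.
print(classify(lambda p: p and not p, ["p"]))      # contradiction
# "p and q" depends on the world.
print(classify(lambda p, q: p and q, ["p", "q"]))  # contingent
```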

Eventually, he gave up on that. Most statements can't be reduced to one or the other, tautology or contradiction. That isn't necessarily obvious without following the links of word definitions, but maybe it feels easy to believe. Here is a similar idea, one that sounds obviously false at first. All logically solvable problems can be solved by a relatively simple piece of code, which was written in 1959, only eight years after Wittgenstein's death. It relies on one core feature of logic. To simplify a little, it only takes one special rule of logic to solve all logical problems. That's pretty unintuitive.

At least, until I supply context about binary numbers: we know that sense information, like all information, can be reduced to binary numbers, and binary operations are generally very simple, yet they can accomplish anything a calculation can accomplish. Still, all logical problems would include all present and future mathematics, everything any piece of software or machinery could ever do, and perhaps anything any of us could ever figure out without simply guessing. This extraordinary ability of one single logical rule is, I'm told, provable (there are different accounts of how many problems can be solved this way in theory), but in practice most problems, even with today's computers, either cannot yet be formulated clearly and specifically enough for the program, or else it would take more than the lifetime of Earth or even the universe to produce its answer. This program, in its original form, is no longer in use, but its relatives have been given jobs simulating human problem-solving and automatically verifying mathematical theorems. It and its creators are widely credited with kicking off the field of artificial intelligence.
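The "one simple operation suffices" point about binary logic can be made concrete. The sketch below is not the 1959 program itself, just a standard illustration of functional completeness: every familiar logical connective can be assembled out of NAND alone.

```python
def nand(a, b):
    """The single primitive: true unless both inputs are true."""
    return not (a and b)

# Every other connective can be assembled from NAND alone.
def NOT(a):        return nand(a, a)
def AND(a, b):     return NOT(nand(a, b))
def OR(a, b):      return nand(NOT(a), NOT(b))
def IMPLIES(a, b): return nand(a, NOT(b))

# Check the constructions against Python's built-in operators.
for a in (True, False):
    assert NOT(a) == (not a)
    for b in (True, False):
        assert AND(a, b) == (a and b)
        assert OR(a, b) == (a or b)
        assert IMPLIES(a, b) == ((not a) or b)
print("all connectives recovered from NAND alone")
```

One tiny operation, repeated and combined, covers everything the others can express, which is the flavor of the larger claim about logic.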

So a search for simplicity here, earlier, in the cacophony of World War I, in statements made by humans with words, wasn't necessarily foolish and naive, though it was unusual. Arguably, it's always brave to challenge the obvious in a search for better principles.

This philosopher who began writing in the trenches goes on to say:

"A priori knowledge that a thought was true would be possible only if its truth were recognizable from the thought itself (without anything to compare it with). In a proposition a thought finds an expression that can be perceived by the senses. We use the perceptible sign of a proposition (spoken or written, etc.) as a projection of a possible situation."

It seems perfectly unsurprising and not worth saying, once we slow down enough to absorb his Captain Obvious meaning with this line (his writing is not all so easy, and I confess I haven't read much of it). But he was challenging and documenting the obvious in search of better ways. He wanted to state the irrefutable. He was trying to record what must be, and branch out from there with logic, like an oak growing from an acorn. We can't really describe anything without referring to a situation, in other words to some potential for objectivity. The words themselves are meant for ears. Ears sense sounds. Sounds are energy waves in molecules. We mutated to speak because carrying and throwing world objects in sounds helped us survive.

When we assert what happens "in the real world," we are painting a picture. It's a kind of stereotype of the world. Sometimes that picture looks so realistic that we forget it's a picture. We forget how deceptive even a photograph can be, and how limited it always is. When most of a population is metaphorically forgetting the difference between a photograph and the entire future world, systematic errors can infiltrate apparently tough-minded views and practices. These become self-fulfilling and self-perpetuating prophecies. We fill prisons with non-violent citizens in a War on Drugs that still doesn't fix the issue. In the real world, you've got to be tough. We subsidize coal instead of renewables. In the real world, coal is what works. We elect a demagogue via the Electoral College, an institution created to prevent the election of demagogues. In the real world, glorious tradition is smarter than the people, and the country would fall apart without the Electoral College.

Realism is practical. Faux realism is the opposite.

It's worth spending time and effort to distinguish one from the other—some of us need to be doing this. Why not most of us? Is that a fantasy? Is our bravery for adventures in realism improving or getting worse? Where's the data? How many of us are checking the facts and the logic, and how are we doing at that, anyway? Almost counterintuitively, the difference between realistic, fauxistic (hehe, I couldn't resist twisting the word), and impractical often comes down to effort. How much effort are we willing to make, ourselves, to develop ideas that are exceptions to the apparent rules? Effort is an ingredient in the practicality recipe. But we'd better not assume we know the extent of that ingredient just from how we feel, or how motivated the people around us seem, or what impression we get from the news. Political will can appear suddenly. It was already there, wasn't it? Was a brick wall in the way? Perhaps it was only perceptions of missing "realism." In the meantime, a realistic change that isn't developed won't look so real, even if it's practical and sustainable.

Realism or faux realism?

A true story can sound made up, and fiction can be utterly convincing. How do we know? Are we going to trust a gut feeling? Shouldn't we be scientific about this?

A woman can't become president.

To me, that sounds plain crazy, but it's still accepted by many. Why not? Let's look at the empirical probability, so far, that this statement will hold up, taking the simplest approach: 45/45. Of historically observed US presidents, all 45 have been guys, and all 58 elections have gone to guys. 45 out of 45, or 58 out of 58, take your pick. That's 1! In probability, 1 means certainty! It's math! History repeats itself! All that experience! However, now let's state the equally obvious. US presidents are not the only presidents, other nations have female presidents, and eventually this trend is almost certain to change here as well. I won't even get into the various reasons why we might specifically need a woman to be president for once, or at least a non-cis man.

A non-Christian can't become president.

These are bets, not realities.

Recognizing that they're bets is the first step.

So far they've held up in one country, but the future is not the past.

Both ordinary examples are faux realism. They sound compelling. Cold hard truth. Like it or not, reality doesn't care how you feel. But these rules aren't what they claim to be. They're historical patterns, actually, not laws of nature, and not even rules. In all likelihood, other factors will soon be more influential than the factors that have kept these trends as consistent as they've been.
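The 45/45 arithmetic above can be redone with a classical hedge. Laplace's rule of succession (a standard estimator, offered here as one alternative to the naive count, not as the final word) refuses to read an unbroken streak as certainty:

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Laplace's estimate for the next trial: (s + 1) / (n + 2)."""
    return Fraction(successes + 1, trials + 2)

naive = Fraction(45, 45)            # "all 45 so far" read as certainty
laplace = rule_of_succession(45, 45)
print(f"naive estimate:   {float(naive):.3f}")                 # 1.000
print(f"Laplace estimate: {laplace} = {float(laplace):.3f}")   # 46/47 = 0.979
```

Even this mild correction turns "certainty" into a mere bet, and it still assumes the future resembles the past, which is exactly the assumption in question.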

Here it only takes a moment to see that these are more trend than fixed truth. In many everyday situations, though, no one makes even that minimal effort. As we get to know the ways of the world, we think more automatically about the familiar. It becomes more of a feeling than a thought, even. We just know. This tendency to think we know from experience, and stop thinking, is called "automaticity" by psychologists, and it's one of the biggest strengths and vulnerabilities of experts. And it's a reason newcomers can sometimes be right in a big way, and naivete isn't always bad.

These non-expert strengths have been given names like "beginner's mind," lack of "functional fixedness," and simply "openness." Anyone can notice something and report what they see. These democratic strengths are a reason that groups allowing all members to take turns and speak up, even newbies and people seen as the least competent, become more intelligent groups, groups that solve more difficult problems better. We value democracy highly for a solid reason that can now be measured. It works better than anarchy, aristocracy, or dictatorship. What we desperately need is to keep searching for the most effective democratic methods. We can't stop. We're not there yet.

Whether we're experts or not, each situation is unique. Oh yes, we'll recognize patterns. Some will be subtle but crucial. Some will be blazing red herrings. Some, we will have to depend on others to tell us about. Events with multiple actors quickly get complicated. Without warning, it will take so much more effort, then, to separate reality from "real world" opinion. Often we don't bother to try. We don't even realize we aren't bothering.

That's important to know and remember, isn't it?

And then there's this effect that's so common that anything else becomes strange and wonderful. Everyone is so determined not to run into any brick walls that they'll feel sure that no one else should be trying to knock down what looks like a brick wall, either. They don't just bet that it won't move; they often want to police their bet that it won't move. It's normal to hear people shaming each other about "the real world" (each word in this usage, "the," "real," and "world," strikes me as ironic), even when the supposedly unavoidable truths are clearly trends and bets specific to the time and place. It doesn't seem to matter. The photograph is the entire future world. Didn't you know that? Any difference doesn't matter, because there isn't one.

There are different trade-offs to different approaches. If we know which approach we're taking and what its strengths and weaknesses are, we're more likely to choose a good one for the situation. We take a turn. We let someone else take one. And they let someone else. And... Now we've completed the circle. Let's admit that trends and bets are critical in life, just as vision is critical—even though it's imperfect and can be hacked by optical illusions. But let's call trends and bets what they are. Let's not call a map a territory.

You are here on a map. And you are out in the territory. And you change the map you express, and you even change the territory. Right now, you're breathing and radiating heat and changing it. Just your heat changes the world around a little bit. And you are not alone, and the map is not all yours, and the territory is not really yours at all. We share a piece of it, but it exists. We cannot capture it. Our words do not replace it.

Fairness is Participatory

Some revolutions that have come out of studying games: probability theory (which leads to statistics, then all modern experimental science), game theory (clarifying economics and evolutionary processes; may also have helped defuse the nuclear arms race), and substantial parts of artificial intelligence. I simply do not believe that games are not important!

It isn't as if the insights above couldn't occur otherwise, but I notice a pattern over the centuries: games are useful!

For one more example, the idea of fairness itself is virtual. Are we all identical? No!!!

So, hold on, hold on. Let's pretend for a moment that we are interchangeable. This is a hypothetical space we're in now. We are not interchangeable, or only in some ways and partially. This is a strong abstraction: there are people in your life you would not allow to be replaced with a random stranger. And imagine how long your workplace would buzz along effectively if you all kept getting swapped out with people off the street. But supposing we were interchangeable, then what should the rules be, and how do things balance out?

That's precisely how a game works: the rules don't care about your name, genetics, or personal history, and you could step out and someone else could take your seat. The rules are tuned to be playable for many people. And that's also the beginning of fairness. Seeing different scenarios as the same and different people as interchangeable requires a hint of make-believe: all scenarios and all people are unique. Fairness can be seen as a flavor of virtuality built into most of us, a half-baked and fallible instinct, one that depends critically on a hypothetical, on unrealities of sameness, on falsifiable abstractions.

It also happens to be very useful in society. Over time, civilization tends toward fairness. The idea is most useful after we understand it. Fairness is a thing we co-create to encourage participation in a better society.