dimanche 27 décembre 2020

Toys R Us is dead and gone, and I always could relate a little bit to its ditty, "I don't wanna grow up, I'm a Toys R Us kid..." I once had that thought in a Toys R Us as grade school wore on, not the words of the song, which I don't think existed, but knowing I never wanted to get so old in mind and heart that I had no interest in toys. My dad had, seconds before, admitted that he wasn't really interested in toys, and when you grow up, you tend not to be. That seemed such a loss and so sad that I resolved never to forget. But I did.

I must say, ruefully or not, that I'm not very interested in toys at all. I am, however, interested in play, very much so, and just having a sense of humor and imagination, and I've never quite lost my interest in games. Even there, and this I find regrettable, I have lost most of my enthusiasm. Sometimes it's there, maybe even in full, though also evolved. If I hadn't been trying so hard not to lose the last vestiges, I probably would have. But I was trying so hard, at least some of the time, and so I have kept what I think is more than reasonable to keep.

It's less that I'm a Toys R Us kid than that I'm a Pinocchio kid. "When you wish upon a star..."

mardi 22 décembre 2020

Immunity

There's a relationship between cancer and delusion. Cancer is basically never one mutation. A cell typically has to be damaged in a bunch of ways. One of the most common involves contact inhibition - normally, cells will grow (keep dividing) until they bump into other cells. When they make contact, that inhibits their growth. This way functional tissues form. They don't turn into knots of tumors trying to outcompete each other for blood and other supplies. Knocking out the genetic machinery of contact inhibition is one step on the path to cancer.

Another common step is putting a protein on the cell's surface that is a special self-destruct signal for immune cells. Normally this protects against autoimmune disease: if immune cells learn to attack the self, then this signal on the surface of healthy cells gets the corrupted immune cells to self-destruct (and stop producing even more immune cells that act like racist law enforcement). It's a way to unlearn stray allergies that form against the body's own tissues. Meanwhile, the immune system also wipes out cancer cells. It has evolved to recognize cells that are breaking contact inhibition, and destroy them.

As you might imagine, if cancer cells have mutated to always express that self-destruct signal, now they are free to grow like crazy without much disruption from the immune system. Any immune cells that cotton on get triggered to self-destruct.

Those are just two genetic "fractures," if you will. The first is, to my knowledge, present in all cancer, while the second is very common. A typical malignant (cancer) cell will have a dozen broken gaskets like this. It's eerie, but cancer actually evolves in place from a healthy cell to a lethally rogue cell, step by step.

The same has been found for people who commit suicide: where evidence can be gathered, the tragedy developed step by step through little changes in what would be normal. Many little things are usually there - internal and external - to support self-preservation. So it's interesting that science now has knowledge of what those damage steps are, or can be. Four examples are being a victim of abuse, chronic pain, cutting, and failed suicide attempts, which act a bit like training wheels. Self-harm gradually gets normalized in the person's mind, or even gets associated with "solving" problems.

These are examples of evolution - not the kind we like, but the kind we don't like.

While a person is in chemotherapy or radiation therapy for cancer, the mutation steps and therefore the evolution come even faster, which is why doctors consider it critical to use fast-acting and potent methods. If they don't, the cancer develops resistance to the drugs in much the same way bacteria do, and in a matter of weeks. Most cancer treatment actually breeds more lethal cancer cells, so it has to act fast.

However, it would be fatal to believe the stories that you can drink some kind of tea or eat brown rice or hold crystals and that'll work. People have been known to spontaneously get better, but this has been researched now, and it's around 1 in 10,000 cases. You don't like those odds, or shouldn't, however mystical you're feeling. But if you come down with cancer, everyone will tell you about this stuff. Everyone wants you to cheer up, and it's great to have them rooting for you. Everyone has heard of somebody who's heard of somebody who has the cure, or just got better through a good attitude. If I hadn't gone to the doctor and done what the pros said, I would definitely not be here. Survival rate for that kind of cancer without treatment is 0%. With treatment, I was fortunately one of the 70% who make it.

Which reminds me of conspiracy theories and just general delusionality. Delusions evolve into place. It's remarkably similar to what I've described above (which is the point of this post and the setup so far). There are many ways mentally healthy people are protected against inaccurate ideas:

- Other people will tell you if you're sounding too crazy

- The news is mostly factual

- There are many books

- If you know how to use the internet well, you can fact-check almost anything

- Crazy beliefs lead to crazy expectations, and when those don't come to pass, that gives correction

And so on and so forth. The "cancerous" mutations in these checks and balances can include things like these:

- Calling news fake in general

- Asserting without proof that scientists have a hidden agenda

- Saying all sources are biased and implying this means equally biased

- Claiming that anyone who is paid for their work is an unreliable source of information

- Claiming that any money link however lengthy to a disreputable group proves collusion

- Attacking the character of anyone who disagrees

- Using ridicule in place of an argument

- Stirring up guilt, fear, pride, or anger as if merely feeling any of these establishes fact or responsibility

- Casting blame elsewhere for what goes wrong and could have been predicted

- Altering data to fit a narrative

- Cherry-picking data to paint a narrative

- Ignoring dissenting arguments and making no effort to uncover and examine more of them

- Leaning on the primally persuasive quality of self-confidence or unswayable belief

- Portraying belief itself as a fundamental good and doubt itself as a fundamental evil

- Making threats of physical harm

- Disregarding what a person says in anger as clearly not factual or important, when actually angry people usually tell you over and over and over and over why they're angry

- Failing to recognize that anger makes you one-sided by its nature, and everyone who feels angry feels justified

- Accusing the person who says something that ends up making you feel uncomfortable or bad of being a jerk who is merely trying to insult you

- Focusing on whether someone sounds condescending rather than on what they're saying

- Confusing reading a book with being right

- Thinking that because you once held view A and now hold view B, you must have changed from an incorrect view to a correct one

- Thinking that experts are untrustworthy because you aren't smart, trained, or informed enough to follow their professional data and reasoning

- Accepting popularity as strong evidence

- Assuming that when you can poke holes in an argument, the person must not know what they're talking about, the holes are automatically major, the argument is invalid, the conclusion is wrong, and you're so very clever, whereas in truth it's difficult to present a complete argument without making everyone impatient, few people are trained for that, and many flaws are superficial or easily filled in on reflection

And so on. The more of these kinds of tendencies a person has, the more they will tend to suffer from delusions... because these bypass the reality checks that, like the immune system with cancer cells, should be finding and knocking out false beliefs.

dimanche 20 décembre 2020

Life Skill

Something I've often contended with is logic in the context of everyday life. In my experience, being really good at logic and applying it really well in ordinary situations are quite different skills. They are not unrelated: kind of like running/throwing versus baseball. If you really suck at running or throwing, you're going to suck at baseball. But if you're great at running and throwing, you won't be good at baseball without additional skills. You might still be terrible at baseball.

Having a keen eye for how to apply simple logic accurately in the complexity of human situations is very useful, much more useful than people who like to insult or dismiss logic believe. We live in a world of physics transforming information in regular patterns. It isn't that human concerns sidestep logic. It's that logic is more difficult and error-prone when it encounters this kind of complexity.

Possibly the best response to that is to keep your logic as simple, factually based, and open-minded as possible.

When you do that, you will find that logic rarely fails you, even when it's failing everyone else.

I've given you pretty much the whole secret! But I'll go into a little more detail.

Learn about IF-THEN and how it isn't reversible. IF the sun is shining outside, THEN it must be daytime. Great. But IF-THEN isn't reversible. (The reverse might be true, but that would be its own surprise.) IF it's daytime, that doesn't mean that THEN the sun is shining outside.

It seems ridiculously obvious, and people love to dismiss the usefulness of these basic logical constructs. But if you make sure you aren't intuitively reversing IF-THEN (it seems easy to avoid here, but as soon as people get into things less familiar than sun and clouds, they go totally off the rails with THEN-IFs), you'll avoid a large chunk of the irrationality you see in the world. It's crazy how much craziness comes from reversing IF-THEN and thinking what you just did there must make sense. No, it doesn't.

That doesn't mean it must be wrong. There's a whole study of this called Bayesian statistics. Our brains, in fact, are very much Bayesian statistical machines. No joke. Absolutely true. But this also is a cause of enormous amounts of prejudice in every area, so we have to be careful. Bayesian statistics/intuition gives us some clues about when IF-THEN might be reversible. But it's highly fallible.
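To make that asymmetry concrete, Bayes' rule is the bookkeeping for running an IF-THEN in reverse. A minimal worked version, where the 0.3 and 0.5 are numbers I'm making up purely for illustration: suppose the sun is shining 30% of the time overall and it's daytime 50% of the time. Then

\[ P(\text{sun} \mid \text{day}) = \frac{P(\text{day} \mid \text{sun}) \, P(\text{sun})}{P(\text{day})} \approx \frac{1 \times 0.3}{0.5} = 0.6 \]

So even if "IF sun, THEN day" is essentially certain, "IF day, THEN sun" holds only about 60% of the time under these made-up numbers. The reverse direction has to be earned from the base rates; it never comes for free.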

An excellent way to think about this: your brain knows how to make bets and present them as realities. But you need to know they are bets. You need to know your brain is a deeply evolved gambler, and the world you think you are seeing is its gamble. Never forget that, and the IF-THEN issue dissolves.

Get the very, very basics right. Understand that logic in the real world is quickly meaningless if it's based on facts that are even slightly inaccurate. Even slightly. You have to stay open to evidence and subtleties you've overlooked and assumptions you might be making (you are making assumptions, always, so stay open to looking into them).

The simplest logic you can use on the most bulletproof facts. And keep your eyes peeled and your ears tingling for the slightest defect in that logic or those facts. Thank me later. Happy holidays!

samedi 19 décembre 2020

NIIAI: No Island Is An Island

When we say "free will," I think by default we might think we mean "dependent on nothing." But nothing we care about is dependent on nothing. So on second thought, we might mean "it's entirely up to us." So it isn't "dependent on nothing," but, rather, on us. We define the action. It depends only on what's within us.

But now we get to another crossroads of interpretation: dependent on what's within us - our consciousness, now? - or our set of memories and deeper biology, the vast majority of which is not part of our awareness, some of it merely not right now, but most of it never. We are far more information-dense than we know. But let's take the responsible approach and own up to whatever is inside our skin and whatever is moved by it, whether we feel we called that forth consciously or not.

So "free will" now is constrained - dependent on - whatever is in us, conscious or not. In both aspects, it is not quite "free." The very fact of "free will" becoming determined by a conscious "me" or "I" - even without the unconscious influences - begins to hint at its lack of liberty. After all, if I have power over it, doesn't that make it subjugated, rather than free? Am I separate from that will, or am I the will itself? If I am the will itself, maybe that begins to untangle the knot. Then again, maybe it ties a tighter one.

"Free" will is subject to me, the conscious I, but we might suppose, at least for the sake of argument, that these are identical, the freedom of my will and the conscious I.

But then "free" will is certainly less than free when it meets the unconscious influences.

Surprisingly, also, we have not considered by far the biggest factor in will, whether free or not: the situation. Without context, a choice has no meaning whatsoever. Options can only be evaluated in a given context, preferably with a measurable purpose in mind. Until we look out at the rest of the world, will might be free, but it is meaningless. One choice is exactly as good and bad as the next, and as much and as little of everything else, as well. It's all a wash.

See, we only care about "free will" because it is not itself. We only care about it because it has meaning in context: by definition, a reduction in degrees of freedom. Or... is it? I suppose if there is no context, then everything is the same, which ultimately isn't very free. Perfect randomness out of context is maximally free - in a symbolic sense, it carries maximum entropy - but although the surprise of each new outcome would be at its greatest if you cared, the amount you care is absolutely minimal, if not zero itself. Absolutely free will is meaningless because it has not even the dependency to be noticed, to matter, to make a difference. 

The oxygen molecules inside a scuba tank at a fixed temperature are something like this free, but most of the time, even none of the other oxygen molecules in the tank care or are affected by any particular molecule we focus on. Besides, this all unfolds according to deterministic laws of mechanics. Informationally, given ignorance of all the positions, the molecular arrangements resemble free will. The trouble is that the universe is watching, and the laws of physics seem to have a plan for where each molecule is at each time. The freedom mentioned is illusory: it's actually the freedom of an ignorant observer from knowing what they don't know, ie, the positions and velocities of the oxygen molecules. None of that freedom resides in the molecules themselves, which simply follow the contours of spacetime and particle interactions.

Freedom that has no meaning is no freedom at all, or only a very technical informational freedom divorced from any application, as the information by definition cannot affect anything else. As soon as information can affect something else, it begins to have meaning.

One might feel tempted to separate "freedom" and "meaning" and be done with it, but I think if you try that, you'll quickly run into a wall. The trouble is, if you are informationally free - high entropy, true randomness in all relevant degrees of freedom - then you cannot respond to a situation, anticipate the outcomes of options. At the extreme, there would be no such influence in either direction. We might call this "insular freedom." In the traditional interpretation of Schrodinger's Cat, there is insular freedom for the cat to be alive or dead, since no observer can know the difference (until some unarrived moment of truth). As long as the box remains closed, that insular freedom, in a theoretical sense, remains. Inside the box, at least as far as the "deciding" particle mechanism behind the poison dispenser is concerned (and that particle mechanism would be entangled with the entire cat as one system), there is full "insular freedom." But this freedom by its nature depends not at all on the outside world (we haven't opened the box, and we've specified it's not to be disturbed in the experiment). And likewise it has no meaning for the outside world, which cannot know what's happening one way or another and quite possibly doesn't know anything about the box, its contents, or the experiment at all.

What I'm getting at here is that full "insular freedom" is a restriction of "meaning." In that sense, we could consider them opposites, and maybe it would make sense for one to stand as the other falls. But when we talk about "free will," we do not mean "insular freedom." Insular freedom would be a greater freedom, perhaps, if it could know the outside world, respond to it, and by responding, affect it. Yet this would automatically constrain the possibility space, wouldn't it? So "free will" would be less free than "insular freedom," but also more free in another sense. It would have the freedom of meaning.

You see why it's a problem to separate "freedom" and "meaning" and be done with it. They are too intertwined for so simple a solution.

We could say that "intention" is an interplay between "insular freedom" and "meaning" (ie, anticipation, understanding, and influence, all via physics in the inside and outside world). Intention requires some thaw in the crystal of outer and inner constraint, yet also some causality and apprehension of its immediate parameters.

vendredi 20 novembre 2020

Shoegazing with Spectacles

For all that I make a big effort to expand and refine my perspective, and try to keep ego at bay as a conflict of interest, in the end I don't know whether I overvalue or undervalue my own observations. It feels like both. And ultimately, value tends to be subjective, so both answers could be valid.

Accuracy isn't everything: I could spend my whole life accurately copying out the 1s and 0s of a reality TV episode by hand. Even with perfect accuracy, that would be a waste of a life (in my opinion, but come on). Accuracy isn't everything, and a big chunk of value is subjective. But lots of value is transferable.

This is probably too much navel-gazing for most people. And it's exactly where I think, "Is there a point to the thought? Could it lead anywhere useful, actionable?" It feels like an unsolvable maze. A dead end.

You can ask other people, listen to feedback. You have to. But other people are in the same quandary. Accuracy can be very costly, difficult to attain and difficult to verify after that. Value is certainly subjective but there's enough overlap that we can start to believe some value is real, isn't just in our heads. So we make some effort at accuracy where we perceive it's valuable, through our own instincts and reasons and the feedback we hear and the money and other material rewards available. It's all a patchwork.

But because it's complicated, there are many places we can go wrong. We often won't even know it. When the problem is difficult enough, you don't even know whether you're solving it or not. You're in the dark and in the silence, not only for your shot, but also for most or all of the time after.

I told you this was getting too omphaloskeptic (a fun word for "navel-gazing") for most people. Sometimes the worst answer in the world is "It's complicated, it depends," even though that's the best answer we can give.

I'm talking about what the Serenity Prayer talks about, "and the wisdom to know the difference." We do wisdom a disservice, and ourselves and our fellows, when we portray wisdom as easier and simpler than it really is. Wisdom is not equally available to all at all times. False identicality is a misuse of equality. If we conclude that political equality implies we all have the same strengths and weaknesses, the same a priori capacities exactly, then we run logic backwards into that brick wall at high speed. We cannot ordain that every individual is in wisdom identical. We cannot treat everyone as if they begin at the same beginning and only deviate by their own fully informed choices, or else by the malicious and unfair incursions of others. That is inaccurate and its "value" is a slow-acting poison, a loose thread unravelling the fabric.

We do not begin the same, we are not the same in the middle, and we do not end the same. We are only the same, apparently, before conception and after death. Never while we experience are we the same as anyone else.

We have enormous amounts in common. And I don't mean to try to isolate anyone from our senses of unity and shared humanity. But I hope you already knew I wasn't dismissing any of that. I was simply pointing out the obvious, because it's relevant.

We are often similar but never the same. The pieces that are identical - individual highly conserved genes for example - diverge enormously, still, once in context. The same string of nucleotides means something else in a different cell, or in a cell that's in a different mood. Multiply that by tens of thousands, millions, billions of pieces, or trillions or more, depending on what building blocks you use and how close two need to be to be called "same."

The best we can do is apply all our senses and improve them as well as we can.

vendredi 13 novembre 2020

> 2

I've always been a huge fan of the idea of dialectic - two opposites that can't agree, but there's some deeper truth neither is quite getting that explains the confusion. The yin-yang is one representation. It's also called thesis + antithesis = synthesis.

We live in a field day for dialectic.

It's important to recognize that dialectic doesn't mean that in every opposition, the synthesis lies directly in the middle. That would be called the middle-ground fallacy, bothsidesism, or false balance.

One way I remind myself of this is, I say, "There are MORE THAN two sides to every story." When you recognize that, and keep it in mind, you're less likely to fall for either pole or some mythical happy land where all claims are equally true and false.

In the US political spectrum, I avoided identifying as a Democrat specifically until I really felt I had no choice. But I've never called myself an Independent. Independents seem to fall for the middle-ground fallacy a lot. I'll never forget those well-meaning, respectable people on the eve of the 2016 election, in town halls, responsibly asking questions at the mic, reading from their note cards. They couldn't decide who would be better for our country, and they were going to ask just the right questions to find out.

They were all massively deluded, I'm afraid. They had fallen for bothsidesism.

But anyway. People make mistakes. They weren't trying to be confused. They just were.

It's my natural inclination to gently pull people, including myself, away from poles - not all extremes under the supposition that all extremes are wrong, but just in general by pointing out what hasn't been said, yet is surely relevant. The simpler the better. Often this takes the form of a "maybe." Maybes are very important, because if you are closed to maybes, you become inflexible.

I've learned to choose my timing better, and I've learned that if it doesn't feel natural to say something, it probably won't feel natural to hear it. Sometimes you still press on, sometimes you wait. Sometimes you find better words. Sometimes you only have one chance to affect someone, but if they feel you yanking them around, they will become resistant, or even more resistant than before. Often a person hears you better if you say something once and they know where you stand, but you don't lecture or chide. These are my personal findings, anyway, mixed with research I've read and the opinions of people who know about this stuff.

After all, at work it's my job to affect people. I'm supposed to change people's minds. But it isn't an agenda. They're supposed to walk out knowing and understanding more, and feeling less stuck.

My philosophy here clashes with progressive notions about ostracizing what we don't like, in an effort to fight it or control it. And my stance on that is actually very simple. Each person can choose who or what to ostracize or boycott. Personally, I like to remember that if I refuse to talk to someone, that person goes right on existing no less than before. It rarely sends much of a message. We think it does. But still, there are times when that message can work.

In my opinion, silence says nothing. All it does is amplify what's around it. So if you go silent, you amplify whatever you said most recently, and whatever you might say after the silence. The silence itself, however much we might like to believe in its eloquence, says nothing on its own. To believe silence speaks volumes is otherwise known as passive aggression. To believe silence is consent is, at the extreme, rape.

We need to fight for justice. It doesn't establish itself. It doesn't just grow in the soil with sunlight and water. But let's not forget the importance of sunlight and water and soil and growth. We spend too much time refusing to understand people who cause problems, because we believe that understanding, compassion, empathy - in short, humanization - ends up promoting evil. This is not actually true. If we stand around and watch evil happen and do nothing, then we are implicitly promoting it, in a sense. That would be condoning. At the same time, all of us are limited. We can't help but condone most of the world's problems, if we are honest with ourselves. We know there are a great many problems, and we are not solving all of them, or even most of them.

What I would like to see is more of a focus on science. What does science say about what will help the problems we face? In the area of social justice, I usually see very little in the way of citing research. And I'm sure that's partly because a lot more research needs to be done. But more research will be done if we take more interest in it. Then there will be more demand for research, and, eventually, by various routes, less demand for prisons. We all believe that loudly condemning what we don't like shakes the baddies down and shows them we mean business. Policing basic, necessary standards is critical. But this symbolic show of force that everyone believes in, this moral fire and brimstone, only goes so far, and can even backfire. It does not actually solve the problems we believe that, if we are only loud and insistent enough, it will solve.

Solving a problem without understanding it is called "luck."

Fully understand a problem and it melts.

That's dialectic.

A Balloon of Gaskets

Republicans these days have a simple, common-sense view of regulation. For Republicans, regulations mean "do" or "do not," and like any sensible person who loves freedom, they infer that too many of those mean too little freedom and prosperity.

But there are many kinds of regulation. Your brain, for example, is a regulator.

"Do" and "do not" may appear to restrict freedom while increasing it. Forbidding smoking on airplanes increases net freedom. And if the cravings become unpleasant enough that some smokers quit smoking, then this has lengthened lives and healthspans, which increases freedom. Of course, you are free to smoke, addict yourself, and limit your own freedom as you see fit. But you are not free to limit other people's freedom while limiting your own. When you smoke on an airplane, you make a choice for everyone on the plane without asking them. And so forbidding smoking "for" other people on an airplane (people who did not choose to smoke, nor would they) increases net freedom, without any doubt in the matter.

At the most basic level, once you get past "do" and "do not," the simplest kind of regulation is actually a thermostat. The principle of the thermostat is what allows your body and mind to operate so beautifully. Our genes are, yes, often "do" or "do not" signals, but also often thermostats. Homeostasis, a fundamental principle of life, is another way of talking about the thermostat - albeit many, interconnected. And so it's both sensible and revealing to look at every brain as a large, highly evolved tangle of electrochemical thermostats embodying regulations. Without those regulations, you would not exist to call yourself free.
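Here is a minimal sketch of the thermostat principle in code - the names and numbers are invented for illustration, not a model of any real device or body:

```python
# A regulation that is neither a bare "do" nor a bare "do not":
# a thermostat, i.e., negative feedback around a setpoint.

def thermostat_step(temperature, setpoint):
    """One tick of a toy thermostat: decide, then let the temperature drift."""
    heater_on = temperature < setpoint        # the rule: heat when below target
    drift = 1.0 if heater_on else 0.0         # the heater adds warmth when on
    return temperature + drift - 0.5          # heat constantly leaks away

temperature = 15.0
for _ in range(20):
    temperature = thermostat_step(temperature, setpoint=20.0)
print(temperature)  # settles near 20.0 and hovers there, instead of drifting away
```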

So draw your own conclusions about how much you want to remove regulations and treat them as automatically a problem.

The drive to keep legal regulations concise and intuitive is a solid one (people cannot intentionally follow rules they do not know, remember, or understand), but it should really attend to individuals before it attends to businesses. Businesses can fail and reform without loss of individual life. No individual comes back from the grave. Protect individuals well, and you also protect business. Protect businesses in a way that seems well at first, and you may end up undermining individuals and businesses both.

mercredi 11 novembre 2020

Sensation Firewall

There is at least one long-standing theme in my thinking about social change. Usually people like to assign responsibility for social patterns. We give ownership of responsibility for the Department of Education to one person. Then when things go wrong, we know who to blame. If they themselves can isolate the blame to one person under them, they fire the person and everyone moves on.

I'm not saying this paradigm never works. It often works. It's a component of meritocracy. But there is an interesting, permanent shortfall. By focusing on individuals, we keep overlooking the systems.

To some extent, holding one person to account for a system failure is a way to appease anger without making the larger concessions of fixing the system.

Hate is a normal enough human emotion which I define as "anger + disgust, justified or rationalized so it gets stuck." To my knowledge everyone has the first experience, anger + disgust; the second step, applying reason to cling to it, is optional. For me, it's also inadvisable - it's misery and seems to cloud judgment.

So yes, we choose and we are responsible. Leaders take that on more than most. Existentialism gives a great, simple answer here: whatever you do, that's what you did. You're fooling yourself to think otherwise. It's the one aspect of morality that is up for no debate at all.

Let's get back to social change. It's social. And it's change.

For that, I suggest "anger + disgust" is normal and often drives constructive awareness and action. But that is not healthy as a permanent condition, and I think we do have choice in it. And while I don't think we should command each other how to feel, it is totally fine to share how we feel, and make suggestions. My policy? In the end, for clarity and humanity if possible, hate the game, not the player. And then don't even hate the game. Go and fix it. Or at least suggest ways it can be fixed, and talk about ways it can be fixed.

We don't do this enough. We assume the key systems can't be altered. I don't understand why... except that I think we are a little too preoccupied with the model of one individual owning an institution's responsibility.

mercredi 28 octobre 2020

A Recipe for Overtaking the Number Two

So here's an idea I don't have good words for, but it keeps cropping up. Imagine we're in Congress and a recently drafted bill is in revision. Let's say it's about transportation. That affects everyone almost equally, in the sense that we all absolutely need it for food and other supplies.

We can probably safely predict that Republicans and Democrats agree that there are transportation problems to solve, and that they're important. There's a call for bipartisan support for a bipartisan bill. That seems rational to most.

We can also probably predict that Democrats are proposing more spending, and Republicans are proposing less spending, or even cutting existing expenditures. There's a stereotype of fiscally responsible conservatives and fiscally wasteful liberals. If history supplies evidence of that, it's certainly very mixed at best. For example, in recent times, Democratic control seems correlated with more robust fiscal decision-making, prosperity, and even balancing the budget. No doubt Republicans can point to evidence that says the opposite. I don't claim to know for sure.

These ins and outs are not my wheelhouse. But I think there's enough evidence for someone out there to have extracted more or less the right answer, whatever it is.

My point is not about which way that goes. But for the sake of illustration, let's say Democrats want to tax gasoline more and build high-speed rail lines and add bus routes, and Republicans want to allow more tollbooths and expand existing highways and set aside HOV lanes for the environmentalists. Both sides say they are trying to use what's there, expand throughput, and reduce emissions, while keeping a close eye on the budget.

Because Democrats expect Republicans to shoot down new spending, they go big on the proposals. Compromise comes later. They'll argue for exactly what they want up front, and make it sound more self-evident than the sun in the sky, except that it isn't going far enough. Republicans will be waiting for this by assembling a number of arguments for shooting it all down.

Maybe we could call this an arms race. That seems too vague. I don't have a good, more specific name for it. But here's a great cartoon of Microsoft when it had entrenchment problems. (Entrenchment, hedging, arms races, negotiation bids, and polarization all describe what I'm talking about, but no phrase with the precision I would like as to what, why, and how: the idea of taking specific opposition as a hidden assumption and the distortion that creates.)

You've probably seen it before. Critical detail: the imagined guns inside the bubbles, to which the actual guns are a response.

My point is that Democrats end up defining themselves as not-Republicans, and Republicans as not-Democrats. Any time any of them says anything, it has to be taken in the context of what they are not. Democrats often do not speak from the origin, but in response to what was just said. Not "gun laws need improvement" but "guns are slaughtering our children by the thousand." The same with Republicans. Because they can rely on their opposition to be there, and rely on encountering resistance, they get used to pushing harder. They go X amount one way, and the other side pulls them Y amount back. So a back-of-the-napkin calculation shows that any time they want to go X far, they have to push for X + Y. If they believe anything a bit, they suddenly have to believe in it absolutely, or it counts for nothing - or, worse, less than nothing, because people are weird about transparent uncertainty.

This leads to extreme or at least entrenched attitudes. Everything they say is within this tug-o-war context. It can't be taken at face value.

Yet it often is, even by them.

And we watch it and get involved and start to mirror this.

This business in Congress is not the only reason for polarization; we don't need a legislative body for that. There is something called Heider balance in social psychology that shows, in math, the basic reason for and mechanism of polarization. It isn't complicated. But the Congress image is one clear example of it: the common enemy. The enemy of my enemy is my friend. The friend of my enemy is my enemy. The idea is in the Bible, but it lives deeper in the mind than recorded history, in instinct. People fall in with the party line so they aren't taken for the enemy. That's why polarization happens. It's a mathematical consequence when you apply those rules to a network. You get two camps. If things are tense enough, a war breaks out between the two camps. It's happened a million times - maybe a trillion throughout human and primate evolution. Maybe more.
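For the curious, here is the balance rule itself in a minimal sketch (my own illustration of the standard result, not anything from a specific study): call a friendship +1 and an enmity -1, and a triangle of three people is "balanced" when the product of its three signs is positive.

```python
from itertools import product

# Heider / structural balance on one triangle of relationships.
# +1 = friends, -1 = enemies. Balanced means the product of signs is positive.
for ab, bc, ac in product([1, -1], repeat=3):
    balanced = ab * bc * ac > 0
    print(f"A-B:{ab:+d}  B-C:{bc:+d}  A-C:{ac:+d}  ->  {'balanced' if balanced else 'strained'}")
```

Only two kinds of triangle come out balanced: all three friends, or exactly one friendship between two common enemies. Push every triangle in a network toward balance and the whole thing tends to settle into one happy bloc or two hostile camps, which is the polarization above.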

But if we know that, we can alter it, maybe even adjust for it completely.

The point is we understand how it happens.

The better you understand a problem and its context, the closer you are to fixing it.

Here is my ultra-simple prescription:

1) There are MORE THAN two sides to every story.

2) Curiosity

samedi 24 octobre 2020

Is Your Brain a Matrix?

"Artificial networks are the most promising current models for understanding the brain."

Skepticism on that point has been popular for a long time. Skepticism is good. But your brain has a lot in common with a giant matrix. Your senses are like a giant vector (the data in your favorite song), and your thoughts and actions another giant vector (a synth recording of you covering that song, capturing your input to the instrument), with your brain a matrix that converts one to the other. How far does the similarity go? Unknown... a giant matrix can approximate any process as closely as you want, if it's big enough and has enough time/examples to learn.

If all that - a giant matrix as a brain - seems too simple to be possible, keep in mind that this kind of matrix represents an interacting network. There's a math proof that a matrix can approximate any process, meaning any natural or computational process as far as we understand the word "process," and it's very closely related to the way you can break any sound down into a spectrum of frequencies. The proof actually depends on the same idea.

The "deep" in "deep learning" just means using a bigger matrix. Often that means using fancier hardware to run the learning faster, but not necessarily. This is very similar to cameras and screens with higher and higher resolutions. A newer phone should have a faster chip to keep up with a higher pixel count in camera and screen. But it doesn't technically need a faster chip. It would just slow down otherwise. Images didn't get more complicated, only bigger.

But for that ability to sculpt a matrix into any process to really work, the matrix needs to be broken up into individual vectors, and those are run against the input - the vector representing senses - one at a time, with each result - a work-in-progress vector - put on a curve a bit like a grading curve. This curved result is then sent to interact with the next vector that was broken off the matrix. Rinse and repeat!

Eventually that work-in-progress vector is done, at which point it represents the thoughts/actions that are the output. Think of each number in the vector as the strength of each dimension of possible response, the probability of hitting each note on a piano, or how much to move each muscle, etc. So to put the last paragraph in different words, a "deep learning" matrix, aka neural network, is no more than a bunch of multiplications in the form of dot products between pairs of vectors, with a little filter/curve after each one.
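Here is that whole description as a minimal sketch in code - the sizes and random weights are placeholders, chosen only to show the shapes of the moving parts:

```python
import numpy as np

rng = np.random.default_rng(0)

senses = rng.normal(size=8)            # the input vector (a tiny stand-in for "the senses")
layer_a = rng.normal(size=(5, 8))      # one matrix; each row is one vector broken off it
layer_b = rng.normal(size=(3, 5))      # the next matrix in the chain

def curve(v):
    """The little filter/curve applied to each work-in-progress vector."""
    return np.maximum(v, 0.0)          # a simple kink at zero

work_in_progress = curve(layer_a @ senses)     # dot products of each row with the input, then the curve
actions = curve(layer_b @ work_in_progress)    # rinse and repeat with the next matrix
print(actions)                                 # the output vector: the strength of each possible response
```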

Incidentally, each one of those vectors broken off the matrix can be visualized as a line or edge. You can imagine that you could draw any picture, even a 3D one, even a 5005D one, with enough lines or edges. You can make it as clean and accurate as you want by adding more lines. We know that intuitively, because that's how sketching works. Deep learning is not unlike sketching very fast. Similarly, you can draw a very smooth circle, as smooth as you want, with enough little square pixels. See it? Now we can do that with concepts.

But those are details. Students who think matrix math is boring will typically hear about AI from me, haha. And they do tend to find it interesting.

The curve, or conditioning, after each step is what makes this different from just multiplying a giant vector by a giant matrix to get another giant vector. That would be too simple, and it's kind of the lie I told at the start. Instead, information flows step by step through the layers of the matrix much like energy filtering up through the layers of an ecosystem, towards apex predators and decomposers. And there's that curve/filter between each level. I suppose it's a bit like a goat eating grass which is converted into goat; something changes in the middle. It isn't grass to grass, it's grass to goat, so there's a left turn in there somewhere. That bend is critical but not complicated at all, though why it's critical is more difficult and I don't fully understand why. That filter doesn't even have to be a curve, it can just mean putting a kink in each line - just a bend in each vector, like a knee or elbow. It almost doesn't matter what the bend is, just that it's there. That's surprisingly essential to the universality of neural networks, so apparently it adds a lot for very little. I don't have a good analogy for why that's true, except that the world isn't actually made up of a bunch of straight lines. It's more like a bunch of curves and surfaces and volumes and energy and particles and static and other noise and signals between interconnected systems, and this step, putting kinks in the lines, allows the processing to break out into a much larger possibility space.
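One concrete way to see why the bend is essential, again with invented random matrices: without any bend, two stacked matrices collapse into a single matrix, so the chain never gets beyond those geometry-class transformations.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=8)
w1 = rng.normal(size=(5, 8))
w2 = rng.normal(size=(3, 5))

# No bend: two layers are secretly one layer.
print(np.allclose(w2 @ (w1 @ x), (w2 @ w1) @ x))                  # True - the chain collapses

# With a kink between them, the collapse no longer happens (for these random weights).
print(np.allclose(w2 @ np.maximum(w1 @ x, 0.0), (w2 @ w1) @ x))   # False
```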

Theoretically, the old possibility space (without bends) was the stuff that you could accomplish with the "transformations" you learned in geometry - stretches, rotations, reflections, glides. The new space is all possibility space - or any "before/after" that can be measured and processed as a measurement. Artificially aging your neighbor's cat, painting today's sunset from weather data... If there's any logical connection between input and output, between before and after if there's time involved - even if that connection is just the laws of physics - or even if it's just a random association to memorize, like didn't you know volcanoes and lemons are connected because I said so - that connection can be represented by a big enough matrix.

So instead of pixels, it's lines, and instead of lines, it's bends. Think of bends as moments of change. Maybe this is a little like adding 3D glasses and color to a greyscale picture without altering the resolution. But... the effect of the curving/filtering/bending I've been talking about would be far more shocking than the image upgrade if you could directly experience the difference, given that we get the potential of learning and mimicking every known process. Maybe we do directly experience that difference as a key component of being alive. It's more like adding motion to that image, and an understanding of where the motion comes from and where it's going. Or to rephrase, the greyscale picture with our "kinks" update is now more like a mind than a photo - which, after all, is a simpler kind of matrix, one that is not a network.

The other simplification I made is that the big matrix is actually broken down into multiple matrices first, before those are broken down into individual vectors, each of which is roughly equivalent to a single neuron. What I described was a single-file chain of neurons, but there can be many neurons next to each other. Each layer of neurons in a neural network is its own matrix. Each neuron is its own vector. But I'd say that aspect of the layers is the least important detail here, other than realizing you can see each row of a matrix as a brain cell, which is neat. And you can very roughly imagine each brain cell as knowing how to draw one line-with-bend through concept space and give its vote on that basis.

We have 6 layers of neurons in the cerebral cortex, for reference, so at a gross simplification that would be 6 big matrices in a chain, with the rows of each matrix representing individual neurons.
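In that cartoon version, the whole cortex-as-chain idea fits in a few lines - the widths below are made up, and real cortex is not literally a feed-forward stack, but it shows what "6 big matrices in a chain" means:

```python
import numpy as np

rng = np.random.default_rng(2)

widths = [64, 50, 40, 30, 20, 10, 4]   # invented sizes: input first, output last
chain = [rng.normal(size=(widths[i + 1], widths[i])) for i in range(6)]  # 6 matrices

signal = rng.normal(size=widths[0])
for w in chain:                         # each row of each matrix plays the role of one neuron
    signal = np.maximum(w @ signal, 0.0)
print(signal.shape)                     # (4,) - the final thoughts/actions vector
```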

samedi 19 septembre 2020

How to Unroll Convolutions

In my last post I showed how you can get a bell curve starting with any real number, in only three tiny steps.

This time I want to get a bit more practical, or at least hands-on. You've heard of a bell curve (or normal distribution, Gaussian, etc). Not only that, but you've been put on bell curves at various times, and inspected many other bell curves to understand aspects of the world. We all know this is a basic measurement and a shape that shows up, well, all over the show.

Why?

I want to explain that simply.

Briefly, a bell curve describes a sum of random influences. Height is normally distributed (in other words on a bell curve) because many little random-ish factors (genes, nutrition, etc) combine to produce your total height—and everyone else's, for that matter, and the same would go for any species. If you're an alien, chances are strong that height for your kind is normally distributed, not to mention many other measurements.

But let's start simple. Let's roll the ol' die.

(Wikimedia Commons)


We got 4. But we could have gotten 1, 2, 3, 4, 5, or 6. I'm assuming you thought of a 6-sided die first, and that's what we're using. (By the way, that die is from ancient Rome, so it actually is "ol'.")

(Academo.org)

Here's what things would look like over many rolls: 6 equally likely outcomes. If we kept a tally, it would look lopsided at first, but we wouldn't expect any of the 6 numbers to pull ahead and stay ahead. (See above.) Actually, the longer we kept a tally, the more the race would be neck-and-neck. (That's what the law of large numbers says. See below.)

(Academo.org)

(Xactly.com)
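If you'd rather see that leveling-out than take my word for it, here's a quick simulated tally; the 60 and 60,000 are arbitrary sample sizes:

```python
import random
from collections import Counter

random.seed(0)

def tally(n_rolls):
    """Roll a fair six-sided die n_rolls times and return each face's share."""
    counts = Counter(random.randint(1, 6) for _ in range(n_rolls))
    return {face: round(counts[face] / n_rolls, 3) for face in range(1, 7)}

print(tally(60))        # lopsided: some faces noticeably ahead of others
print(tally(60_000))    # nearly flat: every face close to 1/6, per the law of large numbers
```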

What if we roll 2 dice, though?

(Unknown Source)

At first, we're going to ignore the green and pink. There are 36 total outcomes (6*6=36) arranged above. As with the single rolls before, all of these 36 rolls are equally likely. That's important, so I'll say it again: every square (representing a pair of rolled dice) in the chart is equally likely. Here's a chart that makes them look the same.

(Math-Only-Math.com)

(Just a heads-up, between here and the pictures of black dominoes below, we're taking an illustrative detour that you can skip and come back to later. It helps with filling in and seeing the whole picture from end to end.)

Ready? If we go deeper with two dice, the landscape changes. First, notice we could tell the difference between the dice before, as if each had its own size, color, and personality. This was indicated by the dice on the left, which are really just different faces of one die (call it "Left"), and the dice along the top, which are just faces of the other die (call it "Top"). We know which is which. When we roll both dice, the "Left" number is shown on the left between parentheses, like an x-coordinate, and the "Top" number is shown on the right, like a y-coordinate. If we can tell them apart, there are 36 outcomes. But if we can't, only 21 outcomes appear.

Ok. We shall now require our experienced roller of dice to treat all colors as equal. Just to be clear, in this color-equal scenario, rolling 2 and 4 will be the same as rolling 4 and 2. That combination is different from 1 and 5 (which is the same as 5 and 1). Even though all four pairings add up to 6, we'd count two distinct combinations, which I'll call 2&4 and 1&5.

Notice that when both dice come out the same, there's only one way it could have happened. In the chart above, there's (1, 3) and (3, 1), which are the same combination. But there's only (3, 3) once, (4, 4) once, (5, 5) once, etc. So you're twice as likely to get the combination 3&1 as you are to get 3&3.

Let's recap. If we only care about the actual numbers on the dice, meaning we don't care about the colors of the dice, or which die is to the left and which is to the right, etc, then we no longer have outcomes that are all equally likely. This is an artifact of combining two dice which can be mistaken for each other, or at least which are allowed to work exactly the same: there is no priority, rank, order, favorite, etc. One die is as good as another—it doesn't matter which has the 2 and which has the 3. I'm belaboring the point because an interesting transition has happened.

As mentioned earlier, there are 21 of these results that are no longer equally common. The calculation that leads to 21 is technically a combination with replacement (specifically, "6-combinations-of-2 with replacement," the 6 because of 6 sides, the 2 because of 2 dice, and with replacement because both dice are always put back, so they can start out the same from roll to roll).
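For reference, the count behind that phrase is the combinations-with-replacement formula \(\binom{n+k-1}{k}\), and it covers every set of dice mentioned in this post:

\[ \binom{6+2-1}{2} = \binom{7}{2} = 21, \qquad \binom{6+3-1}{3} = \binom{8}{3} = 56, \qquad \binom{7+2-1}{2} = \binom{8}{2} = 28 \]

(two ordinary dice, three ordinary dice, and a pair of the 7-sided "dice" behind European dominoes, respectively - the 56 and 28 come up again just below).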

This way of counting does show up materially. It shows up in nature. And it shows up in games, which, after all, are the birthplace of probability and statistical theory (traceable to the letter exchange between Blaise Pascal and Pierre de Fermat, prompted by a gambler who had asked Pascal for help). For example, dominoes began a millennium ago as frozen die rolls for two dice, and they soon evolved into playing cards. The normal 52-card deck has come to reflect 52 weeks in a year. But tarot cards preserve their origins. A deck of tarot cards has 21 trumps, a zero card, and 56 pip/court cards. 6-combinations-of-2 is 21, and 6-combinations-of-3 is 56. So a tarot deck can be seen as a group of cards representing all the rolls of 2 dice, another group representing all the rolls of 3 dice, and then a 0 that's a wild card.

Here are 21 trumps and the 0, for good measure. Bet you never thought of them as dominoes before. (Try to pretend it doesn't talk about "levels of consciousness.")

(Triple 7 Center)

The original Chinese domino sets have 21 types of dominoes for precisely this reason. Meanwhile, traditional European dominoes come in sets of 28 and include 0 as a possible "roll" for each die, so a full set would correspond to all the frozen rolls of a pair of 7-sided dice. True to form, 7-combinations-of-2 is indeed 28. But let's set these aside. The rolls of 0 bring somewhat more confusion than the lone 0 card in tarot. The takeaway is that cards, dominoes, and dice are closely related, and they're all excellent mental tools for thinking about probability and statistics.

In the image below, the upper group of Chinese dominoes is doubled up (two of each kind), while the lower group is made of singletons. We can ignore the decorative colors. There are 32 dominoes in all, but you'll see 21 types. (Specifically, there are 11 different civilian dominoes, and remember that number 11, because it comes up in a second. And there are 10 different military dominoes.) The doubling up doesn't follow the same logic as I talked about above—things get shuffled around a bit—but a glance shows that some pairings (patterns from the civilian suit, for example 1&3) are twice as common/likely as others (patterns from the military suit, for example 2&4).

(LearnPlayWin.net)

Curiously, they even have names. Here are just the 21 Chinese domino types. Again, these embody all the rolls of two dice if you don't care which die is which. Hey, bet you didn't see ancient names like "Copper Hammer Six" on your radar!

(Amazon)

That was a bit of a detour just to reflect on what happens when two dice act like identical twins with the same name: almost counter-intuitively, equally likely faces of a cube give way to unequally likely patterns of pairing. Structure emerges. Possibilities begin to pile up here and not there. Multiple paths lead to the same result. The change is relevant to the creation of a bell curve.

Let's get to the really interesting part. When you add the dice, there will only be 11 possible sums. These are the numbers in the squares in the first chart, which I'll bring back to make things easier.

(Unknown Source)

The sums will go from 2 (ie, 1+1) up to 12 (ie, 6+6), which means there are 11 of them. (If you took the excursion above, remember when I said 11 would come up soon? Not too surprisingly, early on, 11 was the number of cards in a suit. The chain goes: 11 distinct sums with a pair of dice, 11 dominoes in the doubled up civilian suit, 11 cards in the early card suits. It's cultural evolution!) Our transition by the route of adding is important, too: we've gone from 36 outcomes, total, and those are equally likely (called "microstates"), to 11 after adding the faces of the dice, and those sums are not equally likely (called "macrostates" because they can include many microstates). If you look at the grid above, you'll see that 7 occurs more often than 10 as a sum. There are more ways to get to it. So if you ever have a chance to bet on the sum of two dice, bet on 7 and you'll win more often than anyone choosing other bets. As it so happens, 7 is the mean, median, and mode, and it's six times more likely than 2 or 12.

To use the terminology just introduced, 7 and 10 are two different macrostates. The macrostate of 10 covers only 3 microstates, but the macrostate of 7 covers 6 microstates. I like to avoid jargon, but these three levels of thinking about a pair of dice (36 equal microstates/permutations-with-replacement, 21 unequal macrostates/combinations-with-replacement, and 11 unequal macrostates/sums) can be confusing to think about. It really helps to have some words. You can pick whichever you like.
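To check those counts yourself, a few lines are enough - this just enumerates the 36 microstates and groups them into the 11 sums:

```python
from collections import Counter
from itertools import product

# Group the 36 equally likely microstates (ordered pairs) into 11 macrostates (sums).
sums = Counter(a + b for a, b in product(range(1, 7), repeat=2))
for total in range(2, 13):
    print(total, sums[total], f"{sums[total]}/36")
# 7 sits on top with 6 microstates; 2 and 12 have only 1 each, so 7 is six times likelier.
```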

(Xactly.com)

Here is what the outcomes would look like over many rolls. Above is an experiment and below is the idealized, long-term version. Macrostates line the horizontal axis. Each black-and-white pair below is a microstate. (Also, notice the similarity between this way of showing a pair of dice and a domino. Normal dominoes are macrostates of dice, but these ones are microstates, because all 36 permutations are included, and no symmetry is removed.)


(Unknown Source)

Long story short, the picture is no longer flat. It is not a uniform distribution anymore. Taking two die rolls and adding them does something a bit different from just rolling a die with 11 sides, or 21 sides. In fact, we've done something interesting that could look scary, something that involves calculus, but we don't need that right now. By rolling dice and plotting, we've done it. This is called the convolution of two random variables.
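In code, the step we just did really is a convolution - NumPy's convolve applied to the flat probability vector of one die gives the triangle for the sum of two:

```python
import numpy as np

die = np.full(6, 1 / 6)              # the flat distribution of one fair die (faces 1 through 6)
two_dice = np.convolve(die, die)     # convolution of two random variables: distribution of the sum
print(len(two_dice))                 # 11 possible sums, from 2 up to 12
print(two_dice.max())                # 0.1666... = 6/36, the probability of rolling a 7
```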

Let's convolve again. With one more die, we are convolving three dice together. What does that look like?

(American Scientist)

Note that the sums of 3 to 18 correspond to sums of 2 to 12 with two dice. (Meanwhile, and not shown, there are 56 clearly different combinations for three dice, corresponding to the 21 combinations for two dice discussed earlier; relatedly, there are 56 suit cards in a tarot deck, and 21 trump cards, not including the 0 card.) The chart above uses three colors, one for each die, to show how the triples of dice are ordered into all 6*6*6 = 216 permutations and sorted into sums—3 through 18.

Though the above (that code is Lisp by the way, but we're ignoring it) is not exactly a normal distribution as it's discrete and jagged, it already approximates one. The more dice you throw, the more the histogram will approximate a perfectly smooth bell curve.

(Wolfram MathWorld)

A bell curve is the result of convolving ("adding up") random influences.

The especially interesting thing about bell curves is that even if your dice were weighted for cheating, you'd still get a bell curve. Maybe it makes sense that many fair dice would produce that nice smooth shape, given enough dice. But unfair dice would, also. We'll say more about those dice in a second, but let's divert to the real world first.

When you're adding up the influence of genes and nutrients and so on to get a person's final height, it doesn't really matter how common the genes are compared to each other, for example. Some variants could be rare, others common. When you add up their influences, you'll still get a bell curve.

And how often do random factors add up to something? Very! That's why the shape is so common!

(Xactly.com)

It turns out that as long as the dice (random variables/factors) behave something like physical dice, then convolving enough of them will produce a bell curve. What the dice are not allowed to do is lack a well-defined average value and standard deviation. If a die were weighted so strangely that its average or spread never settled down - certain heavy-tailed, infinitely-many-sided dice are like this - this wouldn't work.

As a consequence, when you see a bell curve, you can conclude that the main contributing factors, even if they're really quite random, all have well-defined averages and standard deviations. The random variables may or may not themselves look like bell curves when analyzed individually. They could be uniformly distributed (flat) like a single die. They could be noisily scattered within a band of possible values. Or the weighting could be anything else that has a finite, consistent average and standard deviation. Bell curves show up everywhere because when you add up randomness, it's very difficult to avoid them.
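Here is that robustness as a quick sketch: a badly loaded die (the weights are made up) whose single-roll distribution looks nothing like a bell, yet whose summed rolls pile up into one anyway:

```python
import numpy as np

rng = np.random.default_rng(3)

faces = np.arange(1, 7)
weights = np.array([0.4, 0.05, 0.05, 0.05, 0.05, 0.4])   # a lopsided, "cheating" die

# Add up 30 rolls of the loaded die, repeat that sum 100,000 times, and bin the totals.
totals = rng.choice(faces, size=(100_000, 30), p=weights).sum(axis=1)
counts, _ = np.histogram(totals, bins=20)
print(counts)   # the counts rise to a single rounded hump: a bell curve out of unfair dice
```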

A More Normal Formula

You may not have realized quite how simple the normal curve is. The formula usually shown obscures it almost irresponsibly.

Take \(e^x\)...



Square the x, giving \(e^{x^2}\), and the exponent is now positive on the left side too, so the curve climbs in both directions.



You get an extremely narrow parabola variant. Here's a parabola in green for comparison. (The next two images are just for illustration, not part of the process.)



(Technically, it's the exponential of a parabola. It's \(e^{x^2}\) instead of \(x^2\). If you ask me, that counts as a parabola variant. But it grows much faster.)



Now negate the exponent.





Voila.



That's the rule. It's an upside-down parabola for an exponent.

The rest of the famed formula is tweaking: specifying the center via the mean (\(\mu\)), the width via the standard deviation (\(\sigma\)), and scaling so the total area under the curve is 1. It looks complicated, but this is a lot like describing a parabola with \( (x-h)^2 = 4p(y-k) \) instead of just \(y = x^2\). The second gives you the foundational idea, while the first incorporates adjustments.

\[\frac{1}{\sigma \sqrt{2\pi}} e^{-\frac{1}{2}(\frac{x-\mu}{\sigma})^2}\]

Great to have on hand as a reference, but we already have the essential bell curve just from two modifications to basic exponential growth. We square the exponent and then negate the exponent.
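If you'd rather see numbers than graphs, here is a tiny table of those two modifications in sequence (just an illustration):

import math

# From exponential growth to a bell: exponentiate, square the exponent, negate it.
print(f"{'x':>5} {'e^x':>10} {'e^(x^2)':>12} {'e^(-x^2)':>10}")
for x in (-3, -2, -1, -0.5, 0, 0.5, 1, 2, 3):
    print(f"{x:5.1f} {math.exp(x):10.3f} {math.exp(x*x):12.1f} {math.exp(-x*x):10.4f}")

The last column is the bell: highest at 0, falling off symmetrically, never negative.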

\[e^x \rightarrow e^{-x^2} \]

Oh, and if the e is confusing, we could have started with any example of exponential growth. For example, we could use a base of 2. (A base of 2 gives slightly slower growth than e.) The picture would look much the same.

\[2^x \rightarrow 2^{-x^2} \]

This time I'll leave out the two graphs comparing with a basic parabola, because they weren't really part of the process anyway, and they look the same. And remember, there's nothing very special about 2 or e here. Any base greater than 1 can be turned into a bell curve by exponentiating, squaring the exponent, and negating the exponent.

\[2 \rightarrow 2^x \rightarrow 2^{x^2} \rightarrow 2^{-x^2} \]

\[3 \rightarrow 3^x \rightarrow 3^{x^2} \rightarrow 3^{-x^2} \]

\[5.8316 \rightarrow 5.8316^x \rightarrow 5.8316^{x^2} \rightarrow 5.8316^{-x^2} \]

\[\pi \rightarrow \pi^x \rightarrow \pi^{x^2} \rightarrow \pi^{-x^2} \]

Below is with a base of 2.

mardi 15 septembre 2020

The First Rule of Presumptuousness

I've always felt that "know thy audience" becomes a form of stereotyping. Certainly, in this or that design you'll get to know, and learn to incorporate, facts and measurements about a specific group of people. An ATM should really work for everyone who might have a bank account. But there's a big difference between "usability" and "targeting an audience." Usability doesn't pander, it just makes things accessible and comfortable. Audiences, though, are too often pandered to.

samedi 15 août 2020

How to Step Back to Go Forward

Debate is a game. It's something most people don't quite understand, or don't apply.

If you're in a disagreement and you don't want it to turn into a fight, you both need to move towards treating the discussion as a debate, which is a game. That can be difficult, especially for a big issue, but if you can't, it's best to put the discussion aside for now.

Let me bring in a pretty typical definition of game. A game is a finite contest with rules, an experience you can win or lose, but the consequences are negotiable.

Russian Roulette has rules: put one bullet in a revolver, spin the thing, point the gun at your head, and pull the trigger. If it was a live chamber, you're probably instantly dead, and you've lost. If it was a blissfully empty chamber, you've won. But see, the consequences are not negotiable: either you're dead or you're alive at the end. So while Russian Roulette resembles a game, it is not.

Debate, on the other hand, is a game.

Many people very understandably don't want a discussion to be just hot air—they want to get something done with it—and by driving at that purpose too hard, they lose the value of discussion.

The value of discussion is that it's virtual. The words are only words. Your debate is a little virtual world of words, where you're trying to follow the rules of logic and evidence—you want the debate to have bearing on actual life. Yet the consequences of the debate are very much negotiable. What does it mean to win or lose a debate? Maybe nothing. Maybe something. That's all TBD by the participants afterwards. Nothing is set in stone. And, as I said, that is the actual value of debate. If you had to move all the bricks physically into different arrangements while debating architectural choices, civilization would still be in the stone age today.

The virtuality of conversation, and debate, is its greatest strength. And then people go and forget that, or never quite understand it and its implications.

Some people understand debate is a game, whether they would use that word or not. And they are usually much more pleasant and engaging to discuss controversial topics with. This doesn't mean they have no beliefs or don't consider the topics important, or even critical. It's just that debate is a game. You don't solve global poverty in your chit-chat with your housemate over dinner. So stop acting like you do, and if you have a particularly good round of debate, you might actually be somewhere at the end that you weren't at the beginning.

That's how it works, and it really truly does work.



* Note: just because it's a game doesn't mean you have to be goofy, though that's often a very useful approach. Taking the pressure off allows people to speak more freely and think more creatively together. But you can also be very serious. Debate that's a game can still get heated, but it never gets personal. The feminist "everyone's perspective is valid" approach is also a great way to make a debate a game. Goofy, spirited but not personal, listening while perspective-taking—all of these are ways to make debate do its job, and they share the same principle. To the extent they would ever get a little intense or combative, everyone understands that this is sparring, and nothing to do with liking or disliking each other. May the best idea win.

** The definition of game and the Russian Roulette example I'm pretty sure both come from the book Half-Real. My copy went to a good friend 6 years ago before I went into a big surgery I thought I might not survive (statistically, there was a 1% chance I wouldn't, which makes a very big and relatively safe revolver for Russian Roulette, but that's a smaller revolver than for most surgeries, and it was a 5 hour procedure that left me with 55 staples and now a 13-inch scar, so maybe I wasn't being that dramatic). I hadn't finished reading it, but it has the best definition of game I've ever seen anywhere, by a long shot. Most books on game studies start out defining game (yawn). This one, though, makes that triply worth your while to read.

jeudi 18 juin 2020

We Should VOB (Vote Our Best)

For better democracy, we shouldn't be voting based on whether or what we think can win today. That pressure distorts public deliberation and saddens outcomes. We should always feel emancipated to vote with our best sense. That way, the votes in total contain the most true information. That informational advantage, which comes along in the same boat, is exactly how and why democracy succeeds. Wherever people aren't empowered to vote strictly on their best information, we have our work cut out for us.

dimanche 31 mai 2020

Leapfrog Photos

Someone who is 100 today... imagine when they were born. The roaring 20s are just starting. A newborn baby. Can you imagine it, at least a little? Take your time. Good. Find a seat there in a black and white photo. While we're at it, make it a color photo. Nah, make it real. You're there. Here. Ok. Now that we're situated, imagine someone who is 100. Maybe this person is related to the baby, maybe not. Whatever you want. Just imagine. From this birth in 1920, we only need 3 more life jumps, and we're in Shakespeare's time.

100 years ago, it's 1920.

Jump One: 100 years before that, it's 1820.

Jump Two: 100 years again, and it's 1720.

Jump Three: 100 years more will take us to 1620.

Ok, well darn, I lied slightly. Shakespeare is already dead. He died 4 years ago. But many people he knew are alive, and we've landed about when the first complete edition of his plays is published. We'll have to wait 3 years. But we'll wager it's on more than one person's mind already to put the works together for publication. Dozens of original first editions are still around in 2020, studied microscopically.

Within living memory, we are only 3 similar jumps away from living memory of Shakespeare's own acting.

Within one longer lifespan, history and even language change considerably.

jeudi 21 mai 2020

Breadcrumbs

The real economy is energy. Money is a marker. Energy is what's actually going on.

If the two disagree, the answer is ready. It's energy. Follow the money? Well, if you want. Ok. But really, you'll follow the energy.

Wild ecosystems do not engage in any monetary trade, yet they are massively active and complex economies.

Capitalism is not the only economy that works. Wilderness is an economy that works. It's more brutal than capitalism. And capitalism is more brutal than whatever will replace it or augment it.

What we must learn from wild ecosystems is that energy is the foundation, not human law, not philosophy, not divine commandments, not physical strength, not market valuations.

It's energy arranged as information.

That cannot change. Everything else can.

lundi 18 mai 2020

Representative Art

I'd like to nudge representative democracy away from the popularity contest without removing contests or popularity. Does that make sense?

It may sound contradictory, but I believe it isn't.

How many introverts are going to run for public office? Does that mean we should have much less political say? Does it mean we have much less to say? Does it mean the world can do without all of our insights? Or these must always be whispered in the ear of a particularly generous extravert?

Reflect on the assumptions in our system for a while.

Many experts—not all, not necessarily even a majority, but certainly a strong contingent—are introverts. (I haven't verified the claim it's a majority. Either way, there are many introverted and extraverted experts.)

Expertise is very obviously and painfully underrepresented in republics.

Now you have an idea why: experts should not be expected to be the same people winning popularity contests. Some experts are popular, some not, and it has little or nothing to do with their level of expertise. If your angle is "Well if it's all the same, then why can't we just make everyone happy and get photogenic, popular experts in here?" then I hope you understand that's a bias.

Decisions by and/or for the group should be made based on knowledge, skill, ability, results, and sharing. The best answer should rise to the top every time, not by suppressing other answers or flattering a crowd but by succeeding on its merits.

We are so far from this vision that it hurts.

If you've been blotting out the realization that representative democracy needs a big update, please stop.

Economic and social and environmental problems, issues with government corruption and inefficiency and overreach—all of these can be addressed by an improved process. Without improving the process, where is the hope?

You can't keep fixing a TV by hitting it. At some point, you need an upgrade. You need to go out and get a rethought version of the same implement.

If one doesn't exist yet, guess what?

Nobody is stopping you from coming up with a better way.

How would you know? How would you test it?

You can't know, and you can't test it, if you don't even try.

As I once heard a professor of philosophy say to a student: "Confusion!? Wonderful! Confusion—is the first step on the path to understanding."

You have to be willing not to know, willing to feel stupid, willing to get confused. Then you start your search. You'll find something new and useful if you stick with it.

I promise.

lundi 11 mai 2020

How

If democracy means "Give everyone an equal vote and that's how you decide absolutely everything," then I don't think I'm for it. Life would be much easier and probably better if that worked well; I'm still waiting for evidence that it does.

Besides, complete equality in all decision-making is anti-meritocratic. You don't earn being right on a topic just by stepping into the room. Conversely, though, anyone could be right—credentials don't make you right, either. Arriving at a good answer by a sound process: whoever you are, that makes you right this time.

If democracy means "The larger population has the best questions and the best answers, because it has all the available information," then I'm for it, because that's a true statement. No one's got even a hundredth of everything known. But take the entire population, and it knows everything known. The total crowd has all the brains. And it has all the heart.

The tricky bit is how to extract the best answers from all the answers.

Good democracy is less about "one man, one vote" (though universal suffrage would be a good thing) than about needles and haystacks. The needle's in there. How do you find it and get it out?

You just might have picked up on my belief that how we've been doing this is suboptimal in a big way.

I know that rubs people's patriotism the wrong way, but it has to be said.

I'm far less interested in convincing you of a particular answer than in getting you to ask the question, and often: How can democracy work better?

vendredi 1 mai 2020

Bowtie Pasta

Capitalism doesn't just work out of the box. It isn't an Apple product! Whether it should be is a separate question.

Complicated but dependable systems need very careful design and constant testing. They won't keep working out of the box if the box holds the original prototype.

Ownership, exchange, currency, and freedom: all are critical in a healthy society. But it's also critical that we persist in crossing out "might makes right." Keep crossing it out as it comes up. Cross it out, cross it out, cross it out. Meritocracy equates to neither might nor financial demand. What it equates to is skill and wisdom in the right place: people doing what they're good at, getting better at what they can get better at, saving and advancing and beautifying lives and society. Demand backed by wallets is a splendid mechanism insofar as it brings this about. But it doesn't always, and it is sometimes profoundly undermining or damaging.

"What people will pay for" is important in a business model, but it is not truth from on high. As powerful as it is, it's still only temporary desire. It's a set of evolved signals responding to beliefs about a person's (and a group's) surroundings. We know that the most popular thing is by no means always the best, but we go on believing that capitalism just works out of the box.

And no, I don't personally believe that having money proves that you know better, and therefore your greater clout (in the tally of demand) indicates proportionally more wisdom. That's another partial fallacy that's only one step behind the more glaring one.

"Money knows what to do with money" is a piece of an answer, nonetheless. It makes decent sense. The founder of Amazon is probably not such a bad person to lend money! What I'm calling a partial answer is actually a principle very closely related to why Google searches are so effective. The PageRank algorithm gives the links from one site, say the Mercedes homepage, an importance that depends on how many pages link to the Mercedes site, and how influential they are in turn. This is quite similar to the way the transactions of a rich person have more influence on society because more dollars are sent to the rich person. Still, we do not make arguments like: "This hit came up higher than the other one in the search results, so I will cite the one that is higher up, because it must be better." We should not be so rote about matters of economy either.

In both cases it would degrade the process. If people start going to the Mercedes page and linking to it only because it's higher up, then it will climb further in the search results for no good reason. And the more this happens, the more overrated the site will get in the rankings, and the less sense those will make. Likewise, should we really give rich people and rich corporations our money, preferentially, because they are already rich? If the reason is only that they are already rich, then to do so will actually degrade the economy.
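For the curious, here is a toy sketch of the PageRank idea on a made-up four-page web (the page names and link structure are purely hypothetical):

# Each page's importance is divided among the pages it links to, plus a small
# "damping" chance of jumping to any page at random. Repeat until it settles.
links = {
    "mercedes": ["news"],
    "news":     ["mercedes", "blog"],
    "blog":     ["mercedes", "news"],
    "fanpage":  ["mercedes"],
}

damping = 0.85
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):  # power iteration
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for p, outgoing in links.items():
        share = damping * rank[p] / len(outgoing)
        for q in outgoing:
            new_rank[q] += share
    rank = new_rank

for p, r in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{p:10s} {r:.3f}")

The pages that many other important pages point to end up on top; who does the pointing matters, not just how many point.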

What I worry has been happening for decades is a series of false dilemmas. Either you are for precisely how we do things, or you are against freedom and against markets and against success and against democracy.

Not really.

Actually, not even slightly.

And that goes on and on in many forms.

While it's reasonable to suppose that people who make it their career to understand, respond to, and perhaps even alter markets know what's going on and how to fix problems, it's also reasonable to suppose that experts have blind spots, just like everyone else. It isn't just reasonable, it's well established that experts tend to have biases that come along with being experts.

Experts will discard some ideas out of hand pretty much automatically. It's part of what makes them so skillful and efficient. But some of what they discard out of hand would actually work, or else with a little tweaking and development it would—and could even work better.

The tendency to get what we might call "too efficient" as you gain skill in an area is called "automaticity." It's a double-edged sword. We need one of those edges. The other... we just need to be aware it's there.

I'm not sure what counteracts automaticity best... or its close relative "functional fixedness," which means making too many snap assumptions about how tools work or could work. I've never been entirely sure there's a difference, ever since I learned about these in some detail in a cognitive psychology class. It's probably fairest to say that functional fixedness is one kind of automaticity. Another closely related term is the "expert blind spot," which appears in the context of teaching. Often a teacher can't see what a student wouldn't know yet, but has to know in order to understand. Not everything we know was ever made explicit, and even if it were, we forget how much we've learned.

A good amount of understanding is intuitive filtering, which can be difficult or impossible to put into words, at least until you've done some deep diving and practiced expressing it.

For example, after studying geometry, you know that when you see two lines crossing in a diagram, you can assume that they intersect at precisely one point and the lines are perfectly straight and extend infinitely. All of those are completely non-obvious assumptions you have to learn to make. They are conventions about how the diagrams are drawn and interpreted. You had to get used to them. And eventually you'll forget that you learned the assumptions. Similarly, if you read a problem about someone driving 62 miles per hour for 2 hours, you are trained to assume it's exactly 62 miles per hour (not 62.00000000000003, 62.000959, or any of an infinite number of similar values within the margin of error) with no acceleration or deceleration, for exactly 2 hours, in a perfectly straight line. Without the training, none of those is at all obvious, and in fact, all of those assumptions are going to be false. We learn particular ways it's helpful to be wrong. If we're skillful enough at that, we can make excellent predictions. Obvious?

So how do we get past these blind spots as to how things work, or could work? One thought that would look random anywhere but here is that adventure games (ie, interactive stories that unfold through realistic-ish puzzles involving objects and conversations) have always seemed to be a nice exercise. You end up really wracking your brains to see how the few items available to you could be used in ways you hadn't considered yet, and normally never would consider. You basically make believe that you're MacGyver, only it's usually not quite that intense. Nobody lives like MacGyver.

Encouraging newbies (and everyone else) to speak up brutally honestly in safe "Braintrust" meetings works for Pixar and other companies. Then experts are primed both to think out of the box and to listen to feedback from people who, yes, might not know what they're talking about, but then again might have an excellent angle. If you suspect the Braintrust approach only applies where stuff doesn't have to stand up to harsh reality, it also works at Frank Gehry's company—an architecture team famous for bizarre and wonderful buildings that look like they should fall down, but don't. Material suppliers often question them or say it can't be done, but the team are no strangers to being more thorough than the experts in the materials, although they will of course listen. Useful information goes both ways. Take a look at the Louis Vuitton Foundation building in Paris for a typical example. I like to imagine that's standing because of radical openness to feedback.

The public doesn't trust experts and experts don't trust the public, but we must work together well for democracy to thrive. The "how" seems to be the core question that republics try to answer. How do you get people with the whole range of experiences and skills deciding together wisely?

So I'd like you to think about the question as you go about your daily life. What else can or might help with this? How do we make getting past blind spots and hearing and engaging with new ideas more the routine and less the exception in our democratic institutions?

Polyvalence

Sometimes lack of validation is validation. You know when you're playing Clue and you toss a hypothesis out there? Maybe you've got the Lead Pipe in your own cards, and you say, "I think it was Colonel Mustard in the Conservatory with the Lead Pipe." Maybe you even have the Conservatory. Heh heh. But the funny thing is, when you're done with your words, nothing happens.

You look up. People are just watching you, or they're idly adjusting their cards or notepads and pencils. "Anyone?" One or two shake their heads. So you repeat it: Mustard, Pipe, Conservatory. "Nope." You ask the last person. "Jason?" He shakes his head. "Got nothin." There's a vaguely concerned air.*

Nobody ever, ever, ever comes right out and says, "I don't have any of those," glances around, surmises that you must be on to something, and congratulates you. "Nice work!" Absolutely never. You have to prod and repeat yourself. At best, someone will joke that you just won. Silence is the norm here.

I call it the Clue effect.

Sometimes in life when people can't address what you say, they push back with intensity. But that's easy to spot. Their logic doesn't make sense. They just think it does in the moment. On cursory analysis, it doesn't. On thorough analysis, it also doesn't. This is just how they're reacting. Rather than admitting the value of what you've said, or its possible value, or their lack of a good reply just this second, they basically vent.

But other times it isn't like that. Other times you just never get traction with a thought. It rolls right off, repeatedly. Like rain off a nice new raincoat.

You look up. No one's got a response.

It's like Clue.

Usually this slight impasse will come up in conversation as "shutting down" someone whose view we don't like, forcing them to splutter and go silent. But that's an overly simple reading of the meaning of no reply. The Clue effect, as I call it, is the situation where you feel as if silence means you might be on to something. And it's uncomfortable for you, the person who might have "shut someone down," partly because that could be entirely misleading. In Clue, someone could be cheating or not listening or forgetting they actually do have Colonel Mustard in their hand. Whoops! And in real life, there are a million reasons for no response.

Leaping to the conclusion that no response means we're right is a quick route to delusions. At the same time, if we are repeatedly ignored when we mention something, that can be extremely indicative, perhaps of a cultural or personal blind spot, or simply an unwillingness to confront an issue honestly. Often it's about that moment: "Now's not the time."

In our minds we often think someone's opinion ain't right, and we believe we could prove it in open discussion. But if we don't have that discussion, how do we know? It's so easy to look down on someone's foolishness, brush right by; meanwhile you're the one with the greater, more troubling misconception. A classic way to do this is to point to a flaw mentally without spending too long considering whether the flaw is superficial or deep.

If you think silence speaks volumes, I have a lot to say about that:

(A little joke...) No, see, silence emphasizes what's around it, but fails to carry its own message. Paradoxically, it does still give information. How can you read a communication without a message, you might ask? Ok! Excellent question and not asked enough! When you hear the wind in the leaves, is that a message? No... unless you're schizophrenic or having a religious experience, I suppose. It's information, though. What does it tell you? Not much, but also not zero. The air outside is not still, for example. Perhaps you don't want to wear a hat.

The way this empty string of no reply (which a mathematician would write as ε, the empty string, rather than ∅, the empty set) gives definite information, but very different amounts to different observers in Clue, is reminiscent of the famous Monty Hall puzzle, named after the host of the game show Let's Make a Deal, whose setup inspired it. The situation on that broadcast stage has mystified and fascinated viewers and even students of math ever since. I won't go into any more detail today, but a friend pointed out the connection after reading the above, and it's well worth noticing.
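If you've never worked through Monty Hall, here is a quick simulation sketch (mine, for illustration) of the standard three-door version: you pick a door, the host opens a different door that hides no prize, and you decide whether to switch.

import random

def monty_hall_trial(switch):
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick  = random.choice(doors)
    # The host opens a door that is neither your pick nor the prize.
    opened = random.choice([d for d in doors if d != pick and d != prize])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

trials = 100_000
for switch in (False, True):
    wins = sum(monty_hall_trial(switch) for _ in range(trials))
    label = "switch" if switch else "stay"
    print(f"{label:>6}: win rate {wins / trials:.3f}")

Staying wins about a third of the time, switching about two thirds, because the host's seemingly silent choice of door carries information.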

-

* For those who've never played Clue, this means that as long as no one is cheating, Colonel Mustard is definitely the culprit... figuring out whodunit is 1/3 of winning. Interestingly, after the silence, everyone has info about the murder that they didn't have before. But it'll take the others some extra detective work to reach the same conclusion: the killer was definitely Colonel Mustard. So your move tells everyone something but tells you more: exactly one fact without ambiguity or wild geese. Each of your friends now has to chase three geese, metaphorically, to figure out which two are wild. Was the murder weapon the Lead Pipe that they don't know that you have in your pocket? Was the scene of the crime the Conservatory which they also don't know you have in your pocket? Nice bluff! Meanwhile you can focus on other things. Obviously no one is exactly thrilled, because they want to win themselves, and you just pulled ahead!

jeudi 30 avril 2020

It Isn't Drinking

We too often pit free-wheeling markets against big-government communism. It isn't one or the other. We don't even live in a cocktail, eg, 60% free-wheeling markets + 40% big-government communism.

It's all about particulars. Crucially, it's about why we choose these particulars rather than those particulars. The world is not a one-line drawing. Free-Wheeling Markets vs. Big-Government Communism is not what's playing out before us.

For example, a strong social safety net makes purer capitalism more achievable. It isn't one or the other, or one obstructing the other, or some ratio mixed; here we observe a counterintuitive interaction. Counterintuitive yes, but it makes perfect sense to anyone who studies games and understands the metaphor of the "magic circle."

There must be some kind of method to this madness. We cannot just rely on God to sort the globe out for us. The famed invisible hand of trade patterns is not magic. It is actually complex cause and effect, which we can trace and harness and influence.

Can? Must.

samedi 28 mars 2020

Context

There isn't much of a market for preparing for a virus that doesn't exist... until it's tearing the market apart.

Confusing a current price tag with value is myopic.

Confusing business viability with purpose is dangerous.

Some people want government to do little more than serve business.

What's forgotten is that government outlines markets. Without rules of property and exchange and institutions for policing them, there would be no marketplace. There would be barter mixed with a brawl.