Rules systems for play and progress. When you choose, do you divide the infinite?
Sunday, December 27, 2020
I must say, ruefully or not, that I'm not very interested in toys at all. I am, however, interested in play, very much so, and in just having a sense of humor and imagination, and I've never quite lost my interest in games. Even there, and this I find regrettable, I have lost most of my enthusiasm. Sometimes it's there - maybe in full, though evolved. If I hadn't been trying so hard not to lose the last vestiges, I probably would have. But I was trying so hard, at least some of the time, and so I have kept what I think is more than reasonable to keep.
It's less that I'm a Toys R Us kid than that I'm a Pinocchio kid. "If you wish upon a star..."
Tuesday, December 22, 2020
Immunity
There's a relationship between cancer and delusion. Cancer is basically never one mutation. A cell typically has to be damaged in a bunch of ways. One of the most common involves contact inhibition - normally, cells will grow (keep dividing) until they bump into other cells. When they make contact, that inhibits their growth. This way functional tissues form. They don't turn into knots of tumors trying to outcompete each other for blood and other supplies. Knocking out the genetic machinery of contact inhibition is one step on the path to cancer.
Another common step is putting a protein on the cell's surface that is a special self-destruct signal for immune cells. Normally this protects against autoimmune disease: if immune cells learn to attack the self, then this signal on the surface of healthy cells gets the corrupted immune cells to self-destruct (and stop producing even more immune cells that act like racist law enforcement). It's a way to unlearn stray allergies that form against the body's own tissues. Meanwhile, the immune system also wipes out cancer cells. It has evolved to recognize cells that are breaking contact inhibition, and destroy them.
As you might imagine, if cancer cells have mutated to always express that self-destruct signal, now they are free to grow like crazy without much disruption from the immune system. Any immune cells that cotton on get triggered to self-destruct.
Those are just two genetic "fractures" if you will. The first is to my knowledge present in all cancer, while the second is very common. A typical malignant (cancer) cell will have a dozen broken gaskets like this. It's eerie, but cancer actually evolves in place from a healthy cell to a lethally rogue cell, step by step.
The same has been found for people who commit suicide: where evidence can be gathered, the tragedy developed step by step through little changes in what would be normal. Many little things are usually there - internal and external - to support self-preservation. So it's interesting that science now has knowledge of what those damage steps are, or can be. Four examples are being a victim of abuse, chronic pain, cutting, and failed suicide attempts, which act a bit like training wheels. Self-harm gradually gets normalized in the person's mind, or even gets associated with "solving" problems.
These are examples of evolution - not the kind we like, but the kind we don't.
While a person is in chemotherapy or radiation therapy for cancer, the mutation steps, and therefore the evolution, come even faster, which is why doctors consider it critical to use fast-acting and potent methods. If they don't, the cancer develops resistance to the drugs in much the same way bacteria do, and in a matter of weeks. Most cancer treatment actually breeds more lethal cancer cells, so it has to act fast.
However, it would be fatal to believe the stories that you can drink some kind of tea or eat brown rice or hold crystals and that'll work. People have been known to spontaneously get better, but this has been researched now, and it's around 1 in 10,000 cases. You don't like those odds, or shouldn't, however mystical you're feeling. But if you come down with cancer, everyone will tell you about this stuff. Everyone wants you to cheer up, and it's great to have them rooting for you. Everyone has heard of somebody who's heard of somebody who has the cure, or just got better through a good attitude. If I hadn't gone to the doctor and done what the pros said, I would definitely not be here. Survival rate for that kind of cancer without treatment is 0%. With treatment, I was fortunately one of the 70% who make it.
Which reminds me of conspiracy theories and just general delusionality. Delusions evolve into place. It's remarkably similar to what I've described above (which is the point of this post and the setup so far). There are many ways mentally healthy people are protected against inaccurate ideas:
- Other people will tell you if you're sounding too crazy
- The news is mostly factual
- There are many books
- If you know how to use the internet well, you can fact-check almost anything
- Crazy beliefs lead to crazy expectations, and when those don't come to pass, that gives correction
And so on and so forth. The "cancerous" mutations in these checks and balances can include things like these:
- Calling news fake in general
- Asserting without proof that scientists have a hidden agenda
- Saying all sources are biased and implying this means equally biased
- Claiming that anyone who is paid for their work is an unreliable source of information
- Claiming that any money link however lengthy to a disreputable group proves collusion
- Attacking the character of anyone who disagrees
- Using ridicule in place of an argument
- Stirring up guilt, fear, pride, or anger as if merely feeling any of these establishes fact or responsibility
- Casting blame elsewhere for what goes wrong and could have been predicted
- Altering data to fit a narrative
- Cherry-picking data to paint a narrative
- Ignoring dissenting arguments and making no effort to uncover and examine more of them
- Leaning on the primally persuasive quality of self-confidence or unswayable belief
- Portraying belief itself as a fundamental good and doubt itself as a fundamental evil
- Threats of physical harm
- Disregarding what a person says in anger as clearly not factual or important, when actually angry people usually tell you over and over and over and over why they're angry
- Failing to recognize that anger makes you one-sided by its nature, and everyone who feels angry feels justified
- Accusing the person who says something that ends up making you feel uncomfortable or bad of being a jerk who is merely trying to insult you
- Focusing on whether someone sounds condescending rather than on what they're saying
- Confusing reading a book with being right
- Thinking that because you once held view A and now hold view B, you must have changed from an incorrect view to a correct one
- Thinking that experts are untrustworthy because you aren't smart, trained, or informed enough to follow their professional data and reasoning
- Accepting popularity as strong evidence
- Assuming that when you can poke holes in an argument, the person must not know what they're talking about, the holes are automatically major, the argument is invalid, the conclusion is wrong, and you're so very clever, whereas in truth it's difficult to present a complete argument without making everyone impatient, few people are trained for that, and many flaws are superficial or easily filled in on reflection
And so on. The more of these kinds of tendencies a person has, the more they will tend to suffer from delusions... because these bypass the reality checks that, like the immune system with cancer cells, should be finding and knocking out false beliefs.
Sunday, December 20, 2020
Life Skill
Saturday, December 19, 2020
NIIAI: No Island Is An Island
Friday, November 20, 2020
Shoegazing with Spectacles
For all that I make a big effort to expand and refine my perspective, and try to keep ego at bay as a conflict of interest, in the end I don't know whether I overvalue or undervalue my own observations. It feels like both. And ultimately, value tends to be subjective, so both answers could be valid.
Accuracy isn't everything: I could spend my whole life accurately copying out the 1s and 0s of a reality TV episode by hand. Even with perfect accuracy, that would be a waste of a life (in my opinion, but come on). Accuracy isn't everything, and a big chunk of value is subjective. But lots of value is transferable.
This is probably too much navel-gazing for most people. And it's exactly where I think, "Is there a point to the thought? Could it lead anywhere useful, actionable?" It feels like an unsolvable maze. A dead end.
You can ask other people, listen to feedback. You have to. But other people are in the same quandary. Accuracy can be very costly, difficult to attain and difficult to verify after that. Value is certainly subjective but there's enough overlap that we can start to believe some value is real, isn't just in our heads. So we make some effort at accuracy where we perceive it's valuable, through our own instincts and reasons and the feedback we hear and the money and other material rewards available. It's all a patchwork.
But because it's complicated, there are many places we can go wrong. We often won't even know it. When the problem is difficult enough, you don't even know whether you're solving it or not. You're in the dark and in the silence, not only for your shot, but also for most or all of the time after.
I told you this was getting too omphaloskeptic (a fun word for "navel-gazing") for most people. Sometimes the worst answer in the world is "It's complicated, it depends," even though that's the best answer we can give.
I'm talking about what the Serenity Prayer talks about, "and the wisdom to know the difference." We do wisdom a disservice, and ourselves and our fellows, when we portray wisdom as easier and simpler than it really is. Wisdom is not equally available to all at all times. False identicality is a misuse of equality. If we conclude that political equality implies we all have the same strengths and weaknesses, the same a priori capacities exactly, then we run logic backwards into that brick wall at high speed. We cannot ordain that every individual is in wisdom identical. We cannot treat everyone as if they begin at the same beginning and only deviate by their own fully informed choices, or else by the malicious and unfair incursions of others. That is inaccurate and its "value" is a slow-acting poison, a loose thread unravelling the fabric.
We do not begin the same, we are not the same in the middle, and we do not end the same. We are only the same, apparently, before conception and after death. Never while we experience are we the same as anyone else.
We have enormous amounts in common. And I don't mean to try to isolate anyone from our senses of unity and shared humanity. But I hope you already knew I wasn't dismissing any of that. I was simply pointing out the obvious, because it's relevant.
We are often similar but never the same. The pieces that are identical - individual highly conserved genes for example - diverge enormously, still, once in context. The same string of nucleotides means something else in a different cell, or in a cell that's in a different mood. Multiply that by tens of thousands, millions, billions of pieces, or trillions or more, depending on what building blocks you use and how close two need to be to be called "same."
The best we can do is apply all our senses and improve them as well as we can.
Friday, November 13, 2020
A Balloon of Gaskets
Wednesday, November 11, 2020
Sensation Firewall
Wednesday, October 28, 2020
A Recipe for Overtaking the Number Two
Saturday, October 24, 2020
Is Your Brain a Matrix?
If all that - a giant matrix as a brain - seems too simple to be possible, keep in mind that this kind of matrix represents an interacting network. There's a math proof that a matrix can approximate any process, meaning any natural or computational process as far as we understand the word "process," and it's very closely related to the way you can break any sound down into a spectrum of frequencies. The proof actually depends on the same idea.
The "deep" in "deep learning" just means using a bigger matrix. Often that means using fancier hardware to run the learning faster, but not necessarily. This is very similar to cameras and screens with higher and higher resolutions. A newer phone should have a faster chip to keep up with a higher pixel count in camera and screen. But it doesn't technically need a faster chip. It would just slow down otherwise. Images didn't get more complicated, only bigger.
But for that ability to sculpt a matrix into any process to really work, the matrix needs to be broken up into individual vectors, and those are run against the input - the vector representing senses - one at a time, with each result - a work-in-progress vector - put on a curve a bit like a grading curve. This curved result is then sent to interact with the next vector that was broken off the matrix. Rinse and repeat!
Eventually that work-in-progress vector is done, at which point it represents the thoughts/actions that are the output. Think of each number in the vector as the strength of each dimension of possible response, the probability of hitting each note on a piano, or how much to move each muscle, etc. So to put the last paragraph in different words, a "deep learning" matrix, aka neural network, is no more than a bunch of multiplications in the form of dot products between pairs of vectors, with a little filter/curve after each one.
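To make the rinse-and-repeat concrete, here's a minimal sketch in Python. All the numbers are invented for illustration: each row of a weight matrix plays one neuron, dotted against the incoming vector, with a tanh curve (the "grading curve") applied after each dot product.

```python
import math

def dot(u, v):
    # Dot product of two equal-length vectors.
    return sum(a * b for a, b in zip(u, v))

def layer(weights, vec):
    # Each row of the weight matrix is one neuron's vector; the curve
    # (here tanh, squashing each result into -1..1) is applied after
    # each dot product.
    return [math.tanh(dot(row, vec)) for row in weights]

senses = [0.5, -1.0, 2.0]        # the input vector, "the senses"
W1 = [[0.1, 0.4, -0.2],          # neuron 1
      [-0.3, 0.8, 0.5]]          # neuron 2
W2 = [[1.0, -1.0]]               # a second layer with one neuron

hidden = layer(W1, senses)       # work-in-progress vector
output = layer(W2, hidden)       # the "thoughts/actions" vector
```

Each number in `output` can be read as the strength of one dimension of possible response, as described above.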
Incidentally, each one of those vectors broken off the matrix can be visualized as a line or edge. You can imagine that you could draw any picture, even a 3D one, even a 5005D one, with enough lines or edges. You can make it as clean and accurate as you want by adding more lines. We know that intuitively, because that's how sketching works. Deep learning is not unlike sketching very fast. Similarly, you can draw a very smooth circle, as smooth as you want, with enough little square pixels. See it? Now we can do that with concepts.
But those are details. Students who think matrix math is boring will typically hear about AI from me, haha. And they do tend to find it interesting.
The curve, or conditioning, after each step is what makes this different from just multiplying a giant vector by a giant matrix to get another giant vector. That would be too simple, and it's kind of the lie I told at the start. Instead, information flows step by step through the layers of the matrix much like energy filtering up through the layers of an ecosystem, towards apex predators and decomposers. And there's that curve/filter between each level. I suppose it's a bit like a goat eating grass which is converted into goat; something changes in the middle. It isn't grass to grass, it's grass to goat, so there's a left turn in there somewhere.

That bend is critical but not complicated at all, though why it's critical is harder to pin down, and I don't fully understand it. The filter doesn't even have to be a curve; it can just mean putting a kink in each line - a bend in each vector, like a knee or elbow. It almost doesn't matter what the bend is, just that it's there. That's surprisingly essential to the universality of neural networks, so apparently it adds a lot for very little. I don't have a good analogy for why that's true, except that the world isn't actually made up of a bunch of straight lines. It's more like a bunch of curves and surfaces and volumes and energy and particles and static and other noise and signals between interconnected systems, and this step, putting kinks in the lines, allows the processing to break out into a much larger possibility space.
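A tiny illustration of why the kink matters, with invented weights: stacking straight-line maps with no bend just composes into another straight-line map, but with even the simplest bend (a ReLU, which clips negatives to zero) two "neurons" can compute the absolute-value function, a V shape no purely linear map of x can produce.

```python
def relu(z):
    # The simplest possible bend: a straight line with a kink at zero.
    return z if z > 0 else 0.0

def abs_via_relu(x):
    # Two "neurons" with weights +1 and -1, summed. The kink at zero
    # is exactly what lets the output turn a corner.
    return relu(x) + relu(-x)
```

Without `relu` in there, `x + (-x)` would collapse to zero everywhere; the bend is doing all the work.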
Theoretically, the old possibility space (without bends) was the stuff that you could accomplish with the "transformations" you learned in geometry - stretches, rotations, reflections, glides. The new space is all possibility space - or any "before/after" that can be measured and processed as a measurement. Artificially aging your neighbor's cat, painting today's sunset from weather data... If there's any logical connection between input and output, between before and after if there's time involved - even if that connection is just the laws of physics - or even if it's just a random association to memorize, like didn't you know volcanoes and lemons are connected because I said so - that connection can be represented by a big enough matrix.
So instead of pixels, it's lines, and instead of lines, it's bends. Think of bends as moments of change. Maybe this is a little like adding 3D glasses and color to a greyscale picture without altering the resolution. But... the effect of the curving/filtering/bending I've been talking about would be far more shocking than the image upgrade if you could directly experience the difference, given that we get the potential of learning and mimicking every known process. Maybe we do directly experience that difference as a key component of being alive. It's more like adding motion to that image, and an understanding of where the motion comes from and where it's going. Or to rephrase, the greyscale picture with our "kinks" update is now more like a mind than a photo - which, after all, is a simpler kind of matrix, one that is not a network.
The other simplification I made is that the big matrix is actually broken down into multiple matrices first, before those are broken down into individual vectors, each of which is roughly equivalent to a single neuron. What I described was a single-file chain of neurons, but there can be many neurons next to each other. Each layer of neurons in a neural network is its own matrix. Each neuron is its own vector. But I'd say that aspect of the layers is the least important detail here, other than realizing you can see each row of a matrix as a brain cell, which is neat. And you can very roughly imagine each brain cell as knowing how to draw one line-with-bend through concept space and give its vote on that basis.
We have 6 layers of neurons in the cerebral cortex, for reference, so at a gross simplification that would be 6 big matrices in a chain, with the rows of each matrix representing individual neurons.
Saturday, September 19, 2020
How to Unroll Convolutions
A More Normal Formula
Take \(e^x\)...
Square the x, and the exponent is now positive on the left side too, so the function grows there as well.
You get an extremely narrow parabola variant. Here's a parabola in green for comparison. (The next two images are just for illustration, not part of the process.)
(Technically, it's the exponential of a parabola. It's \(e^{x^2}\) instead of \(x^2\). If you ask me, that counts as a parabola variant. But it grows much faster.)
Now negate the exponent.
Voila.
That's the rule. It's an upside-down parabola for an exponent.
The rest of the famed formula is tweaking—specifying the unit size via the mean (\(\mu\)) and the standard deviation (\(\sigma\)). It looks complicated, but this is a lot like describing a parabola with \( (y-k) = 4p(x-h)^2 \) instead of just \(x^2\). The second gives you the foundational idea, while the first incorporates adjustments.
\[\frac{1}{\sigma \sqrt{2\pi}} e^{-\frac{1}{2}(\frac{x-\mu}{\sigma})^2}\]
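Transcribed directly into code, as a quick sketch with mu and sigma defaulting to 0 and 1:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    # Direct transcription of the formula: a scaling constant times
    # e raised to the upside-down parabola -(1/2)((x - mu)/sigma)^2.
    coeff = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return coeff * math.exp(-0.5 * ((x - mu) / sigma) ** 2)
```

Shifting mu slides the bell sideways; growing sigma widens it and lowers the peak so the area stays the same.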
Great to have on hand as a reference, but we already have the essential bell curve just from two modifications to basic exponential growth. We square the exponent and then negate the exponent.
\[e^x \rightarrow e^{-x^2} \]
Oh, and if the e is confusing, we could have started with any example of exponential growth. For example, we could use a base of 2. (A base multiplier of 2 sets a slightly slower growth rate than with e.) The picture would look much the same.
\[2^x \rightarrow 2^{-x^2} \]
This time I'll leave out the two graphs comparing with a basic parabola, because they weren't really part of the process anyway, and they look the same. And remember, there's nothing very special about 2 or e here. Any real number greater than 1 can be turned into a bell curve by exponentiating, squaring the exponent, and negating the exponent.
\[2 \rightarrow 2^x \rightarrow 2^{x^2} \rightarrow 2^{-x^2} \]
\[3 \rightarrow 3^x \rightarrow 3^{x^2} \rightarrow 3^{-x^2} \]
\[5.8316 \rightarrow 5.8316^x \rightarrow 5.8316^{x^2} \rightarrow 5.8316^{-x^2} \]
\[\pi \rightarrow \pi^x \rightarrow \pi^{x^2} \rightarrow \pi^{-x^2} \]
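The whole recipe fits in one line of code. A minimal sketch in Python, checking the bell-curve properties numerically: a peak of 1 at zero, symmetry, and a quick fall toward zero on both sides.

```python
def bell(base, x):
    # Exponentiate, square the exponent, negate the exponent:
    # base -> base**x -> base**(x**2) -> base**(-(x**2))
    return base ** (-(x ** 2))
```

The mean and standard deviation in the full formula only slide and rescale the x-axis; the shape is already here.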
Below is with a base of 2.
Tuesday, September 15, 2020
The First Rule of Presumptuousness
I've always felt that "know thy audience" becomes a form of stereotyping. Certainly, in this or that design you'll get to know, and meanwhile learn to incorporate, facts and measurements of a special group of people. An ATM should really work for everyone who might have a bank account. But there's a big difference between "usability" and "targeting an audience." Usability doesn't pander, it just makes accessible and comfortable. Audiences, though, are too often pandered to.
Saturday, August 15, 2020
How to Step Back to Go Forward
Debate is a game. It's something most people don't quite understand, or don't apply.
If you're in a disagreement and you don't want it to turn into a fight, you both need to move towards treating the discussion as a debate, which is a game. That can be difficult, especially for a big issue, but if you can't, it's best to put the discussion aside for now.
Let me bring in a pretty typical definition of game. A game is a finite contest with rules, an experience you can win or lose, but the consequences are negotiable.
Russian Roulette has rules: put one bullet in a revolver, spin the thing, point the gun at your head, and pull the trigger. If it was a live chamber, you're probably instantly dead, and you've lost. If it was a blissfully empty chamber, you've won. But see, the consequences are not negotiable: either you're dead or you're alive at the end. So while Russian Roulette resembles a game, it is not.
Debate, on the other hand, is a game.
Many people very understandably don't want a discussion to be just hot air—they want to get something done with it—and by driving at that purpose too hard, they lose the value of discussion.
The value of discussion is that it's virtual. The words are only words. Your debate is a little virtual world of words, where you're trying to follow the rules of logic and evidence—you want the debate to have bearing on actual life. Yet the consequences of the debate are very much negotiable. What does it mean to win or lose a debate? Maybe nothing. Maybe something. That's all TBD by the participants afterwards. Nothing is set in stone. And, as I said, that is the actual value of debate. If you had to move all the bricks physically into different arrangements while debating architectural choices, civilization would still be in the stone age today.
The virtuality of conversation, and debate, is its greatest strength. And then people go and forget that, or never quite understand it and its implications.
Some people understand debate is a game, whether they would use that word or not. And they are usually much more pleasant and engaging to discuss controversial topics with. This doesn't mean they have no beliefs or don't consider the topics important, or even critical. It's just that debate is a game. You don't solve global poverty in your chit-chat with your housemate over dinner. So stop acting like you do, and if you have a particularly good round of debate, you might actually be somewhere at the end that you weren't at the beginning.
That's how it works, and it really truly does work.
* Note: just because it's a game doesn't mean you have to be goofy, though that's often a very useful approach. Taking the pressure off allows people to speak more freely and think more creatively together. But you can also be very serious. Debate that's a game can still get heated, but it never gets personal. The feminist "everyone's perspective is valid" approach is also a great way to make a debate a game. Goofy, spirited but not personal, listening while perspective-taking—all of these are ways to make debate do its job, and they share the same principle. To the extent they would ever get a little intense or combative, everyone understands that this is sparring, and nothing to do with liking or disliking each other. May the best idea win.
** The definition of game and the Russian Roulette example I'm pretty sure both come from the book Half-Real. My copy went to a good friend 6 years ago before I went into a big surgery I thought I might not survive (statistically, there was a 1% chance I wouldn't, which makes a very big and relatively safe revolver for Russian Roulette, but that's a smaller revolver than for most surgeries, and it was a 5 hour procedure that left me with 55 staples and now a 13-inch scar, so maybe I wasn't being that dramatic). I hadn't finished reading it, but it has the best definition of game I've ever seen anywhere, by a long shot. Most books on game studies start out defining game (yawn). This one, though, makes that triply worth your while to read.
Thursday, June 18, 2020
We Should VOB (Vote Our Best)
Sunday, May 31, 2020
Leapfrog Photos
Thursday, May 21, 2020
Breadcrumbs
Monday, May 18, 2020
Representative Art
I promise.
Monday, May 11, 2020
How
Besides, complete equality in all decision-making is anti-meritocratic. You don't earn being right on a topic just by stepping into the room. Conversely, though, anyone could be right—credentials don't make you right, either. Arriving at a good answer by a sound process: whoever you are, that makes you right this time.
Friday, May 1, 2020
Bowtie Pasta
Complicated but dependable systems need very careful design and constant testing. They won't keep working out of the box if the box holds the original prototype.
Ownership, exchange, currency, and freedom: all are critical in a healthy society. But it's also critical that we persist in crossing out "might makes right." Keep crossing it out as it comes up. Cross it out, cross it out, cross it out. Meritocracy equates to neither might nor financial demand. What it equates to is skill and wisdom in the right place: people doing what they're good at, getting better at what they can get better at, saving and advancing and beautifying lives and society. Demand backed by wallets is a splendid mechanism insofar as it brings this about. But it doesn't always, and it is sometimes profoundly undermining or damaging.
"What people will pay for" is important in a business model, but it is not truth from on high. As powerful as it is, it's still only temporary desire. It's a set of evolved signals responding to beliefs about a person's (and a group's) surroundings. We know that the most popular thing is by no means always the best, but we go on believing that capitalism just works out of the box.
And no, I don't personally believe that having money proves that you know better, and therefore your greater clout (in the tally of demand) indicates proportionally more wisdom. That's another partial fallacy that's only one step behind the more glaring one.
"Money knows what to do with money" is a piece of an answer, nonetheless. It makes decent sense. The founder of Amazon is probably not such a bad person to lend money! What I'm calling a partial answer is actually a principle very closely related to why Google searches are so effective. The PageRank algorithm gives the links from one site, say the Mercedes homepage, an importance that depends on how many pages link to the Mercedes site, and how influential they are in turn. This is quite similar to the way the transactions of a rich person have more influence on society because more dollars are sent to the rich person. Still, we do not make arguments like: "This hit came up higher than the other one in the search results, so I will cite the one that is higher up, because it must be better." We should not be so rote about matters of economy either.
In both cases it would degrade the process. If people start going to the Mercedes page and linking to it only because it's higher up, then it will climb further in the search results for no good reason. And the more this happens, the more overrated the site will get in the rankings, and the less sense those will make. Likewise, should we really give rich people and rich corporations our money, preferentially, because they are already rich? If the reason is only that they are already rich, then to do so will actually degrade the economy.
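For the curious, the core of the PageRank idea mentioned above fits in a few lines. This is a toy sketch with made-up pages and the standard 0.85 damping factor, not Google's production algorithm: each page repeatedly splits its score among its outgoing links until the scores settle.

```python
def pagerank(links, damping=0.85, iterations=50):
    # links maps each page to the list of pages it links to.
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page keeps a small baseline score...
        new = {p: (1 - damping) / n for p in pages}
        for page, outs in links.items():
            for target in outs:
                # ...and passes the rest along its links, split evenly.
                new[target] += damping * rank[page] / len(outs)
        rank = new
    return rank

# Invented example: B and C both link to A, so A ends up ranked highest.
toy_web = {"A": ["B"], "B": ["A", "C"], "C": ["A"]}
ranks = pagerank(toy_web)
```

The analogy to money holds up in the code: scores flow along links the way dollars flow along transactions, and being linked to by already-important pages raises your own importance.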
What I worry has been happening for decades is a series of false dilemmas. Either you are for precisely how we do things, or you are against freedom and against markets and against success and against democracy.
Not really.
Actually, not even slightly.
And that goes on and on in many forms.
While it's reasonable to suppose that people who make it their career to understand, respond to, and perhaps even alter markets know what's going on and how to fix problems, it's also reasonable to suppose that experts have blind spots, just like everyone else. It isn't just reasonable, it's well established that experts tend to have biases that come along with being experts.
Experts will discard some ideas out of hand pretty much automatically. It's part of what makes them so skillful and efficient. But some of what they discard out of hand would actually work, or else with a little tweaking and development it would—and could even work better.
The tendency to get what we might call "too efficient" as you gain skill in an area is called "automaticity." It's a double-edged sword. We need one of those edges. The other... we just need to be aware it's there.
I'm not sure what counteracts automaticity best... or its close relative "functional fixedness," which means making too many snap assumptions about how tools work or could work. I've never been entirely sure there's a difference, ever since I learned about these in some detail in a cognitive psychology class. It's probably fairest to say that functional fixedness is one kind of automaticity. Another closely related term is the "expert blind spot," which appears in the context of teaching. Often a teacher can't see what a student wouldn't know yet, but has to know in order to understand. Not everything we know was ever made explicit, and even if it were, we forget how much we've learned.
A good amount of understanding is intuitive filtering, which can be difficult or impossible to put into words, at least until you've done some deep diving and practiced expressing it.
For example, after studying geometry, you know that when you see two lines crossing in a diagram, you can assume that they intersect at precisely one point and the lines are perfectly straight and extend infinitely. All of those are completely non-obvious assumptions you have to learn to make. They are conventions about how the diagrams are drawn and interpreted. You had to get used to them. And eventually you'll forget that you learned the assumptions. Similarly, if you read a problem about someone driving 62 miles per hour for 2 hours, you are trained to assume it's exactly 62 miles per hour (not 62.00000000000003, 62.000959, or any of an infinite number of similar values within the margin of error) with no acceleration or deceleration, for exactly 2 hours, in a perfectly straight line. Without the training, none of those is at all obvious, and in fact, all of those assumptions are going to be false. We learn particular ways it's helpful to be wrong. If we're skillful enough at that, we can make excellent predictions. Obvious?
So how do we get past these blind spots as to how things work, or could work? One thought that would look random anywhere but here is that adventure games (i.e., interactive stories that unfold through realistic-ish puzzles involving objects and conversations) have always seemed to be a nice exercise. You end up really racking your brains to see how the few items available to you could be used in ways you hadn't considered yet, and normally never would consider. You basically make believe that you're MacGyver, only it's usually not quite that intense. Nobody lives like MacGyver.
Encouraging newbies (and everyone else) to speak up brutally honestly in safe "Braintrust" meetings works for Pixar and other companies. Then experts are primed both to think outside the box and to listen to feedback from people who, yes, might not know what they're talking about, but then again might have an excellent angle. If you suspect the Braintrust approach only applies where stuff doesn't have to stand up to harsh reality, it also works at Frank Gehry's firm, an architecture team famous for bizarre and wonderful buildings that look like they should fall down, but don't. Material suppliers often question them or say something can't be done, but the team makes a habit of being even more thorough about the materials than the experts who supply them, while still listening, of course. Useful information goes both ways. Take a look at the Louis Vuitton Foundation building in Paris for a typical example. I like to imagine it's standing because of radical openness to feedback.
The public doesn't trust experts and experts don't trust the public, but we must work together well for democracy to thrive. The "how" seems to be the core question that republics try to answer. How do you get people with the whole range of experiences and skills deciding together wisely?
So I'd like you to think about the question as you go about your daily life. What else can or might help with this? How do we make getting past blind spots and hearing and engaging with new ideas more the routine and less the exception in our democratic institutions?
Polyvalence
Usually this slight impasse will come up in conversation as "shutting down" someone whose view we don't like, forcing them to splutter and go silent. But that's an overly simple reading of the meaning of no reply. The Clue effect, as I call it, is the situation where you feel as if silence means you might be on to something. And it's uncomfortable for you, the person who might have "shut someone down," partly because that could be entirely misleading. In Clue, someone could be cheating or not listening or forgetting they actually do have Colonel Mustard in their hand. Whoops! And in real life, there are a million reasons for no response.
Leaping to the conclusion that no response means we're right is a quick route to delusions. At the same time, if we are repeatedly ignored when we mention something, that can be extremely indicative, perhaps of a cultural or personal blind spot, or simply an unwillingness to confront an issue honestly. Often it's about that moment: "Now's not the time."
In our minds we often think someone's opinion ain't right, and we believe we could prove it in open discussion. But if we don't have that discussion, how do we know? It's so easy to look down on someone's foolishness and brush right by while you're the one with the greater, more troubling misconception. A classic way to do this is to point to a flaw mentally without spending too long considering whether the flaw is superficial or deep.
If you think silence speaks volumes, I have a lot to say about that:
(A little joke...) No, see, silence emphasizes what's around it, but fails to carry its own message. Paradoxically, it does still give information. How can you read a communication without a message, you might ask? Ok! Excellent question and not asked enough! When you hear the wind in the leaves, is that a message? No... unless you're schizophrenic or having a religious experience, I suppose. It's information, though. What does it tell you? Not much, but also not zero. The air outside is not still, for example. Perhaps you don't want to wear a hat.
The fashion by which this empty string of no reply (written ε in formal language theory) gives definite information, but very different amounts to different observers in Clue, is reminiscent of the famous Monty Hall puzzle, which grew out of the game show Let's Make a Deal (hosted, not so surprisingly, by Monty Hall). The situation on that broadcast stage has confounded and fascinated viewers and even students of math ever since. I won't go into any more detail today, but a friend pointed out the connection after reading the above, and it's well worth noticing.
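If the connection feels slippery, brute force makes it concrete: the host's choice of which door to open, like silence in Clue, is a reply that carries more information than it seems to. Here's a minimal simulation sketch (my own illustration, not from the original post) showing that switching doors wins about twice as often as sticking:

```python
import random

def monty_hall(switch, trials=100_000):
    """Estimate the win rate when sticking with or switching from the first pick."""
    wins = 0
    for _ in range(trials):
        doors = [0, 1, 2]
        car = random.choice(doors)
        pick = random.choice(doors)
        # The host opens a door that is neither your pick nor the car.
        opened = random.choice([d for d in doors if d != pick and d != car])
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in doors if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

# Sticking wins about 1/3 of the time; switching wins about 2/3.
```

The host's "move" looks empty, yet it redistributes the odds, which is exactly the flavor of the no-reply round in Clue.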
-
* For those who've never played Clue, this means that as long as no one is cheating, Colonel Mustard is definitely the culprit... figuring out whodunit is 1/3 of winning. Interestingly, after the silence, everyone has info about the murder that they didn't have before. But it'll take the others some extra detective work to reach the same conclusion: the killer was definitely Colonel Mustard. So your move tells everyone something but tells you more: exactly one fact without ambiguity or wild geese. Each of your friends now has to chase three geese, metaphorically, to figure out which two are wild. Was the murder weapon the Lead Pipe that they don't know that you have in your pocket? Was the scene of the crime the Conservatory which they also don't know you have in your pocket? Nice bluff! Meanwhile you can focus on other things. Obviously no one is exactly thrilled, because they want to win themselves, and you just pulled ahead!
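The asymmetry in that footnote can be spelled out mechanically. This little sketch (my own, using the footnote's example cards) shows how the same silence yields one hard fact for the suggester and only narrowed possibilities for everyone else:

```python
# The footnote's scenario: you suggest three cards while secretly holding two
# of them, and no other player can show a card (i.e., silence all around).
suggestion = {"suspect": "Colonel Mustard",
              "weapon": "Lead Pipe",
              "room": "Conservatory"}
my_hand = {"Lead Pipe", "Conservatory"}

# Silence means no *other* player holds any suggested card, so each one is
# either in the envelope or in the suggester's own hand.
for card in suggestion.values():
    if card in my_hand:
        print(f"{card}: in my hand, so not in the envelope")
    else:
        print(f"{card}: nobody showed it and I don't hold it -> in the envelope")

# You learn exactly one fact (Mustard did it). The other players only learn
# that each of the three cards is "envelope or suggester's hand": three geese
# to chase, two of them wild.
```

The point is just that identical public information decodes very differently depending on what private information you bring to it.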
jeudi 30 avril 2020
It Isn't Drinking
It's all about particulars. Crucially, it's about why we choose these particulars rather than those particulars. The world is not a one-line drawing. Free-Wheeling Markets vs. Big-Government Communism is not what's playing out before us.
For example, a strong social safety net makes purer capitalism more achievable. It isn't one or the other, or one obstructing the other, or some fixed ratio of the two; here we observe a counterintuitive interaction. Counterintuitive, yes, but it makes perfect sense to anyone who studies games and understands the metaphor of the "magic circle."
There must be some kind of method to this madness. We cannot just rely on God to sort the globe out for us. The famed invisible hand of trade patterns is not magic. It is actually complex cause and effect, which we can trace and harness and influence.
Can? Must.