Friday, December 27, 2019

The Novel City

Ever since I was a kid of five, I've pitched my tent on the slopes and in the breezes of the same subculture. And the breeze somehow came down to me from the mountain of elders, even though they don't play games (we see computer games and video games as possibly a little bit different from each other). Whatever name you use for all this stuff, for the larger parasol, interactive media is very ancient indeed, even prehistoric. It isn't unusual to consider all those dicey games of knucklebones before the invention of writing a prototype of interactive media. And whatever you think or feel about them, digital games are artistically and intellectually new and meaningful (to many, including yours truly), forming a central, though not always recognized or respected, aspect of modern thinking and expression.

As with "comic" versus "graphic novel," or "flick" versus "film," or "pop science" versus "published research," or "porn" versus "erotic art," or "vegging out" versus "watching a historical miniseries," or a hundred other ways we puff ideas up or make them cozy or throw subtle barbs, the words we choose give impressions about status and seriousness that have little bearing on the potentials. It certainly occurs in more fully real, human contexts. For example, we give our judgments with word selections like "lust" versus "love." After all, what is the difference between "infatuation," "obsession," "crush," or "feeling like you're in love"? Not really anything, except the context or judgment you choose to assign. And we do love judging on context. (I recommend checking out the book "Love and Limerence" by Dorothy Tennov for research on the not-quite-universal experience of being in love.)

With games, this phenomenon is especially starkly delineated, because "game" and "play" are (for most people) more or less dictionary equivalents of "unimportant," "unserious," "unproductive," and "childish." Even worse, "video game" today often means "total waste of life." To be fair, I wouldn't deny that games can be intentionally cheap and exploitative. That concerns me a lot. But then so can food or work. Can't those be dangerous too? Don't they also require good judgment? They concern me as well. Just as I don't want my food to kill me, I don't want my games to kill time.

It's difficult to talk about games without full awareness of this social attitude. I don't know if you've ever noticed it, but there are certain words that most people sound awkward and self-conscious saying: "sex" and "money" are at the top of the list. You can hear something in their voice. "Game" is another one. It's loaded with connotations that you don't necessarily want to call up. This affects the way we say the word. With many people, you'll catch a tinge of embarrassment, disapproval, or both‡. Words count, tone counts, but they don't necessarily affect what you use the word for, or even say anything about it.

There is not actually, of course, any real discontinuity between computer game and video game, or for that matter between digital game and board game and sport. But in practice, "computer game" has always said to me that the work may take nature and human experience as questions. Some refer to this category as "serious games," others as "educational games" or "edutainment," or "art games," or "interactive fiction," or even "indie games," but these are all panels on the greater umbrella, which need not, after all, present itself as serious, or have an educational purpose, or look aesthetic, or include characters who develop, or be made by a small team.

Crucially, there is, I still feel, a difference in attitude between most game players and game makers, in one camp, and the few game players and makers who see games not as entertainment or business in particular, but as rebellion, as manifesto, as interstellar ship, as tool, as thought experiment, as high art, as the new speech, as tomorrow. It is easy when adopting this view to become a little aloof, haughty, pretentious. But I think computer games are also very humbling. Even those of us with the basic skills of playing them and making them must admit that the possibilities for complexity and difficulty are staggering, that we ourselves cannot fully handle this, that we are not even close.

Novels are terrific, and the imaginative absorption of a good novel and a good game are, to me, and always have been, extremely similar. It wasn't until I was a teenager that I noticed movies could be as epiphanic as books and games; it took watching some art movies carefully. But a novel is unlike games in one way. A novel, no matter how challenging, is a one-person operation, on both the reading side and the writing side. Of course, readers will discuss amongst themselves and read reviews, and writers will get trusted opinions and work with editors, and we can read out loud together if we want. The line I'm drawing is not perfectly sharp. But a game is almost never that first thing, a one-person operation. A game takes a village to make and a village to play.

-

‡ Jane Austen wrote about this same social phenomenon around novels in the 19th century, and I'd love to reference that when I find it again. Plays, novels, movies, comics, music, even logic have all been seen, in their time, as dangerous, immoral wastes of potential. This may not be all bad: maybe a few enterprising souls take the challenge to prove otherwise in the best way possible. Maybe Jane Austen's stories benefitted because she didn't want to be seen as immorally wasting lives. Ah, ok, finally! It's in Northanger Abbey and has been dubbed her "Defense of the Novel." Just two lines from it, but there's more:

"And what are you reading, Miss --?"

"Oh! it is only a novel!" replies the young lady, while she lays down her book with affected indifference, or momentary shame.

Wednesday, December 18, 2019

Tug of War

Competitiveness makes you less sympathetic. When you are competitive, you are motivated by winning itself. This means that you want to believe that when you win, it is a good thing, a thing you deserve, even though winning in a competitive scenario means someone else lost or otherwise didn't win, and moreover, succeeding where another fails inherently means they must be at some sort of disadvantage. We don't like to admit these things, but they are perfectly true. We should be careful about how we fall into competitiveness. It's an important force, even a critical one. But it blinds us in a few ways. We ought, at least sometimes, to see clearly what those ways are.

Tuesday, October 29, 2019

Patch

As a system of coordinated action, capitalism has strengths and unstrengths. For example, take the early success of Warren Buffett, glistering icon of dedication and good business. Very young, he decided he was going to be rich. Then he followed through, learning how it's done wherever he could. By 16, he'd saved up $5,000, which in today's terms is more like $60,000. If he could do it, why can't everyone?

In a predominantly capitalist system, all sorts of things can get in the way that aren't fair at all. What if you have an illness that's very expensive to treat? Now your growth (or "growth," if you are a skeptic) as a citizen is impeded not only by the afflictions of the disease, but also by the afflictions of poverty and lost access. If it takes money to make money, and your funds are all absorbed away by something you can't control, then losing money means you lose even more money, and with it, chances for success, and with those, your potential to contribute to society.

Surely that isn't optimal. Most of us in my generation, and perhaps a vast majority around the world, agree this is not optimal. Not optimal, you might say, but it's the fairest system we have that works. Except that isn't right. It's already quite apparent that mixed economies, when set up wisely, work as well as or better than purely capitalist economies.

If imperfect capitalism works better than capitalism, then capitalism isn't the best system. If capitalism isn't the best system, it seems likely that imperfect capitalism isn't the best system, either. Where do we go from here?

Capitalism drives cooperation by competition and competition by cooperation. When the same rules apply to everyone—a tenet of free enterprise—then on one level, the system of rules is fair. Yet anyone who has played a board game knows that applying the same rules to everyone doesn't guarantee playability, balance, or even fairness itself. And everyone who has followed a high-profile court case knows that, at least in the United States of 2019, the rules do not apply equally to everyone.

How can we keep (and improve) the cooperative, competitive, resource-managing, and motivating features of capitalism without succumbing to its many catalogued ills? If the server of the state has a bug, what is that bug, and how do we fix it?

Friday, September 27, 2019

Which if?

We earthlings don't like uncertainty. We usually like imagination. My suggestion: where there's real uncertainty, don't depend on a show of confidence, which is ultimately rather empty or illusory, to win attention and credibility. Use imagination instead.

Rather than say, "We just can't draw a firm conclusion based on these numbers and this proposed mechanism," you can suggest alternative explanations that could hold water for all we know. A vivid alternative can appear positive where a statement of uncertainty would appear negative, even though they are basically the same. As a culture, we ought to embrace uncertainty more as the critical spice it is. Until then, though, use imagination?!

Thursday, September 26, 2019

Plicō, plicāre

Is the cat in the box alive or dead? I have the simplest solution to the paradox. The cat makes decisions.

Free will is part of the universe's drive toward entropy. When I was in high school, that didn't make sense, and logically, by almost any analysis, it still doesn't. But now I think it makes a shadow of a sliver of sense.

Consciousness is not an illusion. If it's an illusion, who is being fooled?

What does it mean to fool an agent without original agency? Why would this entity need to be fooled to prove to it that it isn't what it isn't? By the subjective experience of consciousness and will, which must have evolved as a capacity and must exist in physics as a phenomenon, the universe gives us the means to avoid realizing the truth, which is that this subjective experience is false and we are glorified pinball machines. Why create a subjectivity only to fool it, when there needn't have been a subjectivity at all? And why did consciousness and the experience of active will evolve before any conception of determinism, as would seem extremely likely, given the apparent consciousness of animals?

That doesn't make any sense either, you know.

Let's assume that we are all computer code, and that the code crashes when it looks at itself and realizes that it's deterministic. If it's deterministic, then it doesn't have to do anything; it can just wait for fate to move it. Let's say that logical moment crashes the code, much like dividing by zero. Ok. Ok? The simplest solution here is to evolve ways to keep the code out of that pitfall. You don't need consciousness and an illusory feeling of free will for that. You don't need any feeling of feeling at all. You can just go right on running the code, with the modification that it isn't allowed to divide by zero—or become omphaloskeptic enough to falter.

The illusion of free will is an unnecessary solution.

Tuesday, September 17, 2019

Across and Opposite of Barrier

The other day I was thinking again about Marshall McLuhan's "The medium is the message" after reading a really good article on the birth of information theory. And I believe I understand the metaphor, but it's easy to misunderstand. At the risk of kneading the obvious, here we go...

Just to pluck up a stray petri dish of an example, the first new Star Wars came out, and (who, me?) I loved it. The Force Awakens! My one big complaint wasn't that it rehashed old plot lines (very close to a reboot of A New Hope), because I felt that fit. It's a trilogy of trilogies. There will be some recapitulation; otherwise it'll become too amorphous. A poem has stanzas; Star Wars now has a reboot built in.

No, I found the movie thrilling. And this redundancy aspect was a statement, too. New makers, same spirit. If a lot was the same, a lot was different. Anyway, that wasn't my complaint. Nor was the deus ex machina of the Millennium Falcon appearing early on. That said something like, "Ha, gotcha. You didn't realize the Millennium Falcon was part of the Force, did you? It is. Fate will not always explain itself to you." The plot hole improved the experience on a meta level. No, my biggest complaint was that certain little moments had become de rigueur. When the trilogy of trilogies was first outlined, did Mr. Lucas have any idea how many scenes would involve a Jedi (or Jedi-to-be) in a fight, arm outstretched, verging on vanquished, lightsaber pathetically far off, wobbling? It wasn't story parallels or special effects or the unexplainable, but these little tics that were now shopworn. You aren't rehashing old story when things like this happen. You're... a genre.

And that's what resolved the irritation for me. I read another interview with the guy who wrote the script with the director, and he talked very openly about the work of writing genre stories. He'd decided that Star Wars was a genre, and he was thinking about it in exactly those terms. Ah, I thought. Ok. That was intentional.

The next thing he said was what was most interesting. He said that genre doesn't tell you what story to tell. It doesn't tell you what your theme is, your point, your message. You can write a strict genre movie (book, song, etc) about absolutely anything. And that's the beauty of genre. It's like the form of a poem. A sonnet could be about anything you want.

In other words, the medium isn't the message. Right?

It is a metaphor. Your eyes are not two shining suns. Your eyes are biological material with lenses and photoreceptors.

And this unrolls tendrils especially when you think in Claude Shannon terms about what a medium actually is. A medium is something like a sheet of paper. Papyrus is one medium, vellum another, tapestry another, woodcut another, flattened and bound wood pulp another—all closely related. Sure, a sheet of paper could suggest all kinds of ideas to you, and there are ways you cannot repurpose a sheet of paper. But there is so much you can represent on or with a sheet of paper that you are almost unconstrained. The sheet is genre. You can make it about anything you want.

It's worth agreeing that medium, genre, series, form, and format are different ideas. But this actually feeds into the larger claim. If genre allows just about any message, then medium certainly does. McLuhan is right, of course: choice of medium (and genre, series, form, format, etc) is part of the message and imbues it. And the appearance of a new medium changes society, revealing natural hierarchies and possibilities previously unknown. But we also need to agree that all these nouns work as information channels in the brass tacks mathematical sense outlined by Claude Shannon. Genre—say "historical fiction" or "dubstep" or "televised golf"—is like a wire. It's a narrower wire than medium—say "podcast" or "magazine" or "plasticine."** But they're both wire-like, media, tubes of aether. And both work just like wires to carry what wouldn't be there otherwise, which brings options to sender and receiver.

**If you want to call these more expansive names like "recorded sound" and "print" and "sculpture," then so much the better. That strengthens the argument.

Thursday, September 12, 2019

Expression

For all that new tech gets old, computing is universal. It doesn't turn into a raisin and then topsoil. It's here to stay past the morning. It always was here, but in the last century, humanity has discovered the informational equivalent of electricity. And that isn't "big." That's got more unseen matter than a galaxy cluster.

It's the same thing I like about early, silent movies and improvisational music. There's this aura of elemental invention. The silent movie makers are more limited, more hobbled than anyone who came after them, technically. Yet they know anything is possible. They'll do weird things like try an entire feature-length film with characters but no words, and more amazingly, it'll succeed even by today's narrative standards. They'll slap a color on the projector in the cinema to show you, ponderously, that it's daytime, evening, or nighttime, despite black & white footage. They'll have a person walk into the theater, sit down at a piano, and make up music while watching the film with you.

This is crazy shit. None of it's realistic. Even the acting is wildly unrealistic. Most people find it unbearably hammy. But given these films had no sound, the visual acting had to carry what was missing. This was a functional adaptation. They could have thrown out their hitchhiking towel and said, well, guys and girls, we just can't put on a performance like the one next door they're doing with Agatha Christie's play, so what's the point? They could have gone home. But they believed anything was possible, and so you have silent films with the whole range of acting from subtle to ridiculous, and even at the most unbelievable end, the ridiculousness is often helping to convey the message, and it becomes part of the aesthetic, like the unrealism of claymation.

I don't know how to convey that sense to others. Maybe it's something I gravitate to, and I can't convey it.

Sit down, or stand up, with any musical instrument or soup can, and make music. You don't need to be a musician. Make music. Explore the possibilities of sound.

We think a person needs to earn the right to speak freely. Before you're allowed to paint what you want, you'd better paint what your teacher asks.

And I think that's antithetical (as an attitude) to discovering the possibilities of paint. Oh right, yes of course, you can paint what your teacher wants. That's fine. And you'll learn something. I'm an educator and believe in education. But what I'm saying above is what possibility and creation are about.

Maybe I have no idea what I'm talking about, or this is just garden-variety ego (who am I to talk about this anyway?), or I'm expressing a feeling and pretending it's logical when it's a feeling.

Computers allow us to explore choice in a way that has never been possible before.

Computing isn't just an aid to this or that. Computing is a medium—perhaps the greatest medium aside from the physical universe and the stuff of our minds—of choice.

Information is defined in several key ways: an alphabet soup of symbols, probability, choice, surprise.

It's often said that quantum mechanics is an ad hoc set of equations cobbled together from experimental evidence. It's said so often it's become an old chestnut. But since 2001, several theorists have shown that you can "rediscover" quantum mechanics from a few axioms (rules), usually just two or three, having to do with information and its properties. The key notion I took away from reading about this work is that quantum mechanics, while it seems forbidding and weird, is actually just a generalization of probability theory. It's a mathematical structure that happens to mirror what's going on all around us piercingly more closely than we would have any right to expect.

Quantum mechanics is also about choice. And it's also computational.

Isn't this exciting? To me, it's exciting... it's maybe the most exciting thing going, and I wish I knew a better way to express that.

Monday, August 12, 2019

1-0

Every process we know can be boiled down to data and transformation (which can be represented as data), and once you allow the transformation-and-data to work on itself, you have the core of a processor that can do anything known.
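If that's too abstract, here's a tiny Python sketch of my own (nothing canonical about it): a program represented as plain data, and a transformation that can just as well be pointed at programs built by other programs.

```python
# Toy illustration: a program is just data (nested tuples), and the
# "transformation" is a function that walks that data.
def evaluate(expr):
    if isinstance(expr, (int, float)):
        return expr
    op, left, right = expr
    a, b = evaluate(left), evaluate(right)
    if op == "+":
        return a + b
    if op == "*":
        return a * b
    raise ValueError("unknown op: " + op)

program = ("+", 1, ("*", 2, 3))  # the transformation, represented as data
print(evaluate(program))         # 7
# Because programs are data, programs can build or rewrap other programs:
doubled = ("*", 2, program)
print(evaluate(doubled))         # 14
```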

Feedback. Iteration. Recursion. Growth.

This process can even create and split dimensions. See infinity categories. See fractal geometry, geometry that results from these feedback (growth) processes, points and lines and planes branching out to cover more dimensions than just their own. A fractal is a pattern with fractional dimension: somewhere between one whole number of dimensions and another. It's 2.473D, or similarly not a whole number. It's stuck growing, what you get in between the one and the other.
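To make "fractional dimension" a bit more concrete, here's the standard self-similarity calculation; this is textbook math, and the function name is just mine:

```python
import math

# Self-similarity dimension: D = log(N) / log(s), where the shape is made of
# N copies of itself, each scaled down by a factor of s.
def similarity_dimension(copies, scale):
    return math.log(copies) / math.log(scale)

# Koch curve: each segment is replaced by 4 segments, each 1/3 as long.
print(similarity_dimension(4, 3))  # ~1.26, between a line (1D) and a plane (2D)

# Sierpinski triangle: 3 copies at 1/2 scale.
print(similarity_dimension(3, 2))  # ~1.58
```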

If "stuck growing" isn't demented or plain wrong, it sounds a little like time...

When people talk about "seeing into the matrix," that isn't a bad analogy. A matrix is an excellent example of data and process in one bucket. A matrix can be seen as a data dump. It can also be seen as a thing you multiply by in order to change a system. And we already know this idea: code is data (1s and 0s) and it does work (changes 1s and 0s). 1s and 0s aren't just practical icons or a false dilemma. Almost a century ago, it was shown mathematically that 1s and 0s can do everything that 0-9 can do, and moreover can do everything math symbols and any information can do. That's heavy simplicity. It was shown shortly afterwards that this applies just the same with quantum mechanics and non-deterministic systems. Amazing when you think about it? Your experience of thinking and feeling could be representable as 1s and 0s, even if the world isn't deterministic.
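If you want to poke at that claim yourself, here's a quick Python sanity check (my example): numbers and symbols round-trip losslessly through 1s and 0s.

```python
# Round-tripping through 1s and 0s loses nothing.
n = 2019
bits = bin(n)                      # '0b11111100011'
assert int(bits, 2) == n

text = "data and process are one"
as_bits = "".join(format(b, "08b") for b in text.encode("utf-8"))
back = bytes(
    int(as_bits[i:i + 8], 2) for i in range(0, len(as_bits), 8)
).decode("utf-8")
assert back == text
print(as_bits[:16], "...")         # the first two characters, as bits
```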

Information, in fact, is defined most basically as "surprise." If you fully expect it, it doesn't surprise you and isn't informative. If it isn't what you expect, you're surprised and may learn something. Information. It's a measurable quantity, not just subjective. I would even argue that information, surprise, the feeling that we didn't expect this, is the best proof we have that the outside world and other people exist. We often say that it can't be proven logically that the objective outside world exists, that we have to take it on faith, and maybe so. But for me, surprise works. If I'm surprised, I definitely didn't come up with it all myself in this moment. Information tells me there is reality outside my present self. And if there is reality outside my present self, there is reality outside myself.
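Shannon made this precise: an outcome with probability p carries log2(1/p) bits of surprise. A minimal sketch, with numbers of my own choosing:

```python
import math

# Shannon's surprisal: an outcome with probability p carries log2(1/p)
# bits of information. Fully expected = zero bits; rare = many bits.
def surprisal_bits(p):
    return math.log2(1 / p)

print(surprisal_bits(1.0))       # 0.0  -- no surprise, no information
print(surprisal_bits(0.5))       # 1.0  -- a fair coin flip is one bit
print(surprisal_bits(1 / 1024))  # 10.0 -- the rare event teaches you more
```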

Multiplying by a matrix doesn't do everything that the process up top does. The real "matrix" is both more than and less than a matrix. Simpler but harder to understand.

For all that laptops/phones/etc are elaborate devices that no one person fully understands anymore (really truly, and I think this should still be kind of shocking even when you "know" it), this piece inside, what you do with the 1s and 0s, is amazingly powerful and elegant. It's complex in the best sense: simple yet intriguingly very difficult to fully wrap your head around. And that isn't because you aren't smart. It's because it's mathematically proven impossible to fully wrap your head around it. Just like we can't write out π in a finite number of digits. And, similarly, like mathematics tells us we can't predict the weather for long: we have a brief window of decent approximation (2 days currently), and beyond that, our best calculations (and the best possible calculations) rapidly get far worse. It will still be true with quantum computers, and our brains aren't besting this either. We expect our AI to surpass our intelligence in all areas eventually, but not to predict the weather much better. Barring a colossal shift, we will never be able to predict the weather in a year well, unless the weather transforms to something else, something like very still water, a frozen atmosphere. This isn't because our algorithms, processors, programmers, or meteorologists suck. It's because, well, math. The chaos mathematician in Jurassic Park explained it: you can't predict which way the water drop will go.
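Here's that mathematical wall in miniature, using the logistic map, a standard toy model of chaos (my stand-in for weather, not actual meteorology):

```python
# The logistic map at r = 4: two trajectories that start a billionth
# apart soon disagree completely.
def step(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.200000000, 0.200000001
for i in range(1, 61):
    a, b = step(a), step(b)
    if i % 10 == 0:
        print(i, abs(a - b))
# The gap grows roughly exponentially until it saturates at order 1:
# better instruments buy a little more forecast time, never much more.
```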

If we knew all that first paragraph above meant, I don't think we'd need the universe or evolution or our lives to find out. We're here because it can't be fully known to anyone what that process will be like in advance. It has infinitely many potentials, infinities of infinities. We are just a few of the experiments that result.

Obviously there are unknown unknowns and this could all be tossed into turmoil by some new discovery. (Information. That would mean the outside world exists ;) But we already knew that.) And the idea I started with here probably covers more than any of us appreciate. It covers anything we can simulate, much of which we won't be able to follow ourselves. It covers all known laws of physics and science. It's a lot. And maybe I haven't phrased it well or perfectly, but I'm trying to share what I hope I do understand at least a little. The basic thing above is the insight I got from an assignment long ago simulating a simplified CPU in C code. It was a great assignment, and I was very lucky and privileged to be given it. The insight I thought I got from it was in fact the intended point, what the assignment was supposed to teach, but it took me years to fully realize that. And that's the first paragraph. It's what all computers, computation, and known processes can be boiled down to: data, structure, and process are one. Or at least they can be interconverted by energy in an instant. That's what a CPU is. It converts these things into each other instantly when powered with electricity. And when that idea swirls in on itself, when there's feedback (like microphone feedback) that results in some kind of growth or regulation process (think of the piercing high pitch from the speakers which nevertheless levels out at a certain painful volume), which it can in a CPU, you get Frankenstein lightning, or something a lot like it.

That's why Lisp, a programming language invented around 1960, is still trendy (in its many modern dialects). It's the language that most clearly depends on and exposes this "simple" aspect of all computation. Is it the best language to write an app in? Maybe, maybe not. Probably not, to be honest, though it'll work and some people prefer this way (and Clojure in particular gives you easy access, from within the language, to everything that has been built in Java, so it's very viable currently). Even the AI crowd have kind of given up on the vision that immersing themselves in this level of purism will help them create a sentient being or something. An entire commercial operating system popular with AI researchers in the 70s-90s (Genera, aka the Lisp Machine) was built in and for this language that constantly shows you how data and process are one. You can't write a single line of Lisp code without staring at that. The system functioned and still does. It does everything any computer can do. Just because a thing is true doesn't mean it'll show you the next step on your path. But will it teach you something? Yes.

What maybe amazes me most is that such a simple idea can also handle non-determinism. When we say "every process is computational," we are not necessarily saying "every process is deterministic."

Do I understand what I just said? Not really.

But my core interest is in choice. Games are the art form of and about choice. That's what got me into the idea. What is choice? What is will? Does a choice mean anything out of context? (In my opinion: no.) How does context affect choice? Where does an element of choice that does not come from context come from? And so on. These are interesting questions when you want to communicate with choice as a central idea. Everything we say or do involves choice. Even everything we sense hints at choice. It's what we are. So what is it?

If the universe only went one way, I personally don't believe I'd have any business being here and going through with this. In my opinion, my presence does something other than give one of the characters in a movie their own predefined internal dreams which are actually felt. The actually-feeling is somehow instrumental. It's necessary. It has to be here. It does something. For every force, there is an equal and opposite force. The universe applies all these experiences to me. That's imposed on me. I cannot escape experiencing, and most of what I experience follows all the regular laws of physics. So what's my role? What is the equal and opposite force? Does the entire universe outside me feel in some way comparably to me? If this is going to be equal and opposite, and the universe is so much bigger, then where is the balance? What do I have that the universe doesn't? How are we equally matched?

The universe informs me: gives me a signal, one from many possibilities. And I inform it: give a signal back, one from many possibilities.

Maybe Newton's equal-and-opposite idea doesn't apply to everything. Maybe this is in no way equal or opposite. And maybe I'm predetermined, a spectator, a seat in a rollercoaster moving, allowed to feel but not to change the path.

And I use Pascal's Wager on that. If so, then whatever my opinion is, it's the opinion I was always going to have at this moment. And so I haven't done anything wrong, couldn't have done any differently. In this case, there is nothing I can lose by adopting the wrong opinion that I wouldn't have lost anyway (by adopting the same wrong opinion). If this is the correct view, the total risk of making a mistake versus any other possibility I'm considering adds up to: 0. There's only one path.

On the other hand, if this is not all mapped out in advance, if there are choices given to me, if I can reach different endings from here, and by different paths, then I stand to lose everything I could possibly lose by ignoring that reality.

Pascal's Wager was originally applied to God and the Devil and Heaven and Hell and belief and doubt. In that context, I do not believe it works. There are a number of criticisms, including that you could alter the form of the reward/threat to pretty much whatever you want, and thereby get people to do whatever you want. It ends up being a bad argument that doesn't work as intended.

However, I believe the use of the same basic structure of an argument in this case does work. It isn't that I know I have free will, or that I know the world is non-deterministic. It's that there is everything to gain from making the bet that it is, everything to lose from mistakenly declining it, and nothing to gain or lose from making the other bet if it does win the day. So unlike Pascal's original wager, it doesn't take the form of a threat, a carrot and stick that could be tweaked to manipulate. It's just point-blank reason.
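If it helps, here's my own toy formalization of that bet as a payoff table; the labels and the ±1 are just illustrative stand-ins for "everything" and "nothing":

```python
# My formalization of the wager above, as a toy payoff table: rows are how
# the world might be, columns are the bet I make about it.
payoffs = {
    ("deterministic", "choices matter"): 0,   # one path anyway: nothing lost
    ("deterministic", "all fixed"):      0,   # same single path, same result
    ("open",          "choices matter"): +1,  # everything to gain
    ("open",          "all fixed"):      -1,  # everything to lose
}

# "Choices matter" never does worse and sometimes does far better,
# whichever kind of world this actually is.
for bet in ("choices matter", "all fixed"):
    row = [payoffs[(world, bet)] for world in ("deterministic", "open")]
    print(bet, row)
```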

Wednesday, July 31, 2019

Quoting lambs or is that lambda

There's this assignment I did in high school. We simulated/emulated a computer processor chip, a CPU, by writing C code to do exactly what every part of it did on the level of individual 1s and 0s. The processor wasn't real. It was the "MIC-1," a fictional chip that has never been produced. But it was real in the sense that the design worked the same way. It could have been produced. It was just too simple to be worth producing commercially. An exercise. But that exercise may have taught me more than any other assignment in high school.

It taught me how binary numbers do work on a chip, how they move things around. How they combine and juggle information. How the operations and the data worked on are both strings of 1s and 0s, and how that makes sense. Structure and movement are one. Data and calculation are one. Both actions and things acted upon and moved are just ons and offs. Switches. It was like directing the flow of water around a maze... with water... down forking river-railroads, like a splitting train. A branching ferry of water-electricity 1s and 0s. The shape of the maze at each moment was what it did in that moment. Its final answer now could be its shape later.

There were two other lessons. First (or second of three, really), I completed the assignment, but I never got full credit for it. Right as I was getting ready to turn it in, very happy and proud that it was working, I made a mistake on the command line and overwrote my source code. I put ">" instead of "<". If you've used Unix much, you know what I mean. There was no recycle bin to save me there. I lost it and had to go back to a very old version, and scramble to make it semi-acceptable. I don't believe I've ever repeated that mistake. And I've lost almost no data in my life unintentionally. And I tie this to chance. The only big example that comes to mind was when my hard drive was destroyed by an EMP from a freak electrical surge from a distant lightning strike, which caused an electrical fire in and under the building. My laptop was unplugged, but the pulse through the air killed my hard drive. And I only lost the small fraction of files that were in a folder that wasn't backed up. I have very often not lost my data, and I've always reminded myself that this is mainly because I'm lucky, not because I'm smart or doing everything right. That's a key truth. Knowing by now what luck is has been a big reason my data loss hasn't been much worse, but luck is also a big factor itself. This lesson with my code helped, and I think anxiety helps. It's similar with driving. I'm still alive because of luck: or, rather, largely unexplainable probability so far. But I'm much worse at driving than at saving data, so that's a reason I want to segue to walking and public transit again, sometime soon. (I know, I seem to be rambling, but these things are truly connected; I could elaborate the connections more but will leave them to your imagination.)

The third lesson was the best. Right at the end, when I was finalizing the code that I was about to destroy, something happened. All of the code came together in one line. One line of code tied the entire virtual processor together, all the little pieces, the shifting channels of information, the storage and retrieval from registers and from the slower memory of RAM, the arithmetic operations, the comparisons between two things. Everything occurred, finally, in one line. That was the moment. It was so dumbfoundingly simple that I felt this had to be the main insight of the entire assignment. While I wasn't sure, I had that moment anyway: almost a eureka moment. And it worked. The chip functioned, I fixed a couple little issues, and a few minutes later I'd deleted it all by mistake.

Wasn't this realization, this "simplification" or "big picture" just coming from writing C code, code written to be easier for humans to understand than 1s and 0s and transistors and registers? Was it just how I had written the thing, and someone else could have written it differently? Was it a real insight, or just a cute line of code, a long string of the function of the function of the function of the function of the function of... It was pretty long, but the entire line was just function composition. Maybe I'd just written it that way, or I'd been encouraged to write it that way by the outline of the assignment. Maybe it didn't really mean anything.

And the funny thing is, it's taken 20 years for me to realize, for certain, that yes, this was absolutely a core insight about all computers. And not just all computers, but all physical processes in general. It can all be seen as function composition. I hesitate to go into detail on this, because I don't even know where to begin and I barely know what I'm saying myself, but if you do want to know, go and read about Alonzo Church's lambda calculus. It's... exactly what I saw a few moments (maybe an hour, actually; I think I'm embellishing and shortening that part, but hey I'm telling you) before I deleted that code. And it wasn't a coincidence or how I happened to have organized my code. No, it was the nature of all computers in the world, and all the computers and physical processes that are theoretically possible.

That's a pretty good assignment, huh?

And so I should have known. There have been many opportunities to make the connection, and I did. Somewhat. Partially. I thought I got it. For example, there's Lisp and the whole "functional programming" movement. But it was only the other day that the whole ton of bricks fell on me. Yes, no, yes, that was in no way a coincidence. The power of the parenthesis. It was even bigger than I imagined.

The class was called Computer Architecture, and it was taught by a middle-aged, overweight woman who was unassuming but sharp as a razor. She made a passing reference to all those function compositions afterwards, so I had one lead that it actually was important. But she left it to our imaginations and our curiosities to find out.

The best lesson is finding out, but I'll try to put it into normal words.

It goes back to algebra and functions. You send a number in, you get a number out. Right?

So f(x) = x*x becomes 36 when x = 6. If there's always just one number coming out (36), no indecision between two or three or four numbers that could be the answer, then that's a function. That's basic determinism. And it's basic calculation.

And when you learn this, you can represent all the math you did before that moment in algebra, all the pluses and minuses and timeses and so on, with functions that add or subtract or multiply or whatever. It's hopefully an insight most people get from algebra, that a function is a "machine" that could do any of the math they know so far. Each function is one particular "machine" with one way of working. When you punch numbers into a calculator and hit "=" and get an answer, you get one answer. What you just did there was a function on those numbers.
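In code this "machine" view is literal. A trivial Python sketch (mine), including the chaining that comes up again below:

```python
# Each function is one little machine: one number in, one number out.
def square(x):
    return x * x

def plus_four(x):
    return x + 4

print(square(6))             # 36 -- the f(x) = x*x machine
print(plus_four(square(6)))  # 40 -- machines chained into an assembly line
```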

Well, the insight is actually that this covers a lot more. It covers the rest of math, too. It covers everything that can be calculated, and that means all deterministic processes. It even, amazingly, covers non-deterministic processes. It covers quantum mechanics, including the purely random, non-deterministic component of it.

When you write your function, f(x) = 20x - 4.2, or whatever it is, you're writing a little program, a scrap of code. You send something in by replacing the x. Let's try 17.

Ok, so x = 17. And so f(x) is now f(17), which equals 20(17) - 4.2. I don't really care what that number is. It isn't important. But for completeness, it's 335.8. And I didn't need a calculator for that, or even paper (being good at arithmetic is a third of my job now, I practice), but the fact I can do it and a calculator can do it is not really a coincidence. 

In Alonzo Church's idea (it's called "lambda calculus" or "λ-calculus," but as for the significance of the name, he once returned a postcard that asked this question with "eeny, meeny, miny, moe," so, really, don't worry about it), this idea of applying a tiny machine to a specific number coming in is reduced to its absolute, purest, most distilled logic. We can do this with our "f(x)" notation of parentheses or without, but the parentheses help us visualize the "boundary" of each tiny machine. You "bind" a number, 17 in my example, to another symbol, we like letters, x in my example. What we're doing is one teensy tinesy moment of memory. That right there is all of computer memory. It's all binding one thing (here, 17) to a symbol/space/slot that stores things (here, x). Data and a gap for data. That iota of memory is conceptually enough to cover - and represent - all computation.

x = 17. Memory. f(x). Process.

It seems bizarre, doesn't it?
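Bizarre, but visible in any language with lambdas, which Python named after exactly this. A minimal sketch of my own:

```python
# Applying a lambda binds one value to one symbol: an iota of memory.
f = lambda x: 20 * x - 4.2
print(f(17))     # 335.8 -- x "remembered" 17 just long enough to use it

# A closure makes the stored iota visible: x stays bound inside the
# function we hand back.
def remember(x):
    return lambda: x

stored = remember(17)
print(stored())  # 17 -- one datum, one slot
```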

But isn't this too simple - what if we want to work with 4 different numbers, or 15 variables containing numbers?

Good question.

When we want to work with a bunch of iotas of memory, different numbers assigned to different variables, how do we do that? We make a teensy tinesy machine for each, a function that will "bind" each one to a variable, and we put them in a chain, one sending its output as the next one's input. It's a little assembly line.

The function f(x, y, z), for example, which we can understand as a machine that works on points in 3D space to give you some corresponding piece of information about each one, can be rewritten as f(x, f(y, f(z))).

f(x, y, z) = f(x, f(y, f(z)))

We don't need a new concept at all, we just need to remember that these different functions could be different from each other. Each "f" here might mean something different, a different process from the others.

In algebra we'd say something like:

f(x, y, z) = g(x, h(y, i(z)))

Or even:

f(x, y, z) = g(y, h(z, i(x)))

Or 

f(x, y, z) = a(y, c(z, q(x)))

The letters don't matter. Just remember the three variables are (or could be) different, and the three functions are (or could be) different from each other. A function or "machine" with many inputs can be expressed as many simpler functions or "machines" on single inputs, and you don't lose any functionality (no pun intended).

But don't even worry if you're confused yet. The core idea is that we don't need a new core idea. We can just keep packaging the same idea of a function, the same kind you learned about in algebra, with one number going in and one number going out.

In fact, we can even write the numbers that way. For example, 0 can be f(). 1 can be f(f()). Those empty parentheses, by the way, "contain" nothing. We call this nothingness "the empty string." It's the twin of 0 (a symbol, agreed?) when you're talking about strings of symbols - say, that unwritten term paper or novel, or the code you're about to write to calculate a cell in Google Sheets but haven't actually written. No symbol written. Blank page. Blank canvas. No code. Empty space. 0 is a number with no magnitude. The empty string is just nothing written, but it's a math concept. Small difference, but see it?

Ok.

2 can be f(f(f())). And so on.

There are even ways to write +, -, etc, as pure functions of functions.

If it helps you to see it more clearly, let me rewrite those this way:

0 = f(nothing)

1 = f(f(nothing))

2 = f(f(f(nothing)))

Etc.

You get 0 when you put nothing in the machine. When you feed that 0 into the next machine, you get 1. When you feed 1 into the next machine, you get 2: or two machines since the 0. Two assembly line steps. Think of the function as telling you how many steps led up to it. Or you can imagine it as looking inward, counting how many functions are inside it, held within its outer parentheses. (Incidentally, I mentioned above that we need to remember the f(x)'s may not be the same function, and I replaced some of the f's with other letters to illustrate, but in this case, it's all the same f(x). We don't need to worry: it's just one function working on itself.)

We can make all the numbers this way, and actually it goes deeper. There are many ways to make all the numbers with this concept, and this is only one. It simply illustrates the potential.

We already talked about how each function has its tiny iota of memory, right? Each f(x) has an x it's responsible for keeping in mind. And so when we do f(f(f(....))), we're actually building up both memory and process. And so maybe now it makes sense that f(nothing) is a symbol for nothing, ie, 0, and it's a little less than f(f(nothing)), which is 1.

It can actually be argued that every definition of numbers and operations depends on this concept, just in different ways. Data and process. Function.
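For the curious, here's the textbook version of this trick, Church numerals, in Python; the notation differs a little from my f(nothing) story, but it's the same game of counting applications:

```python
# Church numerals, the textbook cousin of the f(nothing) story: a number n
# is the act of applying a function f, n times, to a starting value.
zero = lambda f: lambda x: x                      # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))   # one more application

def to_int(n):
    # Decode by counting applications with ordinary arithmetic.
    return n(lambda k: k + 1)(0)

one = succ(zero)
two = succ(one)
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

print(to_int(zero), to_int(two))    # 0 2
print(to_int(add(two)(succ(two))))  # 5 -- even "+" is functions of functions
```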

Alonzo Church realized this independently of Alan Turing at almost exactly the same time, in the mid-1930s. What I've just outlined above, "Church's virtual atomic math machine" you could say, is equivalent to a universal Turing machine, even though the definitions of the two concepts look extremely different. They look different, but their functionality is identical. One (Church's) has functions of functions of functions; the other (Turing's) has you imagining an infinitely long spool of magnetic tape with a read/write head that can move back and forth according to some logic using a finite number of internal states: reading, writing, overwriting, or erasing 1s and 0s on the tape, and either doing this forever or at some point stopping and leaving its final result as a string of 1s and 0s on the tape. Every programming language (all the ones that are "Turing-complete," which is most of the good ones: it just means they can in theory calculate anything calculable, even if the computer might take a long time, or run out of memory space) lines up exactly with Turing's infinite tape and read/write head vision, which is also functionally identical to Church's idea above with functions of functions.
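And Turing's picture is easy to miniaturize. Here's a toy Turing machine in Python (my sketch, far from a universal one), incrementing a binary number:

```python
# A toy Turing machine: a tape, a head, a finite rule table. This one
# increments a binary number (least significant bit on the right). The
# tape only grows leftward, which is all this machine needs.
def run(tape, rules, state="carry"):
    tape = list(tape)
    head = len(tape) - 1
    while state != "halt":
        symbol = tape[head] if head >= 0 else "_"
        write, move, state = rules[(state, symbol)]
        if head < 0:
            tape.insert(0, "_")
            head = 0
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).strip("_")

rules = {
    ("carry", "1"): ("0", "L", "carry"),  # 1 plus carry: write 0, keep carrying
    ("carry", "0"): ("1", "L", "halt"),   # 0 plus carry: write 1, done
    ("carry", "_"): ("1", "L", "halt"),   # ran off the left edge: new digit
}
print(run("1011", rules))  # 1100 -- eleven plus one is twelve
```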

A key piece of this I haven't mentioned, but which the MIC-1 CPU in C assignment demonstrated, is that actually you only need a finite number of functions to get up and running. It's a small number, 5-10. That function composition I wrote at the end of the assignment was not infinitely long, but what it ran was a computer. Turing machines that are fully capable have a finite number of internal states, and correspondingly, real-world CPUs only need a finite number of defined machine operations. And correspondingly again, a finite set of rules of logic is your Swiss Army knife for all logical reasoning. (This is another mathematically proven result, one from Kurt Gödel that actually inspired both Turing and Church. It's closely related.) There are probably many ways to think about and visualize this same universal machine concept, ways we haven't imagined yet. Most of the coding languages look radically different from either way, and are much more workable. The Lisp family of languages draws on Church's functions of functions idea directly and elegantly, and that's why their code is famously stuffed with parentheses... and why some people actually like that. It goes right down to this core way to think about data and process.

Note: I don't think I've defined Church's lambda calculus precisely enough here that it would equate to a universal Turing machine yet, and I'm not an expert. Also, and this is an honest mistake, there's an inaccuracy where I talk about breaking functions with many inputs down into several functions (named by different letters) on single inputs. The equations I write with f(x, y, z) aren't quite correct. They hint at how it actually works; I'll try to fix it later. (Done: haven't changed the above, see below.) But I've described the idea as well as I know how.

Now if you're curious, the error comes down to the fact that lambda functions have no names, but each can store a single piece of information in a named variable. This is why different notation was needed. It does mean, specifically, that a function can take in one function, do some transformations on it, and send out a different function as an answer, which can then be run elsewhere on some other input. Look up "currying" if you'd like a better explanation.

Church's notation gets at an idea that sort of combines these ideas:

f(x, y, z) = f( g( h(z) ) ), with y = h(z) and x = g(y)

f(x, y, z) = f(x, g(y, h(z)))

As far as I understand, it can't be represented fully in normal function notation (probably why we have lambda calculus and all these programming languages, huh?), but it's the same idea, just thought about and used in a more expansive way. 1) A complicated function can always be decomposed into simpler functions, and 2) functions can process other functions and even work on themselves and give themselves and each other as answers. While a function is waiting for all its inputs to have arrived (like guests for Thanksgiving), you can consider the welcoming of each arriving input (or guest) as a function that takes the number into account by memorizing it (seats him/her/hir/them at the table) and then returns to watching for remaining inputs (straggler guests). Decomposing a strict seating pattern into an open-ended, fluid arrangement allows us to build up infinitely complicated dinners from extremely simple Lego-like bits. Remember, a function doesn't only do things. A function can also modify a function that does things. And the "atoms" that make the molecules and mountains and continents and planets of functions are these tiny units of memory and process. Everything reduces to correlations working on correlations. It seems too basic, but a lot emerges. That's the gist of it.
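In code, the Thanksgiving seating trick is called currying, and a minimal Python sketch of it (mine, not Church's notation) looks like this:

```python
# Currying: a three-guest dinner seated one arrival at a time. Each
# function welcomes one input and returns a function awaiting the rest.
def f(x, y, z):
    return x + 10 * y + 100 * z

curried = lambda x: lambda y: lambda z: x + 10 * y + 100 * z

print(f(1, 2, 3))        # 321
print(curried(1)(2)(3))  # 321 -- same machine, one argument per sitting
waiting = curried(1)(2)  # a function still waiting for z
print(waiting(7))        # 721
```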

And that's the best I can do! Admittedly I could probably do better if I understood better!

It seems almost as if we have made this more complicated, but the core insight is that we've actually made it simpler, and now we can use this as a basis for writing a computer language - or computer hardware - or any kind of hardware - that will do anything you want. Anything you've ever seen a computer do and in theory infinitely more. Anything you see happening around you in the physical world can be encoded and worked with this way. That's zany.

Wednesday, July 24, 2019

Winning and Losing and Designing

All design is the interaction of rules. If what you make doesn't seem to have any rules, it does. The page is a certain size. You use a certain font. You spell things this way, not that way. Do you break grammar? Where do you break a line? When does your character break rank? Where will the eye go? What emotion are you expecting? How would you classify the people who might experience what you're designing? Would you? These are all rules, and design is their interaction.

Some see design as a vague but large subset of art. I see art as a large and colorful subset of design, and I see almost everything, even those things unconsidered, as in some sense design. If this is sparking in you ideas of intelligent design, of anti-evolutionists, perhaps that is by design. If it isn't my intention and it happens, then my design has led that way. Do you see what I mean? Design is going to happen whether it works the way you want or not. One way of taking life by the horns is accepting this and stepping up and saying, there are certain ways I want design to lead, and I am here, and I'll work with you.

Winning and losing is defined by the context you choose. You could be winning the game but upset that you're failing a higher standard you set for yourself. To another person you could look like a winner - you're shining in one way - or like a loser - you're frustrated, discouraged, perhaps even despondent. You could be losing the game but happy because here you are, and you meant to play this game, or you're interested to be learning from a good player, or now you get the rules, or whatever it is. Likewise, an outside observer could consider you to be losing - you're behind on points - or winning - you're having a lot of fun and making other people laugh, or maybe you just have a quiet smile and seem peculiarly unruffled by how badly you're being trounced, and who doesn't at least a little bit admire being unruffled? You think winning and losing is objective, but it's all the context you choose. There is nothing else.

Sunday, June 23, 2019

Does Universal Evolution Have to Be Random All the Way Through?

When I think of the early earth, early solar system, early universe, I think of embryos. These formation processes happen because of logic, laws of physics. Are these necessarily much different, conceptually, from genes? As far as we know, the laws of physics could easily have been different ones, and elsewhere, outside the bubble of this universe, there may be other universes with other laws of physics. So I think of cells and reproduction. Maybe that's too anthropomorphic, or earth-life-centric, but what I'm saying is that a superstructure like that makes more sense to me than pure randomness. Can universes create universes? We already have suggestive hints that they can. Our own computer simulations verge closer and closer on new universes. Even our minds were doing that for millions of years before. It seems there's a natural trend toward mirroring and recreating and tinkering with universe. In my opinion, that's why we're here.

In my opinion, black holes are real universes budding off of this one.

Do you think maybe they learn something from what falls into them?

Do you think future humans, or aliens, will be able to tweak the formation of black holes, or the rules by which the universes inside operate?

We know that evolution can work by pure randomness and natural selection. But we also know that artificial selection, breeding, is possible, and further than that, we know that synthetic evolution, downright genetic engineering by conscious design, is technically possible.

So why not with universes? If universes form using the simplest learning algorithm we know, evolution, why wouldn't they, also, be able to benefit from the results of more complex learning?

What is the single biggest thing a universe would "want" to know before creating new universes?

It would want to know what this one is like, how it's doing. If you run an experiment, the experiment is pointless unless you can measure something or draw conclusions from it. Maybe our consciousness is a gauge on how good this universe is and what could be improved.

That's basically my philosophy on the "religious" level of things I can't possibly know myself, but that might be extrapolated from what I'm seeing around me.

In my opinion, there's a difference between the virtual reality in my mind and the physical reality around it, and there's a difference between the informational reality in a simulation and the physical reality around it. Maybe we can bring the two closer and closer together, but I just have this feeling you need a black hole to power a new physical reality. Black holes have more energy than anything else we can observe (other than the Big Bang), and the length of time they exist is comparable to the lifespan of this universe as we understand it. They have informational properties that continue to perplex physicists, but from the outside it looks as if they should have the highest quantity of information possible for the physical space they occupy. To me that sounds like an incredibly energetic superconducting supercomputer implosion. It looks like exactly the kind of thing a new universe would need. What probably happens inside is that spacetime stretches and rips and so much energy is released that basically everything goes beyond melting and new space, time, matter, energy, etc are formed. The energy that falls into a black hole has nowhere to go, so instead of just releasing new matter and light when all those particles smash together in the center, like in a particle accelerator, it creates new space and time as well. But we can't see that from out here because of the event horizon. To put it another way, if a Big Bang happened inside a black hole's event horizon, we wouldn't hear about it, because the news would never reach us.

To put it more familiarly, a black hole is a natural particle accelerator big enough to create not just Higgs bosons and other particles, but also new spacetime fabric. Spacetime fabric has informational and computational properties, laws of physics, as we see in our own universe, and so that might explain "where all the information goes." It goes into building a new computer-universe.

I'm not the first person to say this by any means. Physicists have been writing papers on this "black holes are universes" idea every now and then for decades, though the particular version of the story I've written above is my own. I'm not a physicist and don't need anyone to take me seriously, so I can free-associate.

It's a world view with a lot of guessing, but it makes sense to me.

Monday, June 17, 2019

Keeping the Bigger House

There are many ways society could operate. Whichever way it does operate, that set of conventions, rules, practices is biologically speaking an environment. Unless the system is extraordinarily comprehensive, some people will thrive wonderfully and others will falter or die. This is not necessarily because some people are overall superior and others inferior. Much of it is simply because some people naturally align well with the "artificial environment" of society's rules in that era, and others don't align well. In times past, and in some ways still, if you were a woman, there were so many ways it would have been drastically more difficult to thrive, through no actual fault or inferiority of your own or of being a woman, but rather through the fault and inferiority of the conventions, rules, practices of that era.

This notion generalizes. Increasing numbers of us understand both rationally and intuitively, from experience and education and intelligence and open-mindedness and empathy and compassion and just listening, that this principle of diversity versus conformity covers many genders and ethnicities and traits and persuasions. What the core realization means is that in any human era so far, if you are very successful, that is always, inevitably, in part because traits you did not choose line up well with the conventions, rules, practices of your time. You are at an advantage compared to others, and it is impossible to calculate, or perhaps even put a solid statement to, how much of that advantage is "fair" and how much is "unfair."

This is one reason progressive taxation makes sense. To the extent you are successful—and in our era this usually means financially—the rules of the social game are clearly in some kind of alignment with you. If this is difficult to agree with, think of it another way. There are rules that, were they changed, would reduce or nullify your success. You are benefitting or even depending on those rules. It is unlikely those rules are benefitting everyone. It is likely some will find them unfair, inadequate, even destructive. A person who feels perfectly at home working in upper management at Exxon is probably very well-off financially, but this is partly because fossil fuels are legal; if fossil fuels were illegal, that income would either dry up or turn illegal. The same person may be able to find another job that pays just as well, or may not. If they can, this is arguably because of other rules, conventions, practices. It's easier to see what I'm saying in the negative. Take away what your success depends on, socially. It didn't have to be there. The social environment could be different, and someone else would prefer it that way. And so paying tax for a system that's working very well for you makes sense, because you didn't create that system that's working very well for you.

I prefer not to use terms like "fair share," because they can be subjective. The logic I'm presenting is, I believe, more indestructible than some particular wording chosen to make an impression. The logic rests on one assumption: the laws could be different. They could. They have been different already. They are different from town to town, state to state, country to country. I'm basing logic on indisputable fact, because that's how logic works best. To put it in perhaps more concrete and immediate terms, a billionaire might not be a billionaire in another country. They might have been murdered, they might have died of typhoid. They might have taken out a loan to start a company and been robbed of it and never gotten another loan, and put in prison for not being able to pay the loan back. Or maybe education in that country would have made the population aware that what the prospective billionaire knows how to make and sell isn't so good for them. What I'm saying is not hypothetical.

Now I don't want to minimize anyone's efforts or contributions or talents or planning or good judgment or anything else. We need these, you need these, I need these, and I appreciate, strive for, and admire these when someone's doing them better. What I do want to do is speak in a kind of basic, scientific, almost mathematical way about the structures in society that don't necessarily have to be how they are. These are the kinds of structures that change over time, even that we agitate to change urgently, because some of us, most of us, notice ways that society could probably be working better, whether that's on a small scale or a large scale. By recognizing that every social convention, rule, or practice has already evolved into place, that it once took another form, and that earlier still it didn't exist at all, we free ourselves from the notion that the way of the world is fixed, permanent, immovable. The laws of physics seem to be. The laws of humanity seem very much not to be.

Who are you allowed to kill?

No, really, I'd like you to answer out loud. Who are you allowed to kill? The answer to this question tells you something direct, something about law and tradition. I'll give you a moment.

We eat "calamari" and "pork," yet it would be unconscionable in our culture to eat a dog.

Why?

Could the difference be traditional and emotional? I've chosen pigs and squid (close relatives of octopuses) because their intelligence and even empathy are comparable to a dog's. If we favor dogs because they fawn on us, well, we must admit that we have bred them to fawn on us, so that's ethically problematic.

Even the most basic law that everyone across cultures can agree with, some variation on "Thou shalt not kill," is widely disregarded, and lawfully. You can smash a mosquito. You can wash a spider down the drain. You can agree to have your pet put to sleep. In most US states, you can still abort a fetus. In wars, killing seems to be not only tolerated but encouraged; "seeming" is important, because soldiers who wantonly kill civilians are responding to that seeming. Serial killers are still put to death in many places, and as you look around the world, there are many more reasons seen in cultures as acceptable or necessary conditions for putting someone to death. Killing is sometimes needed for self-defense. No one mourns cancer cells that are cut out, blasted with radiation, or poisoned. For economic survival and gain, it's usually seen as fine to cut down trees, and it's fine if this means animals depending on them die. I say "fine" intending a pun, because usually there aren't even fines. By and large, there aren't many good laws against driving an entire species to extinction. Many laws seem to support heightening the chances of humanity's extinction in the next hundred years or two. This one rule "everyone agrees on" around the world is not exactly a strict and perfect rule, is it?

So who are you allowed to kill?

What's the best rule?

The particular way it's expressed makes a difference, and the conditions around it make a difference. And those details will favor some living beings and disfavor others. And whatever the exact, more detailed form of the rule is, it'll be a rule humans came up with, one that was unspoken, unfollowed, even once unthought, and then it existed in different forms, and could exist in yet other forms, some of which may be kinder to human society and life on earth.

Whatever the rules are that make you possible, that make your success possible or likely, or impossible or unlikely, could have been another way.

This is a basic truth I think we must agree on if we are to be sane and always moving, at least a little bit, toward a better, healthier, more equitable, and more sustainable civilization.

We need to think freely about what can change. Thinking about changing anything does not harm anything. It's thinking. That's what's good about it: it's in a bubble. What you do in thinking is a little, personal sandbox experiment. You can derive insights without moving materials around or threatening anyone with the wrong kind of changes for them.

Many people seem to think that thinking—and its cousins speaking and conversing and debating and making and experiencing art—does nothing, and is therefore a waste of time. Incorrect. All are strictly necessary ingredients of social and global progress. And much of their value is exactly their virtual nature, their safe, sandbox quality. We become insulted when we don't like the way another person is thinking, but this disrespects the biggest value of thought: it's in a sandbox, it's experimental, it's processing, it's simulating a path to anticipate mistakes and avoid them if possible. There's nothing wrong with turning up mistakes in simulation mode. That's exactly what it's for.

We should think differently about thinking, but let me put that aside. What I'm really fighting is status quo bias.

Status quo bias is pernicious. Preferably, yes, big changes happen first as little experiments. It is far more dangerous to gamble with millions of lives than it is to run some careful local experiments and then try to scale them up if they're successful. Status quo bias is helpful in preserving what works. And so it deceives us. It has us thinking that there's something better about tradition itself. It has us thinking there's something worse about saying or doing something that seems weird. It has us convinced we've all got to follow all the rules at all times. It even has us paranoid about the unspoken rules, which remain unspoken because they are the most contingent and, by staying unspoken, can change readily when the circumstances change. Status quo bias, respect for and pride in tradition, makes it harder for us to see the truth in what I've been saying above. And it's one of the biggest forces holding humanity back. Status quo bias is more dangerous than big corporations. There is nothing necessarily wrong with a big corporation. In a more equitable world, we could imagine that there would be nothing wrong with voting using money. But status quo bias, left unchecked, will always have serious downsides.

That is, until we have found the perfect society, or even one that's good enough. And we absolutely have not. And if you are content with it, I am not. I do not feel the present era is even close to something I would feel content with myself. The world of today is not good enough, not because it isn't amazing, stunning, and truly awe-inspiring, but because it is unfairly and unnecessarily destroying lives everywhere we look. These lives are not just human, not just animal. They include future lives. That is not good enough. Yes, everyone does die, but no, conventions, rules, and practices that unfairly and unnecessarily destroy lives are not good enough. Does my reasoning sound circular now? It is, a little bit, in the way I've presented it, without the endless examples that could be found, but I don't think it's really possible to disagree with it. That the present state of affairs is not good enough is self-evident. And if it's good enough for you, then I'm happy for you. This isn't about resentment. This is about destruction and changing conventions, rules, and practices in the face of it, to alleviate the destruction, to preserve life at large, to make it possible someday to settle on other worlds without the guilt and embarrassment and failure of having destroyed our own.

We do not really deserve to live on another planet unless we can garden this one beautifully, which to me means allowing lots of wild spaces and figuring out what to do with the different forms of life so that we may coexist. If we can't coexist with ourselves, how would we coexist with aliens? Asking about aliens may seem idle today, a sandbox question, a question in a bubble. But maybe we eat "calamari" and not dogs because cephalopod intelligence is the most alien intelligence on earth, compared to our own. And it is just that traditionally we do not recognize or care about this. If we survive, one day this question will not be in a bubble at all.

In some ways it isn't in a bubble and never was. When we think and discuss, we run a kind of experiment that, over time, can turn surprisingly constructive. Suspend disbelief with me for a few sentences. Maybe now is always a good time to solve problems we see in that bubble, when we pause and draw a circle around us for a moment and stop acting, and think; when we trace possibilities, imagine, fantasize, hypothesize, use counterfactuals, play Devil's advocate, support the underdog view, offer what hasn't been mentioned yet, discuss, argue, go into science fiction or empathy or compassion or art or science or experiences or ways to change rules that we considered permanent and effective.

You think I'm not being serious because I'm talking about playing with ideas. You think the most difficult problems are solved by refusing to goof off. You think there's something wrong with imagining changing an important law. The evidence on how effective solutions are found suggests otherwise. If the problem has already been solved, you ask an expert or perhaps metaphorically go to a library. That's a serious business. If the problem is unsolved, though, you probably have to loosen up and play. If you're feeling too moralistic and standard and full of shoulds, you'll miss what's wrong with morality, standards, and shoulds.

You still think that's small, don't you?

What if nothing is bigger?

And what if playing frees you up to think, with a little less distress and avoidance, about serious problems that do require more of us to think and collaborate? Alfred Hitchcock made horror movies, but on set, he called them comedies. When he and the cast felt stuck on a problem, he'd point out that they were being too serious, and he'd tell an irrelevant but entertaining story. People who worked with him related that after this, the group would mysteriously find it easy to solve their problem. Creativity researchers today understand that this is actually normal. They would expect it. When people have severe illnesses, they joke about it. I saw it regularly during my own months of chemotherapy for cancer. When I went in for surgery, I was scared, but no joke from friends or family was wrong. At funerals, laughter can be uplifting. This isn't all frivolous. Some of it actually solves problems, or gives us the strength to approach them.

We likely won't vaporize earth, but it's easy to imagine—realistically imagine—scenarios that render it so inhospitable that we wouldn't visit it even in a space suit.

These are basic housekeeping concerns we cannot ignore as a species.

lundi 22 avril 2019

The Case for One Global Health

When capitalism works well, participation in each transaction is voluntary. Where it breaks down, participants feel the transactions aren't voluntary, or aren't fully voluntary. The one paying and the one taking payment as compensation should both always have a choice; allowing us traders to choose among a reasonable number of informed options, including the option not to exchange right now, is what makes a market coherent.

There are times when choices are spurious. For example, the choice between one health insurer and another is not constructive. What matters to a patient more than anything is the choice of doctor and treatment (along with their availability and quality), not the choice of insurer. Indeed, a diverse market of insurers fragments the pools of doctors, treatments, and patients. Even when looking to the immediate goal of insurance, the more fragmented the pool of the insured, the less the costs can be smoothed out across the population. The worst health insurer is a small, local one with few doctors and dictated treatments. Such an insurer has little choice but to pay out as little as legally possible.
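The pooling arithmetic is easy to simulate. Here's a toy sketch with made-up numbers (the probabilities and costs below are hypothetical, chosen only to make the spread visible), comparing one big pool to the same population chopped into small ones:

    import random

    # Toy model of risk pooling. Most members cost little in a year; a
    # few cost a great deal. Figures are illustrative, not actuarial.
    random.seed(1)

    def annual_cost():
        # Hypothetical: a 2% chance of a $100,000 year, else $1,000.
        return 100_000 if random.random() < 0.02 else 1_000

    people = [annual_cost() for _ in range(100_000)]

    # One big pool: the per-member cost is stable and predictable.
    print(f"big pool: ${sum(people) / len(people):,.0f} per member")

    # The same people split into 1,000 pools of 100 members each.
    pools = [people[i:i + 100] for i in range(0, len(people), 100)]
    averages = sorted(sum(pool) / len(pool) for pool in pools)
    print(f"small pools: ${averages[0]:,.0f} to ${averages[-1]:,.0f} per member")

The big pool lands right at the expected cost. The small pools scatter widely around it, and every unlucky small pool has to make up the difference somehow: higher premiums, denied claims, or bankruptcy.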

Do you see the simple idea? Some options increase freedom. Others secretly decrease it. We need to focus on finding and maintaining options that increase freedom. Sets of options that ultimately reduce choices and coerce traders should be eliminated.

I have no particular prescription for how many health insurers would be the right number, other than that, based on the above, one would seem to be the best number to start with, and if one isn't enough then it could be increased until it's enough.

We don't have forty thousand internets. We have one. (The local instances of the internet (intranets) that are not connected to the global one may work the same way, but they are not easily confused with the real internet. If I navigate to Wikipedia and Google and Amazon and get nothing, but I seem to have an IP connection active, it's fair to assume I'm not connected to the internet.) Having forty thousand internets would not be better than having one. It would be a dramatic downgrade. It would reduce options for navigating and sharing information to a sliver of what we enjoy now.
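The downgrade can even be counted, crudely. Here's a sketch (numbers purely illustrative) that counts the pairs of people who can reach each other in one connected network versus the same people split into disconnected fragments:

    # Reachable pairs in one network of n users vs. the same users
    # split into k disconnected fragments. Illustrative numbers only.
    def reachable_pairs(n):
        return n * (n - 1) // 2

    n, k = 40_000, 40                       # hypothetical population
    print(reachable_pairs(n))               # one network: 799,980,000
    print(k * reachable_pairs(n // k))      # 40 fragments: 19,980,000

Splitting the same population into 40 fragments divides the reachable pairs by about 40. Forty thousand networks of one user each would leave zero.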

So why not have one health insurer? If one isn't enough, try two. If two isn't enough, try three. Having one insurer, or very few, would seem to maximize the number of doctors available to everyone. Then we can focus on paying medical staff for eventual outcomes and quality of care, rather than for the technological and capital intensity of the treatments.

There's nothing magic in what I'm saying. It seems mathematical, but surely it's abstract and short on detail, and I could be deceived. Is the abstraction true to life?

The Faux

Any time someone lectures you about "the real world," they're making things up. What's immovable is a question you should be answering yourself. "The real world" is not what exists materially before us, without humanity. It's what we collectively create. As a map of history would show, you are here. It's your turn.

The WWI-era philosopher Ludwig Wittgenstein wrote:

"Es ist offenbar, dass auch eine von der wirklichen noch so verschieden gedachte Welt Etwas—eine Form—mit der wirklichen gemein haben muss."

Translation: "It is clear that however different from the real one an imagined world may be, it must have something—a form—in common with the real world."

And also:

"Um zu erkennen, ob das Bild wahr oder falsch ist, müssen wir es mit der Wirklichkeit vergleichen. Aus dem Bild allein ist nicht zu erkennen, ob es wahr oder falsch ist."

Translation: "In order to discover whether the picture is true or false we must compare it with reality."

We are here because we change what's here. Even our seeing is a bit of change.

Reading the above, my brother asked, "Without reality, what are you calling truth or fact?"

And Wittgenstein asked the same thing. But before I return to him, I want to talk about relativism. Post-modernism struggles immensely—and by ensnaring itself in post-modernism, current feminism also struggles in one way I hope we can easily alleviate—with what was expressed so well in the Japanese classic movie Rashomon: our accounts of life all differ, even our experiences of the same event. Yet we still do best to recognize, to admit as a more-than-provisional assumption, a postulate, a precept, an axiom—to take as a given—that external, material reality is also there. We are often surprised by it, by the external, because we did not create it. That is a kind of proof, an empirical one, or as close as we're likely to get, that we cannot be solipsists: surprise is. Surprise is proof there's an outside.

Just surprises.

Surprise is information that doesn't come from us. Not the conscious us, and so not the core us in that moment. It's what we listen for and attend to. It tells us reality is out there, not all in here. Maybe that reality resides in our own brain tissue, tissue whose activity isn't part of our consciousness. Often it goes deeper into the distant and unknown and unfamiliar. Others introduce entire unseen realms to us. Many of those exist even without the others guiding us. Objective reality is the common denominator of all our subjectivity and experiences, the reality that precedes their reality and is in turn influenced by their reality. Modern physics contests this in very small but universal ways that are, I believe, fashionably misconstrued by the humanities into all kinds of false relativities. Subjective and objective are both "real" and of spectacular importance, but in different ways. We care about the subjective through empathy and compassion and art and expression. And when we say the subjective is "real," we are aware that it informationally and energetically exists. How you feel is precisely how you feel; that, like you yourself, exists in the universe.

Feminism's emphasis on turn-taking, which is beautifully and disturbingly illustrated in the conflicting testimonies of witnesses in Rashomon, allows us not only to illuminate our inner worlds and see each other's, but also, we should admit, to triangulate in on what's actually happened and examine together what's likely to happen. Wittgenstein asked the same thing about reality—the same thing as feminism and post-modernism, and my brother and the parable of the elephant and Rashomon and every courtroom, and the Persian story of a trusted adviser who tells everyone in a dispute that they're right. The radical philosopher wanted to know how to call something "truth" or "fact" without reference to the outside. Is that possible? Is it just subjective?

For a while, he was obsessed with tautologies and contradictions, statements that logically cannot be false (the number 357 is the number 357) or cannot be fully true (the Titanic is unsinkable; it has sunk). These examples show that for two particular logical categories, you can tell whether a statement is true just by looking at it. The structure of the statement, to some extent, rather than its content, tells you whether it works. No one gets a lot of traction disputing conclusions about statements like these. 357 is 357, and it's also anything else 357 is, like 101 + 256. The Titanic can't have been unsinkable. Maybe an insight or pattern could be extracted from them.
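His obsession can be mechanized in a few lines. A toy sketch of mine (not Wittgenstein's notation): enumerate every truth assignment and see whether a statement's shape alone settles its truth.

    from itertools import product

    # Classify a propositional formula by brute force over all truth
    # assignments. A tautology is true in every row of its truth
    # table; a contradiction in none; everything else is contingent.
    def classify(formula, n_vars):
        rows = product([False, True], repeat=n_vars)
        results = [formula(*values) for values in rows]
        if all(results):
            return "tautology"
        if not any(results):
            return "contradiction"
        return "contingent"

    print(classify(lambda p: p or not p, 1))   # tautology
    print(classify(lambda p: p and not p, 1))  # contradiction
    print(classify(lambda p, q: p or q, 2))    # contingent

The third line is where the trouble starts: its truth can't be read off its structure at all.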

Eventually, he gave up on that. Most statements can't be reduced to one or the other, tautology or contradiction. That isn't necessarily obvious without following the links of word definitions, but maybe it feels easy to believe. Here is a similar idea, a surprise that would seem obviously false. All logically solvable problems can be solved by a relatively simple piece of code, which was written in 1959, only eight years after Wittgenstein's death. It relies on one core feature of logic. To simplify a little, it only takes one special rule of logic to solve all logical problems. That's pretty unintuitive.

At least, until I supply context about binary numbers: we know that sense information, like all information, can be reduced to binary numbers, and binary operations are generally very simple, yet they can accomplish anything a calculation can accomplish. Still, all logical problems would include all present and future mathematics, everything any piece of software or machinery could ever do, and perhaps anything any of us could ever figure out without simply guessing. This extraordinary ability of one single logical rule is, I'm told, provable (there are different accounts of how many problems can be solved this way in theory), but in practice most problems, even with today's computers, either cannot yet be formulated clearly and specifically enough for the program, or else it would take more than the lifetime of Earth or even the universe to produce its answer. This program, in its original form, is no longer in use, but its relatives have been given jobs simulating human problem-solving and automatically verifying mathematical theorems. It and its creators are widely credited with kicking off the field of artificial intelligence.
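I won't name or try to reproduce that program here, but the flavor of one rule doing all the work can be shown with a small relative of the idea: propositional resolution, a single inference rule that settles yes-or-no logic questions by hunting for a contradiction. A minimal sketch:

    # Propositional resolution, minimally. Clauses are frozensets of
    # integer literals: 1 means p1, -1 means "not p1". This is an
    # illustration of one inference rule doing all the work, not the
    # historical program mentioned above.
    def resolve(c1, c2):
        """All resolvents of two clauses on complementary literals."""
        return [frozenset((c1 - {lit}) | (c2 - {-lit}))
                for lit in c1 if -lit in c2]

    def entails_contradiction(clauses):
        """Saturate under resolution; True if the empty clause appears."""
        clauses = set(clauses)
        while True:
            new = set()
            for a in list(clauses):
                for b in list(clauses):
                    for r in resolve(a, b):
                        if not r:        # empty clause: contradiction
                            return True
                        new.add(r)
            if new <= clauses:           # fixpoint: nothing new follows
                return False
            clauses |= new

    # Does {p, p implies q} entail q? Add "not q" and hunt for the
    # contradiction; then try the same with "q" for contrast.
    kb = [frozenset({1}), frozenset({-1, 2})]
    print(entails_contradiction(kb + [frozenset({-2})]))  # True
    print(entails_contradiction(kb + [frozenset({2})]))   # False

One rule, applied until either a contradiction appears or nothing new can be said. Everything else is bookkeeping.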

So a search for simplicity here, earlier, in the cacophony of World War I, in statements made by humans with words, wasn't necessarily foolish and naive, though it was unusual. Arguably, it's always brave to challenge the obvious in a search for better principles.

This philosopher who began writing in the trenches goes on to say:

"A priori knowledge that a thought was true would be possible only if its truth were recognizable from the thought itself (without anything to compare it with). In a proposition a thought finds an expression that can be perceived by the senses. We use the perceptible sign of a proposition (spoken or written, etc.) as a projection of a possible situation."

It seems perfectly unsurprising and not worth saying, once we slow down enough to absorb his Captain Obvious meaning with this line (his writing is not all so easy, and I confess I haven't read much of it). But he was challenging and documenting the obvious in search of better ways. He wanted to state the irrefutable. He was trying to record what must be, and branch out from there with logic, like an oak growing from an acorn. We can't really describe anything without referring to a situation, in other words to some potential for objectivity. The words themselves are meant for ears. Ears sense sounds. Sounds are energy waves in molecules. We mutated to speak because carrying and throwing world objects in sounds helped us survive.

When we assert what happens "in the real world," we are painting a picture. It's a kind of stereotype of the world. Sometimes that picture looks so realistic that we forget it's a picture. We forget how deceptive even a photograph can be, and how limited it always is. When most of a population is metaphorically forgetting the difference between a photograph and the entire future world, systematic errors can infiltrate apparently tough-minded views and practices. These become self-fulfilling and self-perpetuating prophecies. We fill prisons with non-violent citizens in a War on Drugs that still doesn't fix the issue. In the real world, you've got to be tough. We subsidize coal instead of renewables. In the real world, coal is what works. We elect a demagogue via the Electoral College, an institution created to prevent the election of demagogues. In the real world, glorious tradition is smarter than the people, and the country would fall apart without the Electoral College.

Realism is practical. Faux realism is the opposite.

It's worth spending time and effort to distinguish one from the other—some of us need to be doing this. Why not most of us? Is that a fantasy? Is our bravery for adventures in realism improving or getting worse? Where's the data? How many of us are checking the facts and the logic, and how are we doing at that, anyway? Almost counterintuitively, the difference between realistic, fauxistic (hehe, I couldn't resist twisting the word), and impractical often comes down to effort. How much effort are we willing to make, ourselves, to develop ideas that are exceptions to the apparent rules? Effort is an ingredient in the practicality recipe. But we'd better not assume we know the extent of that ingredient just from how we feel, or how motivated the people around us seem, or what impression we get from the news. Political will can appear suddenly. It was already there, wasn't it? Was a brick wall in the way? Perhaps it was only perceptions of missing "realism." In the meantime, a realistic change that isn't developed won't look so real, even if it's practical and sustainable.

Realism or faux realism?

A true story can sound made up, and fiction can be utterly convincing. How do we know? Are we going to trust a gut feeling? Shouldn't we be scientific about this?

A woman can't become president.

To me, that sounds plain crazy, but it's still accepted by many. Why not? Let's look at the empirical probability, so far, that this statement will hold up, taking the simplest approach: 45/45. Of historically observed US presidents, all 45 have been guys, and all 58 elections have gone to guys. 45 out of 45, or 58 out of 58, take your pick. That's 1! In probability, 1 means certainty! It's math! History repeats itself! All that experience! However, now let's state the equally obvious. US presidents are not the only presidents, other nations have female presidents, and eventually this trend is almost certain to change here as well. I won't even get into the various reasons why we might specifically need a woman to be president for once, or at least a non-cis man.
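Here is the streak arithmetic, next to one classical correction. Laplace's rule of succession adds an imaginary success and an imaginary failure, which is an old way of admitting that a finite streak is a bet, never a certainty:

    # Naive frequency vs. Laplace's rule of succession, applied to the
    # 45-for-45 streak from the text.
    def naive(successes, trials):
        return successes / trials              # 45/45 -> 1.0

    def laplace(successes, trials):
        return (successes + 1) / (trials + 2)  # never exactly 0 or 1

    print(naive(45, 45))    # 1.0: "certainty"
    print(laplace(45, 45))  # ~0.979: a strong trend, not a law

And even the corrected number is generous, because it assumes the next draw comes from the same urn as all the others.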

A non-Christian can't become president.

These are bets, not realities.

Recognizing that they're bets is the first step.

So far they've held up in one country, but the future is not the past.

Both ordinary examples are faux realism. They sound compelling. Cold hard truth. Like it or not, reality doesn't care how you feel. But these rules aren't what they claim to be. They're historical patterns, actually, not laws of nature, and not even rules. In all likelihood, other factors will soon be more influential than the factors making these trends as consistent as they've been.

Here it only takes a moment to see that these are more trend than fixed truth. In many everyday situations, though, no one makes even that minimal effort. As we get to know the ways of the world, we think more automatically about the familiar. It becomes more of a feeling than a thought, even. We just know. This tendency to think we know from experience, and stop thinking, is called "automaticity" by psychologists, and it's one of the biggest strengths and vulnerabilities of experts. And it's a reason newcomers can sometimes be right in a big way, and naivete isn't always bad.

These non-expert strengths have been given names like "beginner's mind," lack of "functional fixedness," and simply "openness." Anyone can notice something and report what they see. These democratic strengths are a reason that groups allowing all members to take turns and speak up, even newbies and people seen as the least competent, become more intelligent groups, groups that solve more difficult problems better. We value democracy highly for a solid reason that can now be measured. It works better than anarchy, aristocracy, or dictatorship. What we desperately need is to keep searching for the most effective democratic methods. We can't stop. We're not there yet.

Whether we're experts or not, each situation is unique. Oh yes, we'll recognize patterns. Some will be subtle but crucial. Some will be blazing red herrings. Some, we will have to depend on others to tell us about. Events with multiple actors quickly get complicated. Without warning, it will take so much more effort, then, to separate reality from "real world" opinion. Often we don't bother to try. We don't even realize we aren't bothering.

That's important to know and remember, isn't it?

And then there's this effect that's so common that anything else becomes strange and wonderful. Everyone is so determined not to run into any brick walls that they'll feel sure that no one else should be trying to knock down what looks like a brick wall, either. They don't just bet that it won't move; they often want to police their bet that it won't move. It's normal to hear people shaming each other about "the real world" (each word in this usage, "the," "real," and "world," strikes me as ironic), even when the supposedly unavoidable truths are clearly trends and bets specific to the time and place. It doesn't seem to matter. The photograph is the entire future world. Didn't you know that? Any difference doesn't matter, because there isn't one.

There are different trade-offs to different approaches. If we know which approach we're taking and what its strengths and weaknesses are, we're more likely to choose a good one for the situation. We take a turn. We let someone else take one. And they let someone else. And... Now we've completed the circle. Let's admit that trends and bets are critical in life, just as vision is critical—even though it's imperfect and can be hacked by optical illusions. But let's call trends and bets what they are. Let's not call a map a territory.

You are here on a map. And you are out in the territory. And you change the map you express, and you even change the territory. Right now, you're breathing and radiating heat and changing it. Just your heat changes the world around you a little bit. And you are not alone, and the map is not all yours, and the territory is not really yours at all. We share a piece of it, but it exists. We cannot capture it. Our words do not replace it.

Fairness is Participatory

Some revolutions that have come out of studying games: probability theory (which leads to statistics, then all modern experimental science), game theory (clarifying economics and evolutionary processes; may also have helped defuse the nuclear arms race), and substantial parts of artificial intelligence. I simply do not believe that games are not important!

It isn't as if the insights above couldn't occur otherwise, but I notice a pattern over the centuries: games are useful!

For one more example, the idea of fairness itself is virtual. Are we all identical? No!!!

So, hold on, hold on. Let's pretend for a moment that we are interchangeable. This is a hypothetical space we're in now. We are not interchangeable, or only in some ways and partially. This is a strong abstraction: there are people in your life you would not allow to be replaced with a random stranger. And imagine how long your workplace would buzz along effectively if you all kept getting swapped out with people off the street. But supposing we were interchangeable, then what should the rules be, and how do things balance out?

That's precisely how a game works: the rules don't care about your name, genetics, or personal history, and you could step out and someone else could take your seat. The rules are tuned to be playable for many people. And that's also the beginning of fairness. Seeing different scenarios as the same and different people as interchangeable requires a hint of make-believe: all scenarios and all people are unique. Fairness can be seen as a flavor of virtuality built into most of us, a half-baked and fallible instinct, one that depends critically on a hypothetical, on unrealities of sameness, on falsifiable abstractions.
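The make-believe is easy to see in code. A toy rule set (a Nim-like game I invented for the example) in which the legality of a move depends only on the public state of the game, never on who is sitting in the seat:

    # Identity-blind rules: what a player may do depends on the state
    # of the game, not on their name, genetics, or personal history.
    def legal_moves(stones_left):
        """Nim-like rule: take one, two, or three stones, if present."""
        return [take for take in (1, 2, 3) if take <= stones_left]

    # Swap in anyone at all; the rules answer the same way.
    for player in ("me", "you", "a stranger off the street"):
        print(player, "may take:", legal_moves(stones_left=2))

The indifference is artificial, and that is the point: fairness begins where the rules agree to forget who we are.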

It also happens to be very useful in society. Over time, civilization tends toward fairness. The idea is most useful after we understand it. Fairness is a thing we co-create to encourage participation in a better society.

jeudi 28 mars 2019

Types of Reality

There are different kinds of reality.

Sensation. (Measurement.)

Memory. (Record.)

Meaning. (Interpretation.)

Expectation. (Prediction.)

Trend. (Law.)

Externality. (Objectivity.)

Universe.

I imagine these as roughly concentric circles, innermost to outermost. We can know the inner ones most exactly, but we can access only the smallest dot of reality directly from the center. As we move out through the circles, we pass through layers to larger stretches of what's real. The picture gets bigger each time. Every step is fallible. Outside all the circles, we have what actually exists regardless of us, yet it also intimately includes us. The universe contains not only all matter and energy, but also all information, which includes all interpretation and experience.

This is kind of a spur of the moment thing, so maybe the circles could be adjusted.


Life on the Shrinking Preservation of the Social Contract

My mother likes to say that the government doesn't make money. This makes sense from a certain standpoint. We pay taxes somewhat under duress, and it's difficult to fire a government when the job isn't done right. They aren't selling their services on an open market.

From another standpoint that acknowledges her objection—and I mean this as a pun—the statement may be wildly inaccurate. You'll see what I mean by the pun in a minute.

Let's imagine something very normal: buying milk at the supermarket. The transaction pays the supermarket and the farmer, and also the government through a tax. Money was made. Who "made" the new value? It would seem to be the farmer most of all, yet the farmer doesn't create milk. The farmer takes it from cows. In fact, the farmer probably pays someone else, or buys a machine. Although there is plenty of work behind the words I just said, the farmer depends entirely on the cows for milk. The cows are also raised and cared for—ideally well, but sadly often not. Either way, there is an outline of a reciprocal relationship. Cows, meanwhile, don't make milk from themselves. Cows make milk from grass. This grass doesn't grow from itself, either, but from soil and sunlight. We could step further back into how the sun pushes out energy, but let's stop there.

No one "makes money" alone. That would be impossible. Everything is a transformation of matter and energy, and everyone takes a cut of the energy's influence. Governments provide a stunning number of services and may be the biggest providers of services in the world. In this sense, at least, the government "makes money."

The more interesting practical and ethical question is about opting in and opting out. Let's talk about something strange and unfair and privileged about me for a moment, something I can thank my mother for most of all. As of today, I have three citizenships: American, British, and French. The nice thing about the latter two is that they also carry EU citizenship, so, technically, if I spoke a local language and was feeling brave enough and competitive enough, I could work in any of those countries like anyone else there. In many of them I could even vote quite quickly, being a resident. So for me, even though all three passports are currently expired and need renewing, there is this wonderful sense, for which I am enormously lucky and grateful, of having options.

This is something I would like for every person on Earth. It also comes back to the question of government services. There is research now saying that "big government" actually correlates with happiness and healthier societies. For decades "big government" has been a deep insult on American television. But whether the research has the whole picture or not, we all do intuitively understand, I think, the value of efficiency and effectiveness. If big socially generous governments like those using the Scandinavian model do help support healthy and happy societies, that is not because they are big, but because they are effective and efficient and relatively uncorrupt. They may be "big" in the sense of ensuring many services, but they are not "big" in the sense of being wasteful or bigger than they need to be.

But let me return to the original question and my point. We often rightfully object to services ensured by the government that we do not ratify or even approve of ourselves. The United States split from Great Britain not, perhaps, most of all because of gross humanitarian misconduct, and not even because of an expensive tax, but because of the idea of a tax combined with the insult and impracticality of denying colonials representation in Parliament. Had representatives been in Parliament with influence, they would have debated the merits of a small tax on East India Company tea to help pay down war debts, and whether they won that debate or not, the tax would not have erupted into a revolution. Not that year, probably not that century. Eventually, yes. The core dispute was over representation and services, especially at great distance.

Some seventy years later, Henry Thoreau refused to pay his poll tax, not because he wanted to stir up trouble, but (if we believe his own words) in spite of his desire to be accommodating and a good citizen. The consequences of this act, of his articulate case for "civil disobedience" and his coining of the phrase used by Mahatma Gandhi and Martin Luther King, Jr. and so many others, were felt around the world and are still felt, but we haven't collectively figured out, strangely enough, what this might actually mean for tax.

The fact is that by living in a country and assuming its citizenship and paying its taxes, we pay for membership in a club and the services it provides with the membership dues and subscription fees of taxes. This is at heart inescapably a social contract.

The trouble is that we are not free to shop around, and so the country of our birth absolutely monopolizes us. When I say "free" I do mean at liberty. You may acquire a citizenship in another country, but rarely easily or quickly. I became an American citizen at 13, having lived here for nearly 10 years. English is my native language. It took about half that time, from entering Kindergarten and being laughed at for my British accent, to learn to speak with an accent more or less indistinguishable from a native's. You can still hear my British accent in my American accent if you listen, and my American in my British, but that's because I speak both all the time and they bleed into each other. If I focus on one or the other for a few weeks, surrounded by others doing likewise, the noticeable influence of the other disappears. As I type now, I'm thinking in a British accent that is not particularly British sounding, in the sense that it is very neutral and unexaggerated. It's just how I learned to speak. Language partly defines us, and it's one of several factors that impede shopping around for the best government "club."

Switzerland is an outlier. It has four national languages. One reason Switzerland's highly capitalist, directly democratic, and relatively socially supportive system functions well is allegedly that, the country being so small, citizens have been able to move a few miles from one canton to another to shop for the best services. In response, the services—the benefits of "club" membership—have improved across the board.

If, like Thoreau, you or I were to go and live in the woods, then we would benefit from few of the overt services of any government, though we would, in a way impossible to calculate, benefit from its protections.

This is where I invite you to consider the terms of subscription to government services, and whether a person should be compelled to belong only to one government. In my opinion, the answer to the second question is "No." My multiple citizenship does not make me superior or immune, but it does make me less likely, I think, to see war as a practical solution, or to see economic hegemony as excusable when it is exercised over more naturalistic countries and societies, or over ones still industrializing or trying to move directly to a post-industrial economy. We pay so little for the foreign goods we buy so cheap not because they are worth little but because we have the bargaining power to pay little. Multiple citizenship makes this realization bone-deep rather than simply intellectual. That citizen-of-Earth feeling persists, even, I think, if your countries are all "first-world" ones.

We need to make the phrase "social contract" meaningful. A contract is no contract if you are not at liberty to decline to sign it. At present, hunter gatherer societies have little to no protection against the commercial, environmentally degrading encroachments of companies spanning multiple nations. But how many of the employees of those companies work at several companies? How many of them have several citizenships? Or only the default one of Earthling, like my mother when she left Communist Czechoslovakia and the Soviets took her passport? How many of them have time to see what they do to natural landscapes? How many of them have been asked whether they would like to sign the "social contract"? The choice was made for them and they behave as if they have no choice.

If hunter gatherers were protected, if their roaming territories were set aside for them as part of an inviolable contract, then we would have custodians of wilderness. We would have another reason to preserve wildlife: humans are living there and want to live there. It would only be another step from that agreement to realize that animals too can decline to sign our contract, yet should be allowed to live. If you are not at liberty to decline to sign the social contract, it is not a contract.