Tuesday, February 23, 2021

There's a famous expression, Garbage In, Garbage Out. GIGO. Every programmer knows it.

But it applies to real life, and the wording is deceptive.

What GIGO means is that even a small error in reasoning - whether that's in the logicky part or in the factual premises it works on - can destroy the entire argument and its conclusion.

So, if you want to apply GIGO to everyday life, the first thing to know is that Garbage In could look absolutely right to you. A small flaw is what makes it "garbage" and ruins all the logic.

The presence of many big flaws could be worse, and that's the image conjured by the expression. But actually, many big flaws could be no worse in consequence than one tiny one. That's logic for you, unfortunately, and why devices glitch out - and why every programmer knows GIGO.

We don't all apply it elsewhere in life. That's an additional skill.

But it does apply - and does it ever!

You've often heard about people who - well, everything they say is reasonable, but you just know they aren't right. It's given as evidence that logic doesn't work the way it's supposed to, and that you can be "too reasonable."

So, any time this feeling is on point, if you look closely enough, you'll find it's because there's something subtly wrong with the logic. Maybe, for example, they're saying "always" when the reality is "almost always." You've probably heard of "black swan"-type errors? We can all rely on the common-sense fact that there are no black swans, or that real estate always goes up... until, well, we encounter the reality that there are black swans, and sometimes real estate goes down, and so on. The shock of "black swan" events comes from mistaking "almost always" for "always." For most people most of the time, that difference is negligible, and they may not even know it's there. But for logic, it can make a gigantic difference. When "black swans" are disruptive, it's a simple example of GIGO. And that's it. No more complicated than that. (Usually "black swan" implies a long-tail probability distribution, which can itself carry just as much import, but that isn't relevant here.)
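To see how big the gap between "always" and "almost always" can get, here's a tiny sketch in Python with made-up numbers. Nothing about it is specific to swans or real estate; it's just the arithmetic of stacking up many events.

p_single = 0.999           # a rule that holds "almost always" (made-up number)
events = 10_000            # mortgages, flights, swan sightings...

p_no_exception = p_single ** events
print(f"Chance of zero exceptions across {events} events: {p_no_exception:.6f}")
# ~0.000045: an exception is all but guaranteed, even though every
# individual case looked like a sure thing. With p_single = 1.0 ("always"),
# the answer would stay exactly 1.0.

At 100%, the chance of zero exceptions stays at 100% forever; at 99.9%, it collapses toward zero as the events pile up. That collapse is the black swan.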

The black swan effect falls under GIGO, but to hear either phrase, you wouldn't know it. "Swans are white" doesn't sound like garbage. But that's a perfect example of the G in GIGO. (This isn't meant to be a statement about race, by the way - but on that note, prejudice ranges from obvious garbage to subtle garbage, and technically is all Garbage.)

Anyway, it isn't that hard to fix: "These swans have white feathers. Some others have black feathers. We have no data on other feather colors yet." You could add, "Swans were once believed all to be white." And you could give numbers and geographical regions. But the process of making this less Garbagy takes a little more precision and a lot more openness to data. This is why logic and a feeling of absolute certainty do not go well together. Good logic comes from breaking down absolute certainty - and thereby standing the best chance of acquiring it.

-

Most of this is obvious. The thing most laypeople don't quite fully appreciate is that a grain of sand can wreck a spaceship when you're working in logic. Of course it might not, but it can. And more importantly, the danger doesn't go on vacation when we talk about everyday life, or when our goal is only to be sure we know what we're saying. The grain of sand might not destroy the spaceship. But in logic (and in arguments about and descriptions of energy and matter, which are logical), there isn't really a limit to how much a little thing can mangle. Anything in the same system, the same set of contingencies, is vulnerable to a little slip.

We all, of course, have plenty of experience with little mistakes that turn out to be superficial. Oh, sure! Absolutely! (See my previous post.) Just don't let that lull you to sleep. All too easily, it can. All too often, it does. Don't forget: the Space Shuttle Challenger blew up because the temperature tolerance of one rubber gasket (an O-ring) was off; the unusually cold morning stiffened it, and it failed to seal when it was needed most. In the context of billions of dollars, the lives of the crew, and many launches, that's pretty much a grain of sand.

The Mars Climate Orbiter was lost at Mars because one piece of software supplied thruster data in imperial units while the software receiving it expected metric. That's GIGO. But it isn't some detail of coding. It all but is coding. To use logic, you must deal with its Achilles' heel constantly. You can't write more than a line of code without those tiny things trying to catch up with you. And again, and more importantly, it isn't specific to code. It's specific to computation, and by extension to reason in general. It's specific to drawing any kind of conclusion or making any kind of prediction. So - that's big, right?
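To be clear, what follows is not the actual orbiter software - just a hypothetical sketch of the kind of mismatch the investigation described, where one side produced pound-force seconds and the other assumed newton-seconds.

LBF_S_TO_N_S = 4.448222    # 1 pound-force second is about 4.448 newton-seconds

def thruster_impulse_lbf_s() -> float:
    # Pretend measurement: impulse reported in pound-force seconds.
    return 10.0

# The receiving code assumes SI units and applies no conversion - GIGO.
impulse_assumed_si = thruster_impulse_lbf_s()                 # wrongly treated as N*s
impulse_actual_si = thruster_impulse_lbf_s() * LBF_S_TO_N_S   # what it really is

print(impulse_assumed_si, impulse_actual_si)  # off by a factor of ~4.45
# Every downstream trajectory calculation inherits that factor.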

The point isn't to intimidate anyone into thinking they can't use logic right, just because they don't have specialist knowledge. You can use logic.

Actually, it's kind of a one-two punch:

1) SIMPLE. Make the logic as simple as possible, so you can see it all easily and hunt down any errors creeping in (at the edges or in plain sight).

2) UNASSAILABLE. The facts that you rely on must be unassailable. Not "probably right." Not "everyone knows this is right." Unassailable. Preferably, you want error bars on them. Ideally, you want to know how each fact was verified. And you want to welcome criticism of the fact, not shun it. You want to know any potential weakness. Your facts are your rubber gaskets.

If you do 1 and 2 really well, you will reason better - by which I mean more reliably, more accurately, making better predictions - than almost anyone you know.

The best way to know things for sure is to start with unassailable facts, use simple logic carefully on them, check it all repeatedly, and listen carefully to all feedback. Others don't tell you how to conduct your own mind, but they do tend to give useful information if you let them.

From there, you can branch out, get exploratory, speculate, have fun, etc.

The base allows that. You need the base. 1. 2. Then have some fun.

I dunno. It sounds too simple, or elementary, or something.

You'd be amazed.

Our minds are extremely probabilistic. They are good at that. They are not too good at certainty. We think they are. That's why we need computers, in a sense. Computers reason shockingly more reliably than humans, within their established scope. We underestimate quite how big that difference is. A typical CPU goes through more cycles in a second than a heart beats in a lifetime. In a minute, that's roughly another 59 lifetimes' worth, and typically, zero errors are made. Imagine 60 extra-long lifetimes spent doing math constantly and making no errors. (That first minute is just a taste. There isn't good public data on CPU error rates, but CPUs are so accurate that their execution cores mostly get by without the kind of built-in error correction that memory and hard drives need. If errors were much more common than that ballpark suggests, this couldn't be true - the marketplace, among other things, ensures it - and you'd run the same simulation twice and get different results. That's an experience people tend not to have at all. When the output does differ for the same inputs, it typically means there's random number generation in the simulation, which means the two runs aren't precisely the same.)
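The arithmetic behind that comparison is easy to check; here it is as a few lines of Python, using assumed round numbers (a 70 bpm heart, an 80-year life, a 3 GHz core).

beats_per_minute = 70
years = 80
lifetime_beats = beats_per_minute * 60 * 24 * 365 * years   # about 2.9 billion

cpu_hz = 3_000_000_000                                       # 3 GHz: cycles per second
print(f"Heartbeats in {years} years: {lifetime_beats:,}")
print(f"CPU cycles in one second: {cpu_hz:,}")
print(f"Lifetimes of beats per minute of cycles: {cpu_hz * 60 / lifetime_beats:.0f}")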

Unless you stop to think about it and poke around, you have no idea how much better computers are at certainty than we are.

On the bright side, we are great at probability. We do need to remember that the bets our minds place in constructing an image of reality are colored by a wide variety of biases. An excellent way to keep that in mind is to recognize that the kind of thinking we do is betting. It isn't very competitive in the domain of certainty. Knowing this makes immediate differences: for example, it's quite tough to hate a group of millions of people if you realize your opinions, feelings, and conclusions might be ill-founded.

What our brains do really, really well is heuristics (aka rules of thumb, guidelines, pro-tips), and overall the process falls under Bayesian reasoning/statistics. To put it all briefly: in logic, you aren't allowed to reverse IF-THEN statements. "If it rained last night, the grass is wet" does not license "the grass is wet, therefore it rained last night." Sometimes the reversed statement is true, but whenever it is, that is its own new fact, a surprise that you would have to specifically know. It's lethal to presume THEN-IF also holds. In everyday life, though, we are constantly speculating about THEN-IFs. We do this so routinely, and find it useful often enough, that we rarely if ever stop to realize that what we are doing is not logic and is very far from certain. Bayesian reasoning is a way to make informed guesses about when THEN-IF might also be true. And that's what our brains do 24/7. In terms of daily living, most of what we think we know is an informed guess. But we cannot, if we want to be really sane (beyond the group "common sense" handed to us, which can also lead us astray), confuse the conclusions of this kind of reasoning with actual certainty.
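Since I brought up the wet grass: here's what the Bayesian version of that bet looks like, as a few lines of Python with made-up numbers. The point isn't the specific values; it's that P(rain | wet) can be modest even when P(wet | rain) is nearly 1.

p_rain = 0.1                 # prior: it rains 1 night in 10 (made-up)
p_wet_given_rain = 0.99      # IF rain THEN wet grass, almost always
p_wet_given_no_rain = 0.3    # sprinklers, dew, etc.

p_wet = p_wet_given_rain * p_rain + p_wet_given_no_rain * (1 - p_rain)
p_rain_given_wet = p_wet_given_rain * p_rain / p_wet

print(f"P(rain | wet grass) = {p_rain_given_wet:.2f}")
# About 0.27. THEN-IF isn't logic; it's a wager whose odds depend on the priors.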

Obviously it isn't all hopeless, and it's unwise to believe nothing and stand for nothing (nihilism and absurdism are forms of sophomorism, anyway). Combine critical thinking (including skepticism and knowing that THEN-IF isn't logic) with open-mindedness (which includes what I call positive skepticism: "It's possible" or "You never know" or "I could be wrong" or "Breakthroughs happen" or "Everything isn't always as it seems" or "There are more than two sides to every story"). This in no way at all prevents you from seeing that B is much more plausible than A. The mistake is shutting out uncertainty as if you've now conquered it. That rarely if ever happens.

Think of driving a car. Do you choose the right position to put the wheel in and leave it there? Or do you keep your hands and arms supple and responsive to the perpetual stream of updates from your eyes, etc? That's objectivity.

Even musicians use objective flexibility, or the suppleness of objectivity, in search of an optimal performance, believe it or not. According to top experts: to play violin better, stop tensing up. Relax your body as much as possible. Let it be ready to respond sensitively to the slightest thing. Tenseness is a presumption. You have to undo it first, then do the work of responding. So lose the tenseness, and respond to input. Respond to conditions. Don't resist new information, but process it, realizing that your process is heuristic, is a series of bets, is not much like a CPU. That's objectivity. That's making best use of humanity's gift.

Thursday, February 18, 2021

When I was younger, I was a determinist. Some would call that a fatalist: I believed that whatever happened was what was going to happen anyway. It was all equally fate. We experienced ourselves making choices from an insider's perspective, but that didn't mean they didn't happen step by step according to causality and the laws of physics.

The reason I'm not a fatalist anymore is that there's nothing to lose by believing in free will. If I'm predetermined from birth to death, then I can't help it anyway: if believing in free will somehow harms me, I could not have done otherwise; those were always the fixed rules of my game.

If I'm not predetermined completely, then in some sense I stand to lose anything I could possibly lose by getting this wrong and thinking "I have no choice." I stand to lose anything and everything choice might offer, including my life itself.

A lot depends on what we actually mean by "I," but that's a rabbit hole I don't want to get lost in right now. My organism can have a choice the same way an NPC can have a choice based on how you're playing. "Having a choice" doesn't necessarily mean "non-deterministic." Your FitBit could "have a choice" to wake you up with an alarm that depends on your current biometrics - that is, on whether it calculates this to be a good time or not. By human standards, that isn't a choice, because it's deterministic. But by some definitions of the FitBit's "I" (if there is one), it's a choice anyway. It can't know your biometrics in advance, so it waits for your input, which, when it arrives, informs its action. The process of that informing and that action could in theory "feel like something," even if it can only flow one way that time. The FitBit is in some sense "surprised" by your biometric data, since it didn't know it in advance: and that is the definition of information, and of surprise, which information theory now often treats as the same thing.
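If it helps to see the shape of that, here's a hypothetical sketch - nothing to do with any real FitBit software - of a rule that is fully deterministic and still can't act until the input arrives.

def should_wake(heart_rate_bpm: float, in_light_sleep: bool,
                minutes_to_deadline: int) -> bool:
    # Deterministic "choice": the same inputs always give the same answer,
    # but the answer is unknowable until the biometrics arrive.
    if minutes_to_deadline <= 0:
        return True                       # out of slack: wake up regardless
    return in_light_sleep and heart_rate_bpm > 55 and minutes_to_deadline <= 30

print(should_wake(62, True, 20))    # True: light sleep, deadline close enough
print(should_wake(48, False, 20))   # False: deep sleep, let them rest a bit longer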

I find the idea of all this experience - some really painful - very wasteful if we don't actually, in that experiencing, gain the power to co-invent the course of nature.

If I have nothing but determinism in me, what do I gain for my pain?

Learning, awareness - but I could learn from data without experiencing that. So why experience it?

I'm convinced we feel something because we are participating in the creation of the universe.

Creation, not just unfolding.

Wednesday, February 17, 2021

When I write, I'll start out with some basic sense of direction. As soon as I get to a point where I'm not sure how to proceed, and I know this is experimental - or else I see something I want to fix, but I'm not entirely sure the new version keeps everything that did work - I highlight it all, copy, create a bunch of lines above it, and paste. Then I continue.

Sometimes I never do this. Things I post on Facebook often are not thought out much and I just put up what I wrote (and then second guess it all, and probably alter a bunch of little things, and quite likely delete the post).

Other times I do this over and over and over and over. There are poems I've written with hundreds of copies like this stacked up in a file, with random numbers of blank lines between them.

So far I have left no extra copies of this thing, though I've altered the paragraphing above.

There is a point. I'm not just pseudo-bragging about method, and it isn't just about bragplaining, either. This method I described is actually in some ways very frustrating. I started it - I'm not sure when - but it's now basically an unbreakable habit, no matter how imperfect I find it. For example, I dislike that I use a random number of lines between versions. Shouldn't I use a specific symbol to mark the transitions, and a fixed number of blank lines? I've tried. Many different variations of that. Nothing sticks. It's all too much effort. When I do this, I'm being completely impulsive. That's how I write - impulsively. Then I worry a million different ways. And I'm proud of some of it, and then embarrassed that I'm proud of something that isn't after all very good or appealing. That's kind of humiliating, to realize your best is dreck. But you have to. Or you can. I choose to.

The point is that for years I've been trying to figure out a better method, and specifically a piece of software that would support it. Yes, I could use Track Changes in some app. Other apps have Snapshots. There are version control systems used by programmers but also appreciated by some writers and other content creators.

I wish any of that felt right to me.

I want to be able to survey the history without friction, all the versions. And I can. But I also want to be able to clear away that clutter instantly or navigate versions in a less linear, more functional sense. That is, when I edit a poem, and I'm copy-paste-editing new versions vertically in a stack, at any given time, I'm working on a particular part of the poem. Something bothers me, or some idea is tugging at me from out there and I'm trying to bring it down to earth and human letters. So a string of versions will be about that part. But then later - who knows where in the file, or when - I may be working on that same part again. There are poems I've edited here and there for 2 decades, some even a little bit more (though not as much, as my first poems didn't use this copy-paste-edit approach in a vertical stack - for those I usually have one or at most two versions). If I want to work on another version years later informed by all the previous versions, it becomes very difficult to survey the different options I've considered for any given part of the poem. Sometimes I'll put a chain of 4, 5, 10, 20 different words or phrases that are options for that part of the poem, all in the same version in the stack. Then the next version, I'll have a bunch more. Or a bunch of options for the next line, or the previous one.

At this point I don't write poems like that anymore, or not most of the time. I've gotten lazy, and more experienced, and much of what I consider I'm happy to leave unwritten, and just pick what I prefer right now, and leave it to time or whatever to see if I want to change it again.

But that glosses over the fact that I do not have any one particular method, and each poem - each thing of any kind I write - can use a slightly different or a very different approach. I do still write strings of alternative words, sometimes. And years ago, I'd often use fancy parentheses and brackets and something like regexes to indicate different options and branches and whatever. That is, if I use this word, I should use this whole phrase, or maybe this whole phrase plus this other word, but if I use that word, then - you get the idea. You can sort of indicate that kind of thing with backslashes, parentheses, asterisks, brackets, perhaps symbols for AND (I generally didn't go that far), etc.

The reason I did that is that editing language is fundamentally not linear. Options are connected to other options in sporadic, unpredictable, long-distance, unobvious, intuitive ways. So I was grappling with that as I considered different options.
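To make the simplest case concrete: here's a rough sketch, in Python, of what expanding independent (this|that) options looks like. It isn't my actual markup, and it ignores the harder part - the dependencies between choices - but it shows the general idea.

import itertools
import re

def expand(line: str) -> list[str]:
    # Turn "I (open|force) the (red|crimson) door" into all four readings.
    chunks = re.split(r"\(([^)]*)\)", line)                 # text, options, text, ...
    pools = [c.split("|") if i % 2 else [c] for i, c in enumerate(chunks)]
    return ["".join(combo) for combo in itertools.product(*pools)]

for version in expand("I (open|force) the (red|crimson) door"):
    print(version)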

My point is I'd like a text editor that works with how minds actually work, or, anyway, more like how I work when I write. And I think the friction is that our minds do not work particularly well in terms of flat files, trees, snapshots, and linear histories. We're very webbed internally.

I keep trying to design that text editor, at least in my mind. A few times I've tried to write out my requirements.

I want to be able to navigate the alternatives for any piece of writing that has an edit history, in a way that makes a lot of sense and unfurls as you poke around - but that also makes available the sequence of complete versions (snapshots) in their original order. And I don't want to feel that my writing is trapped in the particular app that does this, liable to all be destroyed by some bug in it.
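I don't know exactly what the right data model is, but here's one rough sketch (hypothetical, not any existing app) of a shape that would keep both views available: each part of the piece carries its own list of alternatives, and a snapshot is just an ordered selection from those lists.

from dataclasses import dataclass, field

@dataclass
class Part:
    name: str                                 # e.g. "line 3" or "the closing image"
    alternatives: list[str] = field(default_factory=list)

    def add(self, text: str) -> int:
        # Record an alternative (once) and return its index.
        if text not in self.alternatives:
            self.alternatives.append(text)
        return self.alternatives.index(text)

@dataclass
class Snapshot:
    choices: dict[str, int]                   # part name -> chosen alternative

parts = {"line 1": Part("line 1"), "line 2": Part("line 2")}
v1 = Snapshot({"line 1": parts["line 1"].add("The door was red."),
               "line 2": parts["line 2"].add("I never knocked.")})
v2 = Snapshot({"line 1": parts["line 1"].add("The door was crimson."),
               "line 2": v1.choices["line 2"]})            # line 2 unchanged

# Survey every option ever tried for one part, across the whole history:
print(parts["line 1"].alternatives)
# Or reconstruct any full version, in order, from its snapshot:
print(" / ".join(parts[name].alternatives[i] for name, i in v2.choices.items()))

Because a snapshot is only a set of pointers, the web of alternatives and the linear history of versions can coexist without either one being the "real" file.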

-

A group has recently been experimenting with poems that kind of play themselves. You watch the poem type itself out the way the poet originally worked through it all. This is pretty amazing to watch, and at first I thought they'd gone and done what I wanted. But it still isn't that, quite. Their system gives you a scrub bar to pace through the movie of the poem as it's written. But that isn't yet the context-aware, branching navigation system I imagine.

There are two impetuses. One is to improve the editing experience, and maybe improve the final works that way. The other is to present another kind of poem, in which composition or reading sequence becomes part of the expression. I like the idea that a poem could be (quite literally) different words to different people.

What this suggests is that a better tool for editing would also be a tool for presenting interactive poems. That is, you *could* reveal your editing history as a navigable, branchy webwork. *Or* you could use those same tools to compose a webwork that is all intended fully as the poem. In that scenario, every alternative would get equal weight, and readers could choose their way through according to their personal affinities. One easy option could even be to ask them, through their navigation choices, to write out a complete, set version of the poem. In that sense, the reader would become the editor at the same time, and the final version would be their miniature publication, shared, of course, and primarily, with the original author. That's a way of relating to someone that most of the time most of us do not use or even really have access to, though technically now we do.

Tuesday, February 16, 2021

Coding has a way of making people feel stupid, and that's absolutely true for me. And I think it's related to the challenge of logic in general, but not the same.

We have this ingrained sense that we should begin at the beginning and read through text.

Unless you have a very strange mind or an incredible amount of experience and some very particular goal, this is absolutely not what to do with code.

I can't tell you how many times I've tried and failed to find an entry point in my own code so that I could get back into understanding it and making progress with it. And I can't tell you how stupid and incompetent and hopeless this makes me feel.

So - I want to tell you a secret: It isn't about being smart.

When I went from writing that code to feeling how I just described, I did not actually get dumber, or forget how to do this.

And I've gotten lots of good information - which really helped me - about this stuff from people who seem decidedly average in terms of intelligence (an opinion, of course, and I don't mean that in any sort of judgy way - people have so many kinds of value).

There are several tricks that I can't fit in one post. Maybe 3 essential ones. But the biggest one, for the topic of feeling stupid - not even understanding your own code, for example, or anyone else's, and feeling completely stuck - addresses the fact that we want to read code the way we read an article. That isn't right.

The way to "read" code is to change something and hit Run.

That sounds too simple. It probably isn't quite believable.

If you want, I can link you to an article that helped. It goes into much more detail. But maybe you'll believe me when I say that this is second-nature to the best hackers. Your first teacher - or ten teachers - probably won't tell you. It's something most people who know don't really think to say. But it's common knowledge once you get to a certain level.

I'm not sure why coding is so fundamentally interactive. But it is. It isn't a lecture, it's a dialogue. The other side of the dialogue would be messages from the compiler (when things don't work) and the behavior of the app (when they work well enough that you can play with the result).

(Also, when you don't know how to solve a specific error, you've got other people, search engines, and every question other people have posted online. That's an additional wing of the dialogue, if you need it. It isn't what I'm focusing on right now, because you yourself can do more than you think. But don't forget that coding languages are actually designed for people, not computers, and there are many others of these people-creatures out there, and professionals rely on these resources too. Most API specs and manuals are online, and you get to them with a search: that, too, is consulting other people. So one way or another, asking others is generally necessary, even if you're like me and you like to figure stuff out and don't like to ask for help.)

You get a sense for where things are, what they do, etc, etc, etc by changing and running the code. That creates your mental map of the system. For some reason (which I don't understand either), this is the natural way to get in tune with the architecture of the logic. If people don't tell you, it's because they've forgotten they ever didn't know, or it's just their basic personality to tinker and learn that way. That's "reading." It's how to start. And it's how to get unstuck.

Believe me, you can stare at code for hours, days... even for months on and off... even a year or more. I've done it. Many times. Painfully.

Change something. Hit Run.
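If you want something concrete to try it on, here's a toy example (a made-up function; any snippet you have lying around works just as well). Don't decode it in your head - change an input or add a print, run it, and compare.

def mystery(values):
    out = []
    for v in values:
        if v % 2:                 # which values pass this test? run it and see
            out.append(v * 3)
    return out

print(mystery([1, 2, 3, 4, 5]))   # first run: [3, 9, 15]
print(mystery([10, 11, 12]))      # change the input, run again: [33]
# Add a print inside the loop, swap the 3 for a 10, delete the "if" -
# each run redraws your mental map a little more accurately.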

Monday, February 8, 2021

It's so passé to bash poetry that doesn't rhyme, or painting that doesn't represent something concrete. Some people love both. Why would that offend? (Or, more to the point, why wouldn't people deeply offended by that be the right ones to offend with art?)

Also, if you think a poem doesn't rhyme just because the line endings don't rhyme, let me let you in on a secret. (Now you know.)

Basically every poem has rich sound and image interlinks, or it wouldn't be a poem.

You might find two juxtaposed concepts random, but if someone else sees a lot in that pairing, who's the one lacking?

If everyone finds a thing perfectly random, it's called noise, and no one pays any attention. Generally noise is not offensive, especially when you can opt in or out.

Splatter on a canvas seems about as inoffensive as possible. Don't like it? Keep walking.

OK: you might feel someone else's work should feature here instead. You might feel cheated of an entrance fee. You might rail against the privilege of an established painter who turns in a canvas of solid beige that finds its way to a major gallery wall.

But we have to admit, those are all meanings prompted by the blank in the gallery.

It's understood now that works derive some, much, or all of their meaning from context and interpretation. These works that rile some people up are playing with those people most.

A blank beige canvas in a gallery of great works means something different from what it would mean on a coffee shop wall, or next to a dumpster. The air of legitimacy lent by an artist's name and reputation mixes in with that. It's part of the trick.

And sometimes it's nice to look at a blank space, or relax from trying to process a swarm of intense images, and just doodle mentally with some unconnected dots.

You're welcome. The museum was thinking of you.