There's a famous expression, Garbage In, Garbage Out. GIGO. Every programmer knows it.
But it applies to real life, and the wording is deceptive.
What GIGO means is that even a small error in reasoning - whether that's in the logicky part or in the factual premises it works on - can destroy the entire argument and its conclusion.
So, if you want to apply GIGO to everyday life, the first thing to know is that Garbage In could look absolutely right to you. A small flaw is what makes it "garbage" and ruins all the logic.
The presence of many big flaws could be worse, and that's the image conjured by the expression. But actually, many big flaws could be no worse in consequence than one tiny one. That's logic for you, unfortunately, and why devices glitch out - and why every programmer knows GIGO.
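A hypothetical one-character bug makes this concrete (the example is mine, not from any real system): the flaw doesn't look like garbage at all, yet it quietly wrecks the whole result.

```python
def average(xs):
    """Meant to return the mean of xs."""
    total = 0
    for i in range(len(xs) - 1):  # bug: the "- 1" silently drops the last value
        total += xs[i]
    return total / len(xs)

# Four identical readings should average to exactly 10.0.
print(average([10, 10, 10, 10]))  # prints 7.5
```

One stray character, and every answer the function ever gives is wrong.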
We don't all apply it elsewhere in life. That's an additional skill.
But it does apply, and how!
You've probably heard about people who sound perfectly reasonable in everything they say, yet you just know they aren't right. It's given as evidence that logic doesn't work the way it's supposed to, and that you can be "too reasonable."
So, any time this feeling is on point, if you look closely enough, you'll find there's something subtly wrong with the logic. Maybe, for example, they're saying "always" when the reality is "almost always." You've probably heard of "black swan"-type errors. We can all rely on the common-sense fact that there are no black swans, or that real estate always goes up... until we encounter the reality that there are black swans, and sometimes real estate goes down, and so on. The shock of "black swan" events comes from mistaking "almost always" for "always." For most people most of the time, that difference is negligible, and they may not even know it's there. But for logic, it can make a gigantic difference. When "black swans" are disruptive, it's a simple case of GIGO. No more complicated than that. (Usually "black swan" implies a long-tailed probability distribution, which can carry just as much import on its own, but that isn't relevant here.)
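A tiny Python sketch of that gap (the 0.999 is purely illustrative): treat an "almost always" rule as "always," and over enough independent trials the difference goes from negligible to total.

```python
p = 0.999  # "almost always": the rule holds 99.9% of the time

for n in (1, 100, 10_000):
    # "Always" predicts the rule survives n trials with certainty (1.0).
    # "Almost always" compounds: after 10,000 trials, survival is near zero.
    print(n, 1.0 ** n, round(p ** n, 4))
```

At one trial the two are indistinguishable; at ten thousand, the "always" camp is certain of something that almost never happens.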
The black swan effect falls under GIGO, but to hear either phrase, you wouldn't know it. "Swans are white" doesn't sound like garbage. But that's a perfect example of the G in GIGO. (This isn't meant to be a statement about race, by the way - but on that note, prejudice ranges from obvious garbage to subtle garbage, and technically is all Garbage.)
Anyway, it isn't that hard to fix: "These swans have white feathers. Some others have black feathers. We have no data on other feather colors yet." You could add, "Swans were once believed to be all white." And you could give numbers and geographical regions. Making this less Garbagy takes a little more precision and a lot more openness to data. This is why logic and a feeling of absolute certainty do not go well together. Good logic comes from breaking down absolute certainty - and thereby standing the best chance of acquiring it.
-
Most of this is obvious. The thing most laypeople don't quite appreciate is that, when you're working in logic, a grain of sand can wreck a spaceship. Of course it might not, but it can. And more importantly, the danger doesn't go on vacation when we talk about everyday life, or when our goal is only to be sure we know what we're saying. The grain of sand might not destroy the spaceship. But in logic (and in arguments about and descriptions of energy and matter, which are logical systems), there is no real limit to how much a little thing can mangle. Anything in the same system, the same set of contingencies, is vulnerable to a little slip.
We all, of course, have plenty of experience with little mistakes that turn out to be superficial. Oh, sure! Absolutely! (See my previous post.) Just don't let that lull you to sleep. All too easily, it can. All too dangerously, it does. Don't forget: the Space Shuttle Challenger blew up because one rubber O-ring was flown below its temperature tolerance - the cold stiffened it, and it failed to seal when it was needed most. In the context of billions of dollars, the lives of the crew, and many launches, that's pretty much a grain of sand.
The Mars Climate Orbiter was lost at Mars because one piece of ground software reported thruster impulse in pound-force seconds while the navigation software expected newton-seconds. That's GIGO. But it isn't some detail of coding. It all but is coding. To use logic, you must deal with its Achilles' heel constantly. You can't write more than a line of code without those tiny things trying to catch up with you. And again, more importantly, it isn't specific to code. It's specific to computation, and by extension to reason in general. It's specific to making any kind of conclusion or prediction. So - that's big, right?
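In Python, the shape of that class of bug is almost embarrassingly small (the function names and numbers here are my illustration, not the actual flight software):

```python
LBF_S_PER_N_S = 4.448222  # newton-seconds in one pound-force second

def ground_software_impulse():
    # One team's code reports thruster impulse in pound-force seconds...
    return 100.0  # lbf*s

def navigation_update(impulse_n_s):
    # ...the other team's code assumes the number is newton-seconds.
    return impulse_n_s

reported = ground_software_impulse()
used = navigation_update(reported)   # taken as 100.0 N*s
actual = reported * LBF_S_PER_N_S    # ~444.8 N*s in reality
print(actual / used)  # the trajectory model is off by a factor of ~4.45
```

No line of this is "wrong" in isolation. The garbage lives in the gap between two correct-looking pieces.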
The point isn't to intimidate anyone into thinking they can't use logic right, just because they don't have specialist knowledge. You can use logic.
Actually, it's kind of a one-two punch:
1) SIMPLE. Make the logic as simple as possible, so you can see it all easily and hunt down any errors creeping in (at the edges or in plain sight).
2) UNASSAILABLE. The facts you rely on must be unassailable. Not "probably right." Not "everyone knows this is right." Unassailable. Preferably, you have error bars on them. Ideally, you know how each fact was verified. And you want to welcome criticism of the fact, not shun it. You want to know every potential weakness. Your facts are your rubber gaskets.
If you do 1 and 2 really well, you will reason better - by which I mean more reliably, more accurately, making better predictions - than almost anyone you know.
The best way to know things for sure is to start with unassailable facts and use simple logic carefully on them, checking it all repeatedly. Listen carefully to all feedback. Others don't tell you how to conduct your own mind, but they do tend to give useful information if you let them.
From there, you can branch out, get exploratory, speculate, have fun, etc.
The base allows that. You need the base. 1. 2. Then have some fun.
I dunno. It sounds too simple, or elementary, or something.
You'd be amazed.