Capitalism doesn't just work out of the box. It isn't an Apple product! Whether it should be is a separate question.
Complicated but dependable systems need very careful design and constant testing. They won't keep working out of the box if the box holds the original prototype.
Ownership, exchange, currency, and freedom: all are critical in a healthy society. But it's also critical that we persist in crossing out "might makes right." Keep crossing it out as it comes up. Cross it out, cross it out, cross it out. Meritocracy equates to neither might nor financial demand. What it equates to is skill and wisdom in the right place: people doing what they're good at, getting better at what they can get better at, saving and advancing and beautifying lives and society. Demand backed by wallets is a splendid mechanism insofar as it brings this about. But it doesn't always, and it is sometimes profoundly undermining or damaging.
"What people will pay for" is important in a business model, but it is not truth from on high. As powerful as it is, it's still only temporary desire. It's a set of evolved signals responding to beliefs about a person's (and a group's) surroundings. We know that the most popular thing is by no means always the best, but we go on believing that capitalism just works out of the box.
And no, I don't personally believe that having money proves that you know better, and therefore your greater clout (in the tally of demand) indicates proportionally more wisdom. That's another partial fallacy that's only one step behind the more glaring one.
"Money knows what to do with money" is nonetheless a piece of an answer. It makes decent sense: the founder of Amazon is probably not such a bad person to lend money to! What I'm calling a partial answer is actually a principle very closely related to why Google searches are so effective. The PageRank algorithm gives the links from one site, say the Mercedes homepage, an importance that depends on how many pages link to the Mercedes site, and on how influential those pages are in turn. This is quite similar to the way a rich person's transactions carry more influence in society because more dollars flow to the rich person. Still, we do not argue: "This hit came up higher in the search results, so I will cite it, because it must be better." We should not be so rote about matters of economy either.
In both cases it would degrade the process. If people start going to the Mercedes page and linking to it only because it's higher up, then it will climb further in the search results for no good reason. And the more this happens, the more overrated the site will get in the rankings, and the less sense those will make. Likewise, should we really give rich people and rich corporations our money, preferentially, because they are already rich? If the reason is only that they are already rich, then to do so will actually degrade the economy.
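The circular logic above is easy to see in miniature. Here is a minimal sketch of the PageRank idea (illustrative only; real search ranking is far more elaborate, and the page names are hypothetical). Each page splits its score evenly among the pages it links to, plus a small "teleport" share so every page keeps a baseline score:

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with equal scores
    for _ in range(iterations):
        # every page keeps a small baseline (the "teleport" share)
        new_rank = {p: (1.0 - damping) / n for p in pages}
        # each page passes the rest of its score to the pages it links to
        for p, outgoing in links.items():
            share = damping * rank[p] / len(outgoing)
            for q in outgoing:
                new_rank[q] += share
        rank = new_rank
    return rank

# A hypothetical four-page web: "mercedes" is linked to by two pages,
# "blog" by none, so "mercedes" ends up with the higher score.
web = {
    "mercedes": ["news"],
    "news": ["mercedes"],
    "fan_site": ["mercedes"],
    "blog": ["fan_site"],
}
ranks = pagerank(web)
```

If you add links to "mercedes" only because it already ranks high, its score climbs further on the next pass, which is exactly the feedback loop that degrades the rankings.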
What I worry has been happening for decades is a series of false dilemmas. Either you are for precisely how we do things, or you are against freedom and against markets and against success and against democracy.
Not really.
Actually, not even slightly.
And that goes on and on in many forms.
While it's reasonable to suppose that people who make it their career to understand, respond to, and perhaps even alter markets know what's going on and how to fix problems, it's also reasonable to suppose that experts have blind spots, just like everyone else. It isn't just reasonable, it's well established that experts tend to have biases that come along with being experts.
Experts will discard some ideas out of hand pretty much automatically. It's part of what makes them so skillful and efficient. But some of what they discard out of hand would actually work, or else with a little tweaking and development it would—and could even work better.
The tendency to get what we might call "too efficient" as you gain skill in an area is called "automaticity." It's a double-edged sword. We need one of those edges. The other... we just need to be aware it's there.
I'm not sure what counteracts automaticity best... or its close relative "functional fixedness," which means making too many snap assumptions about how tools work or could work. I've never been entirely sure there's a difference, ever since I learned about these in some detail in a cognitive psychology class. It's probably fairest to say that functional fixedness is one kind of automaticity. Another closely related term is the "expert blind spot," which appears in the context of teaching. Often a teacher can't see what a student wouldn't know yet, but has to know in order to understand. Not everything we know was ever made explicit, and even if it were, we forget how much we've learned.
A good amount of understanding is intuitive filtering, which can be difficult or impossible to put into words, at least until you've done some deep diving and practiced expressing it.
For example, after studying geometry, you know that when you see two lines crossing in a diagram, you can assume that they intersect at precisely one point and the lines are perfectly straight and extend infinitely. All of those are completely non-obvious assumptions you have to learn to make. They are conventions about how the diagrams are drawn and interpreted. You had to get used to them. And eventually you'll forget that you learned the assumptions. Similarly, if you read a problem about someone driving 62 miles per hour for 2 hours, you are trained to assume it's exactly 62 miles per hour (not 62.00000000000003, 62.000959, or any of an infinite number of similar values within the margin of error) with no acceleration or deceleration, for exactly 2 hours, in a perfectly straight line. Without the training, none of those is at all obvious, and in fact, all of those assumptions are going to be false. We learn particular ways it's helpful to be wrong. If we're skillful enough at that, we can make excellent predictions. Obvious?
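The word-problem convention can be put in miniature. Below is a sketch contrasting the textbook model with a slightly more honest one; the wobble figure is a made-up assumption, just to show that the idealized answer is a useful fiction:

```python
import random

# The textbook model: constant speed, exact duration, straight line.
def idealized_distance(speed_mph, hours):
    return speed_mph * hours

# A slightly more honest model (hypothetical numbers): the speed wobbles
# a little each minute, so the total is only approximately right.
def noisy_distance(speed_mph, hours, wobble=0.5, seed=0):
    rng = random.Random(seed)
    minutes = int(hours * 60)
    # each minute covers (current speed) / 60 miles
    return sum((speed_mph + rng.uniform(-wobble, wobble)) / 60
               for _ in range(minutes))

exact = idealized_distance(62, 2)  # the answer the textbook wants: 124
rough = noisy_distance(62, 2)      # close to 124, as reality would be
```

The trained assumption is to compute `exact` and ignore `rough` entirely, and that is usually a helpful way to be wrong.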
So how do we get past these blind spots about how things work, or could work? One thought that would look random anywhere but here: adventure games (i.e., interactive stories that unfold through realistic-ish puzzles involving objects and conversations) have always seemed to me a nice exercise. You end up really racking your brains to see how the few items available to you could be used in ways you hadn't considered yet, and normally never would consider. You basically make believe that you're MacGyver, only it's usually not quite that intense. Nobody lives like MacGyver.
Encouraging newbies (and everyone else) to speak up with brutal honesty in safe "Braintrust" meetings works for Pixar and other companies. Experts are then primed both to think outside the box and to listen to feedback from people who, yes, might not know what they're talking about, but then again might have an excellent angle. If you suspect the Braintrust approach only applies where the work doesn't have to stand up to harsh reality, consider that it also works at Frank Gehry's company: an architecture team famous for bizarre and wonderful buildings that look like they should fall down, but don't. Material suppliers often question them or say it can't be done, but the team is no stranger to being more thorough than the experts in the materials, though of course they listen. Useful information goes both ways. Take a look at the Louis Vuitton Foundation building in Paris for a typical example. I like to imagine it's standing because of radical openness to feedback.
The public doesn't trust experts and experts don't trust the public, but we must work together well for democracy to thrive. The "how" seems to be the core question that republics try to answer. How do you get people with the whole range of experiences and skills deciding together wisely?
So I'd like you to think about the question as you go about your daily life. What else can or might help with this? How do we make getting past blind spots and hearing and engaging with new ideas more the routine and less the exception in our democratic institutions?