there really is no 'right' formulation and no 'right' answer. These are problems that cannot be engineered.
On the contrary, engineering is 100% about addressing these kinds of problems (not all engineering problems match all 10 criteria, but most match at least some of them). Those people who think the engineering approach to problems is "define it, decompose and scope it, solve it, implement it" - or, as we call it, the Waterfall Method - have mistaken homework problems for engineering problems. They really have no idea what engineering is.
Then there's this:
our biggest challenges are ... issues of communication, coordination, and cooperation. These are, for the most part, well-studied problems that are not wicked.
That's the most ridiculous thing I've ever heard.
Incidentally, although he inexplicably doesn't link to it, Rittel and Webber's original paper, "Dilemmas in a General Theory of Planning," is quite readable and well worth the time: http://www.uctc.net/mwebber/Rittel+Webber+Dilemmas+General_T...
A staggering amount of knowledge is being produced and most of it sits gathering dust. We've already reached the point where many new inventions/discoveries turn out to be rediscoveries of something already conceived of, or even thoroughly figured out, in the sixties. This happens all across the sciences, because there is too much relevant information for a single person, or even a single university department, to know of. We need tools to help with that.
My God, as I'm writing this I realize this is the most powerfully insightful thing I've read this year!
I'm not sure the output of scientific positivism can be well-indexed to the point where anyone can find all the relevant pre-existing work they could build upon: since there is no single point where something new is discovered, but rather a lot of little facts and confirmed/refuted hypotheses that snowball until a meta-analysis of several studies can actually say something for certain, there's no one thing for a scientific expert system/communications tool to return to you. I expect that, to have such a thing at scale, we'd need an actual human-level AI that "read" and understood the significance of every study in every field, and could see all the cross-correlations.
Before the advent of scientific positivism, though, we had a sort of ritualized science, where we would get large groups of people all studying one thing or another, out looking for proof or disproof of whatever particular thing they currently believed, leaping around wildly in hypothesis-space rather than just edging forward and taking whatever facts came along. There were scientific belief "movements," in the same way that there are artistic "movements." Because it was ritualized, science could be made an entertaining topic of conversation at the time, similar to celebrity gossip—everyone would have their own opinion on whether the currently-researched belief held true or not, and would debate it constantly, increasing public awareness of the subject; a "named hero" scientist would later come along with a sweeping experiment and prove one or the other group right, and would be heralded by that group and have some unit of measure, chemical element, or heavenly body named after them.
Ritualized science didn't necessarily advance human knowledge "as a whole" very quickly—the iterative assembly-line process we have now seems to work quite well for that—but it did seem to get each new scientific fact thoroughly embedded in the public consciousness, because of what is basically good social game design. Perhaps we need some more of that, some hybrid model where scientists can still be "heroes" with "rivals" in the public eye, entertaining and informing in conjunction and raised up with social status, rather than simply workers for government grants raised up only with citations in journals?
As a screwy tangent: perhaps this could even be a facade on top of current science, a sort of staggered release of scientific knowledge in high-assurance bursts, with the rest of the "development" going on in some scientific "closed alpha" where the public wouldn't be constantly bombarded with overzealous summations every time the ratchet was turned (basically a justification for Yudkowsky's "Bayesian Conspiracy"). When a scientific hypothesis was confirmed as theory, it would be handed off on stone tablets to a researcher well-trained in rhetoric with a nice-sounding last name, and they would become, to the public, "the one who discovered the theory of X." Only the conspiracy would know that it was the work of thousands.
I think what he's referring to may have already been proposed by Robin Hanson in the form of "Futarchy," a system of government that uses prediction markets to enact laws:
That said, there are other frameworks built to improve collaborative decision-making. I also like the Persian method: If an idea comes up while the group is drinking together, discuss it sober as well, and vice-versa. Only implement ideas that meet approval both times.
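The futarchy idea mentioned above reduces to a simple decision rule: run two conditional prediction markets estimating expected welfare, one assuming the policy is adopted and one assuming it isn't, and adopt only if the market prices the adoption branch meaningfully higher. A minimal sketch of that rule, with entirely hypothetical prices and a noise margin I've added for illustration (this is my simplification, not Hanson's full mechanism design):

```python
def futarchy_decision(price_if_adopted, price_if_rejected, margin=0.02):
    """Adopt the policy only if the conditional market expects welfare
    to be higher under adoption by more than a small noise margin."""
    return price_if_adopted > price_if_rejected + margin

# Hypothetical market prices for "national welfare metric exceeds target":
print(futarchy_decision(0.61, 0.55))  # True: adopt, clear expected gain
print(futarchy_decision(0.56, 0.55))  # False: difference within the margin
```

The margin parameter is a guard against thin-market noise; in a real deployment the hard part is defining the welfare metric being traded on, which is itself arguably a wicked problem.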
I'm hoping he threw that term out as a teaser for part 2.
The article need not be taken as a criticism of engineering or scientific reductionism. These approaches to problem solving are correct for certain phases of tackling a problem (like implementation).
The problem we collectively have with wicked problems is that they are vastly interconnected, with so many small, rapidly changing moving parts that we collectively cannot keep up. Even if the climate itself moves slowly, the parts that affect it do not, and we are not fast enough or smart enough to keep up.
It is our self-righteous stance against nature that helps us survive, but admitting that a problem is bigger than us isn't ... natural.
My belief/hope is that computers will increasingly tackle wicked problems.
This, I believe, is a good old engineering problem. The hard (maybe not wicked) problem is getting people to use it. Ah yes, and if you really want to help solve the wicked problems, it can't be just any people, but rather those who make or influence important decisions.
We don't know. At some point 450ppm was considered acceptable, but at this point the maximum acceptable level may well be 350ppm (before the industrial revolution it was around 275ppm).
Right now we're at around 390ppm, so whatever it ends up being we have to reduce emissions from their current levels on at least a per-capita basis if not overall.
This sounds like a lot of meta-level confusion.