It is just pretentious, not scientific.
I once applied to ING (the biggest Dutch bank). They gave me a test like this and rejected me because I'm not responsible enough.
The irony is that I was complimented on my responsibility when I was an instructor at a coding school, especially in light of all the other teachers resigning because they couldn't cope with their classes.
I told ING this. I told them I'd work for free for 3 months so they could try me out. Nope, I'm too irresponsible, never mind that I completed my 4 academic study programs on time with high marks and did extracurricular stuff.
It is bullshit indeed. IMO, the real answer is: it seems that Mettamage is responsible based on his past accomplishments. Based on the questionnaire, he doesn't rank his responsibility much lower than the average person does. Time will tell whether he is responsible.
It's easier to fill out a test than to actually be hardworking for a couple of years, so the test is the easy part ;).
Although the technical interviews went very well, I quit the recruitment process when faced with this mandatory step. I chose not to take those tests. Previous experience shows that places with such a mechanistic approach in place are not worth working for. I tried to discuss my way around this kind of test, asking for reasons and proposing substitutes more relevant to the position, but no, the answer was rigid refusal in polite wrapping. They actually required two kinds of tests: one personality test and one generic ability test, of which about a third (ca. 6 questions) involved calculating percentages and summing values quickly, in a financial context. Basically adding and multiplying numbers very quickly. Irrelevant, but the results are taken into account seriously.
I've been called in to an extra meeting with my future employers because of my answers on one of those. I mean, I'm pretty analytical (I'm starting my PhD in engineering this fall), yet I didn't give myself top scores on that (and a lot of other metrics) because I know people who are a lot more analytical than me. After working at that company for a few years (and quitting), I realize I should have maxed out many of those metrics. All in all, there's so much bias involved in those questionnaires, and I don't think the people working with them realize that. At least most of them don't.
And that is without going into the skewing of these questions. Yes, I can be motivated to do a good job without answering as if this job were the most important task in my life after breathing and before eating.
And then this gets used to assign you a low self-confidence score, which leads to your trustworthiness score decreasing!
Of course, they could grow a spine and introduce quotas to achieve the same effect, but I guess that is bad for PR.
The problem is that they’re using biased metrics to pretend they’re not violating civil rights laws.
Then again, given the power dynamics involved when a company tries to work with you on these, I can easily imagine that feeling threatening. After all, my friends care about both my personal growth and our interpersonal growth -- a company mainly cares about how I can most benefit them.
Anyhow, I suppose I'm defending the class of methods (as I've experienced them), outside the context :)
It's similar to unit test code coverage: a great tool to point to areas that might use some attention, but a bad tool to determine whether a commit should be rejected.
Sure, many studies have flaws (not just in the social sciences), but what is the alternative? That we simply use theories because of their "beauty" (whatever that means)? Shall we start psychological therapies just because someone thought they sounded good, instead of measuring whether they work?
Given the title I expected to find an interesting article on what can go wrong if we overly rely on quantitative methods in the humanities. But this article doesn't distinguish between badly-applied quantitative methods and where the limits of those methods are even if they are executed well.
For example:
> We look at instances where the effect exists and posit a cause—and forget all the times the exact same cause led to no visible effect, or to an effect that was altogether different
This just sounds to me like bad quantitative modelling.
There is a huge argument to be made for qualitative research, and there is much-needed criticism of the idea that "hard" methods are more valuable than "soft" methods. I think this article manages neither.
Plus even if all the previous measurements were totally useless, that doesn't mean we should just give up and stop trying to measure soft stuff.
No, it's quite the opposite. Instead of many small underpowered experiments and studies we should be spending on fewer but well designed and run larger ones. (Even if that's naturally harder.)
I can recommend having a look at the book "How to Measure Anything" to get a sense of how to apply measurement techniques to "soft" contexts.
Yup. All psychotherapies were started just because someone thought they sounded good, and are still being practiced because they sound good. There are hundreds of different schools of psychotherapy today. There is a movement toward testing the effectiveness of therapies, but the studies tend to show that whichever therapy the researcher fancies is the best, and metastudies reveal roughly equal effectiveness across all the major therapies.
The point is, I think, that quantitative measures do not automatically lift the scientific status of a field, and can mask the lack of a sound foundation.
You've missed the entire point. A sort of conflation is the problem - not by the author, but by people in and around soft sciences and humanities throwing a bit of statistical jazz into their papers and then drawing ostensibly rigorous conclusions which influence social policy.
The reality is that by their very nature, both the soft sciences and the humanities (there is a lot of overlap) cannot be held to the same rigor as, say, mathematics, physics, or chemistry. These fields are pure theory (like gender studies), non-experimental (like psychology), and fundamentally unfalsifiable in the majority of cases... but laymen, and apparently government officials, either don't understand this or pretend they don't. Either way, shitty policy and legislation get passed and innocent people (society) are frequently worse off.
But you are right about what happens next: you top it off with academically uneducated (or simply unaware of scientific rigour) politicians like Trump (he's just an obvious example, far from being the only one) making calls on different social topics.
And to this approach I say, ignorant and cowardly.
Sure, our intelligence allows us to skip a lot of those processes just like it allowed us to escape our gravity well and explore our solar system, but the jump from CPU to brain is like the jump from moon landing to intergalactic travel. The discrete nature of digital electronics alone prevents them from matching neurons because of sampling, let alone their lack of architectural (i.e. neural) plasticity. It's like trying to weld with a q-tip.
The hubris of computer scientists is now boring, and the journalism around it tiresome.
* A single life can produce multiple iterations of a technology.
* Current technology speeds up the development of the next iteration.
If learning, math or otherwise, doesn't destroy your ego, you can do better. There is no shame in vocalizing uncertainty or a feeling of not understanding. Look at Stephen Hawking: he had the courage to admit he was wrong and was one of his own best critics.
Ego has no place in progress - scientific or otherwise.
Even a lot of engineering deals with similar types of uncertainty. We build huge structures out of metal, concrete, and wood on top of soils and other geological materials, often with the assumption that they are all homogeneous with constant material properties throughout the structure (they aren't). However, we can do this because the variations in properties tend to average out as the material sample gets larger. If we try to make the same assumption at very small scales, we find that there is a lot more uncertainty and our predictions aren't going to be as accurate. We see this in physics and many other scientific fields as well.
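A toy simulation of that averaging effect (made-up units and numbers, not a real material model): the scatter of the average strength shrinks roughly as 1/sqrt(n) as the sample gets larger.

    import random

    random.seed(0)

    def sample_mean_strength(n_points, mu=100.0, sigma=15.0):
        # Average of n_points locally varying strength measurements.
        return sum(random.gauss(mu, sigma) for _ in range(n_points)) / n_points

    for n in (1, 10, 100, 1000):
        means = [sample_mean_strength(n) for _ in range(500)]
        avg = sum(means) / len(means)
        spread = (sum((m - avg) ** 2 for m in means) / len(means)) ** 0.5
        print(f"sample size {n:>4}: mean ~ {avg:6.1f}, scatter of the mean ~ {spread:5.2f}")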
Here's a concrete example of a paper [1] that just came out, attempting to influence COVID-19 policy, from a very "respectable" set of academics at Yale -- that is based on a flat-out fabricated economic model:
> We focus primarily on the moderate scenario. That is, our baseline assumption is that diminishing returns play a larger role than accelerating returns (so that α ≤ 1) but not so large that they lead to α < 0. We stress that U depends both on the variation in economic value attached to different activities and on the model governing the disease transmission
Translation: we made some equations that makes the BAD thing BAD and the GOOD thing GOOD.
[1] https://www.medrxiv.org/content/10.1101/2020.05.19.20107045v...
I think the issue here is very similar to the one in predicting the weather. The uncertainty of our models increases so quickly that we can only predict a relatively short time ahead (in the case of weather, about 1 week [0]). We can see patterns in human behavior, but the uncertainty just grows too quickly for long-term predictions. Small things that are hard to account for can make a big difference in where even large groups of people end up.
[0] https://www.yourweather.co.uk/news/science/long-range-weathe...
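A minimal sketch of why the horizon is so short, using the Lorenz equations (the classic toy "weather" system, not an actual forecast model): two nearly identical initial states diverge exponentially until the forecast is worthless.

    # Forward-Euler integration of the Lorenz system for two initial
    # conditions that differ by one part in a billion.
    def lorenz_step(x, y, z, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
        return (x + dt * s * (y - x),
                y + dt * (x * (r - z) - y),
                z + dt * (x * y - b * z))

    state_a = (1.0, 1.0, 1.0)
    state_b = (1.0 + 1e-9, 1.0, 1.0)

    for step in range(1, 3001):
        state_a = lorenz_step(*state_a)
        state_b = lorenz_step(*state_b)
        if step % 500 == 0:
            gap = sum((p - q) ** 2 for p, q in zip(state_a, state_b)) ** 0.5
            print(f"t = {step * 0.01:5.1f}: separation ~ {gap:.2e}")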
E.g., a free market puts tremendous downward pressure on profit margins. Identifying that means the powers that be fight against free markets by hook and by crook, but it doesn't change the fact that if there is a free market, it will find an equilibrium where people are indifferent to starting a new business.
Fwiw, if you're going to call out communism, then you also have to include representative democracies.
Given that the study goes back as far as the creation of the Iliad, I have serious trouble believing that we have high-fidelity data about the social graph of individuals at that point in history.
Pretty much all we have is second-hand accounts to begin with which may themselves be as unreliable as fiction. So on that particular case, I fully agree with the author. That's not actually scientific.
Oxford economist Kate Raworth has made the exact same argument about her own discipline and the allure of the 'hardness' of maths and physics: the way early 20th-century economics increasingly turned to Newtonian-style mechanistic descriptions of economic processes, reductionist and absurd ideas about 'human nature', and the extraction of universal laws from historical and accidental correlations.
I do recommend reading the first half of her Doughnut Economics where she makes this case at length, from someone inside the discipline.
Consider the measures prefixed with 'real', for example, 'real wage'. The concept of 'real wage' isn't meaningless, since it captures the relation between wage and purchasing power when inflation is involved. But what about cases where inflation is not involved, or cases where we need to consider interactions between some other factors and inflation? In those cases, the concept of 'real wage' is often an impediment and misleading; a ratio indicating purchasing power would surely be a better choice.
Consider the concept of 'equilibrium'. I can hardly see any empirical foundation for 'equilibrium' when it's invoked by empirical economics research. There are stationary periods of prices. But it's a different story to interpret such stationarity as a state of equilibrium with a mysterious process forcing prices to always gravitate to this state. This interpretation is without empirical foundation, and yet its reliability is often assumed a priori in the research.
If you are not persuaded, it's okay. Regardless, I don't believe hypothesis formation is irrelevant.
EDIT: On second thought, my case against 'real wage' above was missing the key. The key is that purchasing power is what matters, and the purchasing power (most of the time local) of stock variables (e.g. savings) is what matters. To adjust flow variables by inflation is often misleading.
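A toy calculation (all numbers hypothetical) contrasting the standard inflation-adjusted 'real wage' with the direct purchasing-power ratio suggested above:

    nominal_wage = 3000.0              # per month, hypothetical
    cpi_base, cpi_now = 100.0, 110.0   # 10% cumulative inflation
    basket_price = 1500.0              # hypothetical cost of a local consumption basket

    real_wage = nominal_wage * cpi_base / cpi_now   # inflation-adjusted flow
    purchasing_power = nominal_wage / basket_price  # baskets affordable per month

    print(f"real wage: {real_wage:.2f}, purchasing power: {purchasing_power:.2f} baskets/month")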
The main problem with the role of economics in society is that economics, and often pseudo-economics, are used to influence politics in a biased way.
Economists need to start strict self-policing and disavowing all the dishonest actors in political think tanks. It's difficult to do in practice because some of the dishonest actors get a lot of funding from political interests, but if economists want their discipline to earn respect as a science, they do need to clean up their act in this regard.
The untold reason why the humanities now self-describe as 'social science' goes back to the Thatcher years in the UK. She was the first to link university funding (and tenure) to research output. Research output was in turn defined by ranked publication and patents. This worked OK-ish in the sciences, but not so well in the arts. The humanities were obliged to ape the sciences in the way they spoke, the way they defined their outcomes, and the functions they served. The scientific method is simply not a good fit for the arts.
- The scientific method (and precision of mathematics) applies to all sciences, social and natural.
- Knowledge can be proved only by observation through empirical means and deductive reasoning.
First of all, the introduction bashes a paper applying social network techniques to fiction. If the author had bothered to look it up, they would have realized the authors are an applied mathematician and a theoretical physicist. Not humanities scholars.
Then the author goes on to criticize political science and psychology as the poster children for the humanities, except these are social sciences, and only "humanities" under a very broad umbrella term. "Humanities" more often refers to disciplines such as history, art, and literature. So a complete mix-up of fields.
Third, the assumption that the social sciences have a "reliance, insistence, even, on increasingly fancy statistics and data sets to prove any given point" is simply flat-out wrong. For example, political science does rely on "big-N" studies which try to find or refute correlations between democracy and various other country indicators. But political science also relies heavily on "comparative politics", which is much closer to literature or history in its classic "compare and contrast" treatment of two countries. Similarly, psychology takes many different approaches in published papers and books, some quantitative and others more qualitative.
I could go on and on. But this article is completely ridiculous, arguing against a straw man that simply doesn't exist. It's like the author isn't even familiar with academia. Bizarre.
As for the home departments of the authors, that doesn't change the fact that their work was humanities research. Unless you're saying humanities departments shouldn't be blamed for arguably bad humanities research produced by "outsiders" and published in physics journals. It doesn't appear to me that the publication contained any new applied math or physics.
Correlations are not science, by the way; no matter how repeatable they are, they're just statistics. You need to demonstrate causation to be scientific. Big analyses of countries generally can't do controlled experiments, so their findings always depend on what they chose to include (and not include) in their models as the supposed causal mechanism.
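A minimal simulation of the point (made-up data): a hidden common cause produces a strong, perfectly repeatable correlation between two variables that have no causal link to each other.

    import random

    random.seed(1)
    z = [random.gauss(0, 1) for _ in range(10_000)]  # hidden confounder
    x = [zi + random.gauss(0, 0.5) for zi in z]      # driven by z
    y = [zi + random.gauss(0, 0.5) for zi in z]      # also driven by z, not by x

    def corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / n
        sa = (sum((ai - ma) ** 2 for ai in a) / n) ** 0.5
        sb = (sum((bi - mb) ** 2 for bi in b) / n) ** 0.5
        return cov / (sa * sb)

    print(f"corr(x, y) ~ {corr(x, y):.2f}  # strong, stable, and entirely non-causal")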
In all seriousness, there are some valid criticisms of the social sciences, but the article reads like a pop version of them.
You can learn a huge amount from ad hoc observational models of behaviour that aren't based on equations or statistics. You can even use them to make accurate predictions.
I used to know a manager who had an outstanding intuitive understanding of organisational and personal psychology. He probably couldn't have formalised his knowledge, but he had a real talent for getting shit done with individuals and groups, and for knowing exactly the right moment to apply leverage in a negotiation - all without bullying, shouting, or underhanded manipulation.
He simply knew exactly what people would do in one set of circumstances, and how to change their preferences by presenting them with alternative circumstances.
This isn't "science" in a formal sense, but it's certainly a very real form of knowledge. It seems to me STEM types tend not to understand how valuable and effective it can be, and how important it is to have some of this skill if you want to change what people do.
More frustrating is that there is some valid criticism of "digital humanities" for being the cool new discipline that while capable of some great stuff is guilty of all too often neglecting the "humanities" part of the term in favor of just throwing up some graphs.
Nonsense. Whether a piece of research belongs to the humanities doesn't have to do with the credentials of the researchers but with the subject matter. If we genuinely want to say that scientific and mathematical methods are fruitful to explore questions in the humanities (contra the main claim of the article), we have to at least allow this. Or would you say that the paper on the network structures in fictions is a piece of theoretical physics?
See, e.g., a criticism of political science in the same vein from a political scientist: https://www.chronicle.com/article/How-Political-Science-Beca...
I personally, for example, find that political science went overboard in rational choice theory (borrowed from economics) over the past several decades, which hasn't turned out to be particularly fruitful. (And there are 20+ subfields of political science as well, rational choice being just one.)
But that's simply arguing over the relative usefulness of specific methods, like to what degree TDD should be used in software, or have we gone overboard with microservices.
The original article's absurdly broad critique of quantitative methods somehow taking over generally remains bizarre and completely uninformed.
The social sciences are humanities. The term 'social science' was invented fairly recently by sneaky academics to leech off the credibility of actual science.
> It's like the author isn't even familiar with academia.
Are you? This issue with "social science" has been going on for a few decades now.
Richard Feynman called social science a pseudoscience a few decades ago.
https://www.youtube.com/watch?v=tWr39Q9vBgo
The name 'social science' was invented for the same reason 'creation science' was invented: real science had such a good reputation and they didn't, so they decided to manufacture some credibility by attaching "science" to their fields.
> The name social science was invented by hacks just like creation science was invented by hacks because real science ( biology, physics, chemistry, etc ) has such a good reputation and they didn't so they decided to manufacture some credibility by attaching "science" to their fields.
Do you have anything to back this up? According to Wikipedia's rather extensive article on the history of the social sciences, the term first appeared in 1824, and the discipline was pretty well established by the turn of the 20th century.
https://en.wikipedia.org/wiki/History_of_the_social_sciences
The sort of scoundrels naturally attracted to power will always find cynical use for talent of the kind possessed by Oppenheimer. However, every aspect of such a relationship will be thoroughly cynical.
If you are a future STEM person, understanding this fact will save you a lot of grief. Learn it early.
I think looking for statistical patterns (e.g. in literature) is perfectly good science as long as you are cognizant that patterns merely invite more study and should not be used to reach conclusions, and also aware that patterns might disappear when you expand your data set.
Finally, as someone trained in the physical sciences, I used to look down on social scientists. I no longer do. At least they're brave enough to tackle a complex monster with the limited tools at their disposal, stumbling and even enduring ridicule from the hard sciences. We ignore the human mind, and collections thereof, because it's too complex, preferring the relative comfort of simple, predictable systems. I don't believe that's good.
As for the social and behavioral sciences, another way of approaching it is: if you have a phenomenon, is it better to try to be scientific in explaining it or not? If not, you cede that realm to the nonscientific, with all that implies. If you do approach it scientifically, how do you do that? If your explanation or theory involves some quantity of some sort, shouldn't you then attempt to specify a model of it, and test it against observations?
https://www.timesofisrael.com/duped-academic-journal-publish...
Mathematical analysis of linguistics has pointed to irrational patterns later confirmed by genetic analysis.
I don’t mind pointing out the vacuity of what often passes for scholarship. But she didn’t start with a good example.
In the hard sciences, all inputs to a proof are either verifiable theorems known to be absolutely true, conjectures/hypotheses (in which case the proof becomes a conjecture), or, seldom, axioms. In the soft sciences, on the other hand, it is common to construct models quite arbitrarily in order to try to match empirical results. If, however, we would like these models to have any indication of "absolute" truth, similar to the hard sciences, currently we can't, or don't.
To achieve this, I believe we could do an input analysis of ALL assumptions and try to quantify the aggregated certainty of the model's correctness, even before matching it with empirical data. In this way we could say, for example: we have used a model with a predicted input accuracy of 0.82 that matches our empirical results with accuracy 0.97, p < 0.05. This would then further strengthen and quantify the "standing on the shoulders of giants" principle.
Of course this is easier said than done, and I know it's a bit naïve. Currently no techniques exist to do this, as far as I know. There is also a discussion to be had about how to interpret model outputs (we now have three variables; how do we relate them? how do we calculate a model's output accuracy?) and how to calculate a subsequent model's accuracy based on different input accuracies and their inter-relations. To be useful, this would also require re-building the soft sciences from the bottom up (starting from the most easily verifiable facts), plus a whole new science of hypothesized model-accuracy calculation. A crude sketch of the aggregation step is below.
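A deliberately naive sketch of that aggregation step, treating the assumptions as independent (which they usually aren't); the assumption names and confidence values are invented:

    # Naive aggregation: if each assumption feeding a model carries its own
    # confidence, and we (unrealistically) treat them as independent, the
    # aggregate input confidence is just their product.
    assumption_confidence = {
        "measurement instrument is unbiased": 0.95,
        "sample is representative":           0.90,
        "linear functional form":             0.96,
    }

    input_accuracy = 1.0
    for name, c in assumption_confidence.items():
        input_accuracy *= c

    print(f"aggregated input confidence ~ {input_accuracy:.2f}")
    # ~0.82, the kind of 'input accuracy' figure imagined above, reported
    # alongside the usual empirical fit statistics.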
Anyway, enough hypothesizing thought experiments for the day. Any thoughts?
This is one of those instances where I might care about the case Konnikova was making if she had bothered using any quantitative methods to convince me that the humanities are awash in quantitative study while qualitative analysis has clearly gone the way of the dodo. Or that literature programs are churning out students who think network analysis is the best way to understand a text.
In the purely-qualitative realm, it just comes off as pearl-clutching over something I don't think anyone actually believes?
One of the problems is computability. When I try to build statistics on a space of human intentions, I strongly suspect that this is at least as complicated as trying to build a measurable space atop the set of all Turing machines, where I immediately run into issues of computability (for example, calculating the average run time of halting Turing machines). So merely assuming that one can meaningfully build such a statistic (just the claim that this is possible) will doom any overly formal reasoning, by the principle of explosion.
How is that not a science?
You want to make predictions about the past?
As an example, they take a network analysis that was done on the social relations of characters in fictional works. While the author finds this use dubious, I think it's the contrary. While the researchers might not fully understand the methods, they could very well have a mathematician on hand. What do we do in math if not model real problems of real people?
It might be nice for some people not to know an application of their research, but for the humanities to find novel ways to use mathematical tools is great and should be encouraged. Of course they will miss, but they will also hit. We need peer review where those methods are understood within the humanities and social sciences, in order not to draw false conclusions.
Of course, qualitative analysis isn't going the way of the dodo and the author agrees on that.
I just think the occasional misuse of mathematical models in humanities research is well worth the possible gain. Those problems should follow some rules expressible in mathematical models, right? Let's help those researchers instead of banishing them to qualitative methods.
Problem -> so what? (we build a solution) -> real business.
Now, replace "app" with "mathematical modeling," and you'll start to feel the author's gripe.
I do think the author is right to ask - what is the point? So what? What are you trying to do with those mathematical models? What problem are you solving? For instance, we have the hypothesis that the researchers of the British paper posited:
> the relative likelihood that certain stories are originally based in real-world events
Based on:
> looking at the (very complicated) mathematics of social networks
So, we have a tool - that tool is looking at the mathematics of social networks. Does high fidelity between models of social networks predict "realness?" Does a certain model of a social network described in the relationships of protagonists in a book suggest that book's events are accurate historical ones?
No, right? Then why is that step glossed over when the researchers go ahead and start modeling anyway?
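To make the glossed-over step concrete, here is a sketch (not the paper's actual pipeline; the character list and edges are invented stand-ins for scene co-occurrence) of what "looking at the mathematics of social networks" in a narrative amounts to:

    # Build a character-interaction graph and compute the structural
    # statistics that real social networks tend to exhibit.
    import networkx as nx

    interactions = [
        ("Achilles", "Patroclus"), ("Achilles", "Agamemnon"),
        ("Achilles", "Hector"), ("Hector", "Priam"),
        ("Hector", "Andromache"), ("Agamemnon", "Menelaus"),
        ("Menelaus", "Helen"), ("Helen", "Paris"), ("Paris", "Hector"),
    ]

    g = nx.Graph(interactions)
    print("clustering coefficient:", round(nx.average_clustering(g), 3))
    print("degree assortativity:  ", round(nx.degree_assortativity_coefficient(g), 3))

The researchers' move is to compare such statistics against those of real social networks; the step in question is the unargued leap from "the statistics match" to "the story is based in real-world events".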
So I see science at work, nothing to see here.
[Edit: After re-reading, I'm not sure the parent thinks that numbers should take over the humanities, so my comment may be misdirected. I'm leaving it anyway, because I think the point is valid, even if it doesn't address the parent's point.]
There's something important to be said here about the duality between logic and math, algebra and statistics, classical AI and modern DL, philosophy and science, rationalism and empiricism.
Examples include: 3D modeling of the historic Broadway district in Los Angeles, natural language processing of ancient Roman texts, virtual reality's impact on human cognition, etc.
Maybe it is unfair to judge the whole field by one silly paper; in all fairness, people write about non-existent geometries in physics too.
Polls are really useful and accurate, and are one of my favorite examples of stuff that came out of social science research.
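A back-of-the-envelope sketch of why a roughly 1,000-person poll works at all: the 95% margin of error for an estimated proportion shrinks like 1/sqrt(n), independent of population size.

    import math

    for n in (100, 1_000, 10_000):
        p = 0.5                                  # worst-case proportion
        moe = 1.96 * math.sqrt(p * (1 - p) / n)  # normal approximation
        print(f"n = {n:>6}: 95% margin of error ~ +/- {moe * 100:.1f} points")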
Experience is all we have.
At the bottom of all of these bouts is the "demarcation problem": what is a science, and what isn't?
I prefer Paul Feyerabend's view: if it is useful to somebody somewhere, it is a science.
That will not tell you what the book is about, though.
How to solve the replication crisis in the social sciences: throw out bogus research and get rid of quacks.
Caricatured dumb humanities major: "That's what I love about math, there's no one right answer!"
More importantly, though, this article conflates the humanities with the social sciences pretty badly. This is quite insulting to sociology, and even more so to economics and anthropology. Physical anthro is pretty serious science.
There are limitations to social sciences but those are not the same limitations of literary criticism.
Similar problems have been demonstrated in a host of fields, mostly the biomedical sciences. To take one prominent example, HN has been plastered with articles about COVID studies of dubious quality.
You can perform perfectly good science outside of the STEM factory. They don't have a monopoly.
But given the scientific rigor I've seen in the arts, I would appreciate it if they maintained a clear and separate distinction from STEM.
A "liberal education" originally meant that those who pursued it were free. They weren't pursuing an education of mere techniques, which was for slaves. Even today, there is a place for learning things that don't have a direct economic impact, as part of becoming an educated person.
That's from the more idealistic side of me. Now here comes the cynicism. Why? Because people still want to major in them, so that they can say they have a college degree without having to major in something rigorous. And those people pay tuition. And the colleges like getting paid.
Sure, but we shouldn't assume that college is the appropriate place to do so or that the way colleges teach the humanities is effective. If your supermarket forced you to do aerobic exercises before entering, I'm not sure a good justification for it would be "well, aerobic exercise is good for you and not everything is about buying food."
Because that's why colleges exist...? Historically, universities were not the workforce mills they are today. You did not go to a university to help you find work. Stuff like business/management, engineering, medicine, etc. really shouldn't be part of universities.
A classic full university was supposed to cover the four historically major fields of study: theology, medicine, law, and philosophy (which includes all the modern subtypes of PhDs, e.g. physics, math, biology, etc.). Three of these fields were pretty much designed to prepare students for the specific needs of knowledge-intensive work (clerics, doctors, and lawyers), and only the philosophy studies were less practical.
(I'm sarcastic of course. Sciences and humanities are both cool and good and contribute to a better understanding of the world around us, and one wouldn't be much without the other.)
For example, to me it seems that American politicians have way less empathy than European politicians, even though European politicians have studied way less humanities than American ones. Hence my belief that studying the humanities helps you deconstruct the human experience and see us as robots, hurting your empathy.
I do, however, believe that studying the humanities makes it easier to give the answers the tester wants on empathy tests, since you now understand those tests better.