I agree completely. This is something we should be cognizant of.
This hits pretty close to home.
I'm so fed up with getting paid to potentially make founders rich. Or to be a small cog in a gigantic machine on a slow decline. I'm also unemployable because I can't buy into the corporate BS anymore. And where I am, there don't seem to be design/dev jobs that actually want to make a difference. It's an economy problem.
The startup thing seems to be the best way we go about solving problems in the world today. But if you happen to _not_ be at the right place at the right time, meeting the right people, poof, it's gone. I can't imagine that an advanced species would operate this way. We should be focused on solving problems, instead of being focused on escaping the rat race, to then be able to solve problems.
I'm glad I am not alone in seeking purpose. There comes a point where you're technically advanced, you have itches to fix things, and all you see is the broken economy of consumerism and "let's give kids video clips and smileys, derp".
Then what do you mean? You described exactly what some of the largest, most successful FOSS projects (Firefox, KDE, Gnome, Libre Office, FreeBSD) are already doing.
> Let's group together smart people wanting to make a difference and have a hit list of things we (people) actually need.
Well, the FSF maintains a list of "high priority Free Software projects" that need help, but it's strongly colored by the FSF's politics: http://www.fsf.org/campaigns/priority-projects/
And we've reached that point when the creatures (the firms and technologies built by those people) of that culture are diverging from ideals of the culture.
But the above comment is inherently empty - any successful system will eventually expand until it reaches a barrier of complexity that it cannot overcome on its own.
Figuring out what to do is the challenge.
The things that worked were coders having free time to spend on interesting projects. But I suspect that we've since better understood the value of coder time, and the major firms are now paying the correct amount to keep coders busy.
The market BS is a good thing for coders in the short and medium term. People who understand finance and strategy are willing to pay what it takes today, to own a chance at being richer tomorrow.
If you have a neat hobby, communities will help you get better at it. Maybe if it's really good, you can convert that into a product/firm and possibly a good exit. If that happens you won't have to worry about it ever, and you'll be that thing which is respected among your peers - a successful serial entrepreneur. You would have done the hard thing (product creation, team management, finance, successful exit).
In a group of people who respect ability and excellence, it's hard not to think of the guy who did the harder job as meritorious.
In short: I don't think there's a market solution for a new market normal.
It's easier to figure out you are cut from a different cloth, recognize the market dynamics for what they are, and make time to build whatever it is you want to build.
Eventually a lot of other coders are going to come to similar realizations (provided the cultural bubble online isn't too distortive).
> "We should be focused on solving problems, instead of being focused on escaping the rat race, to then be able to solve problems."
Sometimes one person's problem is another person's solution, and vice versa; this is where so much of politics comes from. A lot of people are invested in making sure certain problems stay unsolved, or even unacknowledged.
This resonates. How would it work in practice though? Many people want to contribute, but they are stuck as they have to provide for their family.
Also, how do we know what to work on? Someone linked to FSF projects - are there other lists, places we can go to, to find actual tasks/projects to work on? Unless there are incentives (not money/fame, but seeing our efforts put to good use/getting feedback etc) people would lose interest. It would be awesome if we can curate such a list - pharma, food/agriculture, mental health etc. And break down this list into smaller, manageable tasks. I guess many people can find 5-10 hours a week to contribute.
...but in the meantime, here's an obligatory and shameless plug for donating to the Internet Archive[1] (tax-deductible in the US), or better yet making a recurring monthly donation so they can more accurately forecast revenue for the year, or better still getting your employer to make a nice big donation to this crucial bit of Internet memorybanks.
And as for Archive Team, we're always looking for a few good geeks.[2] Run an instance of the Warrior on spare cloud servers, or help patch and ship code at GitHub.[3]
[1] http://archive.org/donate/
For the most part, archive.org is not rushing in to save stuff that's about to be deleted.
Instead they are crawling the web 24/7, patiently maintaining a historical record.
Check out http://oldweb.today
It is amazing
Parent mentioned both the Internet Archive and Archive Team. You're right about the Internet Archive, but "rushing in to save stuff that's about to be deleted" is a pretty apt description of most of Archive Team's activity.
http://archiveteam.org/index.php?title=ArchiveTeam_Warrior
It would be deeply unethical for me to point out that you could also run the Warrior on free server space that your company might not notice, kind of like the karmic inverse of a bitcoin miner. Deeply unethical. So I won't mention it.
I've written about this before, and even right now I'm not sure where I stand exactly, except that tweaking the algorithms to compensate for bias is definitely not the right answer: if you look at the mirror and don't like what you see, you don't draw on top of the mirror to accentuate the result! You go on a diet!
I liked the idea of data gardening, but the thought of going-to-communities is daunting. I get tired even thinking about it.
Regarding living beyond walled gardens:
> Publish your texts as text. Let the images be images. Put them behind URLs and then commit to keeping them there. A URL should be a promise.
But people already do that! The question is why people do otherwise. I personally do not understand why people, say, post long blog posts on Facebook, but I do understand it for services like Medium.
For example, I'm extremely tempted to write on Medium because it provides the network effects of readers clicking on tags to read next. So the question is how do we democratize that?
Commenter wtracy has already linked to the FSF's list of High Priority Free Software Projects... From there, look into what they have to say about free wifi (and in particular, but not limited to the OrangeMesh package):
http://www.fsf.org/campaigns/priority-projects/free-software...
If a convincing case could be made that the benefits to National Security outweigh the costs to the copyright cartels, I'd be willing to bet that young secondary-schoolers would have a blast with a decently designed curriculum that includes a working student-to-student mesh-network as one of its goals.
I mean, right now one can pretty freely go write up a blog using self-hosted wordpress, octopress, pelican, hugo or whatever. But choosing that over Medium can sometimes mean a lot more work to put in. But if we can democratize the ease-of-use and the good bits of Medium/Facebook/Twitter... Maciej's end statement about "using open standards, write text in text, images in images" would have been achieved.
The problem is that corporations now create a significantly more compelling version (in most criteria - UX, UI, etc) of the Free and Open versions out there.
Also, it's funny how the net changes, how unthinkable it is to have a social network that doesn't slice up people's data and use it to advertise to them now compared to how anti-advertising LiveJournal was back then. Not convinced it's a change for the better.
Don't be afraid to pivot.
Also I don't necessarily understand Ceglowski's stance on why we shouldn't use deep learning and should avoid surveillance on the web. I don't take issue with becoming a datapoint in Facebook's web of people, because nothing bad has happened or can happen from me giving Facebook my data. When most people speak out about the data that's being collected about Facebook and Google users, they say they're "worried about what could happen" but then never list any bad things that they're actually afraid of. The speaker falls into this trap too. Ceglowski says:
>I worry about legitimizing a culture of universal surveillance.
But then never explains what bad could happen from legitimizing that culture. Maybe I'm completely missing the point of the talk? Please explain what I'm missing if I'm actually missing something.
With regard to the dangers of surveillance, I've made a sustained argument about this in other talks. It boils down to the data being collected having great power to harm people if it is ever put to malicious use, and a lifespan that exceeds that of institutions we know how to run. My beef is not with the surveillance alone, but with the combination of surveillance and permanent storage.
Regarding data falling into the wrong hands, I take issue with this argument because it's not a problem unique to personal data collection. Any data could be hacked - bank information, addresses, whatever. But that doesn't mean we don't use the internet for banking and so on. It means we try to make systems that are difficult to hack. It seems like you want data collection not to happen on websites like Facebook and Google, when hacking isn't a problem unique to those websites.
The author's concerns over machine learning are well-founded. The best option I've been able to identify to ameliorate some of the concerns is focusing on the population that will be suppressed. Once the model returns the desired recall / precision, drawing samples from the excluded population with a rigorous acceptance standard can help validate whether you've simply built a model around your biases. Couple that with allowing an opponent to validate a randomly-selected sample and you've cleared up a lot of the uncertainty in the model.
It's not perfection, but perfection is a very difficult standard.
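The audit procedure described above can be sketched in a few lines. This is a minimal illustration, not anyone's production system: the function name, threshold, and holdout rate are all hypothetical, and `score` stands in for whatever model is being validated. The idea is just to let a small random slice of the model-rejected population through, so its true outcomes can later be compared against the accepted group.

```python
import random

random.seed(0)  # deterministic for the example

def audit_rejected(score, threshold, population, holdout_rate=0.05):
    """Partition cases by model score, but accept a small random sample
    of would-be rejects for auditing. Comparing the audit sample's real
    outcomes against the accepted group's reveals whether the model is
    just encoding its training biases."""
    accepted, rejected, audit_sample = [], [], []
    for case in population:
        if score(case) >= threshold:
            accepted.append(case)
        elif random.random() < holdout_rate:
            audit_sample.append(case)  # overridden: let through for auditing
        else:
            rejected.append(case)
    return accepted, rejected, audit_sample

# Toy usage: scores are the cases themselves, cutoff at 0.5.
accepted, rejected, audit = audit_rejected(
    lambda c: c, 0.5, [i / 100 for i in range(100)])
```

Handing the `audit` sample (or the sampling procedure itself) to an opposing reviewer is what closes the loop the parent comment describes.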
However, if there's any winner-take-all built into the system, there's a strong incentive to not even acknowledge dissent.
And that scale is exactly the state of the internet. There is so much data available to study and understand, that we absolutely need better tools, like machine learning or whatever we want to call it, to help us keep up. Shit's moving faster than our human perception can handle, especially for those who didn't grow up with the internet.
Yes, the data analytics tools we have right now are premature— like fast food to our productized minds— but they will improve rapidly, as our taste for quality improves.
But sure demonizing the things you don't like is one step on the path to learning what's truly valuable.
My go-to example is machine-learning-directed police enforcement, often pitched as a counter to racially biased policing. It works in any city with a historical problem of racial bias in policework. We give the algorithm all the data we have from the last 60 years of policing this city. Patrol schedules, incident records, arrest records... everything. The computer magically tells us where we should focus our efforts. To the police chief who paid for the system, and especially to the media reporting on it, it looks like a computer is making the decisions without bias. Hooray!
Of course, anyone who's ever worked with machine learning can spot the problem. The data set was generated by racially biased policing. That bias will be reflected in all the records: more arrests for race X, more patrols scheduled through their neighborhoods, more incident reports from those areas. So when the algorithm says "increase patrols in this neighborhood," or "look for people who fit this profile," it is simply synthesizing the patterns from 60 years of racial bias. So the police in LA have a real problem: their "unbiased" computer program is telling them that their criminals look like black people, and they should increase patrols in Compton. So they do, and the new data only takes the data set further from "un-biased" reality. In fact, the police "black box" is only pointing out a history of racially biased policing. We're relabeling it as recommendations for future behavior.
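The feedback loop above is easy to demonstrate with a toy simulation. This is a deliberately oversimplified model (the function, rates, and two-neighborhood setup are all invented for illustration): both neighborhoods have the same true crime rate, patrols are allocated in proportion to historical arrests, and arrests only happen where patrols are looking.

```python
# Toy feedback loop: two neighborhoods with identical true crime rates.
# Patrols follow historical arrest counts; arrests follow patrols.

def simulate(initial_arrests, true_rate=0.1, patrols_per_round=100, rounds=20):
    arrests = list(initial_arrests)  # historical arrest counts per neighborhood
    for _ in range(rounds):
        total = sum(arrests)
        for i, past in enumerate(arrests):
            patrols = patrols_per_round * past / total
            # Observed "crime" tracks patrol presence, not actual crime.
            arrests[i] += patrols * true_rate
    return arrests

final = simulate([60.0, 40.0])
```

Even though the neighborhoods are identical, a small initial bias (60 vs. 40 arrests) never washes out: the first neighborhood keeps receiving 60% of the patrols forever, because the "data" keeps confirming the allocation that produced it.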
https://www.chrisstucchio.com/blog/2016/delayed_reactions.ht...
You might also be interested to know that a variety of studies have shown that policing is not particularly biased. Arrest statistics and the like correspond pretty well with NCVS and similar crime victim surveys.
http://slatestarcodex.com/2014/11/25/race-and-justice-much-m...
Scale differences can and often do lead to qualitative differences.
Individual (or aggregate) human researchers are not hooked up in huge services to make inferences and deductions automatically about billions of people.
Besides, those machine learning tools, on top of the huge data sets, are programmed in their general framework by human researchers, and are given weights, constraints, and fine-tuning by them, so they have both kinds of biases.
>But sure demonizing the things you don't like is one step on the path to learning what's truly valuable.
So, kind of like disparaging via a straw-man a speech that offers detailed argumentation?
Yes they (we) are. It's the same data set. TV, movies, papers, internet videos et al. is all the same biased, labeled data that is being fed (watched, listened to etc...) to machines. You automatically make inferences and deduce things about people based on labeling and training of your brain. You're constantly fine tuning by getting new weights about things through interactions with others and media.
See this (somewhat technical) article where I go into explicit (simulations in numpy) levels of detail:
https://www.chrisstucchio.com/blog/2016/alien_intelligences_...
The best analogy I've come up with for the non-technical is that algorithms are like humans trying to draw inferences on octopus society. Some octopi might have bias against some other octopi, but it's the height of octopusthromorphism to expect a human to reproduce that bias.
And it's not surprising that data itself contains some biases from the humans creating it. Suppose police are asking machine learning where more crime is committed - there will be a feedback loop. Where are they currently making more arrests? If they spend more time there, the bias will be exaggerated.
The OP correctly gauges how cautious we should be. Your post, I'm afraid, is misleading at best.
[1] https://www.google.com/amp/s/www.technologyreview.com/s/6017...
Correcting for a known bias requires at least two things:

1. Enough knowledge about the structure of the bias to be able to devise a model for it.
2. Some measurements from which to fit the model, with errors that are uncorrelated with the errors in your original data.
These things are not always easy to obtain, even in relatively mundane settings. It is also a distinctly non-automatic procedure - it requires someone to decide that a bias exists, to model it, obtain the relevant data, and fit the bias correction model, all before they can begin to obtain unbiased (or probably just less-biased) measurements.
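The two requirements above can be made concrete with a tiny example. This is a sketch under invented assumptions: the bias is taken to be linear in the measured value (`biased = (1 + a) * true + c`), and the calibration pairs are assumed to have reference errors independent of the original measurement errors. The function names are hypothetical.

```python
# Fit a linear bias model from calibration pairs, then invert it.

def fit_linear_bias(reference, biased):
    """Ordinary least-squares fit of biased = slope * reference + intercept."""
    n = len(reference)
    mx = sum(reference) / n
    my = sum(biased) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(reference, biased))
    var = sum((x - mx) ** 2 for x in reference)
    slope = cov / var
    intercept = my - slope * mx
    return slope, intercept

def debias(measurement, slope, intercept):
    """Apply the inverse of the fitted bias model to a new measurement."""
    return (measurement - intercept) / slope

# Example: an instrument that reads 10% high plus a fixed offset of 2.
ref = [1.0, 2.0, 3.0, 4.0]
obs = [1.1 * x + 2.0 for x in ref]
slope, intercept = fit_linear_bias(ref, obs)
corrected = debias(1.1 * 5.0 + 2.0, slope, intercept)  # true value is 5.0
```

Note how much had to be assumed before a single line could be written: the bias's functional form, and a source of trusted reference measurements. That is exactly the "distinctly non-automatic procedure" the parent describes.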
You are right that machine learning gains bias from the humans that created it, but unless they managed to transfer 100% of their biases to it, it will always have less bias.
The problem, I think, is one of self-selection.
Consider two hypothetical social networking websites - Friendface and FaceSpace. Friendface's userbase is mostly white, while FaceSpace caters mostly to urban, black populations. And it would make sense too - you would only join a social network if your friends are on it. If you're white, chances are the majority of your friends are also white. And vice versa.
So Friendface is a lot more active on their ML front. The problem is when Friendface releases their data - because they're more active on the ML front, and ML scientists love to not have to collect their data, what happens is more and more models are trained on the Friendface data and more and more models are being optimized based on Friendface data. Apparent "structural" racism happens. Tumblrinas all pounce on it as if it were the biggest oppressive struggle of their lives.
A very cute thing to imagine in this scenario would be to imagine FaceSpace suddenly got good at NLP, and open sources their statistical language model. Recall that FaceSpace users are more likely to use AAVE in their communication, so what do you think the statistical language model would be?
In the original article, Maciej mentions "going to the community" - using crowd wisdom to handle these sorts of thing, and preferring to use open standards as opposed to silo'd standards (like writing your blog post on facebook... why??!!). While that sounds like a good idea, like I've mentioned in my other comment, it also sounds tiring as hell.
Firms act rationally (more or less)... ML is driven by huge companies with huge datasets. Why would they need to prune external datasets when they could just do their ML research with a few SQL queries?
Um, I'm sorry, but unsupervised learning and deep learning are not the same.
In other words, terminology can be used to make precise, meaningful distinctions, or it can be used to embellish.
Deep learning refers to a particular type of a particular learning technique: Specifically a neural network that has many hidden (intermediate) layers. Deep learning can be used for either supervised or unsupervised learning.
Which is the point he was trying to make.
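The supervised/unsupervised distinction can be shown with the same "deep" stack in both roles. This is only an illustrative sketch (random untrained weights, invented layer sizes): the point is that depth describes the architecture, while supervised vs. unsupervised describes what the loss is computed against.

```python
import numpy as np

rng = np.random.default_rng(0)

def deep_forward(x, weights):
    """A 'deep' network is just many stacked hidden layers."""
    h = x
    for W in weights[:-1]:
        h = np.tanh(h @ W)       # hidden (intermediate) layers
    return h @ weights[-1]       # linear output layer

x = rng.standard_normal((5, 4))

# Supervised use: the loss compares outputs against provided labels.
layers = [4, 8, 8, 3]            # input -> two hidden layers -> output
weights = [rng.standard_normal((a, b)) for a, b in zip(layers, layers[1:])]
labels = rng.standard_normal((5, 3))
supervised_loss = np.mean((deep_forward(x, weights) - labels) ** 2)

# Unsupervised use (autoencoder): same kind of deep stack, but the
# training target is the input itself -- no labels required.
ae_layers = [4, 8, 2, 8, 4]      # bottleneck in the middle
ae_weights = [rng.standard_normal((a, b)) for a, b in zip(ae_layers, ae_layers[1:])]
unsupervised_loss = np.mean((deep_forward(x, ae_weights) - x) ** 2)
```

Same forward pass, same notion of depth; only the target changes. Which is why conflating "deep learning" with "unsupervised learning" muddles two independent axes.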
Just because you lack the ability to understand the nuances of something does not make it "garbage".
Reminds me of the phrase "graduate student descent" for training neural networks...
I've been noticing more casual dismissiveness towards grad students lately. They are certainly often treated as the grunt laborers of academia, in areas where career prospects are downright stupid. I generally feel it would be more productive to at least pretend that they're being trained to be independent, aggressive researchers in their own right, though.
> to at least pretend that they're being trained to be independent, aggressive researchers
But that is the issue, isn't it-- it would be pretending.
but seriously, as a grad student, absolutely no one gives us respect. not our peers, our bosses, or society. why would you expect some random on the internet to do better?
So true.
What's not clear to me is why companies that don't seem to have any need for a machine learning team (i.e. a subscription box company) are looking to hire one.
Surely part of this can be pinned down to the hype associated with ML that may well die out, but the proliferation of these tools doesn't bode well for Maciej's dream of a weird, creative, and interesting internet.
Companies that run on subscription literally live and die by their churn rate. It is both feasible and reasonable for a subscription box company to hire someone to use machine learning to build a predictive churn model. That may seem trivial to you but that's the reality behind those job posts.
Using machine learning on the other hand is a safe bet. It is much easier, I would assert, to write machine learning code to organize data than to curate a community of humans to organize data. The ML approach will do pretty good even if it isn't the best, which is why it's what everyone is switching to.
Keeping with the author's example, is it easier to organize erotic fanfic with a computer, or enable a community to do it without spiraling out of control?
People tend to move towards the more mall-like areas of the Internet due to spam and abuse that they don't want to deal with. This can be low-level stuff, or (as in the case of Krebs himself) sometimes the attackers get out the big guns, and you need to run for cover.
And that's why we're hanging out here, after all, and not in some unmoderated forum. And even here, post on certain subjects and conversation quickly degenerates.
I think we do need a wider variety of spaces to hang out, though. No set of rules works for everyone. And if you do want 4chan, you know where to find it.
That's an amusing comparison, given how much of Krebs focuses on offline ATM skimming, copying credit cards at point-of-sale terminals, hacking major retailers' CC databases, and using stolen cards at retail and mall stores to cash them out...
It sounds like he's saying ephemeral content is worthless and should be shunned.
I, and hundreds of millions of others, disagree. You want a bland, awful, boring society? Easy: make everything you do stick around forever—like a promise. And then watch the world self-police as the lifeblood drains out of it.
You'll get…Facebook. No thanks.
For example, here is three quarters of a PETABYTE of historical American newspapers: http://chroniclingamerica.loc.gov
It's already been mentioned, but this guy needs to get out a bit more.
The internet is a city. There's the specialist shops (HN), the bustling malls (Reddit, YT), the shady back alleys (4chan, 8chan etc.), the historical districts (Usenet, Archive.org), the cafes (IRC, ICQ, Slack, etc.). To their credit, the author is more knowledgeable than most, however.
I see so many dismiss the internet as just Facebook or YouTube, or discuss trolling as if it were a single phenomenon, and a recent one, associated with Social Media. So many think that there's an internet culture: there isn't: there's an almost infinite number of overlapping, interlinked cultures. I can even map out the origins and historical influences of a few. There are even a few who think that social media sites are good forums of discussion. The poor sods: the Usenet was a better discussion forum than Facebook ever was, and the Usenet's not that great.
If you really want to see what the internet is like (that isn't advice for the author: I'm pretty sure the mall analogy doesn't encompass his internet experience, and is merely an analogue I find odd), explore. See it all, in all of its weird, wacky, zany, jokey, serious, offensive, manic, smart, stupid, brilliant, insane glory. I promise you, you won't be disappointed.
People ask me why I'm not on social media. It's because social media is boring. Unlike Reddit, 4chan, and the rest, not much interesting happens. Unlike HN, I'm not likely to be intellectually stimulated, or learn something new. Unlike static sites, I don't get to see that kind of wild creativeness that personal webspace tends to invite in hackers, nerds, and others who know what makes the web tick. I don't want to see what you ate, I don't want to see your cat, I don't want to hear banal details about your everyday life. I want to hear something interesting, new, and original. I want to hear the next Ze Frank, or Tom Ridgewell, or Simon Travaglia, or Steve Yegge, or RMS, or PG, or Ryan Dahl, and you can bet I won't find them on a site with a signal-to-noise ratio that low.
People also ask why I'm fascinated with the internet. My response is, why wouldn't I be? It's a catalogue of decades of human creativity and interaction. It's open mike night at the largest club in the world, which is also a discussion forum, and a shady back alley, and a convention. It is - to borrow and butcher Sir Terry's words - like being blindfolded and drunk at several different parties at once.
But, in what is rapidly becoming the sign-off on my incoherent, long-winded ramblings that are really only tangentially connected to the topic at hand, maybe I'm just totally mad.
EDIT: tried to clarify that I wasn't trying to insult the author. Not my intent, but it seemed to come off that way. It still does, but less so, and I prefer not to edit my old content too much. Also, I just checked out pinboard. Pinboard is amazing, and I am impressed.
Basically, don't take this as anything more than a tangential, incoherent ramble started by an analogy the author used which I found unrepresentative. Because that's what it is.
I write most of my HN comments in the spur of the moment. As a result, they're often inaccurate, idiosyncratic, poorly explained, or just weird. If anybody asks, I usually try to clear up any confusion.
This isn't necessarily a good idea, but if I thought too much before I spoke, beyond a cursory look to see if I'm violating the rules, I'd be too afraid to post anything interesting, or anything at all beyond polite agreement with everybody, which is so very dull, don't you agree?
2933 votes and countless interesting discussions later, it seems to have worked out okay for me.
The idea that the internet is a city might have been true 10 years ago, but it is definitely not true anymore now. The default response to "I need an X" is "just do it on Facebook", and there are entire swaths of content that just don't have a place anymore on the web, "thanks" to ever-increasing enforcement of arbitrary moral guidelines and growing monoculture.
If the internet were to be described as a city, it'd be a gentrified city where most of the artists have long been chased away by ever-increasing rents.
I wonder if the author truly understands "Machine Learning". What are his qualifications? A degree in Art History and some "programming experience" aren't very reassuring. E.g.
>> "The names keep changing—it used to be unsupervised learning, now it’s called big data or deep learning or AI"
WTF?? The author should enroll in a beginner Machine Learning course on Udacity or Coursera before making philosophical statements about fields he has zero clue about.
It seems the only skill the author has is piecing together meaningless arguments that appeal to average HN users incapable of distinguishing between informed opinions and pseudo-scientific rants. Hell, at least bad graduate students have to take examinations, read papers and make original contributions that get peer reviewed (otherwise they fail/get-kicked-out/drop-out). Not like this guy, who does not understand the difference between "supervised" and "unsupervised" machine learning, yet feels comfortable making "prophetic" statements about machine learning.
Also
>>> "These techniques are effective, but the fact that the same generic approach works across a wide range of domains should make you suspicious about how much insight it's adding."
What does he mean by "same generic approach"? If we assume he is implying specific algorithms, then we have a good "No free lunch" theorem that shows that no single algorithm is effective across all domains. Now if by "generic approach" the author means "machine learning" in general, then it's as ridiculous as saying
"Mathematics is effective, but the fact that the same Mathematical approach works across a wide range of domains should make you suspicious about how much insight it's adding."
The entire article is filled with "truthiness" and "feel-good" statements, which fall apart on closer examination.
Unsupervised learning: Learning without a set of labels.
Big Data: Collecting / using large amount of data.
Deep Learning: Complex, multilayer representations which perform better than shallow/linear representations.
AI: Artificial Intelligence, an overarching subject or grouping of subjects involved in building intelligent systems.
Can you imagine someone talking about space exploration while making a statement such as
>> "The names keep changing—it used to be black holes, now it’s called radio telescope or reusable launch system or Astronomy"
That's how ridiculous the original statement is.