Here is an interesting article about the cultural divide and how it can be mediated:
https://www.defenseone.com/ideas/2018/12/divide-between-sili...
Edit: I’ll add, my generation clearly remembers 9/11 and the response to it. I think it’s not a lack of awareness of the current mission but _because_ of awareness of parts of it that many people write off the military.
Disclaimer: I am a Google engineer, but with no work relationship to their AI efforts and my opinions are my own.
They run stories about how the military bombs some civilians while fighting jihadists, but not about how those same jihadists routinely round up and torture children, kill scores of locals, etc.
Outside of Iraq (which, fuck Bush), can you name something about the "War on Terror" you think we'd be better off not doing?
I think it's a really shallow view to say it's better to not bomb a few civilians... then leave them to be preyed on by violent locals instead.
I'm a millennial, and didn't enlist at 18 because I didn't want to go to Iraq (and still don't support starting that war), but I'd have gone to Afghanistan, and as I get older, I find myself much more willing to support the military.
I suspect I'm not alone in that, either.
It's not just your generation. I'm in my 50s and this opinion isn't rare in my age group either.
I work for Google, opinions are my own.
I agree. However, I think in the case of Google and most other companies, the issue is a little more... awkward? Besides the fact that we have offices in other countries, we also employ many people in the US from other countries. Obviously those people may have very different feelings about working on a project for US national security. I don't think I've been on a single team at Google which did not have people from abroad.
I honestly think the US military would have more problems with having foreign nationals working on a military project than many of the foreign nationals themselves. I think it's difficult to get a security clearance if you're not a citizen, and it may even be hard if you're a dual citizen.
I would think that naturalized US citizens would have the same interest in US national security as native-born Americans. It's a little racist to suggest that they don't, and it would be a little disingenuous for them not to.
_Most_ people, regardless of country of residence or origin, want a peaceful, fulfilling, free life. You can never please everyone, but arguing from a basis of 'fear of the other' is a bit disingenuous and leads to these literal arms races.
That's not to say defense spending isn't important, but it has to be stressed with the word 'defense' and with an inherent bias towards security and safety. (I don't believe technology products are inherently neutral; as a result of design they are better or worse suited towards particular usages, and that needs focus too)
This is all fine and dandy, but some of the most powerful militaries in the world are run by authoritarians who optimize for their own interests rather than what's good for their population. Russia's people generally gain nothing from Putin's belligerence, yet the belligerence happens. The CCP rolled the tanks on its own people; do you really think they'll hesitate to roll them on foreigners if it suits their interests?
I don't worry about the aggregate opinion of Russians or Chinese; I worry about Putin and the CCP.
The military industrial complex will put tens of billions of dollars into backing new start-ups that will cooperate over that time. Those companies will receive favoritism from the state over time, to Google's detriment. That will include technology transfer and regulatory favoritism (AI will be a regulated industry 10-15 years from now, and will be regulated forever thereafter). For every worker in Silicon Valley that refuses to do military work, the Pentagon will find 100 outside of Silicon Valley that are more than happy to do so. Numerous large companies up and down the tech chain will cooperate as well, including Microsoft, Amazon, Intel, IBM, Oracle, HP, Dell, Cisco, TI, Micron, nVidia, and dozens of others. The contracts they receive over time will be a lucrative part of their business (it already is in many cases).
It's difficult to say this isn't a net moral good, as things currently stand. Competent tools put to immoral ends aren't somehow more moral than incompetent tools. The only fix is to change the ends.
I would consider that an indefensible position.
I disagree but am open to being persuaded otherwise.
I don't consider national defense to be an immoral goal. I believe it's unrealistic for there to exist a world without weapons. That being the case, I believe it's important for us to constantly improve our military capability otherwise we will be left behind.
https://taskandpurpose.com/google-china-artificial-intellige...
https://www.cnbc.com/2017/12/13/alphabets-google-opens-china...
Building secure civilian products, countering corporate espionage, and building products used worldwide all probably play a bigger part.
As does diplomacy. Disagreements between the big powers haven't been solved with military power for a long time... why would that change?
We have acting leaders at many levels of the military, and a president who talks the big talk but probably hasn't ever dealt with a real crisis where the other side won't fold, or de-escalated anything. He's got no reasonable advisers left.
Edit: Not clear why this is getting so many downvotes. If the government is not trustworthy, people won't want to cooperate with the government. Demanding that they put aside their ethics for the sake of patriotism/national security/etc., while ignoring their legitimate concerns, is pure propaganda.
I find it just a little bit hard to take them seriously if they feel this is the second most significant group in arms control last year.
I feel that it is the responsibility of every moral and conscious agent to oppose dark patterns and negative trends within their place of work whenever possible, and while it is easy (and apropos) to accuse Google of perpetrating malicious patterns, I think we ought to laud and publicly encourage internal currents that oppose that trend, not smirk at them or deride them for not doing enough.
Development of smart munitions, better sensors/intel, and targeting precision has reduced the scale of military operations, entrenchment, and collateral damage. I think that was a form of technological disruption that was overall for the better.
There's a valid counter-argument that making war smaller and easier will lubricate the willingness for politicians (and the public) to enter into war, or maintain a state of pseudo-war. That is certainly a drawback.
You are not wrong. Alibaba, Baidu, etc. work heavily with the Chinese government in this area.
This zero-sum, jingoistic outlook on the world has caused the most damaging and bloody wars in history. World policy is not a zero-sum game, and not every human has to help kill other humans for the world to be at peace.
The danger with AI in particular is that eventually the technology gets trivially accessible. You don't need to hire specialists, just wait a bit and download the library and get the how-to book from Amazon or Barnes and Noble.
There are also instances of successfully preventing technology from becoming commonly used in warfare. Nuclear tech is one, but not a perfect fit because it requires huge investments. But chemical weapons are quite comparable to AI, in that any chemistry grad could create such weapons, yet they have been used far less often than mere feasibility would suggest.
IMO this is the only one that matters in the list. The Pentagon will get another US-based tech company to aid them in the pursuit of new weapons because there is just too much money to ignore it. Meanwhile, peace talks between South and North Korea actually reduce the chance of a thermonuclear war between two nations. We can only hope that other nations follow suit.
Realistically we're not going to solve the problem of nation states wanting to protect their interests halfway across the world, which'll among other things mean killing some "combatants" from a drone.
But we can hope to do things like improve targeting and reduce civilian or collateral casualties. Right now the "AI" is some group of 20-somethings sitting behind a computer in Nevada; what if we trained an AI instead, and could e.g. hold legislative audits on what that software was configured to target?
> what if we trained an AI instead, and could e.g. hold legislative audits on what that software was configured to target?
You are much more optimistic than me.
If those audits ever happened, they would be held in secret, and have very different goals than most people would consider moral.
Wow, the faux ethical reach-around they're giving each other over this is comical.
If any of that isn't completely true yet, it will be.
[1] https://www.scientificamerican.com/article/23andme-is-terrif...
Unless your real argument is "Everyone's evil so don't bother trying".
Of course that doesn't mean you have to join google to make a difference, but pretending that you have to not be working at Google to be helping change things is just silly nonsense.
It has an effect, you just need to have a large mass of people to make it work.
That's not even the OP's argument, nor is it some ol' argument. You have both put words in their mouth and framed it as some classic well-known fallacy, which it is not.
The OP is making the distinction between "voting with your feet", which takes real commitment and has immediate effects, versus "signing a letter"[1], which involves nothing more than a few seconds of your time without having to leave your desk.
If Google has trouble attracting talent due to matters of conscience, it directly impacts its ability to build new services, as well as improve existing services, in order to increase revenue.
[1] https://www.nytimes.com/2018/04/04/technology/google-letter-...