Anyway, long story short, most of these people don't really understand why they need all this rocket science to manage fewer than 500 internal users. One of the buzzwords I keep hearing these days relates to big data and machine learning. One of my managers came to me and asked why we don't integrate our product with Hadoop, since it would solve our performance problems because it can handle a lot of data.
I am frustrated by the industry as a whole. I feel the industry is simply following marketing trends. Imagine the number of man-hours put into investigating technologies, and the projects dropped midway upon realizing the technology stack is still immature or simply not suitable at all.
People want their apps to be made with Visual Studio (BTW, FoxPro was part of the package).
So they ask: "What is the app made in?"
"In Visual, Sir."
Done. End of story (most of the time, obviously; sometimes people are more dangerous and press further ;) ).
----
The point is not to focus on the exact word but on what people believe the word will give them.
So, for example, "Big Data". The meaning for us matters zero. The meaning to some customer is that he has a largeish Excel file that, with his current methods and tools, takes too long to produce results.
So. Do you use "Big Data Tools"?
"Yes Sir."
And what about using Hadoop?
"We use the parts of big data tech necessary to solve this, and whether we use Hadoop or other similar tools that fit better with your industry and follow the same principles will depend on our evaluation. Don't worry, we know this."
Or something like that ;). Knowing what worries the people behind the words has helped me a lot, even with people with WORSE tech skills (damn, I have built apps for almost illiterate people with deep pockets whose only reference for tech was their cellphones!)
And the anecdote about the largeish Excel file that was too big and took too long? Yep, true. And it was for one of the largest companies in my country ;)
My work lands me in a number of different conferences in non-software industries. This is true for all industries. It's just that ours has a faster revolving door. That, in addition to a low barrier to entry (anyone can claim they're a web developer), leads to a higher degree of this madness. It's just part of human behavior to seek out, and parrot, social signals that let others know you, too, are an insider.
Personally, I have to avoid a great number of those gatherings, since most of them are just a circlejerk of low-density information. If I pay too much attention to those events, I catch myself looking down my nose, and since that isn't productive/healthy behavior, I avoid landing myself in a place where guys with Buddy Holly glasses and <obscure-craft-beer> argue over which Wordpress plugin is the best.
Unfortunately, I have to agree as a developer. My job is to make a fast, reliable, stable product, but at the same time the tools I use are questioned by people who don't have any knowledge but have heard the latest trend.
But sometimes it's also very easy to please people. Big data: just insert 10M records in a database and suddenly everyone is happy because they now have big data :|
Take your .war file and drop it onto JBoss. It deploys across the cluster with zero downtime, isolates configuration, and provides a consistent log structure, cert management, and deployment. You can deploy dozens of small WARs to the same server and they can talk to each other. Load balancing across the cluster happens automatically based on actual load. Run scheduled jobs and load balance the scheduled jobs themselves, allowing them to be isolated and unique within the cluster.
I may not like Java as a language, but from an infrastructure standpoint Java was basically Heroku long before Heroku was Heroku. The infrastructure is just...solid. The downside was that the XML config stuff was just messy.
I mean, it's great to have this new tech and all, but when you're trying to build something to last some years, sometimes it's hard to filter the crap out of all the buzzwords. It just reinforces the thought that smart people should just leave this field entirely, or look for other fields of knowledge (or business) where our programming knowledge can be put to use.
I'm 35 now, but I'm starting to realize that I won't have the patience to keep up with all the crap just to stay employable. There are some areas where being old and experienced is valuable; philosophy, science, psychology, and teaching are maybe some of them, but this industry is definitely not one of those areas. It makes me think that what I'm building now will some day be completely wiped out of existence.
"We could store gigabytes of data on the clients without having to pay for servers"
Unlike the natives, however, who simply wasted some time building extraneous fake runways, in the Valley people are royally screwing up their own core architecture.
I'm old enough to find this more humorous than frustrating.
Big data and machine learning are also hot words, but they are clearly modern engineering. Consultants exist to explain the best way to achieve modern best practices to people without the appropriate background. If someone asks "Why no Hadoopz plx?", either explain the other technology used instead (maybe Spark or Storm?) or explain that the scale is small enough for Access to handle. That's a consultant's job.
'twas ever thus.
Computer science is not a real field.
"That seems excessive."
A hundred times yes. We tried to split our monolithic Rails app into microservices built in Go. Two years and many fires later, we decided to abandon the project. It was mostly because monitoring and alerting were now split into many different pieces. Also, the team spent too much time debating standards. I think microservices can be valuable, but we definitely didn't do it right, and I think a lot of companies get it wrong. Any positive experiences with microservices here?
A small team starting a new project should not waste a single second considering microservices unless there's something so obviously decoupled that not splitting it into a microservice would lead to extra work. It's also way easier to split into microservices after the fact than when you're developing a new app and you don't have a clue what it will look like or what the overall structure of the app will be in a year (the most common case for startups).
The thing is, you need a massive investment in infrastructure to make it happen. But once you do, it's great. You can create and deploy a new service in a few seconds. You can rewrite any individual service to be latest-and-greatest in an afternoon. Different teams don't have to agree on coding standards (so you don't argue about it).
But the infrastructure cost is really high; a big chunk of what you save in development you pay in devops, and it's harder to be "eventually consistent" (e.g., an upgrade of your stack across the board can take 10x longer, because there's no big push that HAS to happen for a tiny piece to get the benefits).
Monolithic apps have their advantages too, and many forget them: less devops cost, easier refactoring (especially in statically typed languages, where a right click -> rename will propagate through the entire app), and while it's harder to upgrade the stack, once it's done your entire stack is up to date, not just scattered parts of it. Code reuse is significantly easier, too.
The most dramatic effect was on a particular set of endpoints with relatively high traffic (peaking at 1000 req/s) that was killing the app, upsetting our relational database (frequent deadlocks), and driving our Elasticsearch cluster crazy.
We did more than just split the endpoints into microservices. We also designed the new system to be more resilient: we changed our persistence strategy to make it better suited to our traffic, using a distributed key-value database and designing documents accordingly.
The result was very dramatic, like entering a loud club and suddenly everything going silent. No more outages, very consistent response times, instances scaling smoothly with traffic increases, and overall a much more robust system.
The moral of this experience (at least for me) is that breaking a monolithic app into pieces has to have a purpose, and it involves more than just moving the code into several services while keeping the same strategy (that's actually slower, more time consuming, and harder to monitor).
In my experience, any monolith that can be broken up into a queue-based system will benefit enormously. This cleans up the pipelines and adds monitoring and scaling points (the queues). Queues remove run-time dependencies on the other services. It requires that these services are _actually_ independent, of course.
I do, however, avoid RPC-based microservices like the plague. RPC adds run-time dependencies on services. If possible, I limit RPC to other (micro)services to launch/startup/initialization/bootstrap, not run-time. In many cases, though, the RPC can be avoided entirely.
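To make the queue-vs-RPC distinction concrete, here is a toy sketch in shell: a spool directory stands in for a real broker like RabbitMQ or SQS, and the job names are made up. The point is only that the producer never calls the consumer at run time.

```shell
#!/bin/sh
# Toy queue: a spool directory stands in for a real broker.
# The producer only needs the directory to exist; the consumer
# doesn't even have to be running (no run-time RPC dependency).
QUEUE=${QUEUE:-/tmp/demo-queue}
mkdir -p "$QUEUE"

enqueue() {
  # Producer: drop a job file and return immediately.
  f=$(mktemp "$QUEUE/job.XXXXXX")
  printf '%s\n' "$1" > "$f"
}

drain() {
  # Consumer: process whatever jobs have accumulated, then delete them.
  for job in "$QUEUE"/job.*; do
    [ -e "$job" ] || continue
    echo "processing: $(cat "$job")"
    rm -f "$job"
  done
}

enqueue "resize-image 42"
enqueue "send-welcome-email 99"
drain
```

A real system replaces the directory with a broker and adds acknowledgements and retries, but the decoupling property is the same: either side can be down while the other keeps working.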
Yep. We already had a feature flag system, a minimal monitoring system, and a robust alerting system in place. Microservices make our deployments much more granular. No longer do we have to roll back perfectly good changes because of bugs in unrelated parts of the codebase. Before, we had to have involved conversations about deployments, and there were many things we just didn't do because the change was too big.
We can now incrementally upgrade library versions, upgrade language versions, and even change languages now, which is a huge win from the cleaning up technical debt perspective.
It makes sense for some things. We run a webshop but have a separate service that handles everything regarding payments. It has worked out really well, because it allows us to fiddle around with pretty much everything else and not worry about breaking the payment part.
It helps that it's a system where we can have just one test deployment that everyone uses while testing the other systems.
I've also worked at a company where we had to run 12 different systems in their own VMs to have a full development environment. That sucked beyond belief.
The idea of microservices is enticing, but if you need to spin up and configure more than a couple to do your work, it starts hurting productivity.
The thing is though, the Elixir feed checker has its own database table that tracks whether it's seen an episode in a feed, and when there's a new episode it sends an API call to WP to insert the new post. The problem is that sometimes the API calls fail! Now what? I'll need to build logging, retries, etc. So I'm thinking of making the feed checker "stateless" and using only WP, with a lot of query caching, as the holder of state about whether an episode has been seen before.
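For what it's worth, the retry part doesn't have to be a big build-out. A hedged sketch of a generic retry wrapper (the `post_episode` command at the bottom is hypothetical, standing in for whatever makes the WP API call):

```shell
#!/bin/sh
# Generic retry with exponential backoff. Usage: retry <command> [args...]
# RETRY_MAX and RETRY_DELAY are tunable; the defaults are arbitrary.
retry() {
  attempt=1
  delay=${RETRY_DELAY:-1}
  max=${RETRY_MAX:-5}
  while ! "$@"; do
    if [ "$attempt" -ge "$max" ]; then
      echo "giving up after $attempt attempts: $*" >&2
      return 1
    fi
    echo "attempt $attempt failed; retrying in ${delay}s" >&2
    sleep "$delay"
    attempt=$((attempt + 1))
    delay=$((delay * 2))
  done
}

# Hypothetical usage: wrap the WP insert instead of hand-rolling
# logging at every call site.
# retry post_episode "$feed_url" "$episode_guid"
```

It doesn't solve the deeper state question (what if all retries fail?), which is why falling back to WP as the single source of truth is still a reasonable design.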
To sum up my experience so far, there's something nice about being able to use the right tech for each task, and separating resources for each service, but the complexity--keeping track of whether a task completed properly--definitely increases.
The advantage though is that APIs (system boundaries) are usually better defined.
Perhaps one should use the best of both worlds: run microservices on a common database and somehow allow transactions to be passed between services (so multiple services can act within the same transaction).
The non-web world has been doing this with message queueing for about 15 years. Maybe more.
That said, in places where it doesn't make sense we didn't try to force it. Our main game API is somewhat monolithic, but behind it we have almost 10 other services. Here's a quick breakdown:
- Turn based API service (largest, "monolithic")
- Real-time API service (about 50% the size of turn-based)
- config service (serves configuration settings to clients for game balancing)
- ad waterfall service (dynamic waterfall, no actual ads)
- push notification service
- analytics collection service (mostly a fast collector that dumps into Big Query)
- Open graph service (for rich sharing)
- push maintenance service (executes token management based on GCM/APNS feedback)
- help desk form service (simple front-end to help desk)
- service update service (monitors CI for new binaries, updates services on the fly - made easy by Go binary deployment from CI to S3)
- service ping service (monitors all service health, responds to ELB pings)
- Facebook web front-end service (just serves WebGL version of our game binary for play on Facebook)
- NATS.io for all IPC between services
...and a few more in the works. Some of these might push the line of "micro" in that they almost all do more than a single function's worth of work, but that level of granularity isn't practical. But don't get too caught up on the "micro" part. Split services where domain lines naturally form, and don't constrain service size by arbitrary definitions. You know, right tool for the job and whatnot.
I wouldn't, however, just "do microservices" from day one on a young app. A young app usually has no idea what the true business value is, i.e., you have no idea what downtime of certain parts of your services really means to the business. That's the #1 pain point we're solving: keeping mission-critical things up 100%, then rapidly iterating on new, less stable feature designs in separate services.
You should, however, keep an eye on how "splittable" everything is, i.e., does everything need to be in the same DB schema? Most languages have package concepts, which typically align (somehow) with "service" concepts. Do you know their dependencies? That sort of thing. Then, the later process of "refactor -> split out service" is pretty straightforward and easy to plan.
I don't really like that model applied to everything, but eh, now you are kind of forced into a hybrid approach: say, your macro vertical plus whatever payment gateway service, Intercom or equivalent customer interaction services, metrics services, retargeting services. There are a lot of heterogeneous pieces going into your average startup.
But back on topic: what Docker really needs now is a whack on the head for whoever thought up swarms/overlays, and a proper, sane way to handle discovery and fail-over. Instead we got a key-value service deployment to handle, which cannot be both in Docker and highly available unless you like infinite recursion.
I'm currently working on a large refactoring effort along these lines. The end goal is to create a modular, potentially distributed system that can be deployed in a variety of configurations, updated piecemeal for different customers, and integrated by our customers with the third-party or in-house code of their choice using defined APIs. We aren't typical of the other examples, though, in that we do literally ship our software to our customers and they run it on their own clusters.
A good example of this that I've used in production at my current $dayjob: dynamic PDF generation. A user makes a request from our website; the request data is used to fill out a PDF template context, which is then sent over to our PDFgen microservice, which does its thing and streams a response back to the user.
Ah yes, the 'let's have decentralised microservices with centralised standards!' anti-pattern. It results in lots of full-fledged, heavyweight, slow-to-update services, which also have all the problems of a distributed system. It's the worst of both worlds.
Although I personally had to deal with some monolithic monsters that I wished were split into smaller services.
IMHO, you need a lead with a clear vision who drives the effort. Too many leads will create chaos.
Well, there's your problem - you need a monitoring microservice and an alerting microservice! Well, those may be too coarse by themselves, but once you break them down into 5 or 6 microservices each, you'll be ready for production.
To answer some questions: yes this is obviously poking fun at Docker, but I also do really believe in Docker. See the follow-up for more on that: https://circleci.com/blog/it-really-is-the-future/
In a self-indulgent moment I made a "making of" podcast about this blog post, which is kinda interesting (more about business than tech): http://www.heavybit.com/library/podcasts/to-be-continuous/ep...
And if you like this post you'll probably like the rest of the podcast: http://www.heavybit.com/library/podcasts/to-be-continuous/
> -It means they’re shit. Like Mongo.
> I thought Mongo was web scale?
> -No one else did.
It's so incredibly true, and I laugh (and cry, b/c we use Mongo) at this section each time I read it. Also, this gets me every time:
> And he wrote that Katy Perry song?
- So shared webhosting is dead, apparently Heroku is the future?
- Why Ruby, why not just PHP?
- Wait, what's Rails? Is that different from Ruby?
- What's MVC, why do I need that for my simple website?
- Ok, so I need to install RubyGems? What's a Gemfile.lock? None of these commands work on Windows.
- I don't like this new text editor. Why can't I just use Dreamweaver?
- You keep talking about Git. Do I need that even if I'm working alone?
- I have to use command line to update my site? Why can't I just use FTP?
- So Github is separate from Git? And my code is stored on Github, not Heroku?
- Wait, I need to install both PGSql and SQLite? Why is this better than MySQL?
- Migrations? Huh?
Frameworks, orchestrations, even just new technologies -- these are great if they actually make your job easier or if they make your product better. Unfortunately, they often do exactly the opposite.
Nooooooooooooooooo. Every time someone says "service discovery" a kitten dies (except for Consul, that's the biz).
I really don't have any idea why people are so excited about "docker all the things".
I don't know if you understand what Docker really is when you say something like "Run only one process in one brand new kernel". The kernel is shared between containers; that's the whole idea. You package the things your application needs and you're done with it.
The current problem with containerization is that there are no really good or widely understood best practices. People are still experimenting, which is why it's a big moving target and, consequently, a pain in the ass if you need to support a more enterprise-y environment. You need to be able to change and re-architect things if the state of the art changes tomorrow.
I agree with your sentiment about going overboard with "docker all the things". That's dumb, and some people do it more because of the hype than from understanding their needs and choosing a good solution for them. But I think you are criticising something you don't really grasp, as these two statements show:
> "Run only one process in one brand new kernel"
> you have a kernel in your hand, why the hell you will run only one process on it?
I'm not trying to be snarky; I really recommend doing a bit more research on Docker to understand how it works. Also, Docker doesn't make it a pain in the ass to upgrade apps. Quite the contrary, if you do it properly.
One process per container is perfectly fine. In fact, that's the common use case. There is absolutely nothing wrong with it, and there is practically zero overhead in doing it.
What you gain is isolation. I can bring up a container and know that when it dies, it leaves no cruft behind. I can start a temporary Ubuntu container, install stuff in it, compile code in it, export the compilation outputs, terminate the container and know that everything is gone. We do this with Drone, a CI/build system that launches temporary containers to build code. This way, we avoid putting compilers in the final container images; only the compiled program ends up there.
Similarly, Drone allows us to start temporary "sidecar" containers while running tests. For example, if the app's test suite needs PostgreSQL and Memcached and Elasticsearch, our Drone config starts those three for the duration of the test run. When the test completes, they're gone.
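The throwaway build-container flow described above can be sketched in a few lines. This assumes a hypothetical Go project in the current directory; the image names, tags, and paths are examples, not the parent commenter's actual setup:

```shell
#!/bin/sh
# Sketch of the disposable build-container pattern: compile inside a
# temporary container, ship a runtime image that holds only the binary.
set -e

# Runtime image definition: no compilers or toolchain, just the artifact.
cat > Dockerfile.runtime <<'EOF'
FROM debian:stable-slim
COPY out/app /usr/local/bin/app
CMD ["app"]
EOF

if command -v docker >/dev/null 2>&1; then
  mkdir -p out
  # Disposable build container: --rm means no cruft left behind,
  # exactly the property described above.
  docker run --rm -v "$PWD":/src -w /src golang:1.22 go build -o out/app .
  docker build -f Dockerfile.runtime -t myapp:latest .
else
  echo "docker not available; Dockerfile.runtime written for illustration"
fi
```

The same shape works for any compiled language: only `out/app` crosses from the build container into the final image.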
This encapsulation concept changes how you think about deployment and about hardware. Apps become redundant, expendable, ephemeral things. Hardware, now, is just a substrate that an app lives on, temporarily. We shuffle things around, and apps are scheduled on the hardware that has enough space. No need to name your boxes (they're all interchangeable and differ only in specs and location), and there's no longer any fixed relationship between app and machine, or even between app and routing. For example, I can start another copy of my app from an experimental branch, that runs concurrently with the current version. All the visitors are routed to the current version, and I can privately test my experimental version without impacting the production setup. I can even route some of the public traffic to the new version, to see that it holds up. When I am ready to put my new version into production, I deploy it properly, and the system will start routing traffic to it.
Yes, it very much is the future.
I'm pretty Docker-ignorant. I think I get it in concept. I manage >150 web sites (~15,000 pages total) that are PHP-based with eXist-db and Oracle (overkill, but forced to use it) for database backends. My team develops on Mac OS X and pushes code to RHEL. We have never had a compatibility problem between OS X and RHEL except for some management scripts in bash that were easily coded around.
Big data to me is a 400 MB apache log file.
I go home grateful I don't have to be in the buzz word mix.
I do read a lot about technology, and over time that informs some changes, like using Apache Camel for middleware, Splunk for log file analysis, yada yada...
I have had bosses that brought me buzz word solutions that don't ever match the problems we have. I hate that but right now I am not in that position. My boss leaves technology decisions to us.
Lest you think we are not modern at all, we do use a CDN, git, and more.
Some days I get anxiety from reading HN and feel stupid. Other days I get a lift from HN, reading articles like this one and the comments.
I am so glad I'm not in the business of chasing technology.
I read both articles a year ago and it really helped me grasp the whole container movement.
"-You think that’s going to be around in 6 months?"
Isn't reputation a thing of beauty?
1) Small teams (~1-5 people) trying to seem "big" by working at Google's scale.
2) Heroku's prices. We are currently (successfully so far) migrating a small Django project from bare Amazon EC2 instances to ECS with Docker. Even using 3 EC2 micro instances (1 vCPU, 1 GB RAM) for the Docker cluster we would spend ~8 USD/month/instance. With Heroku the minimum would be 25 USD/month/dyno. That's a 3x increase in expenses.
It's very possible to take advantage of technologies like containers without getting too caught up in the hype.
And 25 and 75 are bogus numbers; what if we start running 10 instances?
I've seen my share of nasty "legacy" automation, but, surprisingly, I still think a good set of well-thought-out shell scripts written by someone who understands what's being automated beats modern tools, even when the person doing the automation is the same.
I don't quite know why this is, but there's something timeless about shell scripts. I've also seen shell script automation survive for a long time unattended and with zero issues. Not so with some of the modern tools that are supposed to be all unicorns and rainbows.
You can still easily set things up so it's a git-based deploy, which is hands-free after the initial push.
Now you have a single $5-10/month server that runs your app's stack without a big fuss. Of course it's not "web scale" with massive resiliency but when you're just starting out, 1 server instance is totally fine and exactly what you want.
I've run many projects for years on one server that did "business mission critical" tasks like accepting payments, etc.
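The hands-free "git push to deploy" setup mentioned above is just a bare repo plus a post-receive hook on the server. A sketch, where the paths, branch name, and restart command are placeholders for your own setup:

```shell
#!/bin/sh
# Run once on the server: create a bare repo whose post-receive hook
# checks out pushed code into the live directory and restarts the app.
set -e
REPO=${REPO:-/tmp/myapp.git}
WORKTREE=${WORKTREE:-/tmp/myapp-live}

git init --bare -q "$REPO"
mkdir -p "$WORKTREE"

cat > "$REPO/hooks/post-receive" <<EOF
#!/bin/sh
# Runs after every push: check out the new code, then restart.
git --work-tree='$WORKTREE' --git-dir='$REPO' checkout -f main
# cd '$WORKTREE' && ./bin/migrate      # hypothetical migration step
# sudo systemctl restart myapp         # or however you restart your app
EOF
chmod +x "$REPO/hooks/post-receive"

# Then, on your laptop:
#   git remote add prod deploy@server:/tmp/myapp.git
#   git push prod main
```

After the one-time setup, deploying really is just `git push prod main`; this is essentially what Dokku and Heroku automate for you.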
When I see titles like that (despite the fact that it was intended as sarcasm), I think to myself, e.g., "I bet at least hundreds of people who scrolled past it thought it was sincere, and now they will have this subconscious 'Heroku is Dead... Docker...' thought at times when deploying projects. Maybe they'll even check out Docker. Maybe these hundreds of people will represent a tipping point of sorts for Heroku->Docker migrations, because one of them will write a really great blog post about it, and it will receive thousands of views..." (alternate endings of the same thought continue to be brute-forced for a few moments).
Along the same vein of thinking, back in 2008 I had this "realization" that Google could control the world by simply showing results based on headline titles (e.g., a search for "Obama" during the election could have resulted in articles / results whose titles have words/phrases whose presences are positively correlated to lower or higher stress levels, assumptions, other emotions, etc., resulting in a net positive or negative sentiment, respectively, about the subject of the search query, all while simply scanning the results to determine which one to click).
This would be true for an average BuzzFeed-consuming crowd, which, to my knowledge, isn't the case here.
All of the problems containerization was supposed to fix are already fixed by proper configuration management. In almost all cases so far, for the people yammering on about Docker and containers (and CoreOS), it ended up being their idea of configuration management, because they didn't have any in the first place.
Say you want to fix your 'problems' with setting up servers: how about doing it the right way? You will need deployment services regardless of containers, VMs, or bare metal. You will also need configuration management services, and monitoring. Containers and special distributions solve none of it; the knowledge to run systems is still required, and layering stuff on top of problems you aren't actually fixing doesn't help.
Get something like SaltStack or Chef, and configure the heck out of everything. It doesn't care what you're running on, and it actually solves the problems that need fixing.
Heroku is great, and free for small services. On the other hand, a highly-available kubernetes cluster is going to set you back at least $100 per month, which is just too much for small startups and side projects before they take off.
I think I'm going to forget everything and head towards http://serverless.com/. No Heroku, no Docker, no micro-services, no servers. Just everything running on AWS Lambda and DynamoDB. And everything static in S3 behind Cloudfront.
Or maybe just Firebase. But I really am tired of managing servers.
Maybe the problem is AWS.
Disclaimer: I work at Convox.
I have to use ECS for caching (I am not happy about it)
Builds might fail due to the custom docker version/compilation
You can mock docker, but people are using it in one way or another and you should support it properly.
Having read the article back then (and reread it now) it seems like it's still relevant. Maybe we'll have to add the year qualifier after a while when AWS lambda becomes "the way".
https://circleci.com/blog/it-really-is-the-future/
But there is, as the author notes, truth in the satire.
Read it; it's a lovely five-minute piece of writing.
At least you understand the author's intention. I would be worried if some non-technical people took the title literally...
Is there ANY way I can spin up a server, add the SSH keys to some configuration file somewhere, and just "docker-magic push" to have my Rails application running?
Or do "docker-magic bundle exec db:migrate" and have that command run on the server?
Or push a Procfile with worker definitions and have the PaaS automatically pick it up, add it to supervisord/systemd, and run it?
There is, however, still a hump to get over in installation -- you need to learn what BOSH is, install BOSH, then install Cloud Foundry with BOSH. In the long run, for a production deployment, this is what you want. But it certainly doesn't feel that way when you just want to kick some damn tires.
If you just want to tinker, you can try PCFDev[0]. It's a fully-functional Cloud Foundry installation in a single VM.
Disclosure: I work for Pivotal, we donate the majority of engineering on Cloud Foundry.
[0] https://pivotal.io/platform/pcf-tutorials/getting-started-wi...
Edit: yes, I know we ask you to sign up during the PCFDev install. I hate it too. We have to for export compliance; it can't be avoided.
In a very specific case, Heroku is the best solution for my problem. Sounds like it is for you too.
http://nickjanetakis.com/courses/scaling-docker-on-aws
It covers using RDS, ElastiCache and also handles load balancing your app + much more.
You could basically substitute all these backend buzzwords with "Webpack", "Grunt", "Gulp", "RequireJS", "React", "Angular", "Ember", "Backbone", etc., and it would have the same effect on readers--they'd think you're an annoying hipster.
People seem to underestimate just how powerful modern machines really are. And I don't get why people think it's hard to deploy simple web applications. Just write a 4-line shell script that rsyncs, runs whatever DB migrations you have, and restarts the thing.
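That 4-line script might look something like this; the host, remote path, and the migrate/restart commands are placeholders for your own setup:

```shell
#!/bin/sh
# deploy.sh: sync the code, run migrations, restart the service.
# Everything after rsync is an example; swap in your own commands.
set -e
deploy() {
  src=$1 host=$2 path=$3
  rsync -az --delete --exclude .git "$src"/ "$host:$path"/
  ssh "$host" "cd '$path' && ./bin/migrate"     # e.g. rails db:migrate
  ssh "$host" "sudo systemctl restart myapp"    # or your restart command
}

# deploy ./ deploy@example.com /srv/myapp
```

No orchestrator, no registry, no YAML; for a single box this is often the whole story.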
Since they upped the cost of their small tier, I moved to DigitalOcean and installed Dokku, which gives me that Heroku-like deployment experience, so managing my (admittedly very small) website isn't much of a hassle.
And you automatically get things like auto-scaling, database auto-provisioning, easy debugging, and more.
Disclaimer: I'm Boxfuse's founder and CEO
http://discuss.joelonsoftware.com/?joel.3.219431.12
(Factory factory factory factory.)
With tools like Rancher (http://rancher.com), you can already see things moving in that direction. The next step is Rancher-as-a-service.
When it comes to developers, I think open systems will always prevail in the end (it's just more flexible).
I write stuff in Scheme. I'm a hobbyist, there's no reason for me not to, and I love the language. The apps I write are sometimes single-threaded (or coroutine-based) monoliths. But I only have one machine available for me, and the things I'm writing are fairly simple. It's good ENOUGH. And Worse really is Better[1].
1: and I truly mean that in the Gabriel sense. As in the New Jersey model. Not any other way.
Serious question though: I would absolutely love to have an introduction on how to use Docker to deploy one or two web applications that use a typical amount of backend services, say some sort of database and a redis server. All of this would probably run on a single VM (whether Amazon, DigitalOcean, Linode, ...) and you mainly use Docker to isolate the applications from each other in terms of the environment/dependencies that they need.
How do I do this with Docker in a way that gets me an easy deploy process? (Or maybe the question is actually, should I even do this with Docker?)
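For the single-VM case described above, one common answer is a docker-compose file per application: each app gets its own private network, so its database and Redis aren't visible to the other apps. Everything below (service names, images, ports, credentials) is an example, not a recommendation:

```yaml
# docker-compose.yml for one application. A second app on the same VM
# would get its own copy of this file with a different host port (e.g. 8002).
services:
  web:
    build: .
    ports:
      - "8001:8000"        # distinct host port per application
    environment:
      DATABASE_URL: postgres://app:secret@db:5432/app
      REDIS_URL: redis://redis:6379/0
    depends_on: [db, redis]
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: app
    volumes:
      - dbdata:/var/lib/postgresql/data
  redis:
    image: redis:7
volumes:
  dbdata:
```

`docker compose up -d` in each app's directory then gives you isolated stacks on one box, and deploys become "pull, rebuild, up". Whether that beats plain OS packages depends on how messy your dependency conflicts actually are, which is the honest answer to "should I even do this with Docker?"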
reader implements and gets massive bill for personal blog hosting
"Am I doing this right?"
> So I just need to split my simple CRUD app into 12 microservices, each with their own APIs which call each others’ APIs but handle failure resiliently, put them into Docker containers, launch a fleet of 8 machines which are Docker hosts running CoreOS, “orchestrate” them using a small Kubernetes cluster running etcd, figure out the “open questions” of networking and storage, and then I continuously deliver multiple redundant copies of each microservice to my fleet. Is that it?
> -Yes! Isn’t it glorious?
> I’m going back to Heroku.
I would never have considered Docker containers if artifact preservation/isolation and deployment issues hadn't forced me to look for a solution.
But if you're a CTO at a startup with 10+ server-side developers, planning to hire at least as many in the near future, suddenly all these Dockers and microservices actually make sense.
So unless you start conversations with _who_ you are and _what problem_ you are trying to solve, of course the other side will seem stupid.
As a consultant, I often get asked those kinds of questions: "Should we use X?"
Whether it's programming languages, databases, operating systems, whether it's Chef vs Puppet vs Ansible vs Docker vs Whatever, it's a question that comes up a lot.
I generally answer it with "What is your team good at? What have they used, what do they know well?"
There are always exceptions to the rule, but in general I encourage people to play to the strengths of their team, rather than recommending Technology X because it's shiny and bang on-trend.
Can someone explain to me the advantages of Docker compared to Jails?
Hitler uses Docker: https://www.youtube.com/watch?v=PivpCKEiQOQ
By the way, why all the downvotes to the parent?
This is quite frustrating for both people who are aware of those issues and trying to fix them as well as the people missing out on the real advantages of such technologies.
This reminds me of similar sentiment around virtualization, and later cloud computing, in my peer group:
Some sold VMs as a security feature, and people focused their criticism on that without understanding other advantages like quick/self-service provisioning of systems. Later on, cloud computing was trivialized as "it's just somebody else's computer", which completely ignored advantages like no ramp-up costs and the ability to programmatically manage your systems' life cycle.
PS: Considering every new thing a fad probably also makes you consider Hadoop the latest shit in big data processing and assume today's tech-company hipsters are fighting over WordPress plugins. (Like, really?)
1. I have a much better understanding of what's happening behind-the-scenes
2. For most small startups, you should seriously consider the time (and therefore, cost) of investing in your own infrastructure.
For point #1, I think understanding your options and how they benefit your company is essential as you transition from a small -> medium -> large company. The paradigms you learn by researching new technologies might end up being applicable in other parts of your development process.
On point #2, I partially regret not deploying to Heroku, seeing where our system became stressed, and optimizing. Attempting to scale for things you don't know about yet is tough, and can lead you down a path of wasted time and money.
exactly. I mean look, if you have a lifestyle business that's only going to support 5-10 people, it's totally a waste of time. if you have some hope of scaling this is the way to go. I get it, just use Heroku. It's easy and convenient. If you're planning on a billion dollar exit, this way is way better.
> I need to decide if i believe my own hype?
yeah. sorry.
Microservices often hit the same database. You want to be able to split up the database. Not just into shards, but into distributed nodes.
And by doing this, you split up the whole stack.
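To make that concrete, here is a minimal sketch (node names are hypothetical) of routing each record to a distributed database node by hashing its key, so no single database holds the whole dataset:

```python
# Minimal key-based sharding sketch; node names are hypothetical.
import hashlib

NODES = ["db-node-0", "db-node-1", "db-node-2"]  # distributed database nodes

def node_for(key: str) -> str:
    """Pick the node that owns this key: hash the key, mod the node count."""
    digest = hashlib.sha1(key.encode()).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

# Every service that touches the data uses the same routing function,
# so splitting the data is what splits up the rest of the stack.
```

(A real system would use consistent hashing so adding a node doesn't remap most keys, but the principle is the same.)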
Having a monolithic app does not make it bad. What makes it bad is not having proper modules with proper interfaces.
SOA comes in handy when you want to distribute your workload: now you have proper modules, but those modules need more computing power, so you split them up into boxes and pay the pain of managing that, because you have no other option.
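As an illustration of "proper modules with proper interfaces" inside a monolith (all names here are made up), callers depend only on an interface, so the module could later be moved behind a network boundary without touching them:

```python
# Hypothetical example: a billing module behind an explicit interface.
from abc import ABC, abstractmethod

class BillingService(ABC):
    """The only thing the rest of the monolith may depend on."""
    @abstractmethod
    def charge(self, customer_id: str, cents: int) -> bool: ...

class InProcessBilling(BillingService):
    """Today: a plain in-process implementation."""
    def __init__(self):
        self._ledger = {}
    def charge(self, customer_id, cents):
        self._ledger[customer_id] = self._ledger.get(customer_id, 0) + cents
        return True

def checkout(billing: BillingService, customer_id: str, cents: int) -> bool:
    # Callers never import InProcessBilling directly, so swapping in an
    # HTTP-backed implementation later is a one-line wiring change.
    return billing.charge(customer_id, cents)
```

That is the sense in which the monolith-vs-microservices question is mostly orthogonal to the modularity question.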
When I wrote that article it was largely focused on the potential for Docker to create a bunch of Heroku competitors as well as a simplified development experience across multiple languages.
The businesses aren't there yet, although a ton are trying. The local dev experience hasn't materialized yet either, outside of native Linux, due to performance issues with volumes that only a 3rd-party rsync plugin has come close to fixing.
I still use and advocate for Heroku pretty heavily for just about any non-enterprise environment.
It's a constant balancing act. Too flexible, it becomes overwhelming. Too constrained and you sacrifice a bunch of the perks of using Docker.
The conclusion I've come to is that the only way to do it is to be unashamedly opinionated about keeping things simple for the average user. Otherwise you end up having that exact conversation.
Have a look at PCFDev.
Disclosure: I sit next to the PCFDev team and use it in my dayjob.
When it comes to microservices, it would be interesting to know simple things like what kinds of services were created, how large they are, how communication is handled, how large the team(s) behind each service are, etc.
For some companies these are of course trade secrets, but sometimes opening things up might be good marketing. An example is Backblaze with their very detailed descriptions of their storage pods.
https://www.infoq.com/presentations/microservices-comparison...
I find them much better than walls of text.
Getting Back To Coding http://www.drdobbs.com/architecture-and-design/getting-back-...
I think we need a better word than "monolith" for apps that are single, tight, self-contained systems. You can design elegant interfaces, and avoid creating a sloppy mess, with function calls or objects too.
This rant sounds just like any rant from old dev mocking a new tech. "This is less efficient, this is too complicated, this can't be taken seriously, this won't last".
Creating a character obsessed with "this is dead" hardly disguises the obsession with "this won't work". Do whatever you please; we don't care. But don't mock others for what pleases them.
With that out of the way, let's address the criticisms.
Microservices and docker are not necessarily tied. I write only monolithic apps, and use them with docker through dokku.
Etcd is a microservice problem, not a docker one.
You don't need coreos or kubernetes to use docker in production. You need them if you want massively scaled applications, just like you would have many servers running the same app with replication without docker. Most of us don't need that (and those who need it probably won't find it more complicated than what is needed to do that without docker).
If you don't want to manage servers, well, don't manage them. That's what cloud services are made for. But please tolerate that some people love devops and not spending much direct money on infrastructure.
In any case, the author of the post actually agrees with you: https://circleci.com/blog/it-really-is-the-future/
Probably because we have seen it all before, and there isn't much "new" most of the time.
I know it's tongue-in-cheek, but few if any of these newfangled things are critically dependent on one another.
It turns out that the optimal size depends on the balance between the overhead costs associated with allocating resources within one firm and the transaction costs associated with two firms doing business with each other. The overhead costs are higher with large firms because there are more internal resources, including people, to allocate. On the other hand, transaction costs are higher with small firms because each firm does less themselves, so they need to transact more with others to accomplish their goals.
As the relative costs vary over time, the optimal size varies too, and firms in an industry will grow and shrink. If it increases, then you'll see mergers and acquisitions produce larger firms. If it decreases then you'll see firms start splitting or small startups disrupting their lumbering competition.
I suspect a similar thing happens in software, where there's an optimal service size. It could be infinite, where it makes sense to build large monoliths to reduce the cost of two systems communicating. Or it could be one, where it's optimal to break the system at as fine a granularity as possible (function level?).
The optimal size depends on the balance of costs. All else being equal, by drawing a service boundary between two bits of functionality you shrink the services on either side but you increase the number of services and add communication costs for them to exchange data and commands.
How these costs balance out depends on the technology, and there are competing forces at work. As languages, libraries and frameworks improve, we can manage larger systems at lower costs. That tends to increase the optimal service size. As platforms, protocols and infrastructure tools improve, the costs to run large numbers of services decreases. That tends to decrease the optimal service size.
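A toy model of that trade-off (all constants invented for illustration): total cost = per-service overhead that grows with the number of services, plus within-service complexity that grows superlinearly with each service's size. Sweeping the service count shows an interior optimum that shifts as either cost changes:

```python
# Toy cost model of service granularity; the constants are invented.
def total_cost(n_services: int, total_size: float = 100.0,
               per_service_overhead: float = 5.0,
               complexity_exponent: float = 1.5) -> float:
    """Overhead scales with the number of services; internal complexity
    scales superlinearly with the size of each service."""
    size = total_size / n_services
    return n_services * (per_service_overhead + size ** complexity_exponent)

def optimal_count(max_services: int = 100, **kw) -> int:
    """Brute-force the service count that minimizes total cost."""
    return min(range(1, max_services + 1), key=lambda n: total_cost(n, **kw))

# Cheaper orchestration (lower per-service overhead) pushes the optimum
# toward more, smaller services -- which is exactly the microservices bet.
```

The interesting prediction of even this crude model is that neither "monolith" nor "microservices" is right in the abstract; the optimum moves as the relative costs move.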
The microservices movement, and to an extent the serverless movement, assume that in the medium- and long-term the technological improvements are going to tip the scales sharply in favour of small services. I agree that's likely the case. But we're not there yet, except in some specialized cases such as large distributed organizations (Conway's law). But it's going to be at least a few years before it's worthwhile to build most software systems in a microservice architecture.
But new technology is necessary and early adopters are necessary. Iteration is necessary. Don't punish it.
Is there an advantage to using docker when it takes 3 hours to rebuild our relatively small database?
This joke will never get old to me
That said, if you can get your system to work with a single Heroku box, you really truly can simplify your life. That is what we're trying to do with http://gun.js.org/ , be able to start with a single machine and no configuration/setup/complexity. Then grow out.
We just had a discussion on the WebPlatform Podcast about all of this P2P stuff (https://www.youtube.com/watch?v=NYiArgkAklE) although, like I said, I probably got too jargony.
But props to circleci for calling out the elephant in the room. Great marketing actually.
The idea is you can deploy any app to any infrastructure of your choice (inside Docker containers). This means that you are not locked into Heroku and it gives you much more flexibility.
It's basically a hosted Rancher http://rancher.com/ service with a focus on a specific stack.
I think in the future, there will be a lot of services like Baasil.io (specializing in various stacks/frameworks) and managed by various open source communities.
Docker and Kubernetes WILL become more accessible to developers - I would bet my life on it.
I'm currently building a CLI tool to allow deploying in a single command - So you can get the simplicity of Heroku while not losing any flexibility/control over your architecture.