The first minute is some guy chatting about how Chad gave a talk that was 'visually horrifying', which turned out to mean it looked like a McDonald's theme.
Total waste of my time; with something written down, I could have skimmed some of the headlines in that minute, started reading, or otherwise learned something.
Pasting the whole thing would probably not be useful, but here's a sample from the downloaded XML:
> if you think about it like tiny components one of the best things about say a tiny method when you look at a tiny micro service is it allows you to consider replacing it but you can change it you can understand it really quickly but even if you can't if it's small enough and it's named well enough and it's decoupled well enough microservices tend to be you can throw it away and just restart if you have to
[0] Example: https://gist.github.com/DHager/2e01f0b82e5d3f5a39e6
[1] Look for accesses of https://www.youtube.com/api/timedtext
This is about a specific design pattern that some of us may have exposure to. It's not a speech for the ages as far as I can tell.
"The only thing we have to fear... is fear itself! And microservices!"
"I have nothing to offer but blood, toil, tears and sweat. Oh, and microservices. If we can get the Germans to adopt those, they're totally fucked"
It's a self-contained service presenting a set of related functionalities in a system having a clearly defined, documented, published interface that then becomes a dependency of other services in your system (and may itself depend on services; in the long run you end up with a DAG). It could be viewed as the most natural alternative to monolithic development and makes more sense nowadays, in a world that has better tech/standards around containerization and DevOps.
It's particularly suited to large organizations that allow for organizing teams around specific services. This also means that different teams can choose different tech stacks, depending on what's suitable and what the team has (or wants) experience using.
It's also very well-suited to async/eventually consistent functionality in evented systems. Most complex systems these days present a lot of requirements suited to this paradigm. You often see AMQs show up in relation to microservices for this reason.
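To make the evented/eventually-consistent idea concrete, here's a minimal sketch using an in-memory queue standing in for an AMQ. All names (order service, inventory worker) are illustrative, not any real broker's API:

```python
import queue
import threading

# A thread-safe queue stands in for a message broker (e.g. an AMQ).
events: "queue.Queue" = queue.Queue()
reserved = []  # stands in for the inventory service's own datastore

def place_order(order_id: str) -> None:
    """The 'order service': publish an event and return immediately,
    instead of calling the inventory service synchronously."""
    events.put({"type": "order.placed", "order_id": order_id})

def inventory_worker() -> None:
    """A separate consumer drains the queue at its own pace,
    which is what makes the system eventually (not immediately) consistent."""
    while (event := events.get()) is not None:  # None = shutdown sentinel
        reserved.append(event["order_id"])

t = threading.Thread(target=inventory_worker)
t.start()
place_order("A-42")
place_order("A-43")
events.put(None)  # tell the worker to stop
t.join()
print(reserved)   # both orders processed, after the fact
```

The point is the shape, not the mechanics: the producer never blocks on the consumer, so a slow inventory service degrades latency of the background work, not of the user-facing request.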
You do have to worry about cascading failure when you need strong consistency and low latency responses (so make sure you build systems with sensible fallbacks and constantly test failure scenarios).
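A common shape for such a fallback is a circuit breaker. Here's a hedged, minimal sketch (the `flaky` dependency and thresholds are illustrative; real implementations add half-open probing, metrics, etc.):

```python
import time

class CircuitBreaker:
    """After `threshold` consecutive failures, stop calling the
    dependency and serve the fallback immediately ("fail fast"),
    which is what prevents one slow service from cascading upstream."""

    def __init__(self, threshold: int = 3, reset_after: float = 30.0):
        self.threshold = threshold
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, fallback):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return fallback()      # breaker open: skip the call entirely
            self.opened_at = None      # window elapsed: let one attempt through
            self.failures = 0
        try:
            result = fn()
            self.failures = 0
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()
            return fallback()

breaker = CircuitBreaker(threshold=2)

def flaky():
    # Stand-in for a dependency that is timing out.
    raise TimeoutError("downstream too slow")

# Two failures trip the breaker; the third call never touches flaky().
for _ in range(3):
    print(breaker.call(flaky, fallback=lambda: "cached defaults"))
```

This is also exactly the kind of thing worth exercising in the failure-scenario tests mentioned above: assert that the fallback actually fires when the dependency is down.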
Load testing and SLAs are also good ideas with microservices. Another cool ancillary benefit of service-orientation is the ability to scale services up or down depending on load. In a monolithic system, you might end up with more waste because functionality is highly coupled. When you break things out, it becomes easier to identify where more resources are required and to allocate them more efficiently.
That's the high level overview imo.
The primary reason microservices have the opportunity to shine is that they are easy to scale horizontally.
Having been repeatedly burned by microservice complexity and failures, I'm firmly of the opinion that you should do Monoliths until you can't do them anymore. If you compose your code thoughtfully and have well defined API boundaries from the beginning, it won't be that hard to start breaking it out into microservices if you find that you can no longer scale vertically.
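One way to keep those API boundaries well defined inside a monolith is to have callers depend on an interface rather than a concrete class. A hedged sketch (all names here, `BillingService`, `checkout`, etc., are hypothetical):

```python
from typing import Protocol

class BillingService(Protocol):
    """The boundary: callers in the monolith see only this interface."""
    def charge(self, customer_id: str, cents: int) -> bool: ...

class InProcessBilling:
    """Today: a plain class living inside the monolith."""
    def charge(self, customer_id: str, cents: int) -> bool:
        return cents > 0  # placeholder business rule

def checkout(billing: BillingService, customer_id: str, cents: int) -> str:
    # Because this depends only on the Protocol, billing could later be
    # extracted into its own service behind an HTTP client without
    # touching this caller at all.
    return "paid" if billing.charge(customer_id, cents) else "declined"

print(checkout(InProcessBilling(), "cust-1", 499))
```

If the day comes when you can't scale vertically, "breaking out a microservice" becomes swapping the implementation behind the interface rather than untangling call sites.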
When implementing any architecture, there are a lot of confounding issues going into the choices, including things like: what are the skills of the team, what are the SLA expectations, what hardware resources are available, what are the comfort levels with the alternatives, what is the comfort level with retooling, what are the process audit requirements, what are the maintenance costs, what is the bus factor, and so on and so forth.
Architecture is about addressing needs, not building software in the spirit of "hey, this is the current fad, so let's do it." Although, for some companies, that is actually a design feature: "Hey, react.js developers, come here!"
I've seen microservices that fail to scale because the entire thing was written with synchronous/single-queue calls. The biggest reason for writing a microservice is the guarantee that I can prove nothing has changed. If your authorization/authentication API is separate and proved out, I can deploy my application without concern for backdoors getting (accidentally or intentionally) into the code. I always remember: "No code is easier to verify for correctness than no code."
I always cringe when I hear "kill X before it kills you" or "X is considered harmful." What will kill any project is failing to understand and appreciate the implications of the design decisions.
There are ways to minimize this, and certainly ways to instrument such that cycles are detected, but I'm genuinely interested to know how people are avoiding cyclic dependencies in their services. I think it's important, because if you're using containers, there's a good chance you're using (e.g.) readiness probes to determine whether your containerized service can reach its dependencies. If you have a cyclic dependency and something breaks, all of a sudden you can't bring up any containers in the cycle: A can't come up until B comes up, but B can't come up until A does.
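One cheap form of that instrumentation is to run a cycle check over the declared dependency graph at CI time, before anything deploys. A minimal DFS sketch (the service names are made up):

```python
from __future__ import annotations

def find_cycle(deps: dict[str, list[str]]) -> list[str] | None:
    """Return one dependency cycle as a list of services, or None.
    `deps` maps each service to the services it depends on."""
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on current path / done
    color = {s: WHITE for s in deps}
    stack: list[str] = []

    def dfs(node: str) -> list[str] | None:
        color[node] = GRAY
        stack.append(node)
        for dep in deps.get(node, []):
            if color.get(dep, WHITE) == GRAY:          # back edge => cycle
                return stack[stack.index(dep):] + [dep]
            if color.get(dep, WHITE) == WHITE:
                if (cycle := dfs(dep)) is not None:
                    return cycle
        stack.pop()
        color[node] = BLACK
        return None

    for service in deps:
        if color[service] == WHITE:
            if (cycle := dfs(service)) is not None:
                return cycle
    return None

# A depends on B, B depends on A: the readiness-probe deadlock case.
print(find_cycle({"A": ["B"], "B": ["A"], "C": ["A"]}))  # ['A', 'B', 'A']
```

Failing the build on a non-None result catches the "A can't come up until B does" deadlock before it ever reaches a cluster.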
(if someone would like to clean it up, that'd be great. That someone can't be me, because I'm in class right now. If you want to find a copy of this, go to the "More" option on YouTube and pick the "Transcript" option.)
One point I could pick out was:
> I've been talking to people who want to know how they can do microservices too and like they really don't understand what it means. "It's just a word I've heard, it's good you know" and they're really smart people so we've already destroyed this concept.
I'm not convinced by the arguments (some of them seem somewhat contradictory: "Small projects succeed" vs. "Small is not the goal"), but I would be interested in seeing the talk turned into a blog post that could be studied more closely and discussed.
Definitely. To reiterate something I posted on Reddit a while back:
___
The lesson I've drawn from it is this: Design for deletion. Your first priority is to design your code for the inevitable day when your successor (or perhaps you yourself) dislikes it. The less effort/risk to uncouple and remove it, the better.
Even the most elegant work will eventually stop being relevant as the business' underlying problems change. If that's not true, then what you're working on should probably be structured as an independent library anyway.