So, taking this to microservices: we can build a bunch of little services that will be reused everywhere, but the large majority of a system will be this kind of glue. You can't make every single piece of code you write as well defined as grep, only with a service interface. And while you will be able to track what the little pieces do, the bigger command-and-control pieces will always present a problem. We can't wish complexity away, no matter how hard we try.
So designing a bunch of microservices and hoping most of your problems will be solved is like trying to build something in Unix without perl and shell scripts. But I see companies, today, that think it's a silver bullet. They've not read Brooks enough.
They never do. It is incredible how those mistakes keep being repeated.
They are an effective, reliable way to get certain things up and running in a given time frame and in a decentralized fashion. If you have limited resources and need to have things working in that time frame, decentralized services can be the right decision even if they give you problems later.
Further, because they are a fairly reliable way to do things, they have appeal even if your time frame is far enough ahead that you can see the problems coming.
Coming from a .NET background, I had an interesting path. I started with DOS-based imperative programming, then databases, then OOP/OOAD, then finally functional programming with F#.
Once I truly got on the functional programming bandwagon, I started asking myself what was all this scaffolding for? Why didn't I just build composable functions that passed formatted files around?
This is 180-degrees from the way I used to code, but damn, I like it. A lot. I can use the O/S as an integration tool, and the entire deploy/monitor/change cycle is a million times easier.
I wonder how many other OOP guys are going to end up in my shoes in another 10-20 years or so?
Note: I see other commenters are talking about how you can't solve your problems simply by using micro-services. I'd agree with that, with one caveat: if you've coded your solution in pure FP, you've solved your problem in a way that's by definition composable. You can certainly decompose that solution into microservices. I think the question is whether or not you have to "re-compose" them into one app in order to make changes.
If you're writing pure transforms, you're already creating the micro-services. It's just a matter of where they live. But if you start to play fast and loose with imperative programming, sure, you're going to need some industrial-strength glue. Even then it's going to be a mess.
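To make "pure transforms are already the micro-services" concrete, here's a minimal shell sketch (the function names are made up for illustration): each transform reads stdin and writes stdout with no side effects, and the pipe is the composition operator. Whether those stages live in one script or behind three service endpoints is, as you say, just a matter of where they live.

```shell
# Each "transform" is a pure filter: stdin in, stdout out, no side effects.
normalize() { tr '[:upper:]' '[:lower:]'; }   # lowercase everything
dedupe()    { sort | uniq; }                  # drop duplicate lines
count()     { wc -l; }                        # count what's left

# Composition is just piping; counts the 2 unique lines.
printf 'Foo\nbar\nFOO\n' | normalize | dedupe | count
```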
It'd be interesting to have a pure FP language where you could either compile the entire code as one piece, or automatically split it up into chunks and deploy separately. You could keep the code in one place and the only thing you'd need to tweak would be the chunking. (You could also layer in some DevOps on top of that where certain pieces would talk to other pieces on a schedule, or across a wire, and that could be specified in the code. You could even meld this into a puppet/ansible-style system where not only do you code the solution, but you code the deployment as well. Neat idea. Somebody go make that.)
This is a good reminder. There is a lifecycle. Know when and where it starts, and when it ends.
Quite beside the point, but:
- You don't want to use `cat`, and
- You probably want to pipe `tail` into `sed`, not the other way around.
This will be substantially faster if `file` is large, because it lets `tail` seek backwards from the end of the file to find the last ten lines, instead of having the whole file streamed through `sed` first.
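A quick illustration of both points, using a throwaway log file in /tmp (the `sed` substitution is just a placeholder):

```shell
# Make a sample 100-line log file to demonstrate on.
printf 'line %d ERROR\n' $(seq 1 100) > /tmp/demo.log

# Slower on big files: sed has to read every line, then tail keeps the last 10.
sed 's/ERROR/error/' /tmp/demo.log | tail -n 10

# Faster: tail finds the last 10 lines first, so sed only ever sees 10 lines.
tail -n 10 /tmp/demo.log | sed 's/ERROR/error/'

# And no `cat` needed in either case — tail and sed both take a filename.
```

The outputs are identical; only the amount of work done differs.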
And it's fairly important stuff for me on a regular basis. At work we generate hundreds of gigs of logs daily, and doing things in the right order with tail and grep etc is often the difference between a script working or not, or between it taking seconds and taking minutes.