But my bigger point here is that you're essentially asking "well, how do you monitor your monitor?" At what point up the chain do you have enough? Also, I think the original post was simply a demo of what is possible. Yet whenever someone posts something, people go into the comments to belittle it: "Yeah, you built a monitoring solution... well, what happens if that goes down?"
Which is a legitimate question. But obviously if your production service is that critical to your business, you won't be monitoring it with a service that costs $0.0000002 per execution.
I think you underestimate the interdependency of services in AWS. Historically, if there were problems with S3 or EBS in us-east-1, you could expect the entire API to be flaky, and things like autoscaling to fail. These have been better distributed, but failures still cascade.
> I think the original post was simply a demo of what is possible
No, it wasn't a demo, it was an actual production issue. No alarms, no error logs, no way to tell it wasn't working other than someone noticing the queues were getting larger and contacting AWS.
> people go in the comments to belittle it
Only because the original project presents AWS Lambda as "the solution" to such problems, without acknowledging that it is just as fallible as everything else.
> Well what happens if that goes down?
The solution to this is well known - two monitoring systems in physically separate locations that monitor each other as well as mission critical systems. Nagios, Icinga, and a dozen other well-tested solutions work remarkably well for these roles, yet people keep writing "new" solutions over and over and over.
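The mutual-watchdog pattern described above can be sketched in a few lines. This is a minimal illustration, not a replacement for Nagios or Icinga; the peer URL and failure threshold are hypothetical placeholders, and each of the two hosts would run the same script pointed at the other.

```python
# Minimal sketch of one half of a mutual-watchdog pair: this host
# periodically checks its peer's health endpoint and alerts after N
# consecutive failures. The peer runs the same script pointed back here.
import urllib.request

PEER_URL = "http://monitor-b.example.com/health"  # hypothetical peer endpoint
FAIL_THRESHOLD = 3  # consecutive failed checks before alerting

def peer_is_up(url, timeout=5):
    """Return True if the peer's health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def update_failures(consecutive_failures, is_up):
    """Pure bookkeeping: reset the counter on success, increment on
    failure. Returns (new_count, should_alert)."""
    if is_up:
        return 0, False
    count = consecutive_failures + 1
    return count, count >= FAIL_THRESHOLD
```

A cron job or loop would call `peer_is_up(PEER_URL)` each minute, feed the result into `update_failures`, and page someone when `should_alert` comes back true. Because each side watches the other from a physically separate location, a single site failure still produces an alert from the surviving side.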
> But obviously if your production service is that critical to your business, you won't be monitoring it with [this] service
Then what's its value, other than as an intellectual exercise?
The whole setup will still cost $0/month.
> The solution to this is well known - two monitoring systems in physically separate locations that monitor each other as well as mission critical systems. Nagios, Icinga, and a dozen other well-tested solutions work remarkably well for these roles, yet people keep writing "new" solutions over and over and over.
Because not everyone needs heavy solutions to do something simple. Side projects, small sites, etc. And some people enjoy implementing old use cases using new technology. When Go was rising in popularity, half the posts on the front page were re-implementing fairly common features in Go.
Even if you're not going to implement this yourself, there can still be some value for other readers.