- Static frontend hosted on Netlify (free unlimited scale)
- Backend server on Google App Engine (connecting to Gcloud storage and managed DB via magic)
I realize I'm opening myself up to vendor lock-in and increased costs down the road (if I even get that far), but I've wrangled enough Docker/k8s/Ingress setups in the past to know it's just not worth the time and effort for a non-master.
One notable example is how their NDB client library used to automatically handle memcache for you, but they got rid of that with Cloud NDB Library and forced clients to implement their own caching.
The sequence of datastore APIs I've seen during my experience with AppEngine is:
* Python DB Client Library for Datastore[1], deprecated in favor of...
* Python NDB Client Library[2], deprecated in favor of...
* Cloud NDB Library[3], still supported, but they ominously warn new apps to use...
* Datastore mode client library[4]
[0] https://steve-yegge.medium.com/dear-google-cloud-your-deprec...
[1] https://cloud.google.com/appengine/docs/standard/python/data...
[2] https://cloud.google.com/appengine/docs/standard/python/ndb
[3] https://cloud.google.com/appengine/docs/standard/python/migr...
[4] https://cloud.google.com/datastore/docs/reference/libraries
> If you're not already familiar with these tools consider using a managed platform first, for example Render or DigitalOcean's App Platform (not affiliated, just heard great things about both). They will help you focus on your product, and still gain many of the benefits I talk about here.
And:
> I use Kubernetes on AWS, but don’t fall into the trap of thinking you need this. I learned these tools over several years mentored by a very patient team. I'm productive because this is what I know best, and I can focus on shipping stuff instead. Your mileage may vary.
I actually spend very little time on infrastructure after the initial setup (a week of part time work, since then a couple of hours per month tops).
For comparison, this post describing what I did took nearly a month of on-and-off work. But I might just be slow at writing :)
Real vendor lock-in is when you have decades of code written against an Oracle DB and you're getting charged outrageous Oracle rates and it would also cost a fortune to migrate.
Real cloud vendor lock-in is when you have decades of code written against a [cloud vendor] and you're getting charged outrageous [cloud] rates and it would also cost a fortune to migrate.
If you use something like AppEngine to run a Flask or Django app, you will not be locked in much because those are open source libraries with well known runtime options elsewhere.
Same to some extent with any sort of managed OSS database.
If you use something like Cloud Datastore or Firestore or DynamoDB, you are using a proprietary API and will have to rewrite all your client calls, or write an extensive shim, and probably significantly re-architect to port.
Even in the “hosted OSS” option there is usually some vendor-specific stuff, but it can vary a lot. Something like AppEngine specifically used to be an absurd amount of API lock-in but has evolved over the years to be more of a general container runtime.
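To make that concrete: for a stock Flask app on App Engine Standard, the vendor-specific surface can be as thin as one deployment file, while the app itself stays portable WSGI code. A sketch (file contents are illustrative, not from any comment here):

```yaml
# app.yaml — roughly the only GCP-specific file for a plain Flask app
# on App Engine Standard; the app itself remains ordinary WSGI code.
runtime: python39
entrypoint: gunicorn -b :$PORT main:app   # the same command runs on any host

automatic_scaling:
  max_instances: 2   # illustrative cap to bound costs
```

Porting away then means swapping this one file for a Dockerfile or a Procfile, not rewriting client calls.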
1. Firebase Hosting for the React frontend
2. GraphJin (Automatic GraphQL to SQL Engine) on App Engine for the backend
3. Cloud SQL Postgres for DB
https://github.com/dosco/graphjin
Heroku is great for general applications, but if you're trying to do something that isn't a standard CRUD app, it can really start to bite you in the arse.
Their DB pricing in particular is incredibly inflexible compared to AWS RDS. Among other issues we had with Heroku at my old job was a DB that was hitting its storage limits but was miles away from hitting its memory or connection limits. There was no option but to upgrade to the next tier, with additional memory etc., even though all we needed was additional disk.
That's not to say that Heroku is bad, but like any tool, you need to be aware of the long-term costs that often come with short-term convenience.
I run a NodeJS GraphQL server in App Engine Flexible, and it is basically just like running it in a Docker container. It's also pretty trivial to run in Google Cloud Run if I so desired, there is even a tool to assist: https://github.com/GoogleCloudPlatform/app-engine-cloud-run-...
That said, using a static frontend cached on a CDN in general improves initial pageload and cuts down on traffic to your server by a lot. Netlify makes this easy if you want to use React on the client (with NextJS).
With AppEngine you get direct access in one console to all the bells and whistles of Google Cloud, basically the same as the other infra giants. AWS has even more bells and whistles but I find its console more annoying.
Heroku is similar, but Netlify is at least equally simple I found. Maybe someone else can shed more light on differences.
I've also paid for extra builds once or twice in the past (automatically charges a few dollars when you cross the build time limit), and I pay them $9/mo for analytics.
I also have a high(er)-traffic frontend on a CDN which is used by their customers. The only user writes there are purchases/payments, handled by a third(fourth?)-party SaaS.
We're all in on AWS and don't care about lock-in.
The vendor lock-in argument isn't worth considering for most businesses.
Amazon will bend over backwards to accommodate a company spending $500 mil a year on hosting (apparently what Snap spends). Sure it's only a fraction of its revenue ($386 billion for Amazon overall), but half a billion is half a billion.
What product is this?
https://lab.cccb.org/en/arthur-c-clarke-any-sufficiently-adv...
I was feeling a bit down on my projects, but this has me amped up seeing how the ultimate goal of working on features rather than deployment is possible, and very real!
Best of luck with Panelbear!
Also interested in the costs for your setup. My costs are in my other comment [1].
But it's worth realising that one purpose of code organisation in larger companies is to mirror the team organisation. That's a constraint on code that can interfere with the best technical architecture.
You can do better with a monolith in a one-man team!
That's one of the weirdest reasonings I've ever heard. What happens when you have to downsize that team? But yeah, shoehorn each individual contributor into a single microservice out of a hundred and you'll wonder why your software develops so slowly: everyone's too tired trying to understand what each abstraction means, so they spend less time understanding how the pipeline works and how to integrate with it.
Sometimes, the organisational structure drives the code structure (Conway's law [0]). I've seen real world consequences of this, where the disconnected system stovepipes in a large organisation reflected the team structure of the organisation's purchasing function. The purchasing teams didn't speak to each other, so neither did the systems they purchased. The systems had separate support contracts, incompatible upgrades, and each one was a wholly distinct integration target, if you were a third party.
When you are working on a project, if you hit the edge of your current knowledge / skills, push just a little bit further when it’s something that interests you instead of just aiming to hit the basic requirements / lean on other people.
This minor effort compounds over time; do it for twenty years and you’ll be an expert in multiple disciplines and also an expert in how to tie them all together into one cohesive whole. Aim to be a “T-shaped” person, and just expand over time.
At least in my experience I can read about different architectures all day and sort of understand them, but I only really "get" it once I find a non-toy problem I need to solve and attempt to apply the knowledge. Then you see how it really works and form hard skills which stay with you.
My advice:
- There is no defined learning path yet (to my knowledge).
- Start by reading the GitHub readme of the technologies in these articles (e.g. nginx or Kubernetes).
- If interested, try to spin up a tutorial app.
- Try to make something useful. Maybe this is a spin on a tutorial, or something novel. This is the hardest but best way to learn.
Finally, I’ll add that many folks learn these skills on the job either directly or having worked in proximity to new tech. It does seem this was how the author learned.
Hope that helps! I’m sure others will have great advice, too!
You can add additional text about why you did certain things - and then store the data in a wiki or checked into git or similar so you can find it when you need it.
1. Did a project on DigitalOcean, just Ubuntu and Node.
2. A year later, did a project using Meteor; spent way too much time trying to get it all installed with Vagrant (so all info from 1 was not useful).
3. A year later, changed the Meteor setup to use Docker, so I had to learn Docker (so all info from 2 was not useful).
4. Two years later, tried to do something with AWS Lambda (so all info from 3 was not useful).
5. A year later, tried something with Apollo (so info from 4 was not useful).
And to be honest, none of the projects' various needs are all that different. I feel like one "good" solution could have, should have, should now exist ... but I haven't found it.
I guess I kind of feel like people who learned Rails back in the day found it met all their needs and they were able to do 50 projects on it. What is that thing today that if I learn today won't be out of date in 1-2yrs?
My first job ~2005 was at a small shop with like 4-5 people and around 20 physical servers under our control and the same amount on-premise with clients (a mix of Windows Servers, Linux distros, and BSDs). We did have a sysadmin, but he was only responsible for the servers themselves and the base configuration. Everything application-related running on them was our responsibility as developers.
And after that, in the following jobs and as a freelancer, there was a wide variety of things I had to ramp up on quickly. Different build processes, application monitoring, backups, different cloud providers, hidden costs, etc.
Also I have been keeping a "Today I Learned" journal, where I just put small comments and snippets. It is hardly ever any deep insight, but for the most part "to do x in framework y solution z worked". It is also mostly a write-only journal. Just writing things down helps a lot with memory.
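A "Today I Learned" journal like that can be as small as a shell function — the `TIL_DIR` variable and the one-file-per-month naming here are just one possible scheme, not the commenter's actual setup:

```shell
# til: append a dated note to a monthly Markdown journal.
# Keep the directory in a git repo and history comes for free.
til() {
  local dir="${TIL_DIR:-$HOME/til}"
  mkdir -p "$dir"
  # one file per month, one dated heading per entry
  printf '## %s\n%s\n\n' "$(date +%Y-%m-%d)" "$*" >> "$dir/$(date +%Y-%m).md"
}
```

Usage is then just `til "to do x in framework y, solution z worked"`, which matches the write-only spirit: capture first, organize never.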
I also hope we don't need to know all this stuff in the future. It's pretty low-level, and it's much better if we can focus more on creating differentiation and building the actual product. (Full disclosure: I've founded a startup that's trying to do exactly that, so I guess I'm biased!)
(Over time I've learned that it's best to avoid side-projects which have user accounts and store data on behalf of other people, because that's not a side-project: it's an unpaid job.)
While hobby projects can be a great start, the best way to learn is in a team of experienced coworkers. The basic concepts of something like Kubernetes are very easy to grasp, leading people to believe Kubernetes is easy and completely missing the giant complexity the system introduces (that's why many people on here say it's overkill for 99% of projects, which I tend to agree with). Even with seemingly simple things like Docker, there is a massive amount of depth that's in my experience very hard to find in blog articles or YouTube tutorials.
That being said, if you don't have the chance to learn about such things from your coworkers by applying them at your day job, I think the best choice is still to have hobby/testing projects and combine the learning-by-doing aspect with some good books. I also recently learned about two YouTube channels that do a pretty good job of explaining such tools and applying them to the real world in a beginner-friendly way. [1][2]
[1] https://www.youtube.com/channel/UCdngmbVKX1Tgre699-XLlUA
This is why I "write". I started a decade ago capturing short notes for myself about the technologies I use. Writing it down helps me remember it in two ways. First, the act of writing (primarily by pen) is proven to increase your memory of a thing. Second, I can open my notes for step-by-step reminders.
You don't have to blog publicly. Check out the Zettelkasten method if you want to use index cards. Keep a set of Markdown files in a private repo. Whatever floats your boat.
If you keep notes in a notebook, I found that labeling mine "Stray Thoughts" was one of the best things for me. It keeps me from drifting away from that notebook by trying to categorize my thoughts. If they are just stray thoughts, I can put any random thought in that same notebook. The same thing works in a set of text files or a Zettel.
I learned most of these tools at my day job through some catastrophic failures. From my experience, failure has always been the best teacher.
Kubernetes just happens to be a great sandbox for failing hard :) Lots of stories here: https://k8s.af/
However, I wouldn't reach for tools that didn't solve a problem I truly have, be it cost-effective scaling (my day job), or reusing what I already know best even if unconventional (my SaaS).
I guess what I'm trying to say is: focus on solving your immediate problems first with the tools you already know. Your toolbelt will expand without you realizing it.
I don't have a complete answer, but so far documenting things as I go about doing them helps, especially if I write down what I tried, what went wrong, what worked, and why. It's a lot while starting off, but over time as the concepts sink in and become habits, my docs move to higher abstractions automatically and then it is mostly clear. The key words for me are 'train of thought'. The solution (the how) is important obviously, and always useful for quick reference, but when making bigger changes it is more important to remember the why.
It is hella time-consuming and needs dedication, practice, and good tools (I couldn't have started without org-mode myself).
* Flask + Flask-Login + Flask-SQLAlchemy [1]
* uWSGI app servers [2]
* Nginx web servers [3]
* Dramatiq/Celery with RabbitMQ for background tasks
* Combination of Postgres, S3, and DigitalOcean Spaces for storing customer data [4]
* SSDB (disk-based Redis) for caching, global locks, rate limiting, queues and counters used in application logic, etc [5]
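For anyone unfamiliar with the uWSGI + Nginx pairing in that list, the wiring is roughly this — addresses and names are illustrative, not WakaTime's actual configs (those are in the linked gist):

```nginx
# nginx terminates TLS and speaks the uwsgi binary protocol
# to a pool of uWSGI app-server instances.
upstream app {
    server 10.0.0.11:8001;   # uWSGI instances (illustrative addresses)
    server 10.0.0.12:8001;
}

server {
    listen 443 ssl;
    server_name example.com;

    location / {
        include uwsgi_params;   # pass request metadata in uwsgi format
        uwsgi_pass app;
    }
}
```

Nginx handles slow clients and static files so the Python workers stay busy with application work only.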
I like how OP shows the service providers he uses, and why he decides not to self-host those parts of his infra. Also, there's a large up front cost involved for any stack (Rails, Django, k8s). I'd be interested in a more detailed writeup with configs, to try out OP's auto-scaling setup. My configs are linked in the gist below [2] for my non-auto-scaling Flask setup.
I spend about $4,000/mo on infra costs. S3 is $400/mo, Mailgun $600/mo, and DigitalOcean is $3,000/mo. Our scale/server load might be different, but I'm still interested in what the costs would be with your setup.
[1] https://wakatime.com/blog/33-flask-part-2-building-a-restful...
[2] https://gist.github.com/alanhamlett/ac34e683efec731990a75ab6...
[3] https://wakatime.com/blog/23-how-to-scale-ssl-with-haproxy-a...
[4] https://wakatime.com/blog/46-latency-of-digitalocean-spaces-...
[5] https://wakatime.com/blog/45-using-a-diskbased-redis-clone-t...
The "point" of Kubernetes is to drop the difficulty of building a service like Cloud Run to zero. It drops the cost of building a Heroku down to zero. I'd bet my bottom dollar that fly.io and render are running on Kubernetes (maybe they mentioned it somewhere already and I just missed it). With the right cluster set up, building one of those platforms (or others that I won't mention) is almost as simple as setting up stripe checkout and writing a web interface to turn form fields into JSON fields and send them to a kubernetes cluster (possibly with hard multi-tenancy! not to get too into it, but you can literally provision kubernetes clusters from kubernetes clusters, ephemeral or otherwise).
No other tool in the devops world except for maybe the initial orchestrator wave (ansible/puppet/salt/chef) has been this much of a force multiplier. Ok, maybe that's hyperbole, but if adhoc-bash-scripts->ansible is 1->2, Ansible->Kubernetes is similarly 1->2, especially if you consider baked in cloud provider support/innovation.
But here's the secret -- perversely, I'm happy deep down that everyone thinks k8s is too complicated/is a buzzword/isn't worth the effort. All it means to me is that I'm still ahead of the curve.
I have set up Kubernetes but never run it myself in production. But I work with a Hashicorp-equivalent setup with Docker, Nomad and Consul. I also have several Service Fabric clusters. I think it all is just a complete waste of money. Buying services/metal in the cloud or going serverless or whatever is cheaper and with much lower risks for most minor businesses.
It really depends on what you do with that cluster, if all you do is run deployments with services and ingress (the equivalent of ECS + ELB), its easier than doing the terraform thing IMO. It’s certainly easier than cloud formation and building AMIs.
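For reference, the "deployments with services and ingress" pattern mentioned here is about this much YAML (names and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata: {name: web}
spec:
  replicas: 2
  selector: {matchLabels: {app: web}}
  template:
    metadata: {labels: {app: web}}
    spec:
      containers:
      - name: web
        image: registry.example.com/web:latest   # placeholder image
        ports: [{containerPort: 8080}]
---
apiVersion: v1
kind: Service
metadata: {name: web}
spec:
  selector: {app: web}
  ports: [{port: 80, targetPort: 8080}]
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata: {name: web}
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend: {service: {name: web, port: {number: 80}}}
```

That trio (roughly ECS task + target group + ALB rule) is the whole deployment surface for a simple app.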
I completely agree that buying metal in the cloud is cheaper (that’s part of my secret, shhhh).
I disagree on serverless because I think it's only a matter of time before it becomes a frog-boil scenario. Bit of a tin-foil-hat theory, but I think there's a reason companies want you to move to serverless — the complex flows you build make it sticky, the hidden costs are everywhere, they can simply raise the price at any time, and they scale cost with your business. I think we'll see more and more of the "I got a thousand hits in a second and my bill was crazy because X" once this deluge of free credits runs out. Also definitely not sure about serverless for small business; it's such a new paradigm. Maybe if you get prebuilt flows, but it's definitely simpler to set up dokku/caprover on a droplet.
You can still deploy apps directly onto Kubernetes and it works very well for this purpose, but it will require a lot more learning than one of the platforms listed above. If you enjoy learning, Kubernetes is an incredibly powerful and satisfying tool to have in your kit, and the initial learning curve isn't as steep as some make it out to be. If your goal is to deploy apps as quickly and simply as possible however, go with one of the pre-existing platforms.
If you still want to learn Kubernetes then a really great book is Kubernetes Up and Running. It goes into just enough detail at the right point in time to make it simple while still being useful. If you do a bit of Googling, you might find a free copy of the book that used to be offered by Microsoft to promote their Azure Kubernetes Service. Otherwise there's Kubernetes the Hard Way² but that's more focused on administering the Kubernetes cluster itself, rather than how to use the cluster to deploy apps. You'd need a pretty convincing reason to administer your own cluster rather than spinning up a managed cluster on GKE or EKS.
My advice:
- Grab a copy of Kubernetes Up and Running
- Install minikube on your local PC
- Experiment and have fun learning
Hope this helps.
---
1. https://twitter.com/kelseyhightower/status/93525292372179353...
2. https://github.com/kelseyhightower/kubernetes-the-hard-way
- kubeadm (read the logs)
- k0s
- k3s
If you want to understand everything though, the way I started was:
- read the kubernetes documentation front to back
- go through setting up a cluster the hard way (look up the kubernetes the hard way guide)
- set up ingress on that cluster (nginx ingress), and make sure you understand the interplay between kube-proxy, ingress, services, and your deployment. The actual flow is more like kube-proxy->iptables/lvs->containerd but you want to be able to “think in k8s” (i.e know where to look and have an idea what to check when something goes wrong).
- install cert manager for free https certs
- tear that cluster down, and set a cluster up with kubeadm (This will be much easier, and you’ll know what it’s doing because the logs are great and you’ve done it before)
- tear that down and make a cluster with k0s/k3s
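The cert-manager step in that list, for instance, boils down to one cluster-wide issuer plus an annotation on your Ingress. A sketch (email and names are placeholders):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com              # placeholder contact email
    privateKeySecretRef: {name: letsencrypt-key}
    solvers:
    - http01:
        ingress: {class: nginx}           # solve challenges via the nginx ingress
```

Any Ingress annotated with `cert-manager.io/cluster-issuer: letsencrypt` then gets its certificate requested and renewed automatically.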
I want to point out that it really depends on what your goals are. Kubernetes is useful to me because it’s a one stop shop for a wide range of possibilities.
If you just need to get an app up as fast as possible, install caprover/dokku on a DO droplet and git push to deploy.
> The problem is mitigated somewhat by our orchestration system. The control plane for Fly.io is Hashicorp Nomad, about which we will be writing more in the future.
https://fly.io/blog/persistent-storage-and-fast-remote-build...
- Cloud Run (serverless containers)
- Cloud SQL (via proxy)
- Cloud Monitoring & Logging (formerly Stackdriver)
- Compute Engine (if necessary, e.g. websockets)
- Cloud Build for GitOps (deploy on push)
It's clean and simple (to me). Billing is in one place, nicely separated by projects. Monitoring & Logging is already built in. No need to span multiple dev SaaS tools. So far I've managed to avoid Redis caching because Golang + Postgres is fast enough. But if you need Redis you can DIY on Compute Engine or try Cloud Memorystore (configure the memory to a low amount for cost savings).
Google Cloud drawbacks: Additional charges necessary to connect Cloud Run to VPC (via proxy instances). Load balancing on GCP ain't cheap ($18/month, though to a larger enterprise that is a rounding error). But in my setup I didn't need these things.
As shown above, I have heavily optimized for cost and simplicity in my setup.
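The "Cloud Build for GitOps" piece in that list is typically a small trigger config along these lines (service, image, and region are placeholders, not the commenter's actual setup):

```yaml
# cloudbuild.yaml — on each push, build the container and deploy to Cloud Run
steps:
- name: gcr.io/cloud-builders/docker
  args: [build, -t, gcr.io/$PROJECT_ID/app, .]
- name: gcr.io/cloud-builders/docker
  args: [push, gcr.io/$PROJECT_ID/app]
- name: gcr.io/google.com/cloudsdktool/cloud-sdk
  entrypoint: gcloud
  args: [run, deploy, app, --image, gcr.io/$PROJECT_ID/app,
         --region, us-central1, --platform, managed]
images: [gcr.io/$PROJECT_ID/app]
```

Hook this file to a Cloud Build trigger on your repo's main branch and "deploy on push" is done.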
I ended up moving away from it.
I find the UI to be too slow for the purpose it serves. I'm fine with a slow-ish app sometimes but not when I have to use it often and during incidents.
I also had a few instances over the course of several years where policies seem to have silently broken because a system metric name changed. It's possible the issues were of my own doing, but I don't think they were.
Lastly Monitoring, Tracing and Error Reporting are too disjointed. I wanted a solution that created a more holistic view of what's going on.
If you have a one-person SaaS company, how do you get past customers’ resistance to a single point of failure, namely you?
Do you pretend you’re not just one person? Do you only have customers who could handle losing the service when you, say, run away to meditate on the mountaintop? (Or get run over by a beer truck, or whatever.) Is there some non-obvious solution?
And — back on topic — is the architecture part of that sales pitch? “I’m just one dude, but look how simple this is, it can run itself if I am devoured by mice!”
If you do customization for larger customers (and you should), like boiling a frog, one day you become mission critical to their business. Once they recognize that, then they will start asking questions. Now they're kinda stuck with you. You did charge enough money, right?
At that point you must appease them with a plan. Have their code and database on a dedicated server instance. Have them pay for it and own the account (you just saved money). Make sure you're using their domain they control. Give them access to the source code. It's on the server, so that's easy. Write up a doc with how to access everything and all the frameworks and tools you use. After this, they will never bring it up again.
Worst case scenario, sell them the software outright. Price it much higher than you think they will pay. Then double that. Trust me, I've done this a few times.
It gives me a headache now and then, and I am now in the process of getting someone else onboard.
A suggestion, hopefully helpful: a better approach to securing your admin console than simply layering 2FA onto it would be to expose it to a private WireGuard network. One very easy way to do that is with Tailscale, which will hook up to your GSuite authentication --- Google's 2FA stack will be far better than anything you'd likely build on your own.
Tailscale is disgustingly simple to set up. If you're a product person, it's actually upsetting how easy they've made it to get set up.
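One hedged sketch of what that looks like in practice: once the server is on the tailnet, nginx can refuse the admin path from anywhere else. Tailscale assigns addresses from the 100.64.0.0/10 CGNAT range; the upstream and path here are placeholders:

```nginx
# Admin console reachable only over the WireGuard/Tailscale network;
# 100.64.0.0/10 is the range Tailscale hands out to tailnet devices.
location /admin/ {
    allow 100.64.0.0/10;
    deny  all;
    proxy_pass http://127.0.0.1:8000;   # illustrative admin upstream
}
```

Authentication then happens at the network layer (your identity provider via Tailscale) before a single HTTP request reaches the console.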
That's not to take anything away from the excellent writeup; it's more that someone thinking about starting a SaaS maybe shouldn't jump to the conclusion of "I should go learn Kubernetes".
Different framework implementations of a CRUD website with authentication.
A lot of popular web frameworks have basic authentication out of the box & easily allow you to tie free authentication with accounts like Google, Microsoft, and many others. There are also paid alternatives that may save you more money than the free ones if you need advanced authorization controls or other features.
Most devs probably have a collection of ways they've done it in the past that they pull from when needing to adjust from the default framework's methods.
If you don't mind paying just a little bit of money, you can get even more out of the box SaaS like functionality with tools like Jumpstart (if you're using Rails).
[1]: https://blitzjs.com/
There are two Servers load balanced with DNS.
Each Server has 3 jails (Nginx, App, DB) and 2 NICs
The internal NIC is for replicating the DB, and for the App Servers to target the Primary one.
Diagram and Configs: https://blog.uidrafter.com/engineering/freebsd-jails-network...
Both servers are SuperMicro with:
- 6 Cores 3.3/4.5GHz (E-2136)
- 32GB ECC DDR4
- 2 × NICs (em, igb)
- 2 × 480GB SSD
- 20TB on 1Gbps with DDoS FENS
- IPMI over VPN
I rent them from Hivelocity.
===
FreeBSD vs OpenBSD
Ilja van Sprundel answers your question by comparing the number of kernel vulnerabilities since 1999 of the BSDs and Linux. [1]
I don't think FreeBSD, even well hardened [2], is as secure as OpenBSD. After all, OpenBSD's main focus is security. I use OpenBSD for orchestration and monitoring, and I have an experimental setup of OpenBSD with VMM but they crash sporadically, so I'll wait a bit.
At any rate, my goal is to have two heterogeneous paths, maybe (OpenBSD, FreeBSD) or (Solaris, Linux). This way I could simply shut off the vulnerable path when there's an unfixed vulnerability.
[1] https://youtu.be/rRg2vuwF1hY?t=264
[2] https://vez.mrsk.me/freebsd-defaults.html
===
BTW, I have the FreeBSD hardening and setup scripted, which you could add into the ISO in `/etc/installerconfig`, or download from the orchestration host and manually run with `bsdinstall script myinstallerconfig.sh` if you wish.
In my situation I am finding the lack of a consistent environment a recurring issue: the developer environment does not match production. I kept it simple with Google App Engine Standard and Flex environments, and I found the deployment process simple and sufficient for me (at the time). However, I'm finding we are going to step into Docker-land, and it feels very over my head!
I want to get out of IT after 20 years, but there is no way I will stop tinkering with OSs, Raspberry Pi IoT devices, SoC, light coding, etc. It's different when it's a hobby than when you're faced with time constraints, budgets, and nagging bosses.
A project I'm about to start at home is taking an existing 1080P dash cam (front and rear) that features great night vision and hack it using a Raspberry Pi that handles motion detection, sends stills, and uploads to the cloud. Sure, I could go buy an extant system that just works, but what's the fun in that? It's like Legos. I could go buy my kid a fully-assembled car or spaceship, but I'd rather him learn how to follow instructions, see cause and effect, and experience the pride of a job well done. YMMV. There is something really uplifting in seeing "complex" technical stuff working that you yourself built. It doesn't even have to be as good as existing tech.
For our frontend we used Webflow. My friend was able to create the entire marketing site and all the app UIs without needing help from me. Webflow is an awesome tool for that sort of thing.
For the backend, I built a simple Node/Express API and hosted via Heroku.
To this day, everything is still running fine and the API is processing roughly 200 million requests a month. The total cost to host that on heroku is $50/mo.
You can definitely have a simple stack but have it be highly scalable!
Another good read is Wenbin from Listen notes. https://www.listennotes.com/blog/how-i-accidentally-built-a-...
It seems to me that even when you outsource your infrastructure to a major cloud provider, you're still spending a lot of time yourself setting everything up.
I'm certainly not criticising Anthony here – what he's done, especially in terms of product development, is remarkable – but just thinking about the industry at large.
And since it hasn't taken off (and probably won't ever), it just costs me pennies a month since I'm under their free limits, plus the domain.
Firebase's database is a NoSQL database, whereas almost all the data for the apps and (micro-)SaaS I was building was relational.
Their frontend data fetching felt clunky and did not fit my requirement.
Also, the fact that Firebase is a closed-source backend felt scary in the hands of Google (https://killedbygoogle.com/).
Firebase's problems and my desire to have the perfect backend made me build an open-source alternative to fix all the shortcomings. PostgreSQL instead of NoSQL. GraphQL instead of REST. 100% open source. That is now https://nhost.io.
Regarding the rate limiting, you're load balancing into nginx services that you've configured to limit requests. Are they synchronizing rate limiting state? I can't seem to find nginx documentation supporting this. What value is there in this style of rate limiting, considering User X can send a sequence of requests into a load balancer that routes them to nginx boxes A, B, and C? The big picture that 3 requests were processed for user X gets lost. Your endpoint-level rate limiting, however, may potentially be achieving the synchronized rates if the redis servers in a cluster are synchronizing. I guess I'm asking about the strategy of using multiple lines of rate limiting defense. Is nginx-level rate limiting primarily for denial of service?
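For context on the question: nginx's `limit_req` state lives in a shared-memory zone local to each nginx instance, and there is no built-in cross-instance synchronization. A typical per-instance setup looks like this (zone name, rate, and upstream are illustrative):

```nginx
# Each nginx box keeps its own 10 MB zone keyed by client IP, so
# three load-balanced boxes collectively allow up to ~3x the nominal rate.
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    location /api/ {
        limit_req zone=perip burst=20 nodelay;
        proxy_pass http://app;   # illustrative upstream
    }
}
```

That's why per-instance nginx limiting is mostly useful as a blunt abuse/DoS backstop, with a shared store (like the Redis-backed endpoint limits mentioned) doing the precise per-user accounting.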
The horizontal autoscaler should be based on throughput rather than hardware consumption, shouldn't it? If the req/sec goes above a threshold, spawn a new instance. Can anyone comment?
It depends. If you want to scale on other metrics than cpu/mem, then HPA can do custom metrics too. See https://kubernetes.io/docs/tasks/run-application/horizontal-...
Specifically the Prometheus Adapter.
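With the Prometheus Adapter in place, a request-rate-based HPA looks roughly like this — the metric name depends on what your adapter actually exposes:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata: {name: app}
spec:
  scaleTargetRef: {apiVersion: apps/v1, kind: Deployment, name: app}
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric: {name: http_requests_per_second}   # exposed via Prometheus Adapter
      target: {type: AverageValue, averageValue: "100"}
```

Here the HPA adds replicas whenever the average request rate per pod exceeds 100 req/s, instead of reacting to CPU or memory.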
That is the closest thing to a number of requests I could find. So this architecture, no matter how solid, is somewhere between "way too large" and "matches perfectly".
It seems like a solid breakdown of how to deploy your services to k8s and how to properly do CD deployments. But it never mentions whether that actually makes sense at the scale he actually has.
This is a key point. I don't know Kubernetes, and for this kind of scale I'd probably use, say, Heroku. But if I did know Kubernetes, I'd probably use it as it would be one less thing I'd have to worry about if I had to scale up quickly: you never know if that little side project with a dozen users is going to become an overnight success.
What does "make sense" in this context mean? It sounds like you're assuming he chose K8s for the scalability, but scalability isn't the only consideration here. Familiarity of the tooling is the biggest one that he mentions in the post. He even goes so far as to say that k8s probably isn't right for everyone, it's just what he knows.
It's efficiently supporting a profitable application and requires minimal maintenance. That seems to accomplish the goals of "infrastructure", broadly speaking.
- A single VPS server to host the app. I love DigitalOcean.
- A single docker-compose file to bring up the entire stack containing the front-end, the back-end and the database.
- Caddy for automatic SSL certificates and proxying.
- JavaScript/TypeScript for building stuff.
- Cloudflare For DNS
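That whole stack fits in a single compose file; the service names, images, and credentials below are hypothetical, just to sketch the shape:

```yaml
# docker-compose.yml — hypothetical single-VPS stack
services:
  caddy:
    image: caddy:2
    ports: ["80:80", "443:443"]
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data        # persists the Let's Encrypt certificates
  frontend:
    build: ./frontend
  backend:
    build: ./backend
    environment:
      DATABASE_URL: postgres://app:app@db:5432/app
  db:
    image: postgres:16
    volumes:
      - db_data:/var/lib/postgresql/data
volumes:
  caddy_data:
  db_data:
```

With a two-line Caddyfile along the lines of `example.com { reverse_proxy backend:8000 }`, Caddy provisions and renews TLS certificates automatically.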
In my experience (both myself and observing others) this is the cause of lots of side project (sometimes even startup) failure. Lots of people choose a tech stack that's far away from what they've worked with, so they never get past the "read the docs and try to get anything working" stage. For a real chance at completion it seems like the recipe for success is choosing a stack that's 1ish derivative away from a dev's competencies so they have a new and exciting thing to learn, but are able to continue progressing and adding value.
I am also a person that, prior to using Azure, was an absolute "Kubernetes is a big waste of my time and I'll just skip it" person. I wrote it off as predominantly "resume-driven". Now, having used Azure for about a year, I'm rewriting all my Azure infra to use AKS to better insulate me from the inevitable issues that come up when I GTFO of the Azure sphere as soon as our credits run dry. And what I'm learning is that Kubernetes is a just-fine deployment/orchestration/management tool for containerized infrastructure that is _not_ a massively complex microservices infra. It's just a more streamlined approach to scaling and managing cloud-agnostic tooling/containers.
That said I agree with the innovation token concept. None of this junk makes you money, solve a problem first.
What about taxes and invoices to other countries?
Looking into a Stripe plugin like https://www.quaderno.io/
I really enjoy using Stripe, and their support is great, but sales tax compliance makes me a bit jealous of those using Paddle.
It turns out Kubernetes is actually perfect for small teams as it solves many hard operational issues, allowing you to focus on the important part of the stack: the application.
The key is to stick to a simple setup (try not to mess with networking config) and use a managed offering such as GKE. We may need a Kubernetes, The Good Parts guide.
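A "good parts" setup really can stay small: for a stateless web app on a managed cluster, a hypothetical minimal manifest is just a Deployment and a Service, with the provider handling nodes and the load balancer (image name and ports are made up here):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels: { app: web }
  template:
    metadata:
      labels: { app: web }
    spec:
      containers:
        - name: web
          image: myapp:latest          # hypothetical application image
          ports: [{ containerPort: 8000 }]
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer   # the managed provider provisions the LB for you
  selector: { app: web }
  ports: [{ port: 80, targetPort: 8000 }]
```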
As long as at least one of them is an expert on kubernetes. In this case, the one person in the team is that person, and as he points out in the article, he's using it because it's what he knows.
That should be the takeaway, I think. The "trope" remains pretty sensible IMO; I've seen it first-hand, jumping on kubernetes without the know-how is a foot-gun factory, and that team ultimately gave up on trying to implement it.
Running stuff on some k8s managed for you is imo perfectly fine.
What is the closest thing out there today? Or at least a tutorial for sane, small production setups?
FTFY.
Why think of groups of people as though they have a single mind?
* Automatic DNS, SSL, and Load Balancing
* Automated rollouts and rollbacks
* Health checks and zero downtime deploys (let it crash)
* Horizontal autoscaling (in early access!)
* Application data caching (one-click ClickHouse and Redis)
* Built-in cron jobs
* Zero-config secrets and environment variable management
* Managed PostgreSQL
* DNS-based service discovery with private networking
* Infrastructure-as-Code
* Native logging and monitoring and 3rd-party integrations (LogDNA, Datadog, more coming this month!)
* Slack notifications
More at https://render.com.
As the author says, he already has a lot of experience with it, so it worked out great for him, but it is probably easier to just install the tech a small company actually needs.
Unless you have something very special going on, the dependencies (like databases) are probably not going to be that many.
This tech stack looks over-engineered upon first glance, but I don't know much about the author or his product.
I use Kubernetes a fair bit whilst developing OpenFaaS and teaching people about K3s, but there is a whole world of development teams who aren't prepared to consider it as an option. One of the reasons we created "faasd" [2] (single-node OpenFaaS) was to help people who just wanted to run some code, but didn't want to take "Kubernetes mastery 101"
For a small app using a managed service like Cloud Run plus some cloud storage should get you very far. I saw that Heroku is still popular with the indie community, with the author of Bannerbear getting a lot of value from the managed platform.
[1] https://thebootstrappedfounder.com/ [2] https://github.com/openfaas/faasd
It’s a Rails monolith deployed on Heroku.
I’d rather have the time to build new features for my user base than spend it learning how to use k8s or wrangling AWS through its abysmal console website.
I am not sure what the right answer is, but I at least appreciate that there are founders out there willing to give the little-er shops a chance. A healthy ecosystem with competition is good for the most people.
Ideally, you would get it to the point where a newbie can use it as a reference.
I'm just wondering why you don't also run your managed services in k8s?
Much better to leave that to the cloud provider to manage.
From the article:
> However, as a project grows, like Panelbear, I move the database out of the cluster into RDS, and let AWS take care of encrypted backups, security updates and all the other stuff that’s no fun to mess up.
>"Web Performance and Traffic Insights
From the small stuff to the big picture, Panelbear gives you the insights you need while respecting the privacy of your visitors. It's simple, and fast."
Price is based on client websites' page views per month, with free tier to 5K page views.
I.e.: are you no longer in business, sold it or no longer running it solo?
How do you handle database migrations when using an otherwise automated CI/CD flow with gradual deployment?
python manage.py migrate
on pod startup. If there are no changes, it'll do nothing. If there are changes, it'll run the migration.
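One way to express that on-startup migrate in Kubernetes config is an initContainer, so the migration finishes before the app container starts serving (hypothetical manifest fragment, not necessarily what's described above). The important constraint with gradual deployment is that each migration stays backward-compatible with the previous release, since old and new pods run side by side during the rollout:

```yaml
# Deployment pod template: run migrations before the app container starts.
spec:
  template:
    spec:
      initContainers:
        - name: migrate
          image: myapp:latest          # same image as the app container
          command: ["python", "manage.py", "migrate", "--noinput"]
      containers:
        - name: app
          image: myapp:latest
```

Since `migrate` is idempotent, pods that start after the schema is already up to date simply find nothing to apply.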
The in-cluster Postgres is mainly for experiments or staging.
Kubernetes is too complicated for me
If the latter, I would think Dropbox would count, since at least originally it was a single founder.
Edit: Maybe the downvoters can explain what they're disagreeing with?
Regarding CloudFlare cookies, I am not sure if this is the author's problem, since I guess he is neither processing nor storing any of that data.
It is pretty clear it is
-Inmotion shared hosting (some $10/mo fixed)
-PHP (codeigniter framework) with MySQL
Not very proud in the age of Cloud, but I can’t deal with all the complexities. Command line scares me (which seems to be the requirement these days for any development). Now I have a simple ftp folder mapped directly in VS Code.