But I rarely hear of actual teams using them; it's usually individuals using CDEs for side projects.
Are you using a CDE at work? Would love to hear about your experience.
With Bento Remote, you get a fresh, fully dedicated EC2 machine in about two minutes from when you run `bento remote create`. It is continuously tested on every merge and verified to function. Everything you need is preinstalled and ready to go. Just connect VSCode to it and start coding. No futzing around getting your local environment to work. Break something on your Bento Remote? Throw it away and get a new machine. Switching projects? Grab a new machine from the pool that is preconfigured for that project. Your settings travel with you.
We had a potentially unique set of circumstances, though. Our full-stack development requirements were VERY high: 64GB just to run our largest and most active apps, which made third-party tools fantastically expensive. Building it ourselves let us fit Instacart-specific needs and workflows, and do clever things like hibernating the EC2 instance after hours when the user's laptop is idle. This, plus a host of other measures, was essential to making it not only cost-effective but a large net gain.
So I highly recommend everyone at least take a look and evaluate whether you have the need. Start with the SaaS versions and see.
The cheapest instance I could find approached $1k USD per year without egress costs. If you can build all of that, why use EC2 over your own hardware? If you’re spinning up containers, why not use the dev’s workstation for that? You already paid for it, right?
Despite containers working locally, the ease of having a remote image with everything preinstalled is huge. Lots of large companies offer internal dev cloud environments - I’ve always found it to be a frictionless experience versus local setup at other companies.
Bento was actually our first attempt at doing all of this on our local MacBooks. Many users found that their machine would slow to a crawl and heat up. M1 Macs, and now OrbStack instead of Docker Desktop, help reduce that; moving off of Docker helped as well. EC2 gives us a consistent image that is easily testable and repeatable. Egress costs here are negligible.
We don't keep the machine running all of the time. It stays running during the user's work hours (~8 hours) and then hibernates (the machine is off, but memory is persisted), which reduces EC2 costs (minus EBS) to zero. We detect activity on their laptop and wake the instance when needed.
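The hibernate/wake flow described above maps onto two AWS CLI calls; here's a minimal sketch. The instance ID is a placeholder, and note that hibernation has to be enabled when the instance is launched (the idle detection on the laptop side is the part that needs custom tooling):

```shell
# Hypothetical instance ID; hibernation must be enabled at launch
# (run-instances --hibernation-options Configured=true) and the
# instance type and EBS root volume must support it.
INSTANCE_ID="i-0123456789abcdef0"

# Stop the instance but persist RAM to the encrypted EBS root volume.
hibernate_devbox() {
  aws ec2 stop-instances --instance-ids "$INSTANCE_ID" --hibernate
}

# Resume: RAM is restored, so processes pick up where they left off.
wake_devbox() {
  aws ec2 start-instances --instance-ids "$INSTANCE_ID"
}
```

While hibernated you pay only for EBS storage, which is what makes the "off during idle hours" approach pencil out.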
We're not just a shopping-cart product. I thought the same before I first joined, but it turns out getting groceries to your door is incredibly hard to do at nationwide scale.
My company switched to 100% remote dev envs a couple years ago. When you cut a branch it spins up a VM and you can connect to it from VS Code (native or browser based) or just plain SSH. It works great. The lag is not noticeable at all. Dev envs are fully provisioned and up to date with all tooling and dependencies so you don't need to bother with managing any of it locally. Given a choice I don't think any dev at my company would go back.
I brought an old, under-powered MacBook I had not used for a year or two, but I was going to use Codespaces, so running the entire stack would be no problem. Or so I thought. Codespaces works great from my home office PC...
Constant disconnection meant I eventually gave up on Codespaces and worked directly on my machine, but then I had to download the world (brew update, git cloning, a new JVM version, docker images, all new SBT and Scala binaries, transitive dependencies, etc.). Granted, I caused it by not pre-downloading a more up-to-date dev environment, but I was not planning on using it locally. :/
Several nails-on-a-blackboard painful hours later, I had mostly made only theoretical changes and spent most of the time chatting/networking.
What we did agree on was to put in an offer to the co-working space to set up a decent mesh Wi-Fi that can handle lots of people downloading all of Maven, using devcontainers, Zooming, etc., so that we can come back.
You can always develop locally if you're on a plane or whatever. In our case, local builds try to use a remote runner if one is available; if not, they happen locally.
We'd had all our sites set up to run fairly easily via docker compose prior, but I'd still find myself debugging people's setups fairly frequently. And giving developers data and secrets was often either insecure or complicated, depending on the codebase.
With codespaces, people can just jump straight into a working project, without pulling any client code or secrets or data onto their machine. It still requires maintenance sometimes but at least when I fix the codespace config I know everyone will definitely benefit from the changes.
The main downside is it's pretty expensive (if you have, say, 10 devs using it all day every day) compared to "free".
If you work on just a few projects, and/or you have very sophisticated systems across the board (like every site has an on-rails setup script with useful sanitized dev data, and secure SSO'd secrets management), I doubt it's worth it.
But in our case, a relatively junior dev being able to spin up a working dev version of a site they've never worked on in 5 minutes with no issues, so they can knock out a 3 hour change and maybe never work on it again, is a big money saver.
It's also meant that we can more easily standardize everyone's laptops without having to consider how well they work as bare-metal dev machines (which has meant we can move everyone to fairly cheap MacBook Airs without people moaning about their tooling or storage size, etc.)
I also like that access to a lot of stuff becomes directly mediated moment to moment by someone's github access (which for us also runs through our sso, cloudflare zt etc).
We're doing it in a slightly clunky way, though: we still use docker compose inside the codespace. I like this approach personally because it feels like we're less locked in to the platform. For us it also made the initial migration easier. I think it also makes debugging the environment a bit easier, because you don't need to keep rebuilding constantly on changes; you can just `dcb`/`dcup`...
I've run k8s/k3s with docker-in-docker this way too. Really easy once you get it set up, and great for playing with architecture ideas.
I work in a small shop and things are messy. Similar to having hundreds of WordPress sites, but we managed to standardize the main set of plugins we use on all clients (this has its own git repo), and clients will have their custom theme and some custom plugins (in another repo).
Ideally we would have a tool that lets us spin up a dev site for any client, fetch the production database from the last backup, anonymize the data, connect an IDE and have git commit access.
You can tell Codespaces to include the AWS command-line tooling automatically via the devcontainer "features" attribute. And you can tell it to run a script once the codespace has initially been created using postCreateCommand (which imo is a lot easier to debug than beforeCreate...).
For us the s3 credentials live in the github repo as codespace secrets (although I think you could set up a much better auth approach via the vscode aws plugins possibly).
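For reference, both of those pieces fit in a few lines of devcontainer.json; the setup script path below is hypothetical, but the AWS CLI feature ID is the published one:

```json
{
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "features": {
    "ghcr.io/devcontainers/features/aws-cli:1": {}
  },
  "postCreateCommand": ".devcontainer/post-create.sh"
}
```

Codespaces secrets set in the repo then show up as environment variables inside the container, so the post-create script can use them directly.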
Most cloud environments are also limited in terms of what you can do, e.g. issue sudo while running a process, or attach to a process with a debugger.
Usually when these come development-environment-ready, they also hide away the underlying details; i.e., I no longer know the command lines etc. should I need to write infrastructure code/automation later on.
I guess there are domains where these are non-issues. But for a wide variety of my use-cases local development is going to be preferable, because by design there are limitations in the alternative.
Also I can't really say that I've heard anyone outside of this thread who is excited about this at all. I see people here have different ideas but personally I've never heard of it from anyone I work with or know directly.
I can also concur: I work in an area that is doing somewhat bleeding-edge infra work (probably second-stage adopters) and neither my colleagues nor I are actively seeking out this technology. We don't really seem to have the problems it purports to solve in a major way at this time.
The issue is that most of my team are in that 10% and now that 90% of the company have their needs met, we're getting pressure to conform because maintaining all the physical infrastructure for a small group gets expensive. Unfortunately, we can't use cloud environments for a reasonable chunk of our work, so that really means we get forgotten about until we complain loudly enough, no matter how friendly I am with the relevant teams.
I tried Github Codespaces and thought it was cool but wasn't nearly as fast as my remote workstation.
Edit to add: colocation for a 4U costs me about $70/month, depending on a few variables. My same apps on AWS would cost 10x more. So that leaves plenty of budget for my hardware, which I usually get a year or two out of, doing minor repairs/upgrades myself.
I'm really interested in this kind of thing.
https://www.webhostingtalk.com/forumdisplay.php?f=131
Also just googling "[major city name] colocation" will usually find results, though be warned pricing is often "call us" vs. stated upfront.
Not to mention the superior network connectivity (10 gigabit/s). In fact, there are no 10gbps options from the common last mile carriers available at any price (AT&T, Crapcast). And if they did offer it, it'd probably be $500USD/mo+ just for Internet.
AT&T has a 5gbps fiber plan available for $250USD+/mo, but there are very few (<10 fractional portions of cities nationally) areas where it is offered today.
If you have a strong enough desire for cost-effective, really fucking fast Internet for a fair price, move to Korea. In .kr you can get 10gig for something like $30-50USD/mo.
Edit: Oops, my bad. I meant to reply to this comment:
https://news.ycombinator.com/item?id=37934982.
> Step 1. Create an Oracle Cloud account
> Step 2. Create an Ampere 6 core, 32gb memory instance for like $5/mo
> Step 3. Use Jetbrains Gateway to run your IDE as a thin client, executing on that host.
> You get a pretty darn beefy ARM64 VM instance from OCI for extremely cheap. You can get these in a region near you, with low latency. And Jetbrains Gateway works pretty great.
We use OCI, aka Oracle Cloud. As a customer, OCI is more financially appealing compared to the competition (AWS, Azure) for bare compute infra. Not as cheap as OVH, if that fits your requirements. For me though, OCI is preferred because it provides all the standard cloud building blocks of multi-region + (cheap) object storage + block storage.
I mean, fuck Oracle and Larry Ellison, right? That said, building and operating a cloud at scale is a lot of work and I'll leverage their work as long as it makes fiscal sense.
if you're up for quick chat/DM, would love to hear more (link in profile)
At this point I'd question if I want to be this tiny piece of a tiny cog in a tiny gear among a billion gears in the huge clunky machine (comp and the bottom layer of the Maslow hierarchy aside).
This gives our support developers instant access to a fully configured development environment across all of our client sites, and it really helps speed things up. Previously there would be a minimum of 1-2hrs of local setup for a new developer to work on a project; now it's 5 mins, with guaranteed no problems getting going. So we can spread support developers across more projects and not worry about local setup. I put together a free starter repo if anyone wants to try that for Magento dev: https://github.com/develodesign/magento-gitpod
My background is in cybersec and I worked at Snapchat after they acquired my previous (cybersec) startup - first time I saw the concept of CDEs, naturally I thought - can we combine productivity and security? This is what Ozrenko (my partner and ex Snap as well) and I decided to do.
Along with all the CDE management functions, we designed infrastructure security mechanisms that are transparent to developers and make their lives easier (Oz and I are developers). We realized we could expand the concept with a load of DevSecOps automation, and eventually we hit on very novel code security practices that we embedded in the environment (and filed patents on them).
Our security is mostly about keeping developers chill while protecting the org (and reducing infrasec cost ;-): we protect all resource-access credentials from leaking (phishing and malware), we provide data loss prevention (IDE, web apps), we detect secrets and prevent sprawl, we detect external code pasted into the code base, etc. All transparently, with no hassle for the dev team; everybody gets free SSO to any resource!
Most importantly, we have hardened the platform at very complex organizations such as Broadcom, SwissRe, Niantic Labs and others with our self-deployed platform - you can imagine the difficulty of running efficiently across WAF, traffic proxies, VDIs and SASE. Oz is the man for that.
So in summary - we have today the most advanced CDE platform that provides both efficiency and security for all your resources and assets. We are a Swiss company working world-wide (greetings from Tokyo this week), so if you are motivated to join us (any function), please let me know!
Sadly, we have not chosen a codename for the platform yet.
We generally see a ton of velocity increase in dev from teams that adopt the Cloud IDE alongside the rest, as exhibited by lots of comments. Internally, we dogfood them 100% and the team would never go back to the old "local-first" ways. So we are using them and love them. The key qualities we love are ephemerality, parallelism, and accessibility. But in general there is a lot of resistance for various reasons, as seen in many comments here.
We wrote up some of our POV in our docs here: https://docs.withcoherence.com/docs/explanations/remote-deve...
[I'm a cofounder of Coherence]
Devpod did this right, and Gitpod is actively working on supporting it. No, I one hundred percent do not want to use your coherence.yml files instead.
Even at megacorps where they have really good cloud dev environments, adoption is not universal. Many many people at my current employer have big under-desk workstations to do their day-to-day programming.
I do think there are some real wins in ensuring development environments are consistent and versioned. Knowing you can pick up the exact version used to develop the project without dedicated effort is attractive.
It's not nothing. Most developers are issued laptops, and a 64GB Macbook costs over $3000. Plus a CDE can be shared.
At work we have very beefy workstations that just aren't available in those hardware specs from cloud vendors. They use workstation-grade hardware (A6000 GPU, Threadrippers) that offers better performance/price than similarly sized cloud offerings, which use datacenter-type hardware. To actually tick all the same boxes you'd need to go for much larger instance types. Plus it's questionable whether one could realize the scaling benefit of the cloud, because those machines are more pets than cattle; killing them overnight and bringing up a clean system the next day would likely upset dozens of different workflows and tweaked setups. And then the software expects network filesystems with low-latency random access, not blob storage... They did the cost calculations: buying hardware is cheaper in the long run, including ops.
And then there are customers that have extremely stringent security demands. Segregated networks, video surveillance, no data must leave the premises. Convincing them to move the data into the cloud might not be impossible but it would be a big recertification ordeal for one cloud vendor. And then we'd be locked into that vendor until we could get something else. And we have several such customers.
I get that spinning up a new dev env is really handy, but with something like Ansible it's possible to get a good dev environment onto someone's local machine within hours, which isn't much worse.
What am I missing?
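For anyone curious what that looks like in practice, a self-targeting playbook along these lines is usually enough to bootstrap a box (the package list here is just an illustration):

```yaml
# dev-env.yml -- run with: ansible-playbook -c local -i localhost, dev-env.yml
- hosts: localhost
  connection: local
  become: true
  tasks:
    - name: Install core dev tooling (illustrative package list)
      ansible.builtin.apt:
        name: [git, build-essential, docker.io]
        state: present
        update_cache: true

    - name: Let the current user run docker without sudo
      ansible.builtin.user:
        name: "{{ lookup('env', 'USER') }}"
        groups: docker
        append: true
```

The trade-off versus a CDE is that this still runs on heterogeneous laptops, so "works on my machine" drift can creep back in between runs.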
It's a Swiss-based company that provides a secure, container-based environment.
We like the many security features they offer, like proxying outgoing connections, organization-wide controls, automatically importing source code ACLs, and many possibilities for custom images. Onboarding new people is also very easy once we got our custom base images working.
We have very good contact with Strong Network and they are very responsive to our requests and comments.
Compared with other integrated web IDEs, we feel that security was at the core of the design, which is not always the case, and this shows in the way the product is structured.
E.g. for Computer Systems it spawns an environment with GDB pre-installed, while for Intro to AI it has PyTorch/Python3 pre-installed.
But at the same time if everyone moves to devcontainers, will this skill even matter for the most part? Because the people who want to learn how to do something are going to do it regardless, and those who don’t might not need to worry about it
I remember being a Junior Compsci student watching other students struggle to install the Java JDK on their Windows laptops (install JDK not JRE; configure PATH; install Eclipse; point it at the JDK). The professor had to come around and help them. I was the only student with a Mac and I had transferred from a community college where we had already learned Java in the first semester, including how to install and set up a local environment. All of this to say I feel like colleges should better prepare students to set up their local environments, although I see how useful the online REPL options are for getting started.
Alternatively, you could do something similar with Google Colab and a notebook you make to serve as a template.
Refer to the section titled “CSCI JupyterHub Coding Environment”
Alternatively, as another user pointed out: https://tljh.jupyter.org/en/latest/install/custom-server.htm...
Step 1. Create an Oracle Cloud account.
Step 2. Create an Ampere 6 core, 32gb memory instance for like $5/mo.
Step 3. Use Jetbrains Gateway to run your IDE as a thin client, executing on that host.
You get a pretty darn beefy ARM64 VM instance from OCI for extremely cheap. You can get these in a region near you, with low latency. And Jetbrains Gateway works pretty great.
On the plus side, this is an entire VM, so if you've got containers, or whatever else you need to run, that all executes there too.
Has it improved in the last year?
There are still irritations. But I am comfortable using it day-to-day.
It's absolutely critical that your remote gateway be nearby. I'm about 10ms away from mine, and though there is sometimes perceptible lag, it's not bad at all.
i so desperately want it to work, but it messes up often enough that it's not worth it yet
fwiw, using local intellij with nfs mounted source is a better experience
> Ampere A1 compute… with cores billed at $0.01 per OCPU-hour and memory billed at $0.0015 per GB-hour in all regions.
So for 6 cores and 32gb memory I’m calculating $78.84 per month.
I’d love to get 32gb of ram for $5/month.. but it sounds too good to be true.
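For what it's worth, the parent's math checks out; at roughly 730 hours in a month:

```shell
# Ampere A1 on-demand pricing from the quote above:
# 6 OCPUs at $0.01/OCPU-hr plus 32 GB at $0.0015/GB-hr, ~730 hrs/month
awk 'BEGIN { printf "%.2f\n", (6 * 0.01 + 32 * 0.0015) * 730 }'
# → 78.84
```

One guess is that the $5/mo figure assumed OCI's always-free Ampere allowance offsetting most of the instance; a full 6-core/32GB instance on demand bills as above.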
Our solution is based on Firecracker, which enables us to "pause" (& clone) a VM at any point in time and resume it later exactly where it left off, within 1.5s. This gives the benefit that you won't have to wait for your environment to spin up when you request one, or when you continue working on one after some inactivity.
However, there's another benefit to that: we can now "preload" development environments. Whenever someone opens a pull request (even from local), we create a VM for it in the background. We run the dev server/LSPs/everything you need, and then pause the VM. Now whenever you want to review that pull request, we resume that environment and you can instantly review the code or check the dev server/preview like a deployment preview.
It also reduces cost. We can pause the VM after 5 minutes of inactivity, and when you come back, we'll resume it so it won't feel like the environment was closed at all. In other solutions you either need to keep a server spinning in the background, or increase the "hibernation timeout" to make sure you don't have the cold boot.
It's kind of like your laptop, if you close it you don't expect it to shut down and boot the whole OS again when you open it. I've written more about how we do the pausing/cloning here (https://codesandbox.io/blog/how-we-clone-a-running-vm-in-2-s...) and here (https://codesandbox.io/blog/cloning-microvms-using-userfault...).
> I have never in my career seen a good implementation of cloud development. At every company I've ever worked for, "cloud development" is nothing but a Linux VM that gets provisioned for you in AWS, a file watcher that syncs your local files to the VM, and some extra CLI tools to run builds and tests on the VM. And every time I've used this, the overhead of verifying the syncing is up to date, the environment is consistent between my laptop and VM is the same, all this other mess...every time I end up just abandoning "cloud dev" and doing things on my laptop. God forbid you change a file in your cloud VM and forget to sync in the reverse direction. Not only is local development more reliable, but it's also faster (no remote network hop in the critical path of building things).
Everything is remote but you work in a local IDE and everything feels local
Started a new job recently and had my laptop up and running, ready to code in about an hour (only second nix box I've brought up), by day 3 I was building and running the main monolith monorepo with my own local flake. I have since replaced redis, postgres, and two ancillary services in containers with devenv services/processes; it's been really great not dealing with docker volumes, networks, images or building containers and managing pruning them.
It would be interesting to play with automating deployment of my nixos machine configuration into a cloud VM or pod as I work 99% CLI anyway, but I just don't really see the need... this is just easier.
I loved it-- I loved having separate environments per-project. I enjoyed the collaborative features as well (send a link to look at code or preview something, etc). I see a lot of potential with them and I would love for them to be more mainstream.
After Amazon acquired the company I cancelled my subscription (I was paying annually.. I think it was $190??). I knew Amazon was going to murder the service, require an AWS login and who knows what.
I have tried others since then like code spaces and some open source/self hostable solutions (I have even tried the old self-hostable Cloud9 code).
Ultimately, I gave up on it... why? I didn't like the idea of self hosting (more attack surface area, etc). I didn't like any companies offering the service.
I bet it's hella outdated and full of security issues now though.
Then someone will realise "hold on, can't my own computer run this stuff?", get rid of all the cloud layers and run the environment directly on their laptop.
They'll write a blog post about it and people will be amazed that it's possible.
And the cycle will be complete.
https://github.blog/2021-08-11-githubs-engineering-team-move...
We persist the home directory of a user across all their workspaces and also mount a common data directory to all, so that we have access to pre-downloaded datasets.
I was initially kinda opposed to this and preferred "bare metal development" on my local rig. But the performance is actually pretty incredible for remote development with vscode such that I don't really notice things are running on different hardware.
We recently had some new members join the team and I decided to spin up some dedicated EC2 instances for them to use for this exact purpose. They aren't being used yet, but as our stack becomes more sophisticated I think workloads will transition there. It's done with a custom terraform module that also provisions other assets needed for each dev (regardless of local or remote dev) like an S3 bucket, some dynamo tables, IAM roles, etc. Being able to onboard a new dev with a handful of lines added to a mapping is pretty awesome.
tl;dr I would absolutely consider remote dev spaces.
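A per-dev Terraform mapping like the one described above might look roughly like this (the module path and variable names are hypothetical, not the poster's actual code):

```hcl
# One entry per developer; the module fans out into an S3 bucket,
# DynamoDB tables, IAM roles, and (optionally) an EC2 dev instance.
locals {
  devs = {
    alice = { remote_instance = true }
    bob   = { remote_instance = false }
  }
}

module "dev_env" {
  for_each        = local.devs
  source          = "./modules/dev-env" # hypothetical module path
  developer       = each.key
  remote_instance = each.value.remote_instance
}
```

Onboarding then really is "add a line to the map and apply", and offboarding is deleting it.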
By far, what I appreciated the most was I didn't have unnecessary data on my local hardware. No customer data. No order data. Nuttin'.
What does D.O. stand for in the context?
Call me old school but if I can run my tooling locally I typically prefer that in most cases, and Nix does a stellar job of tracking everything deterministically, so sharing amongst the team works great too.
So much so that I think Replit actually uses it under the hood for some of their environments, iirc.
* quick PR tweaks on someone else's branch => github code editor
* data science & customer success work: jupyter notebooks (GPU) & google colab (CPU)
Based on those experiences, and local dev experiences, we invest in a mix of native + containerized + ci/cd staging server dev experience... and not experimenting with cloud IDEs.
Are there useful cloud dev env setups for Android? The way Google has made sure Android dev remains locked in to Android Studio, i.e. IntelliJ IDEA, and also to the Great Gradle and whatnot, I assume it'd have to come from IntelliJ: https://www.jetbrains.com/remote-development/.
Has anyone successfully tried this for Android remote dev? I suspect even if it’s feasible with lots of fragile moving parts the resource hungry setup will surely make it very costly for having a personal setup in the cloud.
If you have editor attachments and want to work this way, suck it up and learn your VS Code. That's what all the tooling supports first class. Second class is JetBrains. Vim and Emacs aren't even considered. I'd say just run them from inside the container, but then you run into things like custom SSH that munges CRLF. When you look at who actually authored the devcontainer standard, it makes sense.
My experience with it has been wonderful for getting started and immediately becoming productive with very complex systems. Most of those systems have one (or very few) experts who need to help everyone else with their setups. When problems arise, and they often do, those experts become the bottleneck, and Codespaces removes that. They can focus on keeping just the shared environment working globally versus fixing things locally for each individual.
Outside of that scenario (complex systems), I've experienced it to be overkill: the benefits haven't outweighed the negatives that come along with using such systems.
Today, with Daytona, we are trying to solve exactly this challenge. Daytona is an enterprise-grade GitHub Codespaces alternative for managing self-hosted, secure and standardized development environments.
The unique value of Daytona is that you can self-host it on your own infrastructure and benefit from high-density workspaces which offer efficient resource utilization while ensuring a great developer experience.
Disclosure: Obviously, I work for Daytona, and was working for Codeanywhere. :)
https://engineering.linkedin.com/blog/2021/building-in-the-c...
I'm happy to answer questions but others have already posted many of the benefits. As far as I know, local containers on macOS still have performance issues so we mostly use them in the cloud.
For our node/js repos, yarn does a great job at keeping things simple, deterministic and fast. So even if we have containers, devs use the direct method for faster devloop.
For bigger companies with much more complexity, remote devboxes make a ton of sense. You want to manage farms, not individual flower pots.
Nice thing about containers is that you can run it locally without internet.
Containers + vpn solves a lot of pain.
Bunnyshell creates two types of CDEs:
- local IDE, code running in the cloud. The cool thing here is you get to use a local IDE (any IDE/editor you want), no lag; the user edits local files and the files are synced in real time to the cloud environment. All editing on local, all execution in the cloud. Supports debuggers.
- remote IDE, code running in the cloud. When configured in this way, both the IDE and the execution happen in the cloud. No code on local.
The CDE is just one side of Bunnyshell; the other is provisioning environments on demand or automatically (e.g. ephemeral environments on each PR). All these envs support remote development (if enabled).
Our team makes heavy use of CDEs internally for actually building the product.
One of the submissions was built in Java and didn't use Docker (I guess the person had an aversion to it). I didn't want to bother installing Java on my machine, so Codespaces seemed like a good idea. It took less time to get it working than it would've taken to find the download link on Oracle's website.
I’ve also used it to write a few lines of code on my iPad, but this was far from ideal.
We have our own custom built cloud dev envs where I work and I def see the value. I don't need to worry about a conflicting version of a dependency that I have installed locally for some prototyping affecting my day-day productivity.
Is there any CDE with good Copilot (or equivalent) integration available today?
I've been told it's fairly easy to tweak and run in Debian too. I basically wrote this for those times where I want a fresh development box but I'm too lazy or computationally cheap to even bother installing Ansible to run a `host: localhost` self-configuration playbook on.
Also I have to upskill some colleagues and what better way to do that than share a demo in Codesandbox that they can fork and play with, rather than an email with 20 step instructions on installing node (on Windows)
For actual development I can't work with a thin client, it's too slow.
Spinup time is fast, it's easy to use VSCode remotely, and it's easy to have multiple environments for different types of projects (Python, Java, Go, etc.).
Neovim + LSP's also works perfectly if you prefer that over VSCode (via ssh)
You can also ssh into the workspace and you can do w/e on the terminal (runs in a k8s container).
Personally I used Gitpod at work every day (Nx + React on a fairly complex stack) for 2 years and I loved it.
Always fresh, always working. I don't use it (at work) anymore because I moved to another company, and some of us are in Australia / New Zealand, where latency becomes an issue (so far).
Not quite the same since it's not really containerized, but I set up the project once and then I'm good to go from there
It's nice cause I detest Apple/MacOS and this lets me avoid it completely since I'm just interacting with a Debian machine
Vs code works great but I would really prefer to find a way to get neovim working with less lag. Mosh support is improving but still isn't fast enough not to be annoying!
But that's probably not what you're talking about.
Some notes:
- Codespaces uses the devcontainer spec, so folks can use our setup offline if they want
- I get far fewer questions about how to do XYZ, and when I do get questions, I can almost always reproduce them, which is a breath of fresh air
- We do pay a fair chunk to use high CPU/memory machines, this is worth it for us. I have a very lightweight laptop that I use
- Some of our developers like having multiple separate codespaces at a time as a way to separate out different projects. I just use branches, but some folks like the codespace workflow.
- I can run vulnerability scans over our dev environment which is nice
- When I onboard devs, I can get them to the point of running our full application suite in a 15 minute meeting
- I don't have to deal with m1 vs not-m1 issues that were popping up all the time before we made the switch
- Having a linux base is nice, as that's what we use in production. We deal with some annoying dependencies and not having to install them on Mac anymore is nice
- It's seamless changing between my laptop and desktop that I work from. All the code is instantly on the other when I switch, even if I haven't pushed up to git
- Chrome + Notion + Slack + our task tracker app + etc. take up a lot of memory nowadays. Even with 32gb machines, folks often would run out of memory trying to run our app. On a codespace, all of the memory is dedicated to just the app.
- With prebuilds, when a developer opens a codespace, we already have all of our python, node, go, etc. dependencies preinstalled (also things like awscli, terraform, pre-commit, VsCode extensions, docker). They just run `aws sso login` and then `start --serve` and things work.
- I live in an RV, and sometimes don't have great internet, but my Codespace is on a remote machine that does always have fast internet. Even being an internet-based service, this works great for me.
The biggest cons:
- Some of our devs were really passionate that they preferred other IDEs than VsCode. Some other IDEs do now have support, but they aren't as supported as VsCode yet
- Codespaces have downtime occasionally (as did Cider at Google). Most folks just use a local copy/devcontainer of the app or do other things when this happens, but some folks choose to always use devcontainers/local copies because this annoys them so much. At Google, we just posted memes when this happened and went home.
- Codespaces become inactive after a controllable amount of time has passed. Some folks don't like the 40 seconds to reactivate a codespace after being inactive for more than 30 minutes, so they either write scripts to keep their codespaces alive or just don't use codespaces at all.
Overall:
- most of our newer employees exclusively use codespaces.
- A few of our older employees who developed locally for years have chosen to continue developing locally, and will just hop into a codespace to run a quick terraform command or something if their local version gives errors.
- The number of questions we deal with regarding issues on a single user's machine has gone down dramatically, and tbh most of the remaining questions come from folks who still develop locally
- We do pay a pretty penny for this, but it's a small fraction of our overall cloud spend or costs per employee
Its workflow is still quite in its infancy, though. But the SaaS route has allowed teams new to these best practices to still benefit from them while they move towards adopting them.
GitHub Codespaces for quick stuff; otherwise I prefer my IDE.
I'm aware of IDX though