Namely, it was extremely hard to onboard new developers to work on the product. They had to understand the whole thing in order to contribute.
Changing something was also hard, since everything was intertwined. Adding or removing a feature was hard, especially given that we were not designing to spec, and the domain experts we were building for could not give feedback. That constraint was outside our control, so we were building for users we never met, based on what we thought would make sense.
The first commits on our own platform were to establish a plugin architecture: there's a core, and there are plugins. We could add or remove functionality by changing a config file. Applications are plugins, and onboarding is easy and smooth, since a junior developer can start working on one plugin and then expand their knowledge.
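A minimal sketch of what config-driven plugin loading can look like in Python (the module names and the `setup()` hook here are illustrative, not the actual platform's code):

```python
import importlib

def load_plugins(enabled, package=None):
    """Import each enabled plugin module and call its optional setup() hook."""
    plugins = {}
    for name in enabled:
        qualified = f"{package}.{name}" if package else name
        module = importlib.import_module(qualified)
        # A plugin may expose a setup() hook; fall back to the module itself.
        plugins[name] = getattr(module, "setup", lambda m=module: m)()
    return plugins

# Adding or removing functionality is then just editing this list
# (or the config file it is read from):
enabled = ["notebook", "object_storage"]  # hypothetical plugin names
```

Dropping a name from `enabled` deactivates that plugin without touching the core.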
We're reaping the rewards of that.
Then you can identify parts that share too few contributors and encourage people to work on them together. You might also find parts where the knowledge is already lost, since everyone who worked on them has already left the company. In that case a team can volunteer to take ownership of the code and take the time to make some sense of it.
Of course, easier said than done.
[0] https://codescene.io/projects/167/jobs/55946/results/social/...
Don't think the company would want to release all the code publicly.
If you think of microservices as contract enforcement, it should be harder to produce unintended consequences at the macro level, because everything should flow through the API (assuming you don't have something weird like several microservices manipulating the same data sources directly). Architecturally, it's easier to understand code/data flows than in an equivalent monolith that hasn't properly enforced modularity.
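To make the contract idea concrete, here's a toy sketch (service names are hypothetical): one service owns its data, and every other service goes through its API rather than reaching into the data store directly:

```python
class InventoryService:
    """Owns the stock data; nothing else touches it directly."""
    def __init__(self):
        self._stock = {"widget": 3}  # private to this service

    def reserve(self, sku, qty):
        """The contract everything must flow through."""
        if self._stock.get(sku, 0) < qty:
            return False
        self._stock[sku] -= qty
        return True

class OrderService:
    """Depends on InventoryService only via its API, never its data."""
    def __init__(self, inventory):
        self.inventory = inventory

    def place(self, sku, qty):
        return "confirmed" if self.inventory.reserve(sku, qty) else "rejected"
```

Because `OrderService` can only call `reserve()`, invariants like "stock never goes negative" are enforced in exactly one place.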
The main problem is that most folks rushing headfirst into the silver bullet don't understand that everything has trade-offs, and in microservice land the trade-offs are versioning, testing, deploying, and monitoring.
Nowadays, though, the tooling is pretty good. If you put your microservices in a monorepo (seems backwards, I know) you can avoid the versioning, testing, and deployment difficulties, and with GKE + Istio you've got tooling to help handle the ops problems. So actually, maybe enforcing code quality is the harder problem, and limiting size and scope does sort of make sense.
Here are some:
1. You're perhaps more subject to Hyrum's law: if plugin devs can see it, they will use it. The general observation here is that it's harder to control the visible interfaces and implicit dependencies you export than the dependencies and interfaces you rely on. As one example, semantic versioning doesn't cater for this at all. Plus, most of the practical knowledge in software is about managing the dependencies you rely on.
2. Dog follows tail. It can happen that a plugin becomes so successful that overall system evolution slows down. The core system can upgrade, but adoption/deployment can be constrained when a particularly valuable plugin doesn't move up to the latest version and the customer base sees more value in the plugin than in the core platform. This can compound poorly over time, and in extreme cases the desirable plugin can become its own platform/system (something I think business-savvy tech leaders are increasingly aware, and wary, of).
3. Operational complexity. It can be harder to run and maintain a plugin-based system than a closed one. Point 2 is a consideration here, but so are other concerns, such as security and resource isolation. Strategies vary, but who pays this cost on a relative basis is one of the more (and perhaps the most) interesting aspects of working on or using plugin systems. As one example of this, think about allocating responsibility for bugs.
4. R&D complexity. It may take more time to design and build a plugin system than a closed one. Incrementally evolving to a plugin system can be difficult if you didn't start there to begin with. So you usually need a clear opening motivation to delay reward (or avoid over-engineering) to invest in a system design where functionality can be extended by non-core developers.
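Point 1, Hyrum's law, in miniature (all names hypothetical): a helper that is private only by convention will eventually be called by some plugin, and from then on changing it breaks someone, even though semantic versioning would treat the change as an internal refactor:

```python
# core.py (hypothetical plugin host)
def _internal_sort_key(item):
    # The underscore says "private", but nothing *enforces* that;
    # any plugin that can import this module can call it.
    return item.lower()

def search(items, query):
    """The intended public API."""
    return sorted((i for i in items if query in i), key=_internal_sort_key)

# A plugin author who can see _internal_sort_key will eventually use it
# directly, like this. Renaming the helper is now a breaking change for
# them, even if semver would call it a patch-level release.
plugin_order = sorted(["Zoe", "adam"], key=_internal_sort_key)
```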
For example, we have the platform and it has icons on the sidebar for Notebook, Object Storage, etc.
Every single one of these is a separate application and a separate repository. These applications are independent in how they deal with business logic, so there's no loss of expressiveness. They just have to present certain "receptors", or an interface, if they want to be plugged into the system. "Interface" is a big word; someone can produce a valid minimal plugin (one that does nothing except be loaded) in two minutes.
This allows us to contain the details of a plugin to the plugin itself, rather than having them leak into other parts of the product. If we want to activate/deactivate a plugin, it takes less than 10 seconds manually.
Now, sometimes a plugin depends on another plugin. But it makes its requests to that plugin, and falls back to something else in case that plugin is unavailable.
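A sketch of what such "receptors" and the fallback pattern might look like (class and method names are made up for illustration):

```python
class Platform:
    """Toy plugin host: plugins register here and look each other up."""
    def __init__(self):
        self.plugins = {}

    def register(self, plugin):
        self.plugins[plugin.name] = plugin
        plugin.activate(self)

    def get(self, name):
        return self.plugins.get(name)  # None if that plugin isn't loaded

class Plugin:
    """The minimal "receptors" a plugin must expose to be loaded."""
    name = "minimal"

    def activate(self, platform):
        pass  # a valid minimal plugin: loads and does nothing

class TrackingPlugin(Plugin):
    name = "tracking"

    def activate(self, platform):
        # Depend on another plugin, but degrade gracefully if it's absent.
        detection = platform.get("detection")
        self.detect = detection.detect if detection else self._no_detections

    def _no_detections(self, frame):
        return []  # fallback: report nothing rather than crash
```

If the "detection" plugin is deactivated, "tracking" still loads and simply reports no detections.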
The amount of engineering time this has saved us is delightful. I think of all the code we did not have to write and it makes me smile.
That's for containment and encapsulation at the application level. But we follow that model at the functionality level, too. For example, model detection and tracking are done by plugins.
We like to build an abstraction once so that we can churn out functionality for similar things "industrially", without thinking too much, but also so we can remove things easily without breaking the rest. Making code not just easy to add, but easy to remove, is important. Once we did that, we were able to remove a lot of code, too.
It is a spectrum, and we started by using it to contain at the "app" level.
That's from our internal wiki, and "Plugin Architecture" was the first entry in it, just to show you how important it was.
Also, given that you describe yourself as a "junior developer", here's a reply to an Ask HN about "How to become a senior developer".
https://news.ycombinator.com/item?id=25025253
Plugin Architecture:
Dynamic Code Patterns: Extending Your Application with Plugins - Doug Hellmann
Link: https://www.youtube.com/watch?v=7K72DPDOhWo
Description: Doug Hellmann talks about Stevedore, a library that provides classes for implementing common patterns for using dynamically loaded extensions.
PluginBase - Armin Ronacher:
Link: http://pluginbase.pocoo.org/
Description: Armin Ronacher is the creator of Flask, Jinja, and a bunch of other stuff.
Miscellaneous links on plugin systems:
https://developer.wordpress.org/plugins
https://techsini.com/5-best-wordpress-plugin-frameworks/
https://eli.thegreenplace.net/2012/08/07/fundamental-concept...
https://pyvideo.org/pycon-us-2013/dynamic-code-patterns-exte...
https://en.wikipedia.org/wiki/Hooking
http://chateau-logic.com/content/designing-plugin-architectu...
https://www.odoo.com/documentation/user/11.0/odoo_sh/getting...
https://en.wikipedia.org/wiki/Hexagonal_architecture_(softwa...
The reason I describe them as "receptors", as in neurotransmitters, is that this fascinates me: both nicotine and acetylcholine bind to the nicotinic acetylcholine receptor (nAChR). I find that amazing.
There have been downsides for me, though; interestingly, never the obvious problem. To me the obvious problem is performance: there's usually some overhead in a plugin system, but it hasn't been an issue for me yet. Maybe I'm just lucky.
An annoying real issue has been adding dependencies between plugins; that has always introduced horrible problems later on. Based on my current life experience, I'd now even choose to duplicate functionality over introducing dependencies, despite the fact that that idea is basically anathema to conventional wisdom.
Yes, not only keeping the cost of change low, but unlocking potential in a way that seems prescient but is just common sense. I like to think about this in terms of "unit of thought", "protocols, interfaces, specs", "impedance matching" to maximize power transfer, unknown future consumers.
Doing that adaptation upstream prevents people from writing adapters downstream. It may be adding a "REST" API so that consumers you don't even know about can work with your system. It may be using the CloudEvents spec for your events to make them easier for people to work with. It may be making sure the output of your system is a Docker image, so it's easier to use elsewhere, whether the user is a human or something else.
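As a sketch of the CloudEvents idea: wrapping internal events in the spec's standard envelope (its required attributes are `specversion`, `id`, `source`, and `type`) lets downstream consumers use off-the-shelf tooling instead of writing a custom adapter. The event type and source below are hypothetical examples:

```python
import json
import uuid
from datetime import datetime, timezone

def to_cloudevent(event_type, source, data):
    """Wrap an internal event in a CloudEvents 1.0 JSON envelope."""
    return json.dumps({
        "specversion": "1.0",                        # required by the spec
        "id": str(uuid.uuid4()),                     # required: unique per event
        "source": source,                            # required: producer URI
        "type": event_type,                          # required: what happened
        "time": datetime.now(timezone.utc).isoformat(),
        "datacontenttype": "application/json",
        "data": data,
    })
```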
Systems that produce known/standardized/understood building blocks unlock a lot of potential and avoid downstream work.
I mean to a point I get it, but if some code is unmaintainable, you don't keep trying to fix it, you have to decide to replace it.
Valve has no excuse, they make crazy amounts of money, they can fund the development of a new engine from scratch easily. They just choose not to.
I've seen this effect in code that wasn't nearly this bad, and I've even felt this way...
But in the end, I've decided to do it anyhow. The end result was that I became the guy that could fix anything (in other people's minds, anyhow) and my job was actually more secure than if I'd followed the path of least resistance.
Had these devs followed the hard path, too, I think it would have helped get things cleaned up, instead of continuing to pollute everything even worse.
Sometimes developing is hard, and you just can't shy away from the hard parts; doing so just makes everything else harder.
But on a more serious note, writing automated tests for game engines involves a lot more than just "duh, unit tests" (especially when testability wasn't a concern in the original design).
Back then people often had no idea what they were doing, often because they were doing stuff nobody did before, ever.
It becomes difficult to even know if the test is checking the right thing since you’re completely unfamiliar with the context.
The hubris of trying to circumvent the law of "fast, good, cheap: pick two".
I think sometimes in projects this complex you (and everyone else) have no idea where some code SHOULD go, and inserting it in the "wrong" place causes weird things later.
The way they hand off visibility determination to entities is fantastic, and there are a number of other small design details that make the engine very pleasant to work with as a modder or game developer. But there are some things that are rather hard as an engine developer working with Source, or black boxes, because no one has public information on how particular systems work anymore.
Even internally at Valve they’ve broken particular portions of Source and the Half-Life codebases because they don’t understand how particular interfaces work anymore, but some older members of the hlcoders community still do.
Having said that, it's built on Valve's Source engine, so it still faced many of the same issues.
I assume many if not all the original devs have left.
And I very much do mean "replaced" there. Physics, since you mentioned that, was switched from Havok to the in-house developed Rubikon. And since Havok is a licensed middleware, they couldn't just bolt some new stuff on and call it theirs. That's going to be a full from scratch replacement.
Similarly, the "UI module" was fully replaced, from the Flash-based Scaleform to Valve's in-house Panorama, which is fairly similar to HTML5/CSS/JS. This module replacement was also "ported" to Source 1 and implemented in CSGO as well. Which gets back to how the lines between game engine "versions" are blurry.
You can get wallhacks in multiplayer by simply using WriteProcessMemory calls. [0]
[0] https://github.com/Snaacky/Diamond/blob/master/diamond.py
See Riot and Valorant from earlier this year. There was a lot of outcry and the response from the devs was basically "we don't give a damn".
Other games, for example, scan window titles or signatures for a variety of debuggers/hacking tools like IDA and x64dbg. There are many techniques and variations you can apply to make things like this more "annoying", but never impossible.
Earlier this year, there was a PCI card PoC that would read memory and act as an "undetectable" wallhack - people are clearly crafty enough to always find their way around.
I'd expect more developers to begin deploying kernel-level anticheat in the future.
Because "we" gamers tend to prefer a fair game.
Playing against cheaters destroys the fun and the games themselves.
It's a hard trade-off, but your average gamer would rather play a fair game.
This general philosophy has been around in all CS games and has worked well, IMO. You just have to find a well-maintained server to play on first.
The end result is pretty fantastic, but it was expected 5 or 6 years ago.
At the end, he wrote "He/him".
Does anyone know what this means and what this trend is? Is it to clearly state your gender and how you identify yourself?
- people who don't want to show their face on the internet
- people whose appearance is ambiguously gendered
- people who are not the gender that people assume from pictures of them
- people who used to be addressed as a different gender
and who, when addressed, want to be referred to correctly.
for instance, i'll never show my face on the internet, but i like it better when someone says "_he_ wrote that program" rather than "_she_ wrote that program."
for all those people, it makes sense to put their pronouns in their bio.
but that leaves the problem: if only the people in the above categories put pronouns in their bio, and you have pronouns in your bio, that might imply you are, for instance, ambiguously gendered.
so people who are conventionally masculine like Rich put pronouns in their bio to normalize it, and to make sure that "having pronouns in one's bio" is not a "thing only OTHER people do".
i think it's a good thing to do. i'll do it too.
Although this seems nice, I don't think it actually solves the bigger problem (if there actually is such a problem in the first place). You cannot wear a label with your preferred pronoun everywhere you go, and this would lead to people stuffing their bios with all their traits, preferences, and beliefs (e.g. gender, race, religion, political views, etc.).
The actual problems are:
1) People assuming someone's gender.
2) People getting upset when their gender is incorrectly assumed.
3) Not having a well established social protocol to ask someone their gender without one or both of the parties feeling uncomfortable.
4) In my opinion it further emphasizes that gender is something really important, that should be mentioned immediately as it changes the way you look at someone. I think the correct progressive way of thinking is to disregard gender entirely and assume everyone is "genderless" unless it actually matters. Does it really matter if he is a he or a she? Does it matter that much if a stranger on the internet uses the wrong pronoun?
5) Same issue applies for all other previously mentioned characteristics of an individual (race, religion, political views, etc.).
Maybe even supports compelled pronoun use.
of course programmers will keep complaining, but in the end it does not matter. if it works, don't fix it. rewrites bring nothing to the business, only to the developers. sure, a rewrite will save dev hours along the way, but the rewrite itself is not free, so all in all... if it works...
btw this is also why designing for composition instead of inheritance is so important for big projects. you will learn this way too late if you don't get it already.
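A small illustration of the difference (classes here are hypothetical): with composition, a collaborator is a pluggable value you can swap in one line; with inheritance, the same change means reworking a class hierarchy:

```python
class FileStore:
    """One storage strategy."""
    def save(self, data):
        return f"file:{data}"

class MemoryStore:
    """Another strategy, interchangeable with FileStore."""
    def save(self, data):
        return f"mem:{data}"

class Notifier:
    """Composed: the store is injected, not inherited."""
    def __init__(self, store):
        self.store = store

    def publish(self, msg):
        return self.store.save(msg)

# Swapping behavior is a one-line change at the call site:
notifier = Notifier(MemoryStore())
```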