At some point, the application had bugs that did not appear when it was run in debug mode from Visual Studio. The solution was obvious: install Visual Studio on site for each customer and teach the users to run the app in debug mode from Visual Studio. I don't know how they convinced the users to do this, or how they managed the licensing, but it was done.
What happened next was even worse.
There was no version control, of course. The code lived on a shared disk on the company's local network, copied into multiple folders, each holding its own version with no particular logic to the naming: V1, V2, V2.3, V2a, V2_customer_name, V2_customer_name_fix, ...
After that, when a customer had a problem, the programmer went there to debug and modified the code on site. If the bug impacted other customers, we had to dispatch someone to each customer to copy and edit the code for all of them. But if the problem was minor, it was just fixed there, and probably saved to the shared folder in some new folder.
What happened next was to be expected: there was no consensus on what the final version was. Each customer had a slightly different version, and some still had bugs that had been fixed for others years earlier.
Around 1991, when Cygnus had 6-7 employees and was based in the apartment complex where I lived, none of the GNU codebase was hosted in any sort of revision control. Everything was FTPed around as obscurely named tarballs. We had gathered something like 27 different forks of gdb floating around the net, for example. This was back when forking was generally considered a tragedy, something I managed to change five or six years later.
Rich Pixley came and said “all of our code should be in revision control, and I want to use this newfangled thing called CVS.” Michael was OK with it but John and I were steadfastly opposed. We agreed to an experiment, grudgingly, subject to a whole whiteboard of absurd conditions (“must be transparently integrated into emacs so we don’t have to know it’s there”).
Pixley agreed to all of that and then ignored all of it completely. It was immediately such a win that everybody adopted it without complaint, including us two obstreperous a-holes.
A few years later a preventable crisis was how revision control first became client-server.
Then some other dev committed 9 MB of tabs (0x09) at the end of a file. Then the site was "slow" (because the homepage was 10 MB). And the blame went to... CVS, somehow.
I left.
One company I joined was cleaning up the last vestiges of “customize it for every customer by letting people edit their copy on the server,” which predictably had turned into a mess. They were all very small customizations to styles or images, but enough to make upgrades a nightmare.
I did work at a company where, despite having full source control, they didn’t actually know if they could ever deploy the server component again. Edits got made to the live server and then made again in source control, or vice versa. There was one more senior person who couldn’t be talked out of their old workflow.
In theory everything matched. Eventually they even checked and got it all under control, to the point where they were positive it was the same, and they kept it that way.
But it had only ever been deployed from scratch… once. And for like 15 years it lived there and kept getting upgraded. It would all be moved when new hardware was brought in.
But it was never installed from scratch. We truly did not know if we were capable of doing that. If that server had been destroyed and we couldn’t restore from a backup, it could have taken us an unknown amount of time to rebuild, even though in theory deploying should be as simple as copying the files and starting the web server.
Were there odd configurations that had been made eight years ago that kept it running? Some strange permission changed somewhere?
I wasn’t on that team. But it always made me nervous. That was absolutely a core application of the business.
I really like small shops sometimes. You get a lot of freedom and get to have your hands in a lot of disciplines. You learn a lot of things, including things that should never be duplicated.
He had a spreadsheet where he kept track of the versions used by every customer. Once he had identified the version, he would open (no joke) a drawer in his desk and pick out the USB stick with that version on it.
I've always wondered whether this overhead was a price worth paying for not having to learn a couple of SVN commands.
1. I was the only developer on the team with a GitHub account, and I put forward the idea that the company stop hosting its own source code with TFS.
2. I was the only developer using branches with git, when the co-founder asked (demanded) that everyone use only master.
The list goes on!
I joined a dynamic DNS provider once that had been around since 1999. Their tech, sadly, had not progressed much beyond that point. Showing the higher ups version control was like showing cavemen fire. Of course once the higher ups arranged to have training sessions led by the new hire for the entire dev team the VP of Engineering couldn't handle it and had me fired. Fun times.
That old project was never fixed though; it's probably still running that way now.
Ultimately by the time I left I tried to introduce redbeanphp (orm), git for source control, and CakePHP for some structure. Nothing stuck. When I left it was still raw sql string queries, .zip files when they remembered for backups, and 400,000 line php files with everything caked on there.
For VB, SourceSafe was the go-to tool, if memory serves.
This is not a case of new vs old, rather incompetence vs competence.
Then you have to start tracking the limitations and issues in 2-3 year old releases and the solution to issues is never “upgrade to the latest”.
At that point, source control starts feeling worthless.
Source code management back in the days of VB (long before VB.Net) was not the same as what you see today.
Enterprise and business users will wade through endless swamps of crap if the software provides enough value. This is a lesson in why "it must be perfect before we release it" is such nonsense - that just says the value your app provides is so low that users barely care and they'll abandon it at the first problem. If that's the case you're not providing anything worth paying for.
- No one having any preconception of how you're /supposed to/ do things or whether you'd even be the type of person to know, so you just kinda figure it out yourself. You spend a lot of time on reading and learning skills. Version control? Wow, cool, what's git, let's try that! A new graphing library? Let's give that a shot, maybe it'll make things easier! You want XYZ? Let me go read about that for a day.
- No one having any idea what's even possible: being treated like a wizard for introducing the tiniest piece of automation or improvement that makes someone's day easier or doing something they never thought was possible. Lots of appreciation and excitement for showing and teaching people new things (... and I know this is somewhat selfish and ego driven, but who doesn't like being appreciated?)
- Similarly people having no idea how long those things should take which, tbh, can be a nightmare if you're not trusted and respected enough to be consulted but also great if people believe you when you say it's gonna take 3 months.
- Beyond the basics just being mostly kinda left alone to do your job however: no standups or tickets or the 30 other kinds of daily (micro)management that is probably necessary but ends up feeling tiresome and stifling at an individual level
- Not being part of software company "culture": no performance review driven development and promo packet madness, no weird rating and ranking systems, no OGPs or KPIs. No ladder. Your bosses think you did what was required of you, so then you're good, and if it's a good year you get a raise, and that's that. I do recognize that with a bad boss this can be a terrible and unfair spot to be in - but again, subjectively with a decent enough boss it felt like a lot less weight on my shoulders at the time.
- No hacker ninja pirate segway mini quadcopter you're the smartest people in the world and we're the best company to work for sort of b.s.
- Socializing with people who are good at and love to talk about stuff other than software
Reading over that, I'm thinking maybe I lucked out a lot and that wasn't most people's experience from that era. And there's some level of rose tinted glasses going on. And/or maybe my years in the rat race are starting to show :-)
Working directly for the users without any weird BA/PM/TA shit in between is glorious, both because you can always walk up to get immediate feedback (people generally like to see you are actively working on their issue), and in a place like that you can likely deploy it in the middle of the day and immediately improve their workflow.
It still amuses me that IT was located together with finance, because we did reports xD
I once ran across a C program that had 26 variables, each one letter long, one for each letter of the alphabet. They were all global variables. And many of them were reused for completely unrelated things.
However, on a more practical note, the "Java" used on smartcards effectively requires that all variables be treated as constants, other than one array. You dynamically allocate the array when handling an event, and it only lasts for the duration of that event.
I want an obfuscator doing such a transformation:D
It took five years and the firing of the long-time “head” programmer until some meaningful change was made.
As a glib answer, can I suggest that, without proper training, there were a lot of developers who had never trained under anyone, or at any company, with proper practices?
Honest question: how does our profession root out obviously bad practices?
The training and best practices you’re talking about is learned experience about how to avoid it getting impossible. But that almost always involves expense that the business side considers ‘stupid’.
Holy smoke! That's actually the most creative solution (horrible, but creative) I've ever heard to fix a Heisenbug:
You’d think this sort of thing would be a problem of the 90s or early 2000s, but I’d bet you there are any number of companies with similar situations today.
For reference, Perforce was released in 1995, Subversion in 2000, git in 2005. RCS existed, but only in the Unix world.
I was brought in when an old sheet cost them $10m on a bad trade because the yahoo finance end point it was hitting stopped responding and it just used the last value it had gotten - three months before.
Now you only need to run the app in debug mode on your own server.
Hopefully they won't discover at some point that customers were in fact depending on those bugs. That's poison for the programmer's soul.
And yep, I just checked: The limit was 255 characters.
We had a group of talented designers/artists wrangling Photoshop and pumping out a few of these designs a day, each, and as we scaled up and gained a lot of repeat customers, tracking these PSDs became a big problem. The designers were graphically talented, not technically savvy. The PSDs were stored on a shared NAS drive that the whole company could see. The designers had a complex naming system to manage major revisions, but overall there was no "history" beyond the classic "_v2_2008_09_best.psd" naming technique.
Several times every week I had to fix "accidentally dragged the PSD folder somewhere and lost it" level problems. Getting IM'd from tech support because the underlying server was falling over trying to clone the multi-GB folder 3 times, logging into a workstation as Admin and searching for an updated PSD that the vacationing Designer hadn't synced back to the NAS before leaving for work, that kind of thing.
As soon as I was promoted to Supervisor I made my first big move. It took a lot of training, far more talking than I thought it should (back then I didn't know anything about politics), but I was able to get SVN implemented to replace the NAS share. I wrote quick-reference documents and in-depth guides (this was before I knew that no one reads anything, ever, for any reason) and eventually just had to do one-on-one training with everyone to explain the concept and basic usage.
One of the most satisfying feelings of my career continues to be watching attitudes change over the course of a summer. None of the design-y people liked the new set of hoops they had to jump through. Check-Out, Check-In, Lock, etc, it was "too much". Then, at a happy hour someone mentioned how we hadn't lost the PSD folder in a while. Later someone came to me panicking because a client wanted to re-run an ad from 2 months ago with a couple tweaks, and she didn't have the source PSD or the source material -- I did a live demo on how to get a historical version back, and that's when it really clicked with everyone. With internal political will behind the toolset, it now became an IT problem, as our SVN usage was nothing like Engineering's usage.
Of course file locking was a huge PITA, that feature replaced "forgot to copy the changed file back before vacation" as a problem category. But it also eliminated the problem where 2 people would open the same PSD directly from the NAS share, make their changes, and only the last one to save gets their work persisted. So, a toss-up I guess.
In my first real job, I worked for a company that maintained a large legacy product programmed in a combination of COBOL and Java.
In order to work on the Java side of the product, you checked out individual files from source control to work on, which 'locked' the files and prevented other developers from checking out the same files. This functionality was not part of our actual source control system, but was instead accomplished with a series of csh shell scripts you could run after ssh'ing into our development server.
Each of our customers had a 'master' jar file that represented the actual final compiled product (a jar file is really a zip file archive, which bundles together the resulting compiled java class files).
Once you had finished implementing your code changes, you ran another set of scripts which found the master jar file for each customer, unzipped it, copied the compiled files from your local machine into it, and zipped it back up again. Finally, the source control lock was released.
This means, effectively, that the codebase was never compiled as a whole at any point in the process, instead, we just manually patched the jar file over time with individually compiled class files.
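A rough sketch of what that patching step effectively did, written here in Python with the `zipfile` module standing in for the original csh/zip scripts (the function and names are hypothetical, for illustration only):

```python
import zipfile
import shutil

def patch_master_jar(master_jar, compiled_classes):
    """Overlay freshly compiled .class files onto an existing jar.

    compiled_classes maps archive paths (e.g. 'com/acme/Foo.class')
    to new bytecode. Everything else in the jar is carried over
    untouched -- the codebase as a whole is never rebuilt.
    """
    patched = master_jar + ".patched"
    with zipfile.ZipFile(master_jar) as src, \
         zipfile.ZipFile(patched, "w", zipfile.ZIP_DEFLATED) as dst:
        for entry in src.namelist():
            if entry not in compiled_classes:
                dst.writestr(entry, src.read(entry))   # carry over as-is
        for path, data in compiled_classes.items():
            dst.writestr(path, data)                   # drop in new classes
    shutil.move(patched, master_jar)
```

Repeat per customer jar, and you can see how the jars silently drift apart the moment one patch run fails or races with another.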
Over the years, small errors in the process allowed a huge amount of inconsistencies to creep into the codebase. Race conditions would allow two developers to lock the same file at once, or a developer would change a class that was a dependency of some other code that somebody else was changing. Sometimes code changes would make it into some of the customer jar files, but not others. Nobody knew why.
It took a small team two years to migrate the entire codebase to git with proper CI, and a huge chunk of that time was reproducing a version of the codebase that actually compiled properly as a whole. After the project was finished, I resigned.
Of course where they started was just awful, but a place that recognizes its problems, commits to fixing them, and has sufficient competency to actually fix them sounds rather nice to me. Many orgs get stuck at step 1.
I presume there were other reasons for resigning, or you just like massive refactoring projects.
After everything was finished up, I was feeling burnt out and realised that I'd held on for too long at a company with a fundamentally bad culture that wasn't going to change just because the tech did, so I moved on.
Oddly, the project was one of the best I've ever worked on.
CSS was just starting to get adopted and every project we worked on just had one “global.css” file. When someone else had global.css locked, you’d call dibs if you needed it next. Inevitably, every day someone would leave the office and forget to unlock global.css and no one else could get anything done.
Many of these earlier systems sound terrible and hacky to us, but they were the best that was available at the time. Systems to "lock" files you were working on were pretty common because basically nobody did merges well. Most of them were based on having a server to manage the whole thing too, so they were only really common in corporations and larger and more mature hobbyist teams - this was also before you could spin up a cloud server for most things with a few keystrokes and $5 a month. It's low-effort to spin up a local Git repo to track your work on a tiny personal project, but nobody's going to set up a CVS server for that.
Anybody remember Joel On Software's 12 Steps [0], written in the year 2000, 5 years before the Git project was started? Where "use source control" is Step 1? There's a reason why that's there - at the time it was written, source control was clunky and a pain in the ass to set up, so a lot of smaller companies or ones with less mature development teams never got around to it.
I may be getting a little "old man yells at clouds" here, but be really thankful that Git is FOSS, ubiquitous, works great for 98% of projects, and is superlatively awesome compared to everything that came before it.
[0] https://www.joelonsoftware.com/2000/08/09/the-joel-test-12-s...
When I moved to SVN, it was a regression. I really don’t understand why people describe SVN as cutting-edge (and why people didn’t like Clear Case).
I concur. Creating a CVS repository was no harder than creating a git repository now (`cvs -d $dir init`), and using a shared network drive, typical for the time, gave you the server experience without setting up a specialized server. It just wasn't included with Windows or typical Windows dev tools, so you had to know it existed, and that knowledge was slower and harder to come by.
I've never used RCS or CVS myself, but I remember the file locking thing in descriptions of it, and that it was why CVS was named "concurrent" - it fixed that limitation.
Most of Access’ problem is how it’s inseparable from both VBA and COM. While Microsoft has tried to make COM sexy again with WinRT and C++/CX, VBA is quite senile now. Microsoft has killed VBA in Outlook, killed VBScript in Windows Server, and is now trying to kill VBA in Excel too. MS is pitching modern JS as a successor (which I’m not too mad about…) but I just don’t see how JS could unseat the sheer volume of Access VBA out there. Especially as “Office JS” is intentionally kneecapped for things like local computer files and Win32 so it has feature parity on web vs mobile app vs desktop - it’s going to be awful.
I'm amazed they even allowed people to spend company time and money doing this. Corporations tend to not see the value in improving such precarious situations as long as the software keeps working. They just want employees to cope and accept the messiness. People usually cope by quitting.
I am interested in how you managed it when someone had to hold code for years (as I’ve seen). We also had a way to share control with one other person, and the person taking the second copy was responsible for updating both versions manually at the same time (which never happened, so implementing a large project that touched hundreds of source files meant dedicating a couple of weeks to hand-comparing files, manually reapplying the changes, and then manually retesting).
> In order to work on the Java side of the product, you checked out individual files from source control to work on, which 'locked' the files and prevented other developers from checking out the same files.
COBOL + Java and file lock... I thought we worked on the same project and it brought back many nightmares!
But we were using Subversion's lock system and had a proper CI/CD with Jenkins.
So you won, your project was worse!
The admin user could override or unlock locked files. We had to do this if a developer left a file checked out after they left, or were on vacation. Nobody knew the admin password for the SourceSafe repository. That was OK though: all you had to do was create a local account on your PC with the same name as the SourceSafe admin account, and you'd have admin access in SourceSafe.
The Rube-Goldberg VCS-mutex is hilarious though.
Tools do not fix bad design.
The primary author of it didn't know about arrays. I'm not sure if he didn't know about them being something that had already been invented, or whether he just didn't know Perl supported them, but either way, he reimplemented them himself on top of scalars (strings), using $foo and $foo_offsets. For example, $foo might be "romemcintoshgranny smithdelicious" and $foo_offsets = "000004012024", where he assumes the offsets are 3 digits each. And then he loops through slices (how does he know about slices, but not arrays?) of $foo_offsets to get the locations for $foo.
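For the curious, here's an illustrative Python reconstruction of that scheme (the names `foo` and `offsets` come from the comment above; the mechanics are my guess at what the Perl did, not the original code):

```python
def append(foo, offsets, item):
    """Mimic the hand-rolled 'array': values concatenated into one
    string, each start offset stored as a fixed 3-digit field."""
    offsets += "%03d" % len(foo)   # silently breaks past offset 999
    foo += item
    return foo, offsets

def get(foo, offsets, i):
    """Recover element i by slicing between adjacent offsets."""
    start = int(offsets[3 * i : 3 * i + 3])
    nxt = offsets[3 * (i + 1) : 3 * (i + 1) + 3]
    end = int(nxt) if nxt else len(foo)
    return foo[start:end]

# Rebuild the example from the comment above:
foo, offsets = "", ""
for apple in ["rome", "mcintosh", "granny smith", "delicious"]:
    foo, offsets = append(foo, offsets, apple)
# foo     == "romemcintoshgranny smithdelicious"
# offsets == "000004012024"
```

Two parallel strings, manual slicing, and a hard-coded field width, all to avoid `@foo` and `push`.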
By the time I was done refactoring that 12k+ was down to about 200 ... and it still passed all the tests and ran analyses identically.
All his sins can be forgiven because he made it so easy to refactor by writing comprehensive tests.
In a similar vein, an alternate explanation may just be that a lot of bioinformatics algorithms are described as string algorithms. It would be a pity if that was just taken too literally, but I've seen a lot of bioinformatics perl that works on strings in this way.
While trying to be charitable though, it's almost impossible to come up with a rationalization for the three character offsets string. Sometimes I think when people really go wrong is when they hear of some problem or are trying to be too clever and end up going beyond their abilities and stray into farce.
Never attribute to competence that which is adequately explained by incompetence.
I am somewhat surprised that a programmer who was unaware of arrays in Perl managed to have tests. But then again, he managed to implement his own version of arrays; maybe he came up with the concept of software testing by himself :-P
Maintenance programming is all about understanding the software's context and implicit design, or a model thereof — even if it is a 20-year-old patchwork.
Young developers tend to be amazed when I, as a senior, find the source of an obscure bug in our 1M-line Perl application. But the thing is, I just happen to "get the application" and have developed an intuition about its inner workings. It's rarely a matter of knowing Perl particularly well. The thing could have been written in Tcl or even Brainfuck — with a working model of the software in your head (not a memory of the total code base, mind you) you will eventually find the problem in a piece of software written in any language.
My youngest worked in a furniture chain over the summer. And they got sent a big, heavy furniture set from the central warehouse, which the store actually didn't want. So, they sent it back. The problem was that the system didn't allow them to say: please, don't send this again. And the non-natural intelligence at the central base decided to send this set again. When my youngest started working - they were loading this set back for the seventh time.
Why 'loading'? Because no one could find a way in the program to send this ill-fated set back on the same truck that brought it. No one, except my youngest, that is. He managed to find the combination of keys and checkboxes that allowed them not to unload the unwanted set and ship it back on the same truck - and he immediately got a raise.
I suspect the set is still traveling. But now they only load-unload it at the central warehouse.
I feel a lot of this is the difference between theory and practice. Sure each of these things are bad, but probably a lot might have been the right choice at the time, and in a way, most companies, even most projects running for many years, end with similar quirky processes, messes, and hacks.
It's the war stories of what used to be, often only told by the code and the database as the creators have long left.
When I look at our couple-year-old startup, we have some of these things. Prototypes that just keep functioning and only get attention when they break, manual processes that sort of make sense but also don't, integrations with systems that were built in the early 2000s (it works, but it ain't pretty). We fix many, but I'm sure some will survive way too long.
As software engineers, especially online, we like to discuss the ideal way to do things, best practices, testing strategies, redundancies, the whole nine yards. When time is of the essence and stakes are high, it all goes out of the window and you gotta just make it work in whatever way is possible to fight another day.
Or this one, it implements EVERYTHING for rendering and interacting with a message in a single class, all non-service message types, all drawn manually on a canvas "for performance", and all input processed manually too: https://github.com/DrKLO/Telegram/blob/master/TMessagesProj/...
I even had the permission from the main developer to refactor some of it, but I never got around to actually doing it.
It's a miracle this app works at all.
1) His libtgvoip code, previously used for audio calls, is the worst code I have ever had the displeasure of touching; it caused all kinds of issues ranging from instability to segfaults, and thankfully it was completely replaced by webrtc.
2) The android app is literally the smoothest and most responsive android app I've ever used.
Grishka, please stop being so salty. VoIP is hard to do right, but not impossibly so. VoIP being hard is still not an excuse for writing garbage C/C++ code (and I don't mean to offend personally here, but the code was really a mess, please stick to Java development on your own personal projects).
It was replaced by WebRTC that would sometimes just suddenly disconnect your calls, right.
> I managed to improve the situation after weeks of refactoring and testing
Did you submit any PRs with your improvements? I used to review and merge those, unlike the Android app devs.
2) It might provide the best UX in the entire universe, but that still doesn't justify having a two-megabyte Java file with over a hundred anonymous inner classes that gaslight you because it's often a `new FrameLayout(activity){` that overrides onLayout and does something completely different. So you see a FrameLayout, you assume it behaves like one, you add views to it, then it doesn't work the way you expect. You start questioning your sanity and only then do you realize that it's one of these stupid things.
Oh and did I mention that even just editing ChatActivity.java is an exercise in frustration? At the time, I had a 2012 MacBook Pro that worked fine for everything I did on it, except editing 2-megabyte Java sources in Android Studio. It would sometimes take a solid 10 seconds for keystrokes to register.
In other words, it might be the best Android app in the world, but it blows up in your face nearly every time you try to add a new feature.
So it is not only working but working well.
BTW, would you mind sharing how you got that job?
I started working with Pavel Durov in 2011 on the VKontakte Android app, after winning several of his developer contests, and it all just went from there. I was the last one of the "old" team to transfer to Telegram. My main job was actually the libtgvoip[1] library and its integration into the Android app (the other platforms were done by their corresponding devs).
I got fired in 2019 because the calls weren't working satisfactorily for Pavel. He got furious when he was unable to call some journalist, cursed at me in front of the entire team and sent me a termination notice. In hindsight, I should've definitely not taken building a VoIP library from scratch alone upon myself. It's a massive undertaking. But at least I learned a lot, and many people say that my implementation performed noticeably better than the current WebRTC-based one ¯\_(ツ)_/¯
I'm now working on two fediverse projects at the same time — Smithereen[2], a VK-like ActivityPub server, and the Mastodon Android app[3].
[1] https://github.com/grishka/libtgvoip
So much this at my job, login to my laptop, login to vpn using password and 2-factor, check out my admin credentials using my login and 2-factor, login to the jump box using my admin creds and a different 2-factor, finally login to the system I need to be on using my admin creds. Multiply the last step by however many systems I need to connect to. Also, the clipboard and screen resolution are going to get messed up along the way no matter how much you mess with the settings.
The worst part wasn’t the code itself (although it was bad), but the fact that there were so many abandoned paths in it, and there would be three or more different versions of crucial functions with the same name (or sometimes a different name but doing the same thing) in different places, sometimes all being called by different paths in the workflow. Or not being called at all.
And it was very math heavy (calculating natural gas pressures and flow rates through different sized pipes and fittings, and sizing regulators to meet certain loads). Think Excel formulas on cells that referenced 15-20 other cells, each of which was a formula of its own that referenced other pages and cells, some of which were filled in by VBA. And that wasn’t even counting the VBA code full of brute-force solvers for multi-variable equations that used heuristics he’d worked out by trial and error (if the delta is over 1.5, add 5 to this variable, otherwise subtract 3, but if the delta was less than 0.5, add 1, and so on - it eventually converged or found no solution, but a binary solver did the same thing, only faster and easier).
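For comparison, this is the kind of standard bisection routine that replaces that "add 5 / subtract 3" nudging, assuming the residual function changes sign across the search interval (a generic sketch, not the spreadsheet's actual code):

```python
def bisect_solve(f, lo, hi, tol=1e-6, max_iter=200):
    """Find x in [lo, hi] with f(x) ~ 0, assuming f changes sign over
    the interval. Unlike ad-hoc nudging, halving the bracket is
    guaranteed to converge, gaining one bit of precision per step."""
    flo = f(lo)
    if flo == 0:
        return lo
    for _ in range(max_iter):
        mid = (lo + hi) / 2.0
        fmid = f(mid)
        if abs(fmid) < tol or (hi - lo) / 2.0 < tol:
            return mid
        if (flo < 0) == (fmid < 0):
            lo, flo = mid, fmid   # root is in the upper half
        else:
            hi = mid              # root is in the lower half
    return (lo + hi) / 2.0
```

For example, `bisect_solve(lambda x: x * x - 2, 0, 2)` homes in on the square root of 2 with no tuned constants at all.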
It took me and a junior developer several months, during which, of course, multiple change requests were going through to make it multiuser and secure.
Both my nightmare project and one that I’m very proud of once it was completed.
Like, at a job where all the lead devs and, apparently, the whole internet agreed that you can't do proper source control for Excel because you can't export-and-import code for all modules (including sheets & ThisWorkbook) without copy-paste and because a running module can't replace itself. The solution ending up so simple, that I was embarrassed to hear it called "a stroke of genius". I still have that code somewhere.
I could talk through pain points they were having, we’d come up with a solution together, I’d hack up a quick prototype and launch it just to them to try out. We’d tweak it over a couple of weeks and when it was good I’d launch it to all customers.
Messy, ugly code base. But it worked well because it wasn’t over-managed. Just developers doing development. Amazing what happens when you get out of the way of smart, talented people and let them do their work.
Originally our company only did business in one marketplace (the UK). When we started doing business in multiple marketplaces, it came time to modify the system to cope with more than one. Our system assumed, _everywhere_, that there was only ever one marketplace. I had a plan: I would copy-paste the system, make an "international" version that supported many marketplaces, then transition over our original marketplace to the international version and shut down the old system. This way, everyone could keep working on the original marketplace like normal, they'd get the new marketplaces on the new system, and we'd do a clean cutover once ready.
It started out quite well. I got the international version working and onboarded our first new marketplace. The business was very happy, there was lots of opportunity in the new marketplaces. They asked that I delay the cutover from the old system and focus on developing things for these new marketplaces. After all, the old system still works for our original marketplace, we can move it over once our work is done on the new marketplaces. I said yes, of course!
It's now 5 years later, it turns out there's a lot to do in those other marketplaces. To say that the system is split-brained is an understatement. When training new hires, we teach them the difference between a "product" and an "international product". When either system does something, it double checks and cross-references with the other system via a colourful assortment of HTTP APIs. There are a variety of database tables that we "froze" in the old system and continued in the new system, so you could tell that an ID referred to something in the new system or the old system if it was above or below the magic number we froze it at. We have overarching BI dashboards that query both systems to produce results, and they have to heavily manipulate the old system's data to fit with the new multi-marketplace model, and any other changes we made. Both systems are extremely tightly coupled, the old one is a ball and chain to the new one, but the new one is so tightly integrated into it that there's no hope of separating them now.
I've learned to say no since then.
For many people, their first job in software engineering is the worst codebase they will deal with professionally, for this reason: the first job hires a lot of people with little or no experience. As soon as someone gains some experience, they can move on to better-paying jobs, where there are better developers with better standards.
I had one customer who came back with the same request, slightly differently worded, every single month, and every single month I'd say the same thing. They had this site they were running that was essentially a Yellow Pages type site. They had a large set of companies with contact details, each with multiple business categories associated with it. You'd choose a category, and they'd return a list of matching companies.
The problem was the site was really slow. I took a quick look around, and saw that all the time was lost querying the database. Taking a quick look at the schema I discovered that their approach to categorisation was to have a TEXT column, with semicolon separated 4 character strings in it. Each 4 character string mapped to a business category.
So when someone wanted to load up, say, all pest control companies, it would check the category mapping table, get the 4 character string, and then go to the companies table and do:
SELECT * FROM companies WHERE categories LIKE "%PEST%"
So on each load of the main page type the site was there to provide, it did a full-text search over the category field for every single record in the companies table. I guess that's probably okay for a developer without real-world-scale data and real-world traffic counts to worry about. But they had lots of data in the database, and that category field could have dozens of categories against a company. As soon as they had more than about 4-5 simultaneous users, performance started tanking.
I could never get them to accept that they needed to rethink the database schema. One month they were bleating about how is it possible that Google can manage to do such a search across a much larger amount of data, much faster. They really didn't like my answer that amounted to "By having a sane database schema". All they were willing to do was pay over the odds for our most powerful server at the time, which had enough capacity to hold the entire database in memory.
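For what it's worth, the "sane database schema" here is just a junction table. A minimal sketch in SQLite (all table and column names are my own invention, not the site's):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE companies (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE categories (code TEXT PRIMARY KEY, label TEXT);
    -- the junction table replaces the semicolon-separated TEXT column
    CREATE TABLE company_categories (
        company_id INTEGER REFERENCES companies(id),
        category_code TEXT REFERENCES categories(code),
        PRIMARY KEY (company_id, category_code)
    );
    -- makes "all companies in category X" an indexed lookup
    CREATE INDEX idx_by_category ON company_categories(category_code);
""")
con.execute("INSERT INTO companies VALUES (1, 'Acme Pest Control')")
con.execute("INSERT INTO categories VALUES ('PEST', 'Pest control')")
con.execute("INSERT INTO company_categories VALUES (1, 'PEST')")

# An indexed lookup plus a join, instead of LIKE '%PEST%' over every row:
rows = con.execute("""
    SELECT c.name FROM companies c
    JOIN company_categories cc ON cc.company_id = c.id
    WHERE cc.category_code = 'PEST'
""").fetchall()
print(rows)  # [('Acme Pest Control',)]
```

The LIKE version has to scan and pattern-match every row on every page load; this version touches only the index entries for the requested category.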
I stumbled across that comment a few years back and it changed the way I handle tags and categories so just sharing it here. If anyone has an equivalent for Sqlite, I’d love to hear it!
These tracker databases are usually used to generate a HUD - numerical readouts surrounding each player shown on the table. PT allows custom HUDs to be saved/exported/shared, and there is a cottage industry building and selling such HUDs. These HUDs can often bundle hundreds of custom stats - basically a subset of SQL queries; a simple example would be "times_raised / opportunities_to_raise WHERE position = 'button'", that sort of thing.
For performance reasons these custom stats are cached, which obviously makes some sense. However, each cached stat creates a new column in the custom_cache table of PT4's current database, a PostgreSQL 9.0 backend. If you play mostly Heads-Up SNG or the 3-handed Spins, it's actually quite easy to go over the (IIRC) 4096 column limit there by purchasing and importing one too many fancy HU HUD packages!
This completely borks PT4, it can no longer open - and players can no longer make money! In my previous work, I've supported many a player and fixed this "too many columns" error numerous times (by deleting enough custom HUDs from their account until the PT4 can start up correctly once more). Funny to see this pop up elsewhere!
There are so many entangled services and machines that you feel like Indiana Jones. You ssh into a machine and feel century-old dust beneath your footsteps. And you never know what you will find. Maybe a service which holds the company together. Maybe a CPU-eating hog which hasn't done anything useful in the last 3 years.
I don't enjoy writing new code much. But in such an environment, even with my limited skills, I can make decent improvements, especially from a security point of view. It feels great.
But maybe you can replace your statement with "windows guru" and SSH with "remote desktop" and perhaps that would be fun
We had no code reviews, no design docs, no tests, nothing. We made the changes the way we thought they were right and would git pull them onto the production server.
After I struggled to get productive for the first four months, my manager went on a four-week Christmas vacation. In a moment of frustration, I seized the opportunity and rewrote the whole project from scratch. I don’t remember if my manager ever noticed, but that was the moment I finally got productive.
Is that kind of stuff common? People checking out on Black Friday and coming back for New Year's?
There's no date on this article, but it feels "prior to the MongoDB-is-webscale memes" and thus slightly outdated?
But, hey, I get where they're coming from. Personally, I used to be very much schema-first, make sure the data makes sense before even thinking about coding. Carefully deciding whether to use an INT data type where a BYTE would do.
Then, it turned out that large swathes of my beautiful, perfect schemas remained unoccupied, while some clusters were heavily abused to store completely unrelated stuff.
These days, my go-to solution is SQLite with two fields (well, three, if you count the implicit ROWID, which is invaluable for paging!): ID and Data, the latter being a JSONB blob.
Then, some indexes specified by `json_extract` expressions, some clever NULL coalescing in the consuming code, resulting in a generally-better experience than before...
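The two-column-plus-expression-index layout described above could look something like this (names invented; note that SQLite's binary JSONB type needs 3.45+, so this sketch stores plain JSON text, which `json_extract` handles the same way):

```python
import json
import sqlite3

con = sqlite3.connect(":memory:")
# One synthetic key, one JSON blob; ROWID comes along implicitly.
con.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, data TEXT)")
# An expression index so lookups on a JSON field don't scan the table.
con.execute("CREATE INDEX idx_items_kind ON items (json_extract(data, '$.kind'))")

con.execute("INSERT INTO items (data) VALUES (?)",
            (json.dumps({"kind": "invoice", "total": 42}),))

row = con.execute(
    "SELECT json_extract(data, '$.total') FROM items "
    "WHERE json_extract(data, '$.kind') = 'invoice'"
).fetchone()
print(row)  # (42,)
```

The WHERE clause matches the indexed expression exactly, so SQLite can use `idx_items_kind` rather than parsing every blob.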
Really!? Are you building applications by chance, or something else? Are you doing raw SQL mostly, or an ORM/ORM-like library? This surprises me because my experience dabbling in JSON fields for CRUD apps has been mostly trouble stemming from the lack of typechecks. SQLite's fluid type system has been a nice middle ground for me personally. For reference, my application layer is kysely/typescript.
However I would still advise people to use third normal form - it helps you, constraints help you, and other tooling often has poor support for constraints on JSON. Scanning and updating every value because you need to update some subset sucks.
Your first point is super valid though - understanding the domain is very useful and you can easily get 10x the performance by designing with proper types involved. But importantly, don't just build out the model before devs and customers have a use for anything; this is a classic mistake in my eyes (and then skipping cleanup when that part is basically unused).
If you want to figure out your data model in depth beforehand there's nothing wrong with that... but you will still make tons of mistakes, lack of planning will require last-minute fixes, and the evolution of the product will leave your original planning gathering dust.
Internet Archive has a crawl from today but none earlier, which doesn't mean it can't be older, of course. My guess is it was written recently though.
Pairing your approach with a "version" field that records which schema version a row's data was saved with would let you be incredibly flexible about saving your data, while still being (somewhat) sure that a field's schema matches what you're expecting.
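A sketch of that versioned-blob idea; the field names and the v1-to-v2 migration here are entirely hypothetical, just to show the migrate-on-read shape:

```python
import json

def load_record(raw: str) -> dict:
    """Parse a stored JSON blob and upgrade old schema versions on read."""
    rec = json.loads(raw)
    version = rec.get("version", 1)
    if version < 2:
        # hypothetical v1 -> v2 migration: 'name' was split into two fields
        first, _, last = rec.pop("name", "").partition(" ")
        rec["first_name"], rec["last_name"] = first, last
        rec["version"] = 2
    return rec

print(load_record('{"version": 1, "name": "Ada Lovelace"}'))
```

Consuming code only ever sees the latest shape, and old rows can sit untouched until they are next read (or until a batch job rewrites them).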
That is how I read it.
Another possible reason you don't see those codebases anymore is that such teams/companies don't offer competitive comp, so it's mostly junior devs, or people who can't get a job on a more competent team, who get hired in those places
But I do try and get a schema that fits the business if I can.
> This may sound like a mess to you. But it was remarkably enjoyable to work in. Gone were the concerns of code duplication. Gone were the concerns of consistency. Gone were the concerns of extensibility. Code was written to serve a use, to touch as little of the area around it as possible, and to be easily replaceable. Our code was decoupled, because coupling it was simply harder.
This one guy established himself by making an Access database for their core business, and when the web became a thing, built a customer site. But not on it—in it. He simply served ASP pages directly from the database, inserting dynamic content in queries. When I was asked to help improve their terrible design, I was forced to untangle that unholy mess of queries, ASP (new to me) and HTML. It was easiest to write all the HTML and insert their ASP right before I sent the files back (because I wasn’t given access to their DB/web server). Thinking “I could do better than this” got me into programming.
He was a Microsoft-everything head. Finally went too far when he presented a new web interface starring a Clippy-like parrot using Microsoft’s DirectX avatar API. The executives were unimpressed and then I noted that 20% of our customers couldn’t use his site. (I probably still had a “best viewed with IE” badge on the main site, lol)
The old codebase was a hairy ball of Scala using a very outdated version of a (now) infamous actor framework. This was before they figured out that untyped messages kind of defeated one of the major selling points of Scala.
The code was readable, but the authors had this strange idea that every little piece of logic should be part of its own "Actor". An actor is pretty much equivalent to a class, and each one of them had their own little file. With this many classes, with very specialized purposes, you ended up with 90 character identifier names.
To understand what would be a single function in a normal code base, you would have to dive through half a dozen files through several repositories to piece together the logic. Generally, at the end, you find that most of the code is about passing around a value, and there is this one file where there is actual logic applied to the value.
It wasn't that awful. The thing was that it was impossible to change anything: no documentation, no tests, no bug tracking, not even a PR process, laconic commit messages, no wiki pages. And the actor framework made it very difficult to add tests. But later developers did manage it pretty well: they drew fences around the old services, with an HTTP API to communicate with them. And all later additions were part of different services that were very cleanly and consistently designed.
I've experienced something like this, where the other devs preferred to break code down into the smallest units possible. I often saw a half-page procedure changed into multiple function calls, each with their own boilerplate and error handling (as required by the style guide), such that you could not view the entire logic on one screen.
Every procedure name described the intent, they said, so you know what the parent did without having to worry about details like how it did it. I, meanwhile, trying to track down what the actual implementation was, would hold as many subfunctions as I could in windows with a tiny font, and hold the rest in my head…just to find out stuff like:
"Oh, so it's trying to use system feature foo.x, but I know foo.x is broken current release for a certain edgecase. Which case happens to be what our customer does all the time…"
At a company where I used to work, they heard the same rumor, so instead of using identity columns or sequences, they kept a table with a number of ids "available" (one row per id). Whenever a unique id was needed, the table would be locked, and an id selected and marked as used. If there were no ids available, more ids would be added and then one used. A scheduled job would remove ids marked as used from time to time. Note that there was a single "sequence table", shared among all of the entities.
That was not even the weirdest part. That id was unique, but NOT the primary key of the entity, only part of it.
The structure of the database was fairly hierarchical, so you had for example a table CUSTOMER in 1-to-many relation with a USER table, with a 1-to-many relation with an ADDRESS table.
While the primary key of the CUSTOMER table was a single CUSTOMER_ID column, the primary key of the USER table was (CUSTOMER_ID, USER_ID), and the primary key of the ADDRESS table was (CUSTOMER_ID, USER_ID, ADDRESS_ID). There were tables with 5 or 6 columns in their primary key.
Maybe they wanted to avoid maintaining a separate CUSTOMER_ADDRESS 1-to-many table or maybe it was done to make easy reverse lookups.
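Sketched as DDL (SQLite syntax here; the table and column names come straight from the description, everything else is assumed), the cascading keys look like this. One side effect is that "all addresses for a customer" becomes a single indexed range scan on the ADDRESS primary key, which may well have been the point:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY
    );
    CREATE TABLE user (
        customer_id INTEGER,
        user_id     INTEGER,
        PRIMARY KEY (customer_id, user_id),
        FOREIGN KEY (customer_id) REFERENCES customer(customer_id)
    );
    -- each level repeats all ancestor keys, so PKs keep growing
    CREATE TABLE address (
        customer_id INTEGER,
        user_id     INTEGER,
        address_id  INTEGER,
        PRIMARY KEY (customer_id, user_id, address_id),
        FOREIGN KEY (customer_id, user_id) REFERENCES user(customer_id, user_id)
    );
""")
con.execute("INSERT INTO customer VALUES (1)")
con.execute("INSERT INTO user VALUES (1, 1)")
con.execute("INSERT INTO address VALUES (1, 1, 1)")

# Reverse lookup needs no joins: the customer id is already in the PK.
n = con.execute("SELECT COUNT(*) FROM address WHERE customer_id = 1").fetchone()[0]
print(n)  # 1
```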
Another WTF moment was the realisation that MS SQL Server does not support a BOOLEAN type. That made porting code fun.
The standard does not have a boolean type. It's a postgres extension that the other open source databases adopted (because, yeah, it's obvious). But the proprietary ones insist on not having.
The official recommendation is using BIT on MS SQL and CHAR(1) on Oracle. Both are ridiculous.
A few months in, when I approached the CTO and asked if I could start writing a test framework, he deemed it a waste of time and said "by the time you'd commit the test, it would go out of date and you'd need to rewrite it".
Naturally, the build would break about 5 times a week.
Boeing was a major customer of this system, so when shit hit the fan at Boeing a while ago, I wasn't surprised.
Programming is a lot like this.
We had to grow fast and prove that we had a market fit. This meant making compromises, but the technical debt was actually under control. We knew what to improve and how to do it once we had enough resources.
Eventually we proved ourselves and got decent funding and could hire more developers. We were looking forward to improving our system.
Unfortunately the new guys (many fresh out of college, lead by a few "FAANG"-ish seniors) decided that the technical debt was a proof of our incompetence and convinced the leadership that they needed to take over the technical development and rewrite everything. The result was a year where no major features were added and we lost our momentum. The funding dried up and the company had to let go 2/3 of the employees.
The worst part was that the new systems were more complicated to extend due to their clever "scalable design" (a distributed monolith) with more microservices than developers. When the rewrite was done and it was time to add the new features, they had lost the trust of the leadership and just threw code at it in desperation to get things done as quickly as possible. I pity the developers that inherited the result... By that time the rock stars had moved on and are inflicting damage elsewhere after touring conferences where they presented their greatness.
It really becomes a problem when the same short term calculation becomes the standard - you can quickly rack up costs that future you or future company will never pay down.
IMO it's a classic example of real-world enshittification
I have a zsh alias `alias changes='git add . && git commit -am "Changes" && git push'` that I use for my commits in the repo.
This all may feel silly, but it's what I needed to get back to that time of playing. Where I never worried about the code. But where I also didn't feel I was wasting my time working on code no one could see. I'd definitely recommend trying something like that if you are struggling with it.
Might help to have enough experience that you have (well-placed) confidence in your gut feel for how much engineering a particular thing needs.
(More common is to not have an engineer mode, or to not have enough experience to do it well.)
If you don't have the kludge mode, try some things where kludges are outright necessary. One time I recall this was when writing Emacs extensions. For example, it used to be that you'd hit a point where normal programmatic navigation/parsing of the buffer was just too slow, and a well-placed regexp operation that worked 99.99% of the time (and the other 0.01% of the time it's OK enough, and the user understands) made the rest of the code viable.
Another example is personal open source projects, when you really want to experiment with an unusual implementation approach, and you give yourself permission.
(Maybe don't do this when the open source is done only for resume purposes, where you're trying to demonstrate you follow all the current fashionable conventions. You're almost guaranteed that someday a new-grad on a public forum or not-very-technical hiring manager will stumble across that bit of code, and fixate on the one thing they think they recognize as a bad practice. Documenting the kludge and rationale, in, say, a code comment, is great practice, but, again, it will also draw the attention of the least-skilled. Much like, if you're relating an anecdote in a job interview, and implied are 20 things you did right and 1 thing you did brilliantly, and you note as an aside one mistake you made, most of that will whoosh over the head of the dimmest FAANG interviewer, but they'll latch onto that mistake you spelled out, as a wise insight they have, and that's what's going in their report. :)
Then, armed with an experienced multi-skilled brain, when your startup's imminent MVP launch is facing crazy constraints, you triage what bits need to be rock-solid so they'll absolutely work and not fail, what bits need creative kludging so you can hit your launch window for other key requirements, and what bits you need to mitigate any compromises. Ideally, you have the wisdom to know which is which, and can activate different skills for each kind of work.
That Turbo Encabulator needs to be well-machined, but the new sensor you just decided it needs for a test doesn't need a week to be drilled into the engine block, nor a mount to be designed and CNC'd or 3D-printed, but the nearest Velcro or zip-tie or maybe wad of chewing gum will do.
Also, be aware of, but don't get hung up on, engineering best practices. Chances are that someone is going to see your POC and put it in front of a client or promise it to someone. It will enter service being less than perfect, if you are lucky.
When you pull this off people will tell stories about you and brag about you when you are not around. None of them will know anything about the code engineering.
Joel Spolsky released the "Joel Test" for determining if the software team you were interviewing had good practices in 2000. One of the requirements was that they used version control. Not all that many teams actually did. Especially during the dotcom craze.
Today, passing the Joel Test is table stakes, and it's the rare shop that doesn't. But it took years for that sort of thing to become ubiquitous.
https://www.joelonsoftware.com/2000/08/09/the-joel-test-12-s...
4. Do you have a bug database? ["an organized database listing all known bugs in the code"]
No, each team has one or several poorly organized Jira boards that contain some fraction of the known bugs.
5. Do you fix bugs before writing new code?
Lol? Does any major company do this?
6. Do you have an up-to-date schedule?
No, we have weekly sprints and the vaguest idea of when bigger projects will land.
7. Do you have a spec?
I haven't seen a real spec in years.
8. Do programmers have quiet working conditions?
Well, I work from home, but all the fancy offices are open-plan.
10. Do you have testers?
Not for any of the stuff I work on, and I don't know if we have them at all.
11. Do you do hallway usability testing?
Absolutely not.
So that's 5/12. How many companies you know would get 11 or 12?
With 10 years of hindsight, I cringe thinking back to all the bad decisions I pushed for and the no-doubt terrible code I wrote. But I also marvel and look back fondly at being given a shot and being in an environment where I could just build something from the ground up and learn everything soup to nuts on the job. God bless whoever inherited that codebase.
Probably some of the most fun I've ever had writing software was making Gilfoyle-2s.
In the codebase itself you can see the evolution of code styles: older views and the core of the system are written in a very old and very stateful manner, while newer parts of the application use modern architecture practices. We have two local databases that load configuration into the app for our different clients and their requirements, and a global state is loaded at all times to check what the correct business logic to follow is.
I'm a junior, a few years into Android programming, and while it's sometimes frustrating having to deal with some nasty bug because a random variable is updated for reasons only God knows, I think the experience it's giving me is something I'm going to appreciate years down the road.
25 years ago phones still had cords. In fact, C# itself is only 19 or 20 years old at best. I know I felt like everything that came before me was ancient when I was young, but a decade is a long time in tech, so I just felt the need to point out: if you had a codebase for a phone from 25 years ago and it was not written directly in assembly, it would have been running a very slim Java version called Java ME (Micro Edition) for feature phones, which was all the rage before Android, or if you were unlucky you would be dealing with BREW, an SDK for C/C++ development on feature handsets.
https://en.wikipedia.org/wiki/Java_Platform,_Micro_Edition https://en.wikipedia.org/wiki/Binary_Runtime_Environment_for...
C# Would have been beyond a luxury for a mobile application 25 years ago.
Constraints affect outcomes in (at least) three ways:
- by absolute limitation (you cannot violate these)
- by path of least resistance (the constraint makes X easier)
- by strategy (creative use of knowledge & skills leads to novel solutions)
---
There is an injured person on top of a mountain, and they need medical attention. You need to go up the mountain, get them, and bring them down.
You only have so much strength/endurance, so nothing you do will ever be faster than what your body is capable of. You need to get up and down in less than two hours. You need to carry an injured person. The mountain has two slopes: a long gradual one, and a short steep one.
Most people would climb the long gradual slope, because it's the path of least resistance. But it will take 4x as long to climb up and down it. Climbing straight up the steep slope would be incredibly tiring, and unsafe to bring someone down.
You can, however, zig-zag up and down the steep slope. It will take more time than going straight up, but it's faster than the long way, you will be less tired, and it's safer to bring someone downhill.
---
Constraints can be good and bad. Good use of constraints can allow you to get something done effectively. Bad use of constraints leads to failure. So it's important to have someone who can utilize those constraints effectively.
Vision, leadership, and wisdom are more important than the people, skills, materials, or time involved. The former determine the quality of the outcome more than the latter.
The calendar table struck a chord. We had one for holidays. One system needed to know when they were to calculate pay dates. Except once a year it would “run out” and someone would have to go add in the next year’s worth after a bad calculation was spotted.
The second time I was told to do it, I put in 10 years worth. The company didn’t survive long enough to need more.
My first “big” project was actually that pay date code. Every once in a while the server would hang, and people had deduced the date calculation was the problem. But no one knew why. And it wasn’t frequent enough to be worth taking time from the other two programmers. But I was new and thus “spare”.
After not being able to find any errors, I brute forced it. I ran that calculation for every day of the year for every pay type (weekly, monthly, bi-monthly, etc) and it quickly got locked in an infinite loop. Armed with the “bad” data that triggered it, it was easy to solve and provide tests to prove it would never happen again. I don’t remember what it was, exactly.
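That brute-force technique generalizes well: enumerate the whole input domain and assert an invariant on every output. A sketch of the shape of it (`next_pay_date` here is a stand-in rule I made up, not the original code):

```python
from datetime import date, timedelta

def next_pay_date(d: date, schedule: str) -> date:
    """Stand-in pay-date rule: weekly pays next Friday, monthly pays the 1st."""
    if schedule == "weekly":
        return d + timedelta(days=(4 - d.weekday()) % 7 or 7)
    return (d.replace(day=1) + timedelta(days=32)).replace(day=1)

# Exhaustively exercise every day of a year for every schedule, asserting
# the result always moves forward -- the invariant whose violation would
# send a date calculation into an infinite loop.
d = date(2024, 1, 1)
while d < date(2025, 1, 1):
    for schedule in ("weekly", "monthly"):
        assert next_pay_date(d, schedule) > d, (d, schedule)
    d += timedelta(days=1)
print("all days pass")
```

A few hundred iterations is nothing for a machine, and unlike staring at the code, it mechanically pins down the exact "bad" input that triggers the hang.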
I wrote a little program to scan the CDs as I inserted them into my computer, indexing the data into a database I had created, and then labelling the CD.
It wasn’t exactly exciting work but I still miss those days sometimes, everything was new and unexplored.
Not needing to conform to some company-wide standard is probably really pleasant while it lasted, but every such effort adds to the haunted graveyard, and the lack of consistency will eventually come back to bite whoever is still around.
[1] https://www.usenix.org/sites/default/files/conference/protec...
I work at a small business. Despite computer software being about the literal opposite of our business (plants), the founder built an entire suite of interconnected tools that runs off MS BASIC for Xenix, on a single HP machine running SCO OpenServer. The machine has so many customizations, self-scheduling cron/at jobs, odd nooks for files, weird tweaked programs, and special conventions that if a server with a dedicated hostname qualifies as a pet (as opposed to cattle), I'd be THIS THING'S pet.
The system handled EVERYTHING. Accounting, payroll, pesticide management, inventory, attendance, business contacts, shipping label printing... all out of a bunch of terminal menus (which are actually text files with control codes that get `cat`ed out).
But by God, the most horrifying part of it all are those BASIC files. They're IMPENETRABLE.
Firstly, I don't believe this version of BASIC supports named functions or subroutines. At all. But that's fine. MS BASIC being what it is, the interpreter only can deal with a certain number of characters per logical line, and that includes data definitions.
This version of BASIC (like so many others) includes its own serialization format and record/file access scheme. You declare the layout of the data file you want, open that file, and BASIC will handle (most of) the rest.
So when the founder started hitting the internal line limit while defining the data file's fields, he would cut the names of the fields down to fit more on that one line. Over time `30 AS EMPLOYEENAME` became `30ASEMPLNAME`, which became `30ASEMNAME` which became `30ASAF(1)`.
Every cent we transact, and every employee's timecards still flow through this old system, some even using bona fide Wyse terminals. To reiterate, this man was, first and foremost, a farmer. His contraption is terrifying, but commands immense respect. It's lasted 30-some years with continuous tweaking and refining, and we still have yet to replicate even half of its functionality. (Though there are other organizational issues that are making that difficult.)
On a personal note, aside from the calcified codebase and occasional spelling errors, it's a stellar business application. It's fast, mostly coherent, and keyboard-driven in such a way that experienced employees can navigate it faster than the terminal can refresh. We've been working for years to replace it, but at the same time, there's a lot our newfangled Angular+PHP+MySQL replacement could learn from it.
Needless to say it gave my team a few days of flabbergast and speculation on what system they must have built on to hit a row limit at only 5 digits. And yet for a non-tech industry it was mostly working.
In 2014 or so, someone at a company told me that they just don’t do branches, and merge everything into trunk. I agree that long-lived branches are bad, but no branches at all? Sure enough, this was OK — meaning branches could be kept and synced by a few people outside the main repo, but forks are never pushed to the repo.
Using table columns for data, as long as you’re building for an actual specific business vertical and not just a general-purpose framework.
You never know when you’ll want to put an index on the data, and databases already have built-in mechanisms to select a subset of the fields, etc. You avoid all kinds of joins. If you need to extend the table, just start a new table with the same synthetic ID.
I would say, in tables where you don’t want to even have global locks (eg sharded tables), just don’t have an autoincrementing ID. Instead, try a random ID or a UUID, and insert it, then use it across tables.
In short, what was described here is actually good design.
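A sketch of that random-ID idea (Python's stdlib `uuid4` shown; the row shapes are purely illustrative): the key is generated client-side, so no shard, server, or global lock has to coordinate on the next value.

```python
import uuid

# Generate the key once, then reuse the same value across related tables;
# collisions between independently generated uuid4 values are negligible.
order_id = uuid.uuid4()
order_row = {"id": str(order_id), "total": 99}
line_row = {"order_id": str(order_id), "sku": "ABC"}
print(order_row["id"] == line_row["order_id"])  # True
```

The trade-off versus autoincrement is index locality: random keys scatter inserts across the index, which is exactly what you want for sharding and exactly what you don't want for a single hot B-tree.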
- there is so much stuff to improve. There’s nothing better than the feeling of improving things with code (or by removing it)
- it’s back to school again and anything goes. You can implement features any way you want because of the constraints the system imposes. Granted, sometimes it’s painful to add functionality
- there’s usually no room to subjective topics like clean code and architecture. The most important thing with these systems is correctness (and this is usually an objective topic)
- nobody can blame you for something that doesn’t work. It’s always the fault of the legacy system
I wouldn’t recommend working on such systems to junior engineers, though.
I don’t really like to work on “perfect” codebases where everyone follows the same pattern, with linters, where if something breaks is because of your shitty code (because the codebase is “clean”). It’s very frustrating and limiting.
And there is nothing worse than a crappy codebase the company won't let you improve.
I've created proof-of-concepts that worked perfectly but would make you cry if you looked at how they worked. Eventually they became that mess. Everything is a self-contained unit so it doesn't mess anything else up. Of course, there is always time to keep adding new stuff but never to refactor it into what it should be.
I prefer the way with linters and tests, it at least lessens the chances of whatever is put in being broken (or breaking something else). (Then again, somebody putting "return true" in a test _will_ surprise you sooner or later)
I put together a thread of all of the wild incidents I've seen over my many years of working as a FinTech SRE:
The reason we had it is we had master-master replication, but without requiring the peer to acknowledge before committing a transaction. To avoid inconsistencies, we preferred one server for even ids and the other for odd ids. But when writing a new record, an autogenerating sequence would just give the next id without regard to what "side" the request was on. So we had a table to keep track of the next id to use for each side, where we incremented the next id by 2 each time a new id was allocated.
It was a little weird, but it worked fairly well. Although we eventually changed the architecture and removed the need for those tables.
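The usual way to get that even/odd split without a hand-rolled table is two sequences stepping by 2 from different starting points; a tiny sketch of the idea (in Python here, though e.g. PostgreSQL's `CREATE SEQUENCE ... START 1 INCREMENT BY 2` does the same thing server-side):

```python
import itertools

# Server A hands out odd ids, server B even ids; the streams never collide,
# so each side can allocate without asking its replication peer.
ids_a = itertools.count(start=1, step=2)
ids_b = itertools.count(start=2, step=2)

a_first = [next(ids_a) for _ in range(3)]
b_first = [next(ids_b) for _ in range(3)]
print(a_first, b_first)  # [1, 3, 5] [2, 4, 6]
```

The same pattern scales to N masters with `step=N` and starts 1..N.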
Something like this: a "genius" programmer was somehow, for some reason, using svn commits as a method dispatcher. Commit ids were sprinkled throughout the codebase. A new hire broke the entire system by adding comments, because comments weren't compatible with the bespoke svn method dispatcher.
Does anybody remember this article? I really want to read it again.
One particularity of the EAV system is that you end up with tables with hundreds of columns (and growing). It made Magento itself extremely hard to work with and optimize. I hope they have moved away from this model since.
To be fair, this was before nosql databases were a thing.
[0]: https://en.wikipedia.org/wiki/Magento
[1]: https://en.wikipedia.org/wiki/Entity%E2%80%93attribute%E2%80...
https://www.postgresql.org/docs/current/sql-createsequence.h...
In reality, that wasn't too unusual to see, because frameworks would use that technique as a lowest common denominator across RDBMSes.
I've worked with globally unique (to our application) integer keys, and per table integer sequences (which obviously aren't globally unique), but I don't recall seeing anyone use a global sequence but purposefully reuse elements of the sequence before.
In the article, what he complains about is not what a sequence is, but implementing one manually with a table that is read, incremented, and then saved. This is more expensive, and depending on how it is implemented you have to manage the whole data flow carefully so the same id can never be allocated twice. That's what he considers odd.
Codebases like this one, or the OP's, are good for learning how not to do certain things.
This included a project called IefParser, whose job was to parse incoming data files and put the data into databases. It was a serious project. Like, really serious. The software's input format came with a full manual, with a section highlighting the changes from previous versions. I have not seen that level of detail before or since.
And it was also very old, written a year or two after I was born and never rewritten since. So the software side of things was less serious and more hacky.
The core code was written in C. The database was Oracle. The code was driven by Perl batch scripts triggered via cron jobs. And all of that ran on IBM AIX. Yes, not Windows or Linux: it ran on Unix.
The sheer gap between the meticulously written software requirements and the actual software was mind-boggling.
Some fun facts:
- The C code could not be compiled on Windows. You had to log in to the dev server via PuTTY and run a makefile to do it.
- The Perl code was not checked in to the repository. It was also different for each environment.
- The Unix version of vi for some reason didn't tell you whether you were in insert mode or command mode. WTF! Yes, no other editor existed. Apparently I was the only one in India who bothered to learn vi, as no one else there could reliably edit files.
- The cron job schedule was also not checked in. After a "great learning opportunity", I decided to check that in.
- I once had the horror of fixing bugs in the Perl scripts and cron jobs. Apparently global variables are the state of the art in Perl.
- The cron job always failed on New Year's Day because the batch script used MMDD-formatted folders for temp file storage, and was unable to understand that 0101 comes after 1231. I volunteered to track down the issue and fix it for good. M3 shot it down, saying, "we don't want to take risks on such a mission-critical service".
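A hypothetical reconstruction of that New Year's Day bug (the original was Perl; this is a Python sketch of the same logic). Picking "the latest" temp folder by comparing MMDD names breaks exactly once a year:

```python
from datetime import date

# MMDD folder names sort correctly within a year, but wrap at New Year:
assert "0101" < "1231"  # Jan 1 sorts *before* Dec 31 as a string

# One fix under that assumption: include the year in the folder name so
# lexicographic order matches calendar order across the year boundary.
def folder_name(d: date) -> str:
    return d.strftime("%Y%m%d")

assert folder_name(date(2024, 1, 1)) > folder_name(date(2023, 12, 31))
```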
I guess there is dailywtf but that's mostly bugs. Probably good enough though
If this story reminds you of the codebase you work on, don’t let its romanticism deceive you into thinking that suffering is somehow actually good.
The first contained monetary values. These were split over two columns: a decimal column holding the magnitude of the value, and a string column containing an ISO currency code. Sounds good so far, right? Well, I learned much later (after, of course, having relied on the data) that the currency code column had only been added after expanding into Europe … but not before expanding into Canada. So when it was added, there were already mixed USD/CAD values but no currency code column to distinguish them, and they just defaulted it all to USD. So any "USD" value could actually be CAD: you "just" needed to parse the address column to find out.
Another one was a pair of Postgres DBs. To provide "redundancy" in case of an outage, there were two such databases. But no sort of Postgres replication strategy was used between them, rather, IIRC, the client did the replication. There was no formal specification of the consensus logic — if it could even be said to have such logic; I think it was just "try both, hope for the best". Effectively, this is a rather poorly described multi-master setup. They'd noticed some of the values hadn't replicated properly, and wanted to know how bad it was; could I find places where the databases disagreed?
I didn't know the term "split brain" at the time (that would have helped!), but that's the state this setup was in. What made comparing the data worse is that, while any column containing text was a varchar, IIRC the character set of the database was just "latin1". The client ran on Windows, and it was just shipping the values from the Windows API "A" functions directly to the database. Windows has two sets of APIs for, like, everything involving a string: an "A" version and a "W" version. "W" is supposed to be Unicode¹, but "A" is "the computer's locale", which is nearly never latin1. Worse, the company had some usage on machines that were set to, say, the Russian locale, or the Greek locale. So every string value in the database was, effectively, in a different character set, and nowhere was it specified which.
The assumption was that the same bytes would always get shipped back to the same client, or something? That wasn't always the case, and if you opened a client and poked around enough, you'd find mojibake easily enough. Now remember we're trying to find mismatched/unreplicated rows? Some rows were mismatched in character encoding only: the values on the two DBs were technically the same, just encoded differently.
(Their machines' Python setup was also broken, because Python was ridiculously out of date. I'm talking 2.x where the x was too old; this was before the problems of Python 3 were relevant. Everything in the company was C++, so this didn't matter much to the older hands there, but … god, a working Python would have made working with character set issues so much easier.)
¹IIRC, it's best described as "nearly UTF-16"
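An illustrative sketch (not the original system) of why byte-for-byte comparison of those varchar columns was misleading: the "A" APIs store raw locale bytes, so the same byte means different characters on machines with different ANSI code pages.

```python
# One byte, as it might sit in the latin1 varchar column:
raw = b"\xe9"
assert raw.decode("cp1252") == "\u00e9"  # a Western European client wrote "é"
assert raw.decode("cp1251") == "\u0439"  # ...a Russian client reads it as "й"

# And the same logical text produces different bytes under different
# encodings, so two rows can disagree at the byte level while holding
# "the same" string — exactly the encoding-only mismatches above.
assert "é".encode("cp1252") != "é".encode("utf-8")
```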
I bet no one even considered the idea that someone in Canada might pay in US dollars.
I had to... discuss... with someone the possibility that if we were recording money received from a bank, we might do well to record the currency type as well, because... banks can deal with multiple currencies. "No, banks don't do that. That's not possible". This US company was getting ready to expand their services over to Europe, and I couldn't understand why no one could understand that this might be a necessary item to record.
Someone using this system in France to help finance a project in New York, for example, might want to know whether Euros or USD were the subject of the project. This was sort of a PM tool to note the cost of a project and how much was left in a budget - it wasn't moving money directly. We had a system to choose specific banks and note that 'person X moved Y currency units to the project' but... no ability to select the currency units. If it was from a UK bank, it would be GBP. A US bank, USD, etc. I was voted down as not understanding how banks worked.
Months later, a demo was presented right before launch to the people who were financing the project, and they asked how someone would know the currency being used for each project. Back came a work ticket "high priority" because we were already late for launch and multiple screens now had to accommodate this "new" requirement.
I know at least some of this is how I present info, but... being 'nice', being 'inquisitive', raising issues up the chain of command, etc.. rarely work. These sorts of extremely obvious things get ignored until the actual end users bring up the same concern. Somehow I wasn't supposed to know this sort of info because "you're a developer, you're not some international banker". Insane...
https://stackoverflow.com/a/78831591/467460
It is from the point of view of an app developer, not a DBA, but should be relevant to most people on HN.
If you have a messy app, you have a messy organisation.
Gatekeeping your work organization system from the people working on, and often MANAGING it is such a huge friction point I'm amazed they ever got clientele. I'm an Admin on a project right now and I can't change our status types without going through our assigned Atlassian rep.
And don't even get me started on the dumpster fire that is BitBucket. Ever tried to use the API? it's somehow even more worthless than the UI.
My take on this story was a backend "deal capture" system for an emissions trading desk about 25 years ago. The application was written in classic ASP, T-SQL and VB6 COM components. Oh, and VBScript. It only ran in Internet Explorer 6.
The heart of the machine was a massive, unholy stored procedure. It was about 35kb, and what it did was generate a very long string which was - you guessed it - a gigantic SQL statement with countless JOINs across many temporary tables.
The business logic was deemed sophisticated enough that it was decided we needed a workflow engine, and some genius decided that we should use an expensive, proprietary, enterprise-y tool which used Microsoft Exchange Server as its datastore and operating environment. This was as terrible an idea as it sounds, because the "API" to talk to this engine was single-threaded and massively bottlenecked by whatever made Exchange Server suck. Most significantly, it also could only be properly accessed by a very specific version of a DLL, requiring that a server setup follow an agonizingly specific software installation procedure that was scientifically proven to be impossible for IT to execute, because they would always jump to the end, install the latest version of the service packs, and suddenly the DLL we needed would be impossible to access without wiping the machine and starting over. This is the kind of thing people mean by the term "DLL Hell".
The most interesting aspect of this system was the client UX. This was 1999, and Ajax was literally not a thing yet. However, the precursor to XmlHttpRequest was an IE-only thing called ActiveXObject. You could use VBScript to wire up form elements to be updated by an ActiveXObject in the way we'd consider familiar today, except that instead of FORM GET/POST, your ActiveXObject would talk to a VB6 COM+ DLL running inside of Microsoft Transaction Server. Looking back on it now, the whole thing was simultaneously an abomination and years ahead of its time. The security profile would likely give a modern developer night terrors; I'm pretty sure that if you were on a page that was being served by an IIS instance that had COM+ registered, your ActiveXObject could call methods on that object so long as the VBScript had the UUID that identified the COM+ object. End of story. Just wow.
Final detail that my memory retains is that the room full of chain smoking women who operated this system did not use mice. That is, they were entering data almost exclusively through the number pad and furiously slamming the tab key to move to the next field. They would develop intense muscle memory for the 200-300 fields on this form. They needed repeatability or it would literally break them. Thing is, there were many instances where they didn't want to fill fields in the order they are defined in the HTML, so over time about 90% of those form elements gained in-line event handlers, again, written in VBScript, which replaced the built-in tab key handling functionality of the element with whatever was deemed the correct next step in their insanely specific process. If we accidentally broke their expectation, there would be hell to pay.
That is, we would log in to the live server with VNC and live-edit the changes into the codebase while they were on the phone over the remote screen-share connection.
There was no source control, and if there were backups, I'm dubious that they would have been restorable.
I wonder if better support of EAV tables would solve this issue better.
If one could do "SELECT price,color,year FROM cars! WHERE status='sold'" and the "!" would indicate that cars is an EAV table ...
entity attribute value
1 price 20750
1 color red
1 year 2010
1 status sold
... and the result of the query would be ... 20750 red 2010
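Lacking such syntax, the usual workaround is conditional aggregation, one `MAX(CASE ...)` per attribute. A sketch of the hypothetical `cars` EAV table above, using SQLite for illustration:

```python
import sqlite3

# Build the EAV table from the example (entity, attribute, value):
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cars (entity INTEGER, attribute TEXT, value TEXT)")
conn.executemany(
    "INSERT INTO cars VALUES (?, ?, ?)",
    [(1, "price", "20750"), (1, "color", "red"),
     (1, "year", "2010"), (1, "status", "sold")],
)

# Pivot rows back into columns with conditional aggregation:
row = conn.execute("""
    SELECT
      MAX(CASE WHEN attribute = 'price' THEN value END) AS price,
      MAX(CASE WHEN attribute = 'color' THEN value END) AS color,
      MAX(CASE WHEN attribute = 'year'  THEN value END) AS year
    FROM cars
    GROUP BY entity
    HAVING MAX(CASE WHEN attribute = 'status' THEN value END) = 'sold'
""").fetchone()
# row == ('20750', 'red', '2010')
```

The obvious downside is that the attribute list has to be spelled out by hand, which is exactly the boilerplate the imagined `!` operator would remove.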
That would solve most of the use cases where I have seen people use JSON instead.
How do you pronounce that?
Was it like the word munch in “munching on some snacks”?
Or like the name of the painter Edvard Munch? https://www.nrk.no/kultur/nrk-endrer-munch-uttale-1.539667 (note: this link is in Norwegian)
None of this is particularly interesting, unless you have a fascination with archaic file formats and/or an interest in historical and highly idiosyncratic government bureaucracy.
The really interesting (horrifying) thing about the job, though, was the state of the VB6 code which I was asked to translate into well structured and performant Java. There were some really hilarious and nightmare inducing subroutines like "BunchOfIfs" and "BigSelect", each of these monsters were several thousand lines long and consisted of exactly what you'd expect. Huge branching structures with absolutely no reasonable organization or design. I'm not even exaggerating to say it was just the accumulated cruft of 10 years of dirty hacks tacked on by the cheapest coders they could find where the only standards were if it works it ships. Literally the worst procedural code you can imagine with zero factorization, modularization, plenty of duplicated code copied and pasted then modified to do something slightly different than the 3 other versions of similar code elsewhere in the project.
Somehow, after a year of part-time work (20 hours a week, between classes) I managed to produce a working system to translate claims from any one of the weird formats into any other one of the weird formats, including 5 or 6 variants of said formats, each one which violated the spec in a unique way in order to satisfy the strange requirements of some particular insurance company. The Java version was less than 10% the size (lines of code) of the old system, ran 30x faster and produced correct output.
Still to this day it's objectively the most difficult, painstaking, excruciating, but also probably the best, most impressive work I've done. And it's the least I've ever been paid for writing code.
Oh and I forgot to mention, nothing was in source control and there were several variants of the codebase that had been modified slightly to do a similar but different task and then just continued to drift away from the codebase it was copied from.
1. https://en.wikipedia.org/wiki/ASC_X12 2. https://manuals.momed.com/edb_pdf/NSF%20(National%20Standard... 3. https://owcprx.dol.gov/static/UB-92.pdf
Shockingly, this project was notorious for regressions. We were on a weekly(!) update cycle, and we constantly had bugs reappear that had been solved months prior.
This was 2010 or so, and we were using SVN because the project lead didn't trust or understand git. He also didn't understand SVN. The solution to our constant regressions was pretty simple. Instead of merging branches into trunk, we would delete the master branch every week and create a new one out of all of the finished develop branches.
Surprising absolutely nobody apart from that project manager, this plan was a spectacular failure.
He also stole code out of my personal git repos from before I worked there, and bragged about how smart he was for stealing my code. So, y'know, just a general idiot and asshat.
1) Many columns can happen with per-customer customisations to a shared table. The common way is to have a "customcolumns" table with "ID,ColumnName,ColumnValue" linked to the base table that has an "ID" key, but SQL Server also supports this natively now with Sparse Columns: https://learn.microsoft.com/en-us/sql/relational-databases/t...
2) Shared identity or globally sequential numbers have a built-in schema object type now: https://learn.microsoft.com/en-us/sql/t-sql/statements/creat...
3) Manually populated calendar tables are actually a common schema design pattern, especially in manufacturing, shipping, and OLAP reporting systems. This is not that bizarre, it's just a bit weird that it would break logins! These tables can let you define all sorts of things such as international holidays, manufacturing calendars, tax years, finance reporting schedules, etc...
4) Dropping and recreating tables is also common, albeit usually done in a transaction. The fancy way to do this is with partition switching, where a new table is populated from whatever (external data, business logic, etc...) and then instantly swapped for the original without any long-running operations like truncate & insert would. See: https://pragmaticworks.com/blog/table-partitioning-in-sql-se...
5) Delayed reporting replicas ("here was a copy of the database. Data in this copy was about 10 minutes out of date.") is also a common pattern. At this point, the blog author is just complaining about the realities of business databases. Typically you'd have a bunch of read only replicas with different delays: Synchronous for HA failover, Asynchronous for DR failover and real-time reporting, and deliberately delayed on a schedule ETL copies for "point in time" consistent reporting. These would typically be done at 3am or something to minimise inconsistencies in the data. The modern way to do this is a SQL Server Always On readable secondary: https://learn.microsoft.com/en-us/sql/database-engine/availa...
6) "The main codebase I worked in was half VB, half C#." this is also surprisingly common. It's often not worth rewriting old code, there's not enough return on the invested money. These days there are automated conversion tools for VB to C#, such as: https://marketplace.visualstudio.com/items?itemName=SharpDev...
7) The Gilfoyle character is honestly the only thing that stands out here as an actual problem, not just a typical thing that happens in large enterprise bespoke software development.
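For reference on point 3, a calendar table is just one pre-materialized row per day, tagged with whatever the business cares about. A minimal sketch with assumed column names, using SQLite for illustration:

```python
import sqlite3
from datetime import date, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE calendar (
    day TEXT PRIMARY KEY, dow INTEGER, is_weekend INTEGER)""")

# Populate one row per day of 2024; real systems would also tag
# holidays, fiscal periods, manufacturing shifts, etc.
d, end = date(2024, 1, 1), date(2024, 12, 31)
while d <= end:
    conn.execute("INSERT INTO calendar VALUES (?, ?, ?)",
                 (d.isoformat(), d.isoweekday(),
                  1 if d.isoweekday() >= 6 else 0))
    d += timedelta(days=1)

(total,) = conn.execute("SELECT COUNT(*) FROM calendar").fetchone()
(weekends,) = conn.execute(
    "SELECT COUNT(*) FROM calendar WHERE is_weekend = 1").fetchone()
```

Queries can then join against this table instead of re-deriving date logic, which is why the pattern survives in OLAP and manufacturing schemas. The failure mode from the article is simply forgetting to extend it.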