I wish the scoring didn't prioritize timeliness. This is really the sort of thing I'd enjoy cranking out on a plane ride, not every day.
If you're not using your computer every day, just think of yourself as having been removed from the competition. Then you can just solve fun puzzles that don't have a competitive aspect for you.
You could also make a browser extension to hide the leaderboard if that helps.
I need a casual difficulty because I really do enjoy cozy simple short puzzles.
(And the reason is that it's the time when the creator can be readily available: https://adventofcode.com/2020/about )
If the creator were genuine about it not being important, they'd explicitly limit the leaderboard to an opt-in process. You shouldn't even be able to see the leaderboard unless you have opted into it. It shouldn't be visible or clickable.
This year I think I'll just watch from a distance. I think it is really worth doing it as fast as you can at least one year; it really highlighted a bunch of areas I was quite rusty in.
I did AoC a few years ago and realized that I had my GitHub profile associated with “middle-of-the-road” scores on a public list. It isn’t really a problem and I realize I may sound too sensitive: but it is a bit stressful to be ranked in a competition you have no interest in competing in. Think of an ignorant recruiter deciding to look at a “better” programmer higher on the list, etc. And in a community/society that places a huge premium on arbitrary scores like SAT and IQ, there’s a small twinge of unavoidable stress when you see “you completed 75,435th” simply because you decided to do a puzzle on a lunch break. I’d rather just avoid it entirely.
Likewise, there are many people desperately competing for a “top spot” that measures their ability to... drink coffee late at night, mostly. It’s just kind of gross and not something I feel like encouraging. Having an opt-in leaderboard would be fine, but AoC pushes it way too hard. The emphasis should be on fun daily puzzles, not an arbitrary competition based on who is most alert at a given hour.
I'm making mass general assumptions here...
This is the only month of the year when I will consistently get out of bed at 5:55, give the puzzles up to 2.5 hours, then have breakfast and get ready for work.
Personally I find it super convenient since I and many of my close colleagues begin work so late, but I also know a lot of people who would be on their commute to work at that time.
I do it when I have time for it, and if I have time in the evening I have fun solving it. Last year I remembered it far too late, got to about the day 10 challenge, and then forgot about it again until yesterday. Now I try to follow up each day, if I have time and want to, and I don't feel pressured by it. It is for fun and maybe learning something new! I actually loved the aspect of writing half an interpreter last year, and I hope for something similar this year!
Obviously, nobody makes me do that or compete, but if I don't do that, then it feels like I am just a passive bystander instead of a participant.
I'm based in the UK, so the puzzles get unlocked at ~6AM which gives an hour or two before work if I wake up earlier than usual.
I think I'll still take part this year, but at a more relaxed pace.
This year I'm going to try to use Elixir, a language I've been learning over the past year, hoping I can at least get past day 5 this year.
I will be forever grateful to him for this opportunity.
---
I do have a tremendous amount of respect for the developers who complete these as if it were second nature.
So far, I've used it as a way to learn new languages - I've done it in D, C#, Swift so far. I don't bother with the competition aspect, but I do have a few people that I bounce solutions off of.
This year I'm taking a different approach though, I'm going to use it to re-learn an old language - UniVerse[0] Basic[1]. In my first IT job, I supported an in-house system that ran on UniVerse. I then moved on to working with a commercial system built on UniData[2] (a close cousin to UniVerse). These products mainly exist to allow Pick-style MultiValue applications to run on modern systems. They are closed-source commercial products, but there is a limited-use personal edition available.
One nice thing about these is that they don't just emulate a Pick environment, they also give it features that Pick systems never had. For example, UniVerse Basic is capable of making HTTP/HTTPS requests, parsing XML and JSON, and at some point UniVerse Basic even gained the ability to interact with Python objects. One of the first things I built in preparation for this was a subroutine to retrieve the input data, downloading it and caching it if required.
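I can't speak to the exact UniVerse Basic calls the parent used, but the download-and-cache idea itself is simple. Here's a rough sketch in Python (the function name, the injectable `fetch` hook, and the `AOC_SESSION` environment variable are my own choices; adventofcode.com does require your session cookie to serve input):

```python
import os
import urllib.request
from pathlib import Path

def get_input(year, day, cache_dir="inputs", fetch=None):
    """Return the puzzle input for (year, day), downloading it on
    first use and serving the cached copy thereafter."""
    cache = Path(cache_dir) / f"{year}-{day:02d}.txt"
    if cache.exists():
        return cache.read_text()

    if fetch is None:
        def fetch(url):
            # AoC identifies you by your session cookie
            req = urllib.request.Request(
                url,
                headers={"Cookie": f"session={os.environ['AOC_SESSION']}"})
            with urllib.request.urlopen(req) as resp:
                return resp.read().decode()

    text = fetch(f"https://adventofcode.com/{year}/day/{day}/input")
    cache.parent.mkdir(parents=True, exist_ok=True)
    cache.write_text(text)
    return text
```

Making the fetcher injectable keeps the caching logic testable without hitting the site, and also keeps you from hammering the server while iterating on a solution.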
It's been about 10 years since I've worked with this technology, so I'm really looking forward to re-learning it.
[0] https://www.rocketsoftware.com/products/rocket-universe-0/ro...
[1] https://docs.rocketsoftware.com/nxt/gateway.dll/RKBnew20/uni...
[2] https://www.rocketsoftware.com/products/rocket-unidata-0/roc...
How are you finding the challenges in PDP-11 assembly? Are there any unique things about the PDP-11 instruction set or architecture that helps in the challenges?
One of my favorite problems from last year involved programming a robot to traverse an obstacle course (https://adventofcode.com/2019/day/21).
Interestingly, the space of potential programs was small enough (105 bits) that you could smash an SMT solver into it and solve for a correct, minimal program, given the shape of your obstacle course: https://www.mattkeeter.com/projects/synthesis/
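As a toy stand-in for that approach (a real SMT solver like Z3 searches the 105-bit space symbolically rather than by enumeration), brute-forcing a much smaller bit space shows the shape of the problem: find the minimal "program", encoded as bits, that satisfies the correctness constraints. The specification lambda below is entirely made up for illustration:

```python
from itertools import product

def synthesize(n_bits, is_correct):
    """Enumerate every n_bits-bit candidate program and return the
    one with the fewest set bits that passes is_correct (a stand-in
    for the constraints derived from the obstacle course)."""
    best = None
    for bits in product([0, 1], repeat=n_bits):
        if is_correct(bits) and (best is None or sum(bits) < sum(best)):
            best = bits
    return best

# hypothetical spec: require bit 0 set and bit 2 clear
prog = synthesize(4, lambda b: b[0] == 1 and b[2] == 0)
```

Exhaustive search is obviously hopeless at 105 bits (2^105 candidates), which is exactly why an SMT solver, which prunes the space via the constraints, is the right tool there.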
But I also think that it's a missed opportunity to measure productivity and runtime performance of programming languages.
In the "Benchmarks Game" you get extremely optimized programs of a kind an average developer usually couldn't write. On AoC you would get a wide distribution of submissions, so you could get a realistic estimate of how performant the programs submitted by PHP, Python, or Rust developers actually are.
I guess we wouldn't get to see people abusing mpz_add on Python or PHP that often.
You get what people choose to contribute — if you want to see Python or PHP programs that don't use GMP then please contribute those programs.
The benchmarks game shows up to 10 programs for each language implementation: some will be all about performance, some will be about the language, some will be about program size.
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
and
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
Sort by gz and notice the Chapel programs seem to be written to show program size rather than performance:
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
I look forward to it each year.