ETA: thinking back on it, several years ago I switched from 120 characters to 80 specifically because of this. I don’t have a car, so I read a lot of code on my phone while taking public transit.
I'm surprised reading on a phone is common. Not at all something I would want to do. I'm assuming largely read only there?
Makes me curious if the CWEB idea of styling specifically for reading has extra merit in that flow?
I do not want to type / create on my phone though, certainly not code, or even sign up for stuff / submit / do applications on phone. Makes me a rarity though.
Yes, aithrowawaycomm's claim about line lengths is supported by many, many different sources that recommend a 50 to 80 character line length.
https://duckduckgo.com/?q=reading+optimal+line+length
IMHO, 80 characters is too narrow, especially with Python, for most coding tasks so I settle more around 100 (+-20) characters. PEP-8 says up to 99 characters is okay.
Shading lines can also improve readability (this is known; the rest of this comment is speculation). It's probably not enough to simply alternate between e.g. white and off-white backgrounds, as is done for tables. More likely an alternation between 3 or 4 subtly different colors is best; maybe these can form a pattern across additional lines (e.g. 121312321). If the difference isn't subtle enough, editing will suffer whenever you break the line count, but for read-only use the human eye is pretty good at aligning subtle things. Maybe even an outright paper-texture background image? (That's more often a gimmick, but it can be useful.)
> especially with Python
What is special about Python in this case?

Concisely and clearly define one concept or abstraction. Then use it in the next definition. This isn't hard! And stop with the insanity-inducing compound names. If your name needs 5 subclauses to clarify its intent, your semantics are stupid and you should be beaten with a shoe.
That gives me an effective working space of 110x37. Any concept that needs longer lines or more lines is going to be harder to follow.
It’s true that some people are illiterate and never learned to spell. That doesn’t mean it’s hard, it just means they never learned how.
For instance, I think prose is hard to read when you have to scan long horizontal distances from line to line.
However, code often has indentation, syntax, and chains (e.g. nested property access) that take up a lot of width before the actual information is presented, so width is more helpful.
You know, there’s a reason cars and trains are the width that they are.
I don’t believe the article says you should have to.
I view diffs side-by-side all the time with much wider lines than 80 chars, but I have 27" monitors, which are not unusual these days.
When there are unclear or conflicting rules ... [y]ou can end up with hilarious games like formatting tennis ...
Back then, formatters were rarely used, if at all. The major benefit of tools like gofmt, Prettier, etc. is that a major source of vacuous commits and code review has gone away.

In the case of Prettier, bikeshedding can still happen over the .prettierrc file, but it's not hard to make an argument for using the default configuration.
Formatters are something I miss whenever using languages without a widely-used formatting tool. For instance, in Common Lisp, code formatting is "whatever Emacs does when formatting the whole buffer". Depending on the exact package used (e.g. SLIME or SLY), the results of formatting the whole buffer may differ. Contrast this to languages like Go where there is one tool (gofmt) and it exposes no configuration, so there is no possibility of bikeshedding over code formatting.
// Empty file. Use default Prettier settings. No rules here.
To make it clear to other people that it was not just a mistake that the formatter was missing its configuration, or that no config file existed at all. Useful to deter people from adding new rules just because someone is capricious about their own preferences...

// Empty by choice
// Use the Prettier defaults
// No customized rules
Taking the time to write a specific one gives added weight to the decision to use the defaults, as anyone adding a rule has to remove the haiku.

However, the benefit of automatic formatters is that if you really like 80 better, you can reformat before editing, and undo that before sharing your results.
Now we're shrinking the resolution/sizes of the devices we are using to justify it...
That's a paddling.
Seriously, professionals do this? If someone else wrote it and it does what it is supposed to do, it is not, NOT (no apologies for shouty emphasis), my job to change that.
(I hate tabs, but will never :retab someone else's code.)
My job is either/both to fix bugs and add new capabilities, not to be precious about, well, anything.
I would have serious reservations about any team member who wasted all our time on that.
Yes. At my job we mainly use defaults of rustfmt in most repos. Some repos have rustfmt.toml in them. And the neat thing is that cargo picks up this automatically so there’s no per-repo config you have to do after cloning those repos in order to follow repo-specific formatting rules.
In CI we have a step that runs
cargo fmt --all --check
And that exits with a non-zero code if any formatting is different from what rustfmt wants it to be, which in turn means that that pipeline stage gets marked as failed.

In order to get your changes merged, then, you have to follow the same formatting rules that are already in use. Which in turn is as simple as having
cargo fmt --all
run at some point before you commit. For example, invoked by your editor, or from some kind of git hook or alias, or manually.

It's nice and easy to set up, and it means that every Rust source file within each individual repo follows the same formatting as the other Rust source files in that same repo.
I don’t particularly care about what specific formatting rules each repo owner chose. (And the vast majority of our repos just use the defaults anyway.) All I care about when I’m working on code is that there is some kind of consistent formatting across the files in the project, and that formatting changes between commits from different people are kept to a minimum so that git log and git blame give the info that you are interested in. So I am very happy that they do it this way at my job.
...it occurs to me, this feels like you're conforming to the tool instead of the other way around.
Is there a reason git blame doesn't have an "ignore whitespace" option? Is it harder than it seems?
Just like you don't want to allow both underscore_split, camelCase and TitleCase names in a single JSON object (or even different JSON objects returned by the same API), you might want more consistency overall to exactly avoid people's editors rewriting code someone else did (eg. you edit a module and your editor reformats the entire file).
> I hate tabs
This opinion seems common, but I haven’t seen a practical argument against tabs, while there are practical arguments (for an admittedly niche situation) in favor of them. This seems an appropriate place to ask about it.
Consider this example:

    def someFunctionName(Int firstParameter, Boolean secondParameter, String thirdParameter)

Let's say that line is too long. We could format it like this:

    def someFunctionName(Int firstParameter, Boolean secondParameter,
            String thirdParameter)

Or we could format it like this:

    def someFunctionName(Int firstParameter, Boolean secondParameter,
                         String thirdParameter)

Or even this:

    def someFunctionName(Int firstParameter,
                         Boolean secondParameter,
                         String thirdParameter)

In the last two variants the indentation depends on the length of someFunctionName, so it is not necessarily a multiple of four or whatever your tab width is. There are other similar situations when aligning in multiple columns, etc.
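For what it's worth, one way to sidestep the name-length-dependent alignment is a "hanging indent" in the style PEP 8 describes: break immediately after the opening paren so every continuation line gets a fixed indent, which works fine with tabs. A minimal sketch in Python, with hypothetical names:

```python
# Hanging indent: no argument on the first line, so the continuation
# indent is a fixed extra level, independent of the function name's
# length. All names below are made up for illustration.
def some_function_name(
        first_parameter: int,
        second_parameter: bool,
        third_parameter: str) -> str:
    return f"{first_parameter} {second_parameter} {third_parameter}"


print(some_function_name(1, True, "x"))  # prints "1 True x"
```

Renaming the function never forces the argument lines to be re-indented, so diffs stay small.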
It's a visceral rather than a rational thing (which I suppose is implied by the word "hate" :->). I also have trailing spaces and all tabs highlighted for similar reasons.
The side effect of this that I don't particularly love is having to split a function's arguments onto multiple lines. Usually if I'm at that point, though, I'll probably end up having to split those arguments up even at a ~100 character column limit.
To me, splitting the arguments is the preferable of the two situations. I always want to be able to see the whole line.
You want to be able to do side-by-side diffs on your laptop using a normal sized (not tiny) font.
You want to be able to paste snippets into design documents and emails and blogs without accidental wrapping or truncating or scrolling.
80 is nicely legible. It works really well.
When I'm coding for myself, I use all of that at times. I love that there are some things I no longer have to break onto multiple lines.
Also, if you have a ton of nested loops etc, then you don't end up with that problem where you can only see the first few characters of the deepest lines.
To me it is important to try to keep code concise so that it is easier to read. This applies to variable names as much as it does to how many lines a function is. The smaller the unit of functionality it is, the easier it is to understand in its entirety.
Consider this contrived example:
margin = (price - cost) / price
You can be reasonably sure what is being calculated, but it's hard to be exactly sure unless the surrounding context is very small. Having longer names allows you to keep more context for a line in isolation, meaning (personally) I require less backtracking while reading. E.g. the above could be rewritten as the following, removing a lot of ambiguity:
retail_margin = (retail_price - wholesale_cost) / retail_price
I find 80 character limits much too small in these cases. 100-120 characters is the sweet spot for me. I can still have 2 split panes of code on a single screen at once, and write more expressively without excessive linebreaks per statement.

Names that are too long impair readability rather than helping it. You want names that are as long as necessary to meaningfully, semantically distinguish identifiers in a given context, but not any longer.
1. Use of DOS programs.
2. Printing out on a paper.
3. Split screen.
4. Other people who view, with different screen resolutions, and preferences for font sizes, window sizes, etc.
These are also some reasons why you might still prefer to use a short line limit, too.
How lovely for you. As someone with significantly impaired vision, even when corrected, I have my font size set to 18, thank you very much. A coding standard that assumes a 10 point font size would violate the Americans with Disabilities Act's 'reasonable accommodation' mandate. I wasn't pushy enough to act on it, but it sure pissed me off when my fellow team members blew off my complaints about how we formatted our code. (Including two space indents...grrrr.) My worse than 20/200 vision (Can't see the big E on an eye chart) lets me see code, but legally, I'm blind in one eye.
I admit, it's a classic 'no perfect solution' scenario, because I like very long variable and function names. I write tiny functions (5-10 lines) because they only need one or two levels of indent. Some programmers I've worked with really dislike such small functions.
Would it? You can still set the size to 18, you just might have to scroll or line wrap. That's a mild inconvenience, not "inaccessible".
I’m curious on several statements you made, please take these as genuine and well intended questions:
what’s wrong with two space indent?
Would a bigger monitor with higher DPI solve some of these issues?
Have you considered a horizontal scroll wheel or similar? I think this is a Band-Aid to a bad pattern, but may be a legitimate option
You may appreciate python PEP8, which discusses things like highly nested code and functions being considered bad. I first followed PEP8 kicking and screaming, but I think it forced me to remove some bad programming habits and I now lint check my code habitually.
Bigger monitor with higher DPI does help, yep.
Horizontal scroll: I suppose it could help. The more immediate tradeoff tends to be that I use a terminal full width on a large monitor when needed, and tolerate the fact that I'd prefer to have space to have a web browser open on the same screen.
I'm a big fan of Sandy Metz, who advocates 5 line routines. In Python and Ruby, this is possible, and the indentation problems go away. In C, it's more of a challenge.
It's one of the things that drives me nuts about shared Excel spreadsheets. Rather than just zoom in or out, someone will mess with the font size to make it fit their screen, and suddenly it's just screwy enough for someone else with different eyes on a different monitor who then goes to change... you get the idea.
Would it? I don't think having to make all my lines 44% shorter than they should be is reasonable; that's going to be a massive impingement on productivity.
I'm that git that puts each param on a separate line (so I can quickly comment them out when I need to). It annoys other programmers, but once I explain why ...
They're still annoyed, but at least they're quiet about it.
The 80 column standard for a line of code goes back to the 80 columns on a standard IBM punch card. Why 80 columns on a punch card? I don't know that one.
It makes it easier to visually parse the function name, templates, outputs, etc. from the parameters than if they were all on one line
I do that too :)
Sometimes the details are more important, sometimes the flow of control/algorithm is.
Some languages are just long and wordy, and having a 'fixed' size just leads to awkward line splits, e.g. all the languages that tend to favour 2 space tabs.
> Long lines that span too far across the monitor are hard to read. This is typography 101. The shorter your line lengths, the less your eye has to travel to see it.
But the longer your eyes have to travel vertically, so typography 101 doesn't justify the specific number 80 and doesn't help much in resolving the trade-off.
"the shorter your line lengths, the less your eye has to travel to get back to the left margin, the quicker and more error free you'll find the next line"
and no similar idea occurs vertically, save that it is already mediated by vertical section markers. (I very frequently wish that hitting page down would show me a distinct mark where the previous bottom of screen is now.)
> hitting page down would show me a distinct mark where the previous bottom of screen is now
For a literal page down the last line would literally become the first line, but yeah, that's the mark/animation that could be useful for when it's not a page
I'm somewhat against code formatting "rules" in general. If it's about readability (/aesthetics), different code will have different properties that make it readable or unreadable. Sometimes, forcing a line break makes code less readable. In other situations, it can have the opposite effect. IMHO, it's a local decision – and the human that's working on that code is better at deciding what's readable than a linter is.
As such, 80 is as good a limit as any, and makes you think carefully about avoiding deep nesting of blocks, which is usually a good idea anyway. It also allows putting many windows side-by-side on a modern big 4k screen.
I mainly use Scala, and I think the majority of Scala code would be far less readable if forced to 80 columns, than if it were a larger number (or simply unconstrained).
My WFH monitor is an Apple Thunderbolt Display, which is woefully out-of-date now. Even on this display, and even though I use a much larger font than most people (16pt Hasklig), and even with a generous project/navigation/etc sidebar, I still get 180 characters. The display I have in the office is even wider, but I can't measure it right now because I'm not there.
My point is, displays are now wide enough where arbitrary line limits don't make sense. Nobody is going to cram stuff into one line unnecessarily, so just leave it up to the local decision about what makes the code most readable. If for some reason a 240-character line is more readable in some situation, then we should talk about that situation rather than why they didn't break the lines.
Each column on a card indicates a single character or number, so a whole punch card is the equivalent to a single line of text (and was usually treated as such for programming).
Originally, there were only one or two holes punched in each column, providing just enough data for uppercase letters and numbers. But this slowly increased over the years, allowing more characters to be encoded. IBM introduced the EBCDIC standard in 1964, which enabled up to 6 different punched holes per column, encoded in eight bits. This corresponded with the development of the System/360.
When terminals were first introduced, they were designed to be compatible with 80 characters per line, as you would expect. Some started with 40 characters per line in the early 60s, but eventually terminals like the IBM 3270 made 80 columns the norm.
This was then copied by microcomputers. The business models of the Commodore PET line supported 80 x 25 characters. The Apple II originally had 40 characters per line, but Apple also sold an "Extended 80-Column Text Card" expansion board to allow 80 characters (as requested by VisiCalc). This was built in to the business-focused Apple III, and later into the IIe. The "e" in IIe stands for "enhanced".
The original IBM PC's Monochrome Display Adapter had a 720 x 350 display, with each character contained in a 9 x 14 box. 720/9 = 80.
As higher resolutions became the norm, editors began to put a line on the screen to visually indicate 80 characters. Certain programming languages, like COBOL or FORTRAN, have an 80 character per line limit, so the line did serve a purpose at first. But later it became sort of vestigial.
And here we are now, a century later, debating whether 80 characters is still a good line limit for code.
There are formatters though. The issue then becomes the git diff. This seems awfully like the tabs vs spaces debate.
And this is because, back in the super old days, most computers used punched cards. Which had 80 columns.
Back in the super-super-super old days, looms were "programmed" using paper tape, which eventually influenced the 24-column Hollerith cards of the 1890 census, and ultimately resulted in a horrific application of technology.
I have one rule: code flows vertically. I like reading books. Code should read as easily as a book. I have found that if you constrain your code to being shaped like a book, almost everything else follows naturally.
You can’t be indented 7 levels into hell if your code has a reasonable line limit. You can’t jam arbitrary numbers of statements onto a single line. You have to decompose your control flow at reasonable function breaks. Lines are a better primitive for editing and debugging. Etc. etc. etc.
People (including Torvalds) complain that a small limit like 80 chars does not make good use of screen width.
Maybe the solution in whitespace-independent code like C is just to "reset" the indentation after a certain amount, or just choose not to indent certain blocks?
I just use tabs and set my tab length to 2 or 3 chars.
IDK how text editors would adjust the source code to fit multiple screen widths. When I have a line that's too long for the line limit, I have to re-arrange the code in that line to get it under the limit.
There is more difficulty working with code such as

    if x {
        //...
    } else {
        //...
    }

than code like

    if x {
        // returns from this block
    }
    // execution continues
In some legacy code I sometimes encounter very long chains of checks which nest 4-8 layers, which becomes very difficult to maintain a mental model of where you are in execution at any point. I try to refactor into the second pattern from above when possible.
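As a sketch of that refactor (hypothetical names, and in Python rather than whatever the legacy code is in), nested checks become flat guard clauses that return early:

```python
# Nested style: every check adds a level, and the reader must track
# which branch they are in across all the layers.
def ship_order_nested(order):
    if order is not None:
        if order["paid"]:
            if order["in_stock"]:
                return "shipped"
            else:
                return "backordered"
        else:
            return "awaiting payment"
    else:
        return "no order"


# Early-return style: each guard exits immediately, so the happy
# path reads top to bottom with no nesting.
def ship_order_guarded(order):
    if order is None:
        return "no order"
    if not order["paid"]:
        return "awaiting payment"
    if not order["in_stock"]:
        return "backordered"
    return "shipped"


order = {"paid": True, "in_stock": False}
print(ship_order_nested(order), ship_order_guarded(order))  # backordered backordered
```

Both versions compute the same result; the second just never lets indentation grow past one level.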
If someone breaks a statement early at 72, or 79, or 99 characters (PEP8 recommendations), indentation no longer represents the structure of the program, it now represents the style of the author. Everyone who uses a wider screen suffers.
But if the IDE knows how to fold long lines dynamically, then any reader can use any width screen for viewing and editing. And users may choose not to enable folding at all, so that indentation exclusively represents program structure.
> An 80 character limit has no relevance any more with modern computer displays.
It's still nice to keep the code or anything readable on a vertical mobile screen.
Some languages are naturally wider (Java, PHP, etc). In those, 120 is fine too.
Also, I tolerate comments past the 80th column but not code.
if i had a 2560x1440 display, i'd use four 80 character wide tiled windows.
80 columns is about the sweet spot to have an editor on one vertical half of my laptop screen with the font set to a comfortable (read: large-ish) size, with something else (e.g. a browser) on the other side.
I think 100 characters is poised to replace the traditional 80 characters rule.
There are really two separate concepts here:

1) The maximum width of our viewports
2) How long we want our lines to be
WRT the second concept, the general rule of typography is that a line should be 60-80 characters long. But, crucially, this is not counting indentation; a "line" here begins at the start of the text, not at the start of the margin.
In the modern day we could decouple these two concepts. Imagine a standard code format that assumed a maximum viewport of, say, 120 characters, but still wrapped lines when they reached 80 characters in length, not counting indentation. Then you could get the benefits of both enforcing a maximum viewport size while having a comfortable line length for reading.
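A minimal sketch of such a check, using the hypothetical limits above (a 120-column viewport, and 80 characters of text measured from the first non-space character):

```python
def line_conforms(line: str, max_viewport: int = 120, max_text: int = 80) -> bool:
    """Accept a line if it fits the viewport AND its text, not counting
    leading indentation, stays within the typographic line length."""
    text = line.lstrip(" \t")
    return len(line) <= max_viewport and len(text) <= max_text


# A deeply indented line may exceed 80 total columns yet still conform...
print(line_conforms(" " * 32 + "x = compute(a, b)"))  # True
# ...while an unindented 90-character line does not.
print(line_conforms("x" * 90))  # False
```

Under this rule, indentation spends viewport budget without shortening the readable text, which is exactly the decoupling described above.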