On the original ethernet being a mistake -
"I have another perspective on this - the bus concept at its roots dates to 1974/75 - a machine that could function as a hub in 1975, or even in 1979, would have potentially been as large and costly as the mini-computers Ethernet was intended to interconnect. A switch at that time would have been extraordinarily costly.
Bus was the right technology for the time (as in the right one to go to market): it was an easy, low-cost means to get the LAN concept into more people's hands so they could see the utility of it.
What I did find surprising was Metcalfe's resistance to UTP-based standards with a hub in the middle once the technology did catch up. My gut, from here 40 years on, is that he was concerned about protecting his investments in 10BASE5 and 10BASE2, making sure his installed base didn't feel like they had an obsolete product, and protecting 3Com's position as a 'Market Leader' in the space.
As an aside, I remember working on 10Base2 networks well into the late 90's and early 2000's - it was widely used in certain situations - like computer labs - where the bus topology made installation significantly easier than home runs would have been."
On 8P8C being picked as the connector -
"Also, as someone who has spent a portion of my career doing cabling, thank you for picking the 8P8C modular system; it's significantly better - both in cost and ease of assembly - than anything I'm aware of existing at that time.
While there are some electrical issues with the 8P8C, the connector has been flexible enough that, with minor design changes, it has kept up as line speeds have increased.
As a note, there are several vendors that make push-thru modular connectors (as in the wire extends beyond the front of the connector shell and the crimper trims it, much like how the cut blade on a punch tool works), which completely eliminate the need to accurately trim and face the wire ends - a huge time/frustration savings."
The fix was installing a 10BASE-T/100BASE-TX switch, installing 100BASE-TX NICs in the servers, and breaking up their 10BASE2/10BASE5 networks into smaller segments, each with a media converter connecting it to the switch. An unmanaged 16/24-port 10/100 switch was "only" a couple hundred bucks back then.
Yep. Very fond of the "EZ RJ 45" line in particular. The strain relief that's built into them does not interfere with loading the cable and actually gets a good bite into the jacket once crimped.
The other advantage I've found is that it's harder to accidentally make defective cable ends. Trying to trim and face yourself, you run the risk of one of the lines being too short or getting pulled back during the crimp and not mating well with the lug. The EZ RJ 45 effectively eliminates this, as there is always wire under the lug during the crimp.
EZ RJ 45 does solve all of these issues, and makes for a reliable cable 100% of the time.
Which brings me to one thing that puzzled me about the video. In the part starting from 10:37 https://youtu.be/f8PP5IHsL8Y?t=637 it describes STARLAN as specifying two pairs of wires, one for transmit and one for receive. But naturally you can't really have full duplex without a switch at the other end of the wire; and yet the STARLAN "Network Extension Unit" is described as a hub (and indeed it seems that the first Ethernet switches/multiport bridges didn't ship until 1989-1990). So when at 12:10 https://youtu.be/f8PP5IHsL8Y?t=730 the video seems to suggest that STARLAN being half-duplex was a step back from the original plan, I don't see how that can be the case: I assume that first-generation STARLAN was never intended to provide full duplex, and the two pairs of wires were just reserved for future upgrades. That seems compatible with what Richard Bennett says from 12:29. That's still interesting, though, because it suggests that the STARLAN contributors must have had switches in mind back in 1983.
Here is some interesting documentation from the original StarLAN Hub -
https://vtda.org/docs/computing/AT&T/AT&T_Starlan_10_Hub.pdf
There is also some neat and ancient documentation still lovingly preserved on the NCR website -
https://onlinehelp.ncr.com/Retail/Workstations/Wiring/Ethern...
- late 70s, rs232 terminals. well, it kind of worked. i spent a lot of time under the desk
- early 80s, frozen hose ethernet - i remember not being able to get a sun workstation to sit on the desk because the cable lifted it off too much
- late 80s, coax ethernet - all sorts of termination and other problems
- 90s twisted pair and hubs - things started working as they should.
- a bit later cheap switches came along, but still lots of wires at the desk
- now, wireless bliss.
- don't talk to me about token ring.
But of course I don't do this stuff any more.
I briefly used "thin" coax before the transition to IP cameras; it was a basic NTSC signal into a PCI capture card. 12/24V was pumped over a secondary cable attached to the main coax (siamese).
We were digital plumbers!
Then came 1000BASE-T, where the “base” is kind of a misnomer; it is actually closer to SHDSL. Each pair is essentially a separate full-duplex 250Mbps link with active echo cancellation, and the four of them are combined to create one 1Gbps link.
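The arithmetic behind those figures can be sketched quickly. The symbol rate and bits-per-symbol values below come from the 1000BASE-T signaling scheme (PAM-5 at 125 Mbaud on each of the four pairs); the script itself is just an illustrative back-of-the-envelope check:

```python
# Back-of-the-envelope check of the 1000BASE-T numbers above.
# All four pairs transmit simultaneously in both directions (hence the
# echo cancellation), using PAM-5 symbols at 125 Mbaud, where each
# symbol carries 2 data bits per pair (the rest is coding redundancy).
symbol_rate_hz = 125_000_000    # 125 Mbaud per pair
data_bits_per_symbol = 2        # usable bits per PAM-5 symbol
pairs = 4

per_pair_bps = symbol_rate_hz * data_bits_per_symbol  # 250 Mbps per pair
total_bps = per_pair_bps * pairs                       # 1 Gbps aggregate

print(per_pair_bps)  # 250000000
print(total_bps)     # 1000000000
```

Since each pair is full duplex on its own, the link gets 1 Gbps in each direction without needing extra pairs for the return path.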
If you're on a desktop browser instead of a smartphone, you can use F12 Dev Tools to inspect the parent post[1] elements to see that the original timestamp is still there as "2024-12-26T05:23:25 1735190605" ... which is actually about 4 days ago.
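The two values in that attribute do agree with each other: the Unix epoch number decodes to the same instant as the ISO timestamp. A quick sketch, using the epoch value quoted above:

```python
from datetime import datetime, timezone

# The timestamp attribute pairs an ISO-8601 string with a Unix epoch
# value; converting the epoch in UTC should reproduce the ISO string.
epoch = 1735190605
decoded = datetime.fromtimestamp(epoch, tz=timezone.utc)

print(decoded.isoformat())  # 2024-12-26T05:23:25+00:00
```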
Some search engines (Bing) still have a cache of this thread we're in from 4 days ago.
https://imgur.com/a/example-hacker-news-thread-put-back-into...
If you click on that 4-day-old thread, you come here, where all the comments say "x hours ago". It's confusing.