Small thing: I just checked Amazon.com: https://www.amazon.com/s?k=thunderbolt+25G&crid=2RHL4ZJL96Z9...
I cannot find anything for less than 285 USD. The blog post gave a price of 174 USD. I have no reason to disbelieve the author, but it's a bummer to see the current price is 110 USD higher!
I think, tragically, the blog post has caused this price increase.
The offers on Amazon are most likely all drop shippers trying to gauge a price that works for them.
You might have better luck ordering directly from China for a fraction of the price: https://detail.1688.com/offer/836680468489.html
In my experience, the cheap eBay MLX cards are DellEMC/HPE/etc OEM cards. However I also encountered zero problems cross-flashing those cards back to generic Mellanox firmware. I'm running several of those cross-flashed CX-4 Lx cards going on six or seven years now and they've been totally bulletproof.
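In case it helps anyone: the cross-flash itself is just a couple of mstflint commands (from the Mellanox/NVIDIA firmware tools). A rough sketch, not a recipe -- the PCI address (01:00.0) and firmware filename are placeholders, and you need the stock Mellanox image matching your exact card:

```shell
# Query the current (OEM) firmware version and PSID first:
mstflint -d 01:00.0 query

# Burn the generic Mellanox image; -allow_psid_change is needed
# because the OEM PSID differs from the stock one:
mstflint -d 01:00.0 -i fw-ConnectX4Lx-rel-generic.bin -allow_psid_change burn
```

Reboot afterward so the card comes up on the new firmware.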
I'm going to try a couple other fan assisted cooling options, as I'd like to keep the setup reasonably compact.
I just ran fiber to my desk and I have a more expensive QNAP unit that does 10G SFP+, but this will let me max out the connection to my NAS.
Although I managed to panic the kernel a couple of times without the extra heatsinks on...
What also may not work are Dell rNDC cards. They look like they have OCP 2.0 type 1 connectors, but may not quite fit (please correct me if I'm wrong). They do however have a nice cooling solution, which could be retrofitted to one of the OCP 2.0 cards.
I've also ordered a couple of Chelsio T6225-OCP cards out of curiosity. These should fit in the PX adapter but require a 3rd-party driver on macOS (which then supports jumbo frames, etc.)
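Once a driver exposes the interface, jumbo frames on macOS are just an MTU setting. A sketch -- `en7` is a placeholder for whatever name the card gets:

```shell
# List interfaces to find the card's device name:
networksetup -listallhardwareports

# Set a 9000-byte MTU on it (and verify):
sudo networksetup -setMTU en7 9000
networksetup -getMTU en7
```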
What also fits physically is a Broadcom BCM957304M3040C, but there are no drivers on macOS, and I couldn't get the firmware updated on Linux either.
Pic of a previous cx3 (10 gig on tb3) setup: https://habrastorage.org/r/w780/getpro/habr/upload_files/d3c...
10 gig can saturate at full line rate; 25G, in my experience, rarely even reaches the ~20G the author observed.
Really: because I can, and it is fun. I upgraded my home LAN to 10G because used 10G hardware is cheap (and now 25G is entering the same price range).
https://support.apple.com/guide/mac-help/ip-thunderbolt-conn... etc
You'd also mostly be limited to short cables (1-2m) and a ring topology.
After it happened 3-4 times, I started debugging. It turned out that we usually get at least a bit of sunlight around noon, as it burns away the morning clouds. And my Thunderbolt box was in direct sunlight, and eventually started overheating.
And a Zoom restart made it fall back onto the Wifi connection instead of wired.
I fixed that by adding a small USB-powered fan to the Thunderbolt box as a temporary workaround. I just realized that it's been like this for the last 3 years: https://pics.ealex.net/s/overheat
Also, if you remember where you saw that logo, please let me know!
I have a 10 Gbit dual-port card in a Lenovo mini PC. There is no normal way to get any heat out of there, so I put a small 12 V radial fan in as support. It works great at 5 V: silent and cool. It is a fan, though, so it might not suit your purpose.
Edit: forgot it isn't "true" PCIe but tunneled.
The placement is mostly determined by the design of the OCP 2.0 connector. OCP 3.0 has a connector at the short edge of the card, which allows exposing/extending the heat sink directly to the outer case.
If somebody has the talent, designing a Thunderbolt 5 adapter for OCP 3.0 cards could be a worthwhile project.
As a stop-gap, I'd see if there was any way to get airflow into the case - I'd expect even a tiny fan would do much more than those two large heatsinks stuck onto the case (since the case itself has no thermal connection to the chip heatsink).
Sure, one can buy nice Ethernet cards and cables, but the reality is that if you grab a random laptop/desktop from Best Buy and a cable, you are looking at 2.5 Gb/s at best.
It all comes down to performance per Watt, the availability of cheap switching gear, and the actual utility in an office / home environment.
For 10 Gbps, cabling can be an issue. Existing "RJ45"-style Cat 6 cables could still work, but maybe not all of them.
Higher speeds will most likely demand a switch to fiber (for anything longer than a few meters) or Twinax DAC (for inter-device connects). Since Wifi already provides higher speeds, one may be inclined to upgrade just for that (because at some point, Wireless becomes Wired, too).
That change comes with the complexity of running new cabling, fiber splicing, worrying about different connectors (SFP+, SFP28, SFP56, QSFP28, ...), incompatible transceiver certifications, vendor lock-in, etc. Not a problem in the datacenter, but try to explain this to a layman.
Lastly, without a faster pipe to the Internet, what can you do other than NAS and AI? The computers will still get faster chips but most folks won't be able to make use of the bandwidth because they're still stuck on 1Gbps Internet or less.
But that will change. Swiss Init7 has shown that 25 Gbps Internet at home is not only feasible but also affordable, and China seems to be adding lots of 10G, and fiber in general.
Fun times ahead.
And while not every cat6 run will do 10, it would still be worth a shot. And devices aren't even using 5; they're using even less.
Not to mention that cat8 will happily do 40 Gbps, as long as the run from your switch to your end devices stays within 30 meters.
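Worth checking before re-cabling anything. On Linux, two commands tell you what an existing run actually does -- `eth0` and the iperf3 server address below are placeholders for your setup:

```shell
# What rate did the NIC negotiate over the installed cable?
ethtool eth0 | grep -E 'Speed|Link detected'

# Then measure real end-to-end throughput against a host
# running 'iperf3 -s' on the other side of the cable:
iperf3 -c 192.168.1.10 -t 30
```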
For 10 Gbps I find it simpler and cheaper to use fiber or DACs, but motherboards don't provide SFP+, only RJ45 ports. Above 10 Gbps, copper is a no-go. SFP28 and above would be nice to have on motherboards, but that's a dream with almost zero chance of happening. For most people RJ45 + WiFi 7 is good enough; computer manufacturers will not add SFP+ or SFP28 for a small minority of users.
Practically speaking, a lot of the transfer speed advertised by wifi is marketing hogwash barely backed by reality, especially in congested environments.
> Sure, one can buy nice Ethernet cards and cables, but the reality is that if you grab a random laptop/desktop from Best Buy and a cable, you are looking at 2.5 Gb/s at best.
For both laptops and desktops: PCIe lanes. Intel doesn't provide many, so manufacturers don't want to permanently spend valuable lanes on capabilities most people never need.
For laptops in particular: power draw. The faster you push copper, the more power you need. And laptops have even fewer PCIe lanes available to spare.
For desktops, it's a question of market demand. Again, most applications don't need ultra-high transfer rates, and most household connectivity is DSL or (G)PON, so 1 Gbit/s is enough to max out the uplink. The few users who do need higher rates can always install a PCIe card, especially as there is a multitude of options for high-bandwidth connectivity.
Yes but a hogwash of several gigabits sometimes does give you real-world performance of more than a gigabit.
> Intel doesn't provide many lanes, so manufacturers don't want to waste valuable lanes permanently for capabilities most people don't ever need.
It's been years since a single lane could do 10 Gbps, and quite a few more since one could do 5 Gbps.
Also, don't Ethernet ports tend to be fed by the chipset? So they don't really take up CPU lanes.
Servers had a reason to spend for the 10G, 25G and 40G cards which used 4 lanes.
There are 10 Gigabit chips that can run off of one PCIe 4.0 lane now, and the 2.5G and 5G speeds are supported too (802.3bz).
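The per-lane math backs this up. A quick back-of-the-envelope: raw transfer rate times the 128b/130b encoding efficiency, ignoring protocol overhead:

```shell
# Usable per-lane bandwidth = transfer rate (GT/s) x 128/130 encoding efficiency.
awk 'BEGIN {
  eff = 128 / 130
  printf "PCIe 3.0 x1: %.2f Gbit/s\n", 8  * eff   # just short of 10G line rate
  printf "PCIe 4.0 x1: %.2f Gbit/s\n", 16 * eff   # comfortably fits 10 GbE
}'
```

So a Gen 3 lane only just misses 10 Gbit/s, which is why the single-lane 10G parts target PCIe 4.0.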
wifi is not faster.
However, Ethernet is not as critical as it used to be, even at the office. People like the convenience of laptops they can move around. Unless you are working from home, having a dedicated office space is now seen as a waste of space. If the wifi is good enough when you are in a meeting room or in your kitchen, there is no reason to plug in your laptop when you move somewhere else, especially if most connections go to the Internet and not the local network. In the workplace, most NAS have been replaced by OneDrive/GDrive; at home, NAS use has always been limited to a niche population: nerds/techies, photographers, music or video producers...
On consumer devices, I think part of the issue is that we’re still wedded to four-pair twisted copper as the physical medium. That worked well for Gigabit Ethernet, but once you push to 5 or 10 Gb/s it becomes inherently expensive. Twisted pair is simply a poor medium at those data rates, so you end up needing a large amount of complex silicon to compensate for attenuation, crosstalk, and noise.
That's doable but the double whammy is that most people use the network for 'internet' and 1G is simply more than enough, 10G therefore becomes quite niche so there's no enormous volume to overcome the inherent issues at low cost.
Until motherboards include SFP ports, it's probably not worth the effort at all in a home setting; external adaptors like the one presented here are unreliable and add several ms of latency.
A micro-ATX motherboard with on-board 2xSFP28 (Intel E810):
* https://download-2.msi.com/archive/mnu_exe/server/D3052-data...
* https://www.techradar.com/pro/this-amd-motherboard-has-a-uni...
Where did you get the "several ms of latency" figure from? I have not measured an external card, but maybe I should... because the cards themselves have latency in the range of microseconds, not milliseconds.
Made me chuckle.
DisplayPort 2.1 UHBR20 is 80 Gbps.
USB4 maxes out at 80 Gbps.
As you can see, 1 Gbps Ethernet is starting to look like stone-age technology. 2.5 Gbps becoming the next step seems a bit strange when we were jumping orders of magnitude every few years before. But Ethernet also tends to be used on longer cables than DP or USB, and trying to push it much faster results in exponentially increasing losses to resistance and radiation: the cable starts acting like an antenna even with the twisted pairs. Fiber optics are much better suited to high speed over long distance, but too expensive and fragile for consumer use.
`sudo su - <user>` also seems easier for me to type than `sudo -i -u <user>`
But this is a cool solution
If you're using an adapter card to add Thunderbolt functionality, then your mainboard needs to support that, and the card must be connected to a PCIe bus that's wired to the Intel PCH, not to the CPU.
> All other 25 GbE adapter solutions I’ve found so far ... have a spinning fan. ... the biggest downside of the PX adapter is that it gets really hot, like not touchable hot. Sometimes, either the network connection silently disappeared or (sadly) my Mac crashed with a kernel panic in the network driver. ... Other than that, the PX seems to do the job
All I want to do is copy all the photos and videos from my phone to my computer, but I have to babysit the process and decide whether to skip or retry each failed copy. And it is so slow. USB 2.0 slow. I guess everybody has given up on the idea of saving their photos and videos over USB?
Many phones indeed only support USB 2.0. For example the base iPhone 17. The Pro does support USB 3.2, however.
> I guess everybody has given up on the idea of saving their photos and videos over USB?
Correct.
My last two phones in the last 4 years had at least USB 3.1
As wireless charging never quite reached the level hoped for – see AirPower – and Google/Apple seemingly bought a bunch of haptic audio startups and never did anything with them, I figure that idea died... but they never cared enough to make sure the USB port remained top end.
I’m using Dropbox for syncing photos from phone to Linux laptop, and mounting the SDcard locally for cameras, so this is a guess.
Do you import originals or do you have the "most compatible" setting turned on?
I always assumed Apple simply hated people who use Windows/Linux desktops, so the occasional broken file was caused by the driver being only sort-of working; if people complain, well, they can fuck off and pay for iCloud or a Mac. After upgrading to a 15 Pro, which has 10 Gbps USB-C, it still took forever to import photos, and the occasional broken photo kept happening. After some research, it turns out the speed was limited by the phone converting the .heic originals into .jpg when transferring to a desktop. Not only does that limit the speed, it also degrades the quality of the photos and deletes a bunch of metadata.
After changing the setting to export original files, the transfer is much faster and I haven't had a single broken file/video. The files are also higher quality with a smaller file size, although .heic is fairly computationally demanding.
Idk about Android but I suspect it might have a similar behavior
Until USB has monthly service business to compete with cloud storage revenue.
With TB5 and deep pockets, you could also benchmark it against a setup with dedicated TB5 enclosures (e.g., the Mercury Helios 5S).
TB5 has PCIe 4.0 x4 instead of PCIe 3.0 x4 -- that should give you 50 GbE half-duplex instead of 25 GbE. You would need a different network card though (ConnectX-5, for example).
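Back-of-the-envelope numbers for the x4 links (lanes x transfer rate x 128/130 encoding efficiency, protocol overhead ignored):

```shell
awk 'BEGIN {
  eff = 128 / 130
  printf "PCIe 3.0 x4: %.1f Gbit/s\n", 4 * 8  * eff   # enough for 25 GbE
  printf "PCIe 4.0 x4: %.1f Gbit/s\n", 4 * 16 * eff   # enough for 50 GbE
}'
```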
Pragmatically though, you could also aggregate (bond) multiple 25 GbE network card ports (with Mac Studio, you have up to 6 Thunderbolt buses, so more than enough to saturate a 100GbE connection).
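On macOS the bonding itself can be done from the command line. A sketch, assuming the two 25 GbE ports came up as en5 and en6 (names are placeholders) and the switch side is configured for LACP:

```shell
# Create an 802.3ad (LACP) bond and add the member interfaces:
sudo ifconfig bond0 create
sudo ifconfig bond0 bonddev en5
sudo ifconfig bond0 bonddev en6

# Check member and link state:
ifconfig bond0
```

Note the usual bonding caveat: a single TCP flow stays on one member, so you only see the aggregate with many parallel streams.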
I had to do a double-take when it mentioned Kelvin, since that is physically impossible.
It’s a little bit funny/coy to use it mixed with Celsius.
It 'reduces it by' ... not reduces it TO