Now let's address latency - that entire article only mentioned the word once and it looks like it was by accident.
Latency isn't cool, but it is why your Teams/Zoom/whatevs call sounds a bit odd. You are probably used to the weird over-talking episodes you get when a sound stream goes somewhat async. You put up with it, but with modern gear and connectivity you really shouldn't have to.
A decent-quality sound stream consumes roughly 256 KB/s (yes: a quarter of a megabyte per second - not much; 48 kHz 16-bit stereo PCM works out to about 192 KB/s), but if latency strays much beyond about 30 ms you'll notice it, and by the time it reaches a second it will really get on your nerves. To be honest, half a second is already quite annoying.
I can easily measure path latency to a random external system with ping to get a baseline for my internet connection; here it is about 10 ms to Quad9. I am on wifi, and my connection goes through two switches, a router, and a DSL FTTC modem. That leaves at least 20 ms (which is luxury) for processing.
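If you want to script that baseline instead of eyeballing ping output, here's a minimal sketch (assumes Linux iputils ping output format; 9.9.9.9 is Quad9):

```python
import re
import subprocess

def ping_rtt(host: str = "9.9.9.9", count: int = 10) -> float:
    """Return the average round-trip time to `host` in milliseconds."""
    out = subprocess.run(
        ["ping", "-c", str(count), host],
        capture_output=True, text=True, check=True,
    ).stdout
    # iputils prints: rtt min/avg/max/mdev = 9.1/10.2/12.3/0.8 ms
    match = re.search(r"= [\d.]+/([\d.]+)/", out)
    if match is None:
        raise RuntimeError("could not parse ping output")
    return float(match.group(1))

if __name__ == "__main__":
    avg = ping_rtt()
    print(f"baseline path latency: {avg:.1f} ms")
    print(f"headroom under a 30 ms budget: {30 - avg:.1f} ms")
```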
Perhaps we should insist on better formatting from above. That's probably a post-processing thing for ... gAI 8)
Modern home/office internet connections are mostly optimised for throughput but rarely for latency - that's the province of HFT.
You see tales from the kiddies who fixate on "ping" times when trying to optimize their gaming experience. Well, that's nice, but when on earth do you shoot someone with ICMP?
I can remember calling relos in Australia in the 1970s/80s over old school satellite links from the UK or Germany and it nearly needed radio style conventions.
I've been doing VoIP for quite a while and it is so crap watching people put up with shit sound quality and latency on a Teams/Zoom/etc call as the "new" normal. I wheel out Wireshark and pretend to watch it and then fix up the wifi link or the routing (Teams outside VPN - split tunnelling) or whatever.
I think Microsoft took bandwidth efficiency a bit too far here.
If you really want to have some fun, come out to the countryside with me, where 4G is the only option and 120ms is the best end-to-end latency you're going to get. Plus your geolocation puts you half the nation away from where you actually are, which only compounds the problem.
On the other hand I now have an acquired expertise in making applications that are extremely tolerant of high latency environments.
That's basically me. 80ms on LTE or 25ms on Starlink.
Used to be 1200-1500ms on BGAN, 160ms on 3G.
It has all the latency associated with cell networks combined with all the latency of routing all traffic through AWS.
As an added bonus, about 10% of sites block you outright because they assume a request coming from an AWS origin IP is a bot.
Maybe you are thinking of working around bufferbloat?
Your network needs to be running end-to-end on 5G-A (3GPP Rel 18) with full SA (Standalone) using NR (New Radio) only. Right now, AFAIK, only mobile networks in China have managed to run that (along with turning on VoNR). Most other networks are still behind in deployment and switching.
That seems to be the theme across all consumer electronics as well. For the average person, mid-range phones are good enough, bargain-bin laptops are good enough, almost any TV you can buy today is good enough. People may of course desire higher quality, and specific segments will have higher needs, but things being good enough may be a problem for tech and infra companies in the next decade.
I say this because we currently use an old 2014 phone as a house phone for the family. It's set to 2G to take calls, and switches to 4g for the actual voice call. We only have to charge it once every 2-3 weeks, if not longer. (Old Samsung phones also had Ultra Power Saving mode which helps with that)
2G is being shut down, though. Once that happens and it's forced onto 4G all the time, we'll have to charge it more often. And that sucks. There isn't a single new phone on the market that lasts as long as this old phone with an old battery.
The same principle is why I have my modern personal phone set to 4G instead of 5G. The energy savings are very noticeable.
Just a thought when it comes time to change out that device.
(Alternately, if you don't have the option to use VoWiFi, you could take literally any phone/tablet/etc.; install a softphone app on it; get a cheap number from a VoIP provider and connect to it; and leave the app running, the wi-fi on, and the cellular radio off. At that point, the device doesn't even need a[n e]SIM card!)
4K video reaches you only because it's compressed to crap. It's "good enough" until it's not. 360p TV was good enough at some point too.
Yes, but I assume when they say the "consumer" they mean everyone who isn't us. Most people I've had in my home couldn't tell you the difference between a 4K Blu-ray and 1080p at two-odd meters on a 65" panel.
I can be looking at a screen full of compression artifacts that seem like the most obvious thing I've ever seen, and I'll be surrounded by people going "what are you talking about?"
Even if I can get them to notice, the response is almost always the same.
"Oh...yeah ok I guess I can see. I just assumed it supposed to look like it shrug its just not noticable to me"
Streaming video gets compressed to crap. People are being forced to believe that it is better to have 100% of crap provided in real time instead of waiting just a few extra moments to have the best possible experience.
Here is a trick for movie nights with the family: choose the movie you want to watch, start your torrent and tell everyone "let's go make popcorn!" The 5-10 minutes will get enough of the movie downloaded so you will be able to enjoy a high quality video.
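(The arithmetic works out: on a 100 Mb/s line, ten minutes of popcorn-making pulls roughly 7.5 GB - most of a high-bitrate film - and the remainder downloads faster than you watch it.)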
Once every few months I'm in a situation where I want to watch YouTube on mobile or connect my laptop to mobile hotspot, but then I think "I don't need to be terminally online", or in worst-case scenario, I just pay a little bit extra for the data I use, but again, it happens extremely rarely. BTW quite often my phone randomly loses connection, but then I think "eh, god is telling me to interact with the physical world for five minutes".
At home though, it's a different situation, I need to have good internet connection. Currently I have 4Gbps both ways, and I'm thinking of changing to a cheaper plan, because I can't see myself benefitting from anything more than 1Gbps.
In any case though, my internet connection situation is definitely "good enough", and I do not need any upgrades.
Businesses should learn to earn just enough to get by.
There are no words to describe how stupid this is.
The trick is that the entity that owns the wires has to provide/upgrade the network at cost, and anyone has the right to run a telco on top of the network.
This creates competition for things like pricing plans, and financial incentives for the companies operating in the space to compete on their ability to build out / upgrade the network (or to not do that, but provide cheaper service).
Common carriers become the barrier to network upgrades. Always. Without fail. Monopolies are a bad idea, whether state or privately owned.
Let me give you 2 examples.
In Australia we had Telstra (formerly Telecom, which was split out of the old PMG alongside Australia Post). Telstra would wholesale ADSL to the other carriers to resell, and it stank. The carriers couldn't justify price increases to fund network upgrades, and the whole thing stagnated.
We had a market review, and Telstra was legislatively forced to sell ULL (unconditioned local loop) access instead. So the non-monopolists were now placing their own hardware in Telstra exchanges, which they could upgrade. Which they did. Once they could sell an upgrade (ADSL2+), they could also price in the cost of upgrading peering and transit. We saw a huge increase in network speeds. We later forgot this lesson and created the NBN. NBNCo does not sell ULL, and the pennies that ISPs can charge on top of it are causing stagnation again.
ULL works way better than common carriage. In Singapore the government just runs the glass, and there is competition between carriers to provide faster GPON: 2 gig, 10 gig, 100 gig, whatever. It's just a hardware upgrade away.
10 years from now Australia will realise it screwed up with NBNCo. Again. But we won't be able to go back to ULL as easily as we did in the past: NBN's fibre isn't built for it. We will have to tear out splitters and install glass.
The actual result is worse than you suggest. A carrier had to take the government/NBNCo to court just to get permission to build residential fibre in apartment buildings over the monopoly's objections. We have NBNCo strategically overbuilding other fibre providers and shutting them down (it's an offence to compete with the NBN, with penalties on the order of importing a couple million bucks of cocaine). It's an absolute handbrake on competition and network upgrades. Innovation is only happening in the gaps left behind by the common carrier.
If the common carrier is doing all the work, what’s the point of the companies on top? What do they add to the system besides cost?
Might as well get rid of them and have a national carrier.
I was stuck with a common carrier for years. I could pick different ISPs, which offered different prices and types of support, but they all used the same connection... which was only stable at lower speeds.
The trick is that this is essentially wireless spectrum. Which can be leased for limited periods of time and can easily allow for a more competitive environment than what natural monopolies allow for.
It's also possible to separate the infrastructure operators from the backhaul operators and thus entirely avoid the capital-investment barrier facing upstarts. When that's done, there's even less reason to tolerate monopolistic practices on either side.
Doesn't make much sense to me to abstract away most of the parts where an entity could build up its competitive advantage and then pretend like healthy competition could be built on top.
Imagine if one entity did all the t-shirt manufacturing globally but then you congratulated yourself for creating a market based on altered colors and what is printed on top of these t-shirts.
Same for mobile infrastructure would be great as well.
Regulators have other ways to incentivize quality/pricing and can mandate competition at levels of the stack other than the underlying infrastructure.
I wouldn't expect that "only a single network" is the right model for all locations, but it will be for some locations, so you need a regulatory framework that ensures quality/cost in the case of a single network anyway.
(I know that Helium's original IoT network mostly failed due to lack of product-market fit, but I don't know about their 5G stuff)
Network providers get paid for the bandwidth that flows over their nodes, but the protocol also allows for economically incentivizing network expansion and punishing congestion with subsidization / taxing.
You can unify everyone under the same "network", but the infrastructure providers running it are diverse and in competition.
Make it easy for a new wireless company to spawn while maintaining the infrastructure everyone needs.
This article is essentially arguing innovation is dead in this space and there is no need for bandwidth-related improvements. At the same time, there is no 5G provider without a high-speed cap or throttling for hot spots. What would happen if enough people switched to 5G boxes over cable? Maybe T-Mobile can compete with Comcast?
1. It must be well-run.
2. It must be guaranteed to continue to be well-run.
3. If someone can do it better, they must be allowed to do so - and then their improvements have to be folded into the network somehow if there is to be only one network.
Having 5 competing infrastructures trying to blanket the country means you end up with a ton of waste, and the most populated places get priority as the carriers constantly fight each other for the most valuable markets while neglecting the less profitable fringe.
Nationalizing telecom is a great way to reward the tech oligarchs by making the capital investments in giant data centers more valuable. If 10 gig can be delivered cheaply over the air, those hyperscale data centers will end up obsolete if technology continues to advance at the current pace. Why would the companies that represent 30% of the stock markets value want that?
Open-world games such as Cyberpunk 2077 already have hours-long downloads for some users. That's when you load the whole world as one download. Doing it incrementally is worse. Microsoft Flight Simulator 2024 can pull 100 to 200 Mb/sec from the asset servers.
They're just flying over the world, without much ground level detail. Metaverse clients go further. My Second Life client, Sharpview, will download 400Mb/s of content, sustained, if you get on a motorcycle and go zooming around Second Life. The content is coming from AWS via Akamai caches, which can deliver content at such rates. If less bandwidth is available, things are blurry, but it still works. The level of asset detail is such that you can stop driving, go into a convenience store, and read the labels on the items.
GTA 6 multiplayer is coming. That's going to need bandwidth.
The Unreal Engine 5 demo, "The Matrix Awakens", is a download of more than a terabyte. That's before decompression.
The CEO of Intel, during the metaverse boom, said that about 1000x more compute and bandwidth was needed to do a Ready Player One / Matrix quality metaverse. It's not quite that bad.
For my area, all the mobile-network home internet options offer plenty of speed, but the data caps are a dealbreaker.
Everyone I know still uses their cable/FTTH as their main internet, and mobile network as a hotspot if their main ISP goes down.
The PS5 and Xbox Series S/X both shipped with internal storage incapable of holding a terabyte at the launch of The Matrix Awakens. Not sure where you are getting that info from, but the demo was about 30 GB on disk on both the Series S/X and PS5, and the later packaged PC release is less than 20 GB.
The full PC development system might total a TB with all Unreal Engine, Metahumans, City Sample packs, Matrix Awakens code and assets (audio, mocap, etc) but even then the consumer download will be around the 20-30GB size as noted above.
The reason why there isn't as much demand for mobile data as they want is because the carriers have horrendously overpriced it, because they want a business model where they get paid more when you use your phone more. Most consumers work around this business model by just... not using mobile data. Either by downloading everything in advance or deliberately avoiding data-hungry things like video streaming. e.g. I have no interest in paying 10 cents to watch a YouTube video when I'm out of the house, so I'm not going to watch YouTube.
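(Illustrative numbers only: a ten-minute video at ~1.5 Mb/s is about 110 MB, so at the roughly $1/GB that metered mobile data often costs, that's the ~10 cents in question.)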
There's a very old article that I can't find anymore which predicted the death of satellite phones, airplane phones, and weirdly enough, 3G; because they were built on the idea of taking places that traditionally don't have network connectivity, and then selling connectivity at exorbitant prices, on the hopes that people desperate for connectivity will pay those prices[1]. This doesn't scale. Obviously 3G did not fail, but it avoided failure predominantly because network access got cheaper - not because there was a hidden, untapped market of people who were going to spend tens of dollars per megabyte just so they wouldn't have to hunt for a phone jack to send an e-mail from their laptop[2].
I get the same vibes from 5G. Oh, yes, sure, we can treat 5G like a landline now and just stream massive amounts of data to it with low latency, but that's a scam. The kinds of scenarios they were pitching, like factories running a bunch of sensors off of 5G, were already possible with properly-spec'd Wi-Fi access points[3]. Everyone in 5G thought they could sell us the same network again but for more money.
[0] While I'm ranting about mobile data usage, I would like to point out that either Android's data usage accounting has gotten significantly worse, or Google Fi's carrier accounting is lying, because they're now consistently about 100-200MB out of sync by the end of the month. Didn't have this problem when I was using an LG G7 ThinQ, but my Pixel 8 Pro does this constantly.
[1] Which it called "permanet", in contrast to the "nearernet" strategy of just waiting until you have a cheap connection and sending everything then.
[2] I'm told similar economics are why you can't buy laptops with cellular modems in them. The licensing agreements that cover cellular SEP only require FRAND pricing on phones and tablets, so only phones and tablets can get affordable cell modems, and Qualcomm treats everything else as a permanet play.
[3] Hell, there's even a 5G spec for "license-assisted access", i.e. spilling 5G radio transmissions into the ISM bands that Wi-Fi normally occupies, so it's literally just weirdly shaped Wi-Fi at this point.
I don't know what you mean. My current laptop (Lenovo L13) has a cellular modem that I don't need. And I am certainly a cost conscious buyer. It's also not the first time that this happened as well.
It's getting pretty reasonable these days, with download speeds reaching 0.5 Gbit/s per link, and latency acceptable at ~20 ms.
The main challenge is the upload speed; pretty much all the ISPs allocate much more spectrum for download rather than upload. If we could improve one thing with future wireless tech, I think upload would be a great candidate.
For 5G, a lot of the spectrum is statically split into downstream and upstream of equal bandwidth. But equal radio bandwidth doesn't mean equal data rates: downstream speeds are typically higher because multiplexing happens at one fixed point, rather than across multiple, potentially moving transmitters.
Now, it's possible that raw GB/s with unobstructed LoS is the underlying optimization metric driving these standards, but I would assume it's something different (e.g. tower capex per connected user).
I can grant that typical usage of wireless bandwidth doesn't require more than 10 Mbps. So what does "even faster" buy you?
The answer is actually pretty simple: for any given slice of spectrum, there is a limited amount of data that can be transmitted. The more people chatting to a tower, the less bandwidth is available to each. By having a transmission standard with theoretical capacity in the gigabits or tens of gigabits per second, you make it possible to serve 10, 100, or 1,000 times more customers their 10 Mbps of content. It makes rollout cheaper for the carrier and gives a better experience for the end users.
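A back-of-the-envelope sketch of that point (the per-cell capacity figures below are illustrative assumptions, not measurements):

```python
# Toy model: one cell's aggregate air-link capacity, shared equally
# among active users. Capacities here are illustrative assumptions.
TARGET_MBPS = 10  # what each user actually needs

for cell_gbps in (0.3, 2.0, 10.0):  # e.g. LTE-ish, 5G-ish, hypothetical
    users_served = int(cell_gbps * 1000 / TARGET_MBPS)
    print(f"{cell_gbps:>4} Gb/s of cell capacity -> "
          f"{users_served:>4} users at {TARGET_MBPS} Mb/s each")
```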
What I find fascinating is that in a lot of situations mobile phones are now way faster than wired internet for lots of people. My parents never upgraded their home internet despite fibre being available; they have 80 Mbit via DSL. Their phones, however, thanks to regular upgrades, now have unlimited 5G and are almost 10 times as fast as their home internet.
It doesn't really change their argument, but to be fair, Netflix has some of the lowest picture quality of any major streaming service on the market; their version of "high-end 4K" is so heavily compressed that it routinely looks worse than a 15-year-old 1080p Blu-ray.
"High-end" 4K video (assuming HEVC) should really be targeting 30 Mb/s average, with peaks up to 50 Mb/s. Not "15 Mb/s".
Also you can deliver well over 1 Gbps over coax or DSL with modern DOCSIS and G.fast respectively. But most countries have started dismantling copper wirelines.
…and we only pay for 500 MBit for my home fiber. (Granted, also 500 Mbit upload.)
(T-Mobile, Southern California)
Is your T-Mobile underprovisioned? Where I am, T-Mobile 5G is 400Mbps at 2am, but slows to 5-10Mbps on weekdays at lunchtime and during rush hours, and on weekends when the bars are full.
Not to mention that the T-Mobile Home Internet router either locks up, or reboots itself at least twice a day.
I put up with the inconvenience because it's either $55 to T-Mobile, $100 to Verizon for even less 5G bandwidth, or $140 to the local cable company.
They're constantly running promotions: "get free smart glasses/video game systems/etc. if you sign up for gigabit." Turns out that gigabit is still way more than most people need, even if it's 2025 and you spend hours per day online.
10G internet doesn't make your streaming better, but downloads the latest game much faster. It makes for much less painful transfer of a VM image from a remote datacenter to a local machine.
Which is good and bad. The good part is that it makes it easier for the ISPs to provide -- most people won't be filling that 10G pipe, so you can offer 10G without it raising bandwidth usage much at all. You're just making remote workers really happy when they have to download a terabyte of data on a single, very rare occasion instead of it taking all day.
The bad part is that this comfort is harder to justify. Providing 10G to make life more comfortable the 1% of the time it comes into play still costs money.
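The arithmetic on that rare-but-painful day is simple (line rate only, ignoring protocol overhead and disk speed):

```python
def transfer_hours(size_tb: float, link_gbps: float) -> float:
    """Hours to move size_tb terabytes over a link_gbps link at line rate."""
    return size_tb * 8e12 / (link_gbps * 1e9) / 3600

for gbps in (0.1, 1, 10):
    print(f"1 TB over {gbps:>4} Gb/s link: {transfer_hours(1.0, gbps):.2f} h")
# 0.1 Gb/s -> ~22 h (all day), 1 Gb/s -> ~2.2 h, 10 Gb/s -> ~13 min
```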
I have never come remotely close to downloading anything else -- including games -- at 1Gbps.
The source side certainly has the available pipe, but most (all?) providers see little upside to allowing one client/connection to use that much bandwidth.
Only the newest routers do gigabit over wifi. If most of your devices are wireless, you'll need to make sure they all have wifi 6 or newer chips to use their full potential.
Even if upgrading your router is a one-time cost, it's still enough effort that most people won't bother.
> Consider a very brief history of airspeed in commercial air travel. Passenger aircraft today fly at around 900 kilometers per hour—and have continued to traverse the skies at the same airspeed range for the past five decades. Although supersonic passenger aircraft found a niche from the 1970s through the early 2000s with the Concorde, commercial supersonic transport is no longer available for the mainstream consumer marketplace today.
OK, "Bad Analogy Award of the Year" for that one. Traveling at supersonic speeds had some fundamental problems, primarily being that the energy required to travel at those speeds is so much more than for subsonic aircraft, and thus the price was much higher for supersonic travel, and the problem of sonic booms meant they were forbidden to travel over land. When the Concorde was in service, London to NYC flights were 10-20x more expensive on the Concorde compared to economy class on a conventional jet, meaning the ~4 hours saved flight time was only worth it for the richest (and folks just seeking the novelty of it). There are plenty of people that would still LOVE to fly the Concorde if the price were much cheaper.
That is, the fundamental variable cost of supersonic travel is much higher than for conventional jets (though that may be changing - I saw that pg posted recently that Boom has found a way to get rid of the sonic boom reaching the ground over land), while that's not true for next gen mobile tech, where it's primarily just the upfront investment cost that needs to be recouped.
There are plenty of things that *could* require more bandwidth than video, but it's not clear that a large number of people want to use any of them.
With mobile, I bet contention and poor signal are more of an issue. 5G is a noticeable improvement over LTE, and I am not sure they can do much better.
FYI, the air-link latency for LTE was given as 4-5 ms (FDD, as it's the best case here). The 5G improvement to 1 ms would require features (URLLC) that nobody implemented and nobody will: too expensive for too niche a market.
The latency in a cellular network mostly comes from the core network, not the radio link anymore. Even in 4G.
(telecom engineer, having worked on both 4G and 5G and recently out of the field)
I have never seen this. Where do I have to get 5G service to see these latencies?
If you search "5g latency", Google's AI answer says 1 ms, followed by another quote lifted from Thales Group™ claiming 4G was 20 ms and 5G is 1 ms.
Once you scroll past the automated attempts, you start getting real info.
Actual data is in the "SpeedTest Award Report" PDF, retrieved from https://www.speedtest.net/awards/united_states/ via https://www.speedtest.net/awards/reports/2024/2024_UnitedSta....
Spoiler: 23 ms median for the fastest provider, T-Mobile.
I can only speculate that 5G was saturated at the initial rollout, which led to congestion, and that it has now stabilized. But latency isn't only affected by distance and hops - congestion matters.
It also takes ages to back up my computer. 18 terabytes of data in my case, and that’s after pruning another 30 terabytes of data as “unnecessary” to back up.
Are you downloading AAA games or backing up your computer over mobile?
Also, I hope you're doing differential backups, in which case only the initial backup should be slow. Which it's always going to be for something gargantuan like 18 TB!
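For anyone unfamiliar with the idea, here is a toy sketch of how a differential pass decides what to re-send (hypothetical manifest format; real tools like rsync or restic chunk files and do this far more cleverly):

```python
import hashlib
import json
from pathlib import Path

def file_digest(path: Path) -> str:
    """Hash a file in 1 MB chunks so huge files don't blow out memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def changed_files(root: Path, manifest_path: Path) -> list[Path]:
    """Return files under `root` whose hash differs from the stored manifest."""
    old = json.loads(manifest_path.read_text()) if manifest_path.exists() else {}
    new, to_send = {}, []
    for p in sorted(root.rglob("*")):
        if p.is_file():
            digest = file_digest(p)
            new[str(p)] = digest
            if old.get(str(p)) != digest:
                to_send.append(p)  # only these need to go over the wire
    manifest_path.write_text(json.dumps(new))
    return to_send
```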
One of the issues with this 5G-vs-6G question is the long-term-evolution of it all - I have no idea when/where/if at all I will see improvements on my mobile devices from all the sub-improvements of 5G, or whether they're reserved for certain use cases only.
There are a ton of other inefficient allocations of spectrum^1, but not all spectrum is suitable for all purposes and the bands for cellular connectivity are highly sought after.
1: https://upload.wikimedia.org/wikipedia/commons/c/c7/United_S...
Lower frequencies have the advantage of longer distances and permeating through obstructions better. I suppose limited bandwidth and considerations of the number of devices coexisting is a limiting factor.
Basically, yes (if you take into account other considerations like radiated power, transmitter power consumption, multipath tolerance, Doppler-shift tolerance, and so on). Everything is a tradeoff. We could, e.g., use higher-order modulation, but that would raise the peak-to-average power ratio, meaning a less efficient transmitter. We could shorten the cyclic prefix, but that would reduce multipath tolerance. And so on.
Another important reason why higher frequencies are preferred is frequency reuse. Longer distance and penetration is not always an advantage for a mobile network. A lot of radio space is wasted in areas where the signal is too weak to be usable but strong enough to interfere with useful signals at the same frequency. In denser areas you want to cram in more base stations, and if the radiation is attenuated quickly with distance, you would need less spectrum space overall.
Exactly. When I was running WiFi for PyCon, I kept the radios lower (on tables) and the power levels at the lower end (especially for 2.4GHz, which a lot of devices still were limited to at the time). Human bodies do a good job of limiting the cell size and interference between adjacent APs in that model. I could count on at least a couple people every conference to track me down and tell me I needed to increase the power on the APs. ;-)
At the end of the day, there is a total speed limit in Mb/s per Hz of spectrum.
For example, in cities, with a high population density, you could theoretically have a single cell tower providing data for everyone.
However, the speed would be slow, as for a given slice of bandwidth the data rate is shared between everyone in the city.
Alternatively, one could have 100 towers, and then the data would only have to be shared by those within range. But for this to work, one of the design constraints is that a smaller range is beneficial, so that multiple towers do not interfere with each other.
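A rough way to put numbers on that tradeoff, using the Shannon limit C = B·log2(1 + SNR) with made-up inputs (real planning also has to handle inter-cell interference and frequency-reuse patterns):

```python
import math

def shannon_mbps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon capacity C = B * log2(1 + SNR), returned in Mb/s."""
    return bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10)) / 1e6

CITY_USERS = 100_000
BANDWIDTH_HZ = 100e6  # 100 MHz of spectrum, reused in every cell
cell_mbps = shannon_mbps(BANDWIDTH_HZ, snr_db=20)  # ~665 Mb/s per cell

for towers in (1, 10, 100):  # more towers -> the same spectrum reused more often
    per_user_kbps = cell_mbps * towers / CITY_USERS * 1000
    print(f"{towers:>3} towers -> {per_user_kbps:8.1f} kb/s per user")
```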
It just also supports other bands as well.
mmWave is a flop.
“Hey, this isn’t actually supposed to be used to get work done. Keep doing simple phone stuff!“
1. The whole point of video is that you need all of the data, in order
2. Video is the only thing, really, that requires that much data per second of sensory input.
Mobile devices aren't really getting more pixels. And hardware isn't improving enough that we have spare cycles for decoding/decrypting/rendering 16K video or whatever, and even if it did, the batteries couldn't handle it. It's obvious to me that we don't need more bandwidth. My phone would be instantly dead if I was downloading and processing data at a gigabit per second.
Other than downloading very large files (why?), I don't think we'll invent any new use cases that can make use of more bits per second.
Lord Kelvin, 1897
Investing in more bandwidth is a massive waste on the scale of billions of dollars when you consider that time and capital could be better spent improving existing coverage, lowering latency, and making the service more reliable. Creating a next gen standard that requires new hardware and new devices to support a use case that doesn't exist and can't materialize with today's devices is just silly.
L4S is on its way, and may finally be the beginning of the end for bufferbloat and congestion, and vendors of mobile devices are in an almost unique position of being able to roll out network stack changes en masse. And just for once, consumer incentives, vendor incentives and network operator incentives all align - and it's incremental, and lacking in incentives for bad actors.
See this blog entry: https://www.ietf.org/blog/banishing-bufferbloat/ for more on L4S and bufferbloat. And this: https://datatracker.ietf.org/meeting/105/materials/slides-10... for a proper technical deep dive.
The development of L4S has been a pincer operation across all levels of the network stack, integrating everything previously understood about latency and congestion in real networks, and one of the most impressive bits of network engineering I've seen in the history of the Internet.
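For the curious: on the wire, L4S traffic identifies itself with the ECT(1) codepoint in the IP header's two ECN bits (RFC 9331). A minimal Linux sketch of setting that mark on a UDP socket - illustrative only, since real L4S support means running a scalable congestion controller, not just flipping bits:

```python
import socket

# The ECN field is the low two bits of the former TOS byte;
# ECT(1) = 0b01 is the L4S identifier per RFC 9331.
ECT_1 = 0b01

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Mark outgoing packets as ECT(1) so L4S-aware queues can give them
# the low-latency treatment (Linux-specific; illustrative only).
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, ECT_1)
sock.sendto(b"ping", ("192.0.2.1", 9))  # TEST-NET-1 address, discard port
```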
[1]:https://www.nperf.com/en/map/GB/-/2012851.Three-Mobile/signa...
Not sure what's going on there.
For example, at the last investor day that AT&T held, they indicated [0:pdf] that their growth plans are in broadband fiber, not building more 5G capacity to serve a surge in traffic. Reading their charts and knowing how AT&T traditionally worked, I believe that they are going to try to cut the expense of running the 5G network via various optimizations and redirect capital heavily to go after the fiber broadband market instead, using convergence (ahem: price bundling) to win subscribers.
(I bet it really stuck in their craw that Comcast mastered the bundle so well that they even built a successful MVNO on the back of their Xfinity customer base and parked their tanks on AT&T's lawn while the latter was futzing about with HBO.)
However, bundling is a price compression game, not an ARPU growth game. If you start at $100 a month and then begin chipping away with autopay discounts, mobile convergence, free visa gift cards and all that nonsense, pretty soon you are selling broadband for $35 a month and can't make it work except with truly cruddy service, which leads to high churn rates. So we'll see how this turns out.
[0:pdf] https://investors.att.com/~/media/Files/A/ATT-IR-V2/reports-...
So it might make sense for operators to focus on developing fibre regardless.
I guess it's a nice quality filter, but still. I can't be the only one who feels like the mobile web experience has taken a nosedive over the last 10 years: we gained many more mobile data users, and in the same period mobile web developers forgot how to write for slow connections.
For those criticising 5G: most mobile networks have yet to move to a Standalone (SA) network with NR. After that, they will have to refarm all the 4G spectrum to 5G. They could also replace 4G/5G equipment with 5G+/5G-A, allowing cheaper massive MIMO and hence much higher capacity - all while continuing to tune the network, both frontend and backend. Five years into 5G we are still behind in many aspects, most likely due to COVID.
But it seems some networks have NO interest in further improving on current tech. I assume they will invest the bare minimum once they force everyone onto 5G.
It's obvious that mobile data usage is constrained by available user time in mobile usage contexts. People use their mobile devices when they aren't busy doing something else and that time is limited. Most people are at the limit. Previous growth was driven both by adding more per user hours AND by adding new users. The author only showing data for overall bandwidth consumption masks the real drivers of the trends being discussed.
Mobile speed requirements are constrained by being a ~6-inch hand-held screen. You don't need 4K or ultra-res textures on a 6-inch screen.
https://www.reuters.com/business/media-telecom/nokia-ceo-ste...
Perhaps this has something to do with limited mobile bandwidth?
Now imagine we add more bandwidth: what would happen to Comcast and other fiber monopolists if people started replacing fiber with 5G?
Other factors, such as jitter, transmission delay, queuing delay, etc., also impact quality. However, if the delay occurs mid-transmission (e.g., due to network congestion or routing inefficiencies), there’s little that can be done beyond optimizing at the endpoints.
[1] https://www.wikiwand.com/en/articles/Latency_(audio)#Telepho...
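Endpoint optimization here usually means a playout (jitter) buffer: accept a small fixed delay so that variable network delay doesn't cause audible gaps. A minimal sketch (real implementations adapt the delay and conceal lost packets):

```python
import heapq

class JitterBuffer:
    """Hold packets for a fixed playout delay so variable network
    delay (jitter) is absorbed before audio is played out."""

    def __init__(self, playout_delay_ms: int = 60):
        self.delay = playout_delay_ms
        self.heap: list[tuple[int, bytes]] = []  # (timestamp_ms, payload)

    def push(self, timestamp_ms: int, payload: bytes) -> None:
        heapq.heappush(self.heap, (timestamp_ms, payload))

    def pop_ready(self, now_ms: int) -> list[bytes]:
        """Release packets whose timestamp is at least `delay` old,
        in timestamp order regardless of arrival order."""
        out = []
        while self.heap and self.heap[0][0] <= now_ms - self.delay:
            out.append(heapq.heappop(self.heap)[1])
        return out
```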
This is a terrible example.
If supersonic mass travel could be provided safely and cheaply, demand would be bonkers.
If I could get to Tokyo in an hour for $50, I would visit every weekend.
Overall, the article is sound.
But what a terrible example of demand not filling supply.
If I had a terabit per second, indeed I probably wouldn't use it.
But you cannot make travel too fast.
So you want me to trade my horrible 100 Mbit line with OK latency for anything between 10 and 120 Mbit depending on the time of day, with a latency of over 50 ms guaranteed. And you also get a CGNAT IP and thus no incoming connections, because who needs IPv6!
Once users stop demanding higher speeds, you can still get a lot more growth by giving them higher data caps or even unlimited plans.
This will make 5G home routers a very attractive option, particularly when the only other options you have available are cable and/or DSL.
I think wireless providers competing in the same playing field as traditional ISPs will turn out to be an extremely good thing for consumers.
It's metro, which is lower-priority, but still.
Hard pass.
5G is far from ubiquitous as it is. Though how would we even know? I feel like my phone is always lying about what type of network it's connecting to, and carriers shave the truth with shit like "5Ge" and the like.
I have not, ever, really thought "yeah, my phone's internet is perfect as-is". I have low-signal areas in my house; if the power goes out, the towers are sometimes unusable due to the increased load; etc. I do everything in my power to never use cellular, because I find it incredibly frustrating and unreliable.
Cell service has literally unlimited headroom to improve (technologically and business use-cases). Maybe we need more 5G and that would fix the problems and we don't need 6G or maybe this article is a gift to fat and lazy telecoms who are masters at "coasting" and "only doing maintenance".
Outside of my home (where I admit I'll never give up my 10 Gbit fiber), I'll always default to using 5G. It's always faster and more stable than any kind of "free wifi" at a coffee shop or hotel or anything, if I'm working out of home, I'm tethering to my phone, racking up 120 GB/mo or so of usage.
Currently, 5G doesn't even meet its very own three objectives, namely:
1) Enhanced Mobile Broadband (eMBB), 2) Massive Machine-Type Communication (mMTC), 3) Ultra-Reliable Low-Latency Communication (URLLC)
The article focuses only on the first, which arguably could be met by the prior standard, 4G LTE+, if all that matters is the ~1 Gbps bandwidth of 5G's lower-range microwave bands (FR1).
For most parts of the world, 5G's higher-range millimeter-wave (mmWave) band, FR2, is not widely deployed. It can deliver bandwidth much higher than 1 Gbps, but its transmission range is severely limited compared to microwave RF.
One of the notable efforts by 3GPP (the 5G standards consortium) on objectives 2 and 3 is DECT NR+ [1]. It's the first non-cellular 5G standard, supporting local wireless mesh networks, but its availability is very limited right now, even though no base-station modifications are required for its use (it's non-cellular, and backhaul to base stations can run over existing 5G connectivity). I'd have imagined it would be inside most phones by now, since the standard has been out for several years, but it seems only Nordic is interested.
The upcoming 6G standards probably need to carry on focusing on objectives 2 and 3 of 5G even more, since machine-to-machine (M2M) traffic is expected to surpass conventional human-to-human (H2H) and even the new human-to-machine (H2M) traffic, with the rise of IoT and AI-based systems - for example, intelligent transport with vehicle-to-everything (V2X).
[1] DECT NR+: A technical dive into non-cellular 5G (30 comments):
With my Pixel 8 and 5G SA activated (Telefonica Germany) everything is back to normal.
Time to make it cheap!
Nobody would shrug at being able to fly 2x faster. The reason it stopped is because it made lots of noise and was expensive. Not because it was not needed.
I think you'd be hard pressed to find anyone who would not want faster flights if it could be done at reasonable cost.
But if there are little to no applications for faster Internet speeds -- which for the most part there aren't -- then it just kinda doesn't matter.
Complete BS. This misses the critical and common issue of user contention on 4G, which is the main relief 5G brings.
If all you do is compare peak throughput on paper, you will miss the real-world performance issues. In reality you will never see "good" 4G speeds; there just isn't enough bandwidth to go around in practice, due to the lower frequency bands it operates in.
I've got both a 4G LTE and a 5G router, both Cat 20, which in theory means they can operate at up to 2 Gbit/s. Yet in practice, with any of the carriers in the UK, the 4G modem will scrape 40 Mbit/s at best in the wee hours and drop as low as 3 Mbit/s at peak time. The 5G one gives me 600 Mbit/s all day long... because there is enough bandwidth to go around for all users. That is the key difference.
It could be that mobile applications have hesitated to rollout features like 4K streaming video because most users don't have true 5G coverage. We might not see mobile data growing because we've failed in our 5G infrastructure goals. The political atmosphere might not support building out more 5G networks either.
IOW: apps could definitely use high-bandwidth, low-latency networking (think AI acceleration, remote storage, volumetric telepresence, etc.) if we had it reliably, but due to the state of the wireless transition, apps have adapted to stagnating speeds.
The article is about mobile specifically.