But then, I’m not the only one;
"There is an unlimited amount of funding that the company could probably access globally in private markets," Hilmer said, adding that he has personally met many of "a diverse group" interested in SpaceX. "Everywhere I travel around the world, investors of all types — individuals, family offices, hedge funds, sovereign wealth funds or private equity — want to get into SpaceX," Hilmer said. "It's almost all investors I talk to."
Of course at the same time I’m happy they aren’t public. The market couldn’t handle the time horizons that SpaceX operates under, nor the mission statement that drives them.
[1] - https://www.cnbc.com/2018/04/13/equidate-spacex-27-billion-v...
SpaceX has no plans right now to colonize Mars. In fact, they keep saying, “Look! We want other companies or nations to step up and plan for how to establish a colony. We’re only going to do it if we absolutely have no other choice.”
SpaceX is truly establishing a financially viable transport system for the solar system, one that may eventually extend beyond it. This is analogous to the birth of the U.S. railroad system. We don’t yet know what we don’t know is possible.
SpaceX is poised to own space transport outright. That’s major!
Now what does Mars have to do with this? It’s just a helpful organizing goal. People love a good milestone. Something to reach for with meaning. I mean if you’ve been following SpaceX’s 15-year history you’d see that they’re nothing if not methodical in their planning and attainment of milestones.
SpaceX is one of the most well-run companies in the world. And right now they have the best prices, the best technology, the best pace, the best outlook, the best...
They have no competition. Literally. I’d invest the entirety of my lifetime earnings in SpaceX if I could. We’re witnessing historic achievements in the making.
it would also lead to "undercutting" on technology. Look at Bezos: without such a powerful Mars hyper-drive, the best he can do is be an engine supplier to ULA. A hugely respectable achievement on its own, no doubt, yet nothing close to SpaceX, which has already genuinely advanced our civilization and is on track to advance it even further.
In particular, I think the Mars-mission-based POV lets them filter for the best long-term architectures, like the modular F9/Heavy construction, which wouldn't necessarily be the best for the short-term job of just servicing Earth satellites.
Oops, am I not supposed to admit that?
Depending on the cost of their satellites, it might make sense to use launching them as a means of testing the upper limits of their rockets' reusability. That is, they might not want to risk a customer payload on a rocket that has made 10 launches. But if they are going to build 7,518 satellites, the marginal cost is likely to be rather low, so it might be worth pushing the risk threshold to stretch the number of trips per rocket. Also, it could be a good opportunity to clear out their inventory of pre-Block 5 Falcon 9s.
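That trade-off can be sketched as a toy expected-cost comparison. All the numbers below (failure probability, payload and launch costs) and the `expected_loss` helper are made-up round figures for illustration, not real SpaceX costs:

```python
# Toy model: when is it acceptable to fly a payload on a heavily reused booster?
# All figures are hypothetical, not real SpaceX numbers.
def expected_loss(p_failure: float, payload_cost: float, launch_cost: float) -> float:
    """Expected money at risk: the launch itself plus the chance of losing the payload."""
    return launch_cost + p_failure * payload_cost

# Cheap, mass-produced internal satellites make a risky reflight tolerable...
own_sats = expected_loss(p_failure=0.05, payload_cost=5e6, launch_cost=15e6)
# ...while a customer's expensive GEO bird does not.
customer = expected_loss(p_failure=0.05, payload_cost=200e6, launch_cost=15e6)
print(own_sats, customer)
```

With these guesses, the expected loss on a risky reflight is dominated by the launch itself when the payload is cheap, which is exactly why internal satellites are good reusability test cargo.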
Rather, it's about using spare capacity freed up by the long-foreseen slowdown in the geostationary launch business that has historically been SpaceX's (and everyone else's) bread and butter, without crashing market launch prices.
[0] https://www.youtube.com/watch?feature=youtu.be&v=AdKNCBrkZQ4...
I don't mind the latency of today's fiber or cable Internet, and if it's a problem, it can be solved by moving servers closer to users.
That is, what if, instead of bypassing the fiber backbone, Starlink just tried to connect everyone to it? Because I'm assuming that sending a terabit of data between two major cities, like Chennai to NYC, is going to be cheaper via undersea fiber.
This could deliver absolutely massively better internet in less populated areas (rural, shipping, etc.), but I feel like people are perhaps getting the wrong idea and thinking this could supersede terrestrial networks in urban/suburban areas - in which case there will be a lot of disappointed people.
These [0] are statistics from Verizon showing 90ms RTT for trans-Atlantic connections. Trans-Pacific is > 100ms. 80ms seems highly competitive to me in this context.
https://www.youtube.com/watch?v=Dar8P3r7GYA
Say what you will about Musk, but the guy is truly ambitious and willing to shoot for the stars (or Mars at the moment).
It's a shocking contrast to be in or near a city and have broadband speeds, and then be just a few dozen miles outside one and have... literally nothing.
I just loaned my Iridium phone to a friend who was going to the jungle, and although he was able to make the data connection work, even doing email at 2400 baud(!) proved useless. Inmarsat is faster, but vastly more expensive.
Outside of those two, there is no global solution.
You can communicate effectively over 2400 baud if (and only if) you use protocols and services designed for low bandwidth; luckily we have those protocols built and tested, even if we have mostly abandoned some of them.
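A quick back-of-envelope shows why protocol choice dominates at those speeds. Assuming 8N1 serial framing (roughly 10 bits per byte on the wire) and no compression:

```python
def transfer_seconds(payload_bytes: int, baud: int) -> float:
    """Rough transfer time over a serial modem link (8N1 framing: ~10 bits/byte)."""
    return payload_bytes * 10 / baud

# A short plain-text email is usable at 2400 baud; a modern bloated one is not.
print(round(transfer_seconds(2_000, 2400), 1))       # ~8 seconds
print(round(transfer_seconds(500_000, 2400) / 60))   # ~35 minutes
```

A 2 kB plain-text message moves in seconds; a 500 kB HTML email takes half an hour, which matches the parent's experience of email being useless without low-bandwidth protocols.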
In the late 90's I did a lot of my grad school coding and implemented a website for a state agency through 9600 baud modems. EMACS was great in those days, precisely because it's a Lisp machine OS masquerading as a text editor. I used to read USENET news, read my email, do my work, and shuffle between multiple remotely-logged-in shells, all through EMACS.
p.s. the story of Iridium and how they recovered is quite amazing.
The V band (or optical links) will also likely be used for the inter-satellite communication.
You do typically need different hardware to use the different frequencies. With some of the more advanced software-defined radios, you could use both of them at the same baseband within the same radio, but you will still need some sort of frequency conversion [1]. You will likely need different antennas for each band as well. And to get an efficient system, you also want to add filters for each band.
[0] http://happy.emu.id.au/lab/rep/rep/9510/txtspace/9510_032.ht... [1] https://en.wikipedia.org/wiki/Low-noise_block_downconverter
We already have a ton of GEO sats which can do this kind of communication, but they are super expensive and have limited bandwidth.
These satellites will actually have a ton of limitations in how much data they can send around and how they'll have to balance out their signals. GEOs are easier to point at because, well, they don't move.
But these are going to be moving and changing all the time, so you'll have to connect to multiple satellites every day. I'm spitballing here, but they'll probably be overhead for 10 minutes? Think about switching your router every 10 minutes. Or you get a rainy day and your signal clarity goes down. Or you are over the equator in a band that is used strictly for GEO.
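That 10-minute guess is in the right ballpark. A rough upper bound for a single pass, assuming a circular ~550 km orbit and visibility all the way down to the horizon (real systems use a higher elevation mask, so actual passes are shorter):

```python
import math

MU = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0  # mean Earth radius, m

def max_pass_seconds(altitude_m: float) -> float:
    """Longest possible pass: satellite goes directly overhead, 0 deg elevation mask."""
    a = R_EARTH + altitude_m
    period = 2 * math.pi * math.sqrt(a**3 / MU)  # orbital period, ~95 min at 550 km
    half_angle = math.acos(R_EARTH / a)          # Earth-central angle out to the horizon
    return period * half_angle / math.pi         # fraction of the orbit spent in view

print(round(max_pass_seconds(550_000) / 60, 1))  # ~12 minutes
```

So even a perfect zenith pass lasts only about 12 minutes; with a realistic 25-40 degree elevation mask, handoffs every few minutes are the norm.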
This is going to be a super cool problem to solve. And I'm sure I don't even understand the half of it.
Edit: Sorry, the router example is pretty bad. It's more like running your phone, but you have to specifically aim your antenna at each tower that you're passing while driving. The complexity is the moving nature of the network and the targeting nature of the antennas. I have zero clue if phone signals are targeted, but I believe they are radial signals, more like a beacon than a laser.
Load balancing these can be a pain as well because if you get too much signal on an antenna it can actually block all signal.
Your cell phone switches towers all the time while you're on the road. My cellphone (republic) switches from WiFi to cellular network mid-call if I'm on it when I leave a building. This is not a new problem.
The hard part here is that you have to target the beam at the satellite. It's not a wide angle beam that's used it's a focused beam. (This might not be true based on what they implement)
But for GEO it's a targeted beam. So this isn't a perfect analogy, and my example was pretty bad lol.
Damn, I guess we ran out of interesting problems to solve then /s.
Networking technology switching is unrelated to a call, which always uses cellular.
Handoff between satellites isn't really that hard. Your phone knows where it is and what birds are overhead (and which are about to appear/disappear) so they can adjust automatically. Routing through the satellite constellation gets kind of hairy, but is also solvable via orbital mapping and some arcane routing algorithms.
The real trick is getting enough link margin with omni antennas to get good bandwidth. As fun as they would look, nobody wants a cell phone with a little cartoon satellite dish on top that whirls around to track birds passing overhead.
What about phased arrays?
Also, why not have a completely different scale of ISPs? How about something the size of a soda vending machine or something the size of a suitcase which contains all the hardware for a very small scale internet service provider?
The last part is, where I think the real fun will emerge. Doing everything above but optimized based on the load of each satellite with a link budget computation involved will be cool. It may prove out that the optimization isn't needed but I think that kinda stuff is fun.
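For anyone who wants to play with that, the core of a link budget is free-space path loss plus gains minus the receiver's required power. The figures below (EIRP, antenna gain, receiver threshold, 12 GHz at 550 km slant range) are illustrative guesses, not Starlink specs:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

def link_margin_db(eirp_dbw: float, rx_gain_dbi: float,
                   distance_m: float, freq_hz: float,
                   required_dbw: float) -> float:
    """Received power minus required power, ignoring rain/pointing/atmospheric losses."""
    rx_power = eirp_dbw + rx_gain_dbi - fspl_db(distance_m, freq_hz)
    return rx_power - required_dbw

print(round(fspl_db(550_000, 12e9), 1))                       # ~168.8 dB of path loss
print(round(link_margin_db(40, 35, 550_000, 12e9, -125), 1))  # margin with made-up numbers
```

Per-satellite load balancing then becomes an optimization over these margins: every user/satellite pair has a different slant range, hence a different loss, and you want to assign beams so nobody drops below their required margin.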
Speak for yourself! ;)
Cellular and WiFi AP roaming are solved problems. For SpaceX, this will be rather straightforward as the design specifies fixed ground stations that will handle negotiating/maintaining upstream connection(s).
> Or you get a rainy day and your signal clarity goes down.
That's more likely to be an issue, but having multiple satellites overhead, plus the fact that the constellation will be located in LEO (thus higher signal strength due to proximity) will help mitigate signal attenuation due to clouds/precipitation.
While you may be right regarding the context in which you said this — satellites — as a blanket statement, that's not true: my smartphone loses Internet connectivity when I leave my house, and takes a few seconds before it connects to the cellular data network.
I'm surprised that phones don't establish a backup 4G connection when Wifi signal strength decreases suddenly, but before it disconnects. This has been talked about for years, but for whatever reason it hasn't been deployed or doesn't work well, at least with Android P.
You're right on the signal degradation being significantly lower. But the LEO aspect means that there might be a higher percent of total loss due to rain (and probably adjacent signal interference).
Either way it's a cool problem.
For the client premises equipment it's a hard problem but entirely solvable. Frequency hopping radios have been around a long time. Link bonding has been around a long time. Which satellites will be in view, where, and when is knowable. I think the harder part will be the backhaul... in space.
I'm curious if they'll use beam shaping antennas for CPE. Surely they won't ship antennas with moving parts right? Also I'm curious about the power requirements of the CPE.
Another problem will be security. This is a known problem with satellite uplinks. They basically spray everyone's traffic all over God's green Earth, and if you listened in, you used to be able to grab lots of data right out of the sky. These days it should all be encrypted, but even then, some of the encryption was easily cracked.
Encryption might be handled by default given the HTTPS nature of the web now, but it'll just make HTTPS even more important going forward. It would also leave a lot of metadata open to sniffing, which I don't think people would want either.
It's my understanding that this constellation is going to be significantly lower than GEO, so that's not a problem.
Also, the dead zones are toward the poles, but those are mitigated in phase two.
Very interesting watch.
Based on the paper: https://twitter.com/awm22/status/1044512585599602688
https://www.montanasatellite.com/support/satellite-footprint...
And high delay, which is the main problem.
Starlink and OneWeb are different in that they intend to use a lot of satellites in low orbits to maintain constant coverage. This is technically much harder, not least because you need thousands of satellites to get reasonably good coverage, but also because the ground station and the satellite transceiver both need to track each other. This was not technically feasible before, but modern AESA antennas can steer their signal without having to move the antenna, and can both transmit and receive multiple simultaneous beams and very rapidly move the beams around when doing time-sharing.
The minimum round-trip time to something very close to you and also close to its own ground station will be on the order of 10ms. However, where the system really shines is long-distance communication. The satellites will pass the signal between each other using lasers, and will get the signal to the other side of the Earth much faster than terrestrial fiber, both because light in a vacuum travels substantially faster than in fibre, and because fibres don't get to follow ideal great-circle paths.
Independent researchers have evaluated the likely latencies of the system, and the results are frankly shocking. For example, today on the existing fiber network, the RTT between London and Singapore is ~160ms. On Starlink, the RTT will be ~90ms.
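The speed advantage is easy to sanity-check. Light in silica fibre travels at about c/1.47; the distances below (a ~10,850 km great circle for London to Singapore, a guessed ~15,000 km real-world fibre route, and a 550 km orbit) are assumptions for illustration:

```python
C_VACUUM = 299_792.458       # speed of light in vacuum, km/s
C_FIBER = C_VACUUM / 1.47    # refractive index of silica fibre is ~1.47

def rtt_ms(path_km: float, speed_km_s: float) -> float:
    """Round-trip time for a one-way path of the given length."""
    return 2 * path_km / speed_km_s * 1000

# Fibre routes meander; 15,000 km is a rough guess at the real path length.
print(round(rtt_ms(15_000, C_FIBER)))             # ~147 ms over fibre
# LEO path: up 550 km, near-great-circle hop via lasers, down 550 km.
print(round(rtt_ms(10_850 + 2 * 550, C_VACUUM)))  # ~80 ms through vacuum
```

Propagation alone gives ~80 ms versus ~147 ms; queuing and processing overhead plausibly account for the gap up to the observed ~160 ms and projected ~90 ms figures.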
This doesn't check out.
The radius of Earth is 6,300km. Assuming a receiver on the north pole, the total distance should only be sqrt((35786 + 6300)^2 + 6300^2) = 42555km, which isn't notably larger than the 36000km at the equator.
I'm sure you're still right about the latency, but it can't be just distance doing it.
I thought there were regulatory barriers to doing that
Latency for medium to long distances could be better than regular fiber, both because of straighter paths and because the speed of light in the relative vacuum of the satellites' orbits is higher than the speed of light in fiber.
We don't know yet what kind of equipment the satellites will have, but fundamentally bandwidth is constrained by bandwidth per covered area. I guess they will sell gigabit or better to cargo ships at sea, similarly good speed to rural regions, but nothing interesting to urban areas. Providing fast internet to a city via satellite would be very challenging; providing lightning-fast internet to lone people in a desert is much easier and more profitable (no competition).
The big difference with Hughes and SpaceX is that Hughes satellites are geostationary and SpaceX's will be low earth orbit, so SpaceX's transmission distances and times should be much better.
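Propagation delay alone shows the gap. A minimal bent-pipe model (signal goes straight up to the satellite and back down, twice for a round trip; real slant ranges are longer, so these are best-case floors):

```python
C = 299_792.458  # speed of light in vacuum, km/s

def bent_pipe_rtt_ms(altitude_km: float) -> float:
    """Minimum RTT for ground -> satellite -> ground and back, directly overhead."""
    return 4 * altitude_km / C * 1000

print(round(bent_pipe_rtt_ms(35_786)))  # GEO: ~477 ms before any processing
print(round(bent_pipe_rtt_ms(550)))     # LEO at 550 km: ~7 ms
```

That half-second floor is why GEO internet feels sluggish for interactive use no matter how much bandwidth the satellite has, while LEO's propagation delay is comparable to terrestrial last-mile latency.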
Hughes provided a decent option for very remote areas. SpaceX appears to be putting together a first class option for everyone.
Who knows if they'll actually reach these speeds/latencies in practice though, that remains to be seen.
Not much needs both high bandwidth and low latency.
The latency has the potential to be better than the current internet backbone when going intercontinental due to straighter paths, but it'll be higher when going to your local AWS, probably.
Bandwidth is anyone's guess.
You can see international links that have latencies lower than the existing internet by a significant margin.
Unfortunately neither of those simulations take into account that the first iteration has been changed to only use radio and no laser links.
I wonder if it's exclusively because of that reason, and if so, whether the reason international customers will enjoy coverage at high northern and southern latitudes is the accident of history that the US ended up acquiring Alaska, plus some FCC regulation that says you need to provide coverage for all 50 states.
The article mentions the movie Gravity, which is a bit unrealistic in that it portrays multiple large bodies all orbiting at about the same altitude (not the case for today's satellites). With this web of satellites, though, that is exactly the case. If a chain reaction of collisions does occur, it would create a field of tiny, fast, deadly debris all orbiting on a similar orbital "plane", pretty much blanketing the planet. Wouldn't this cause a large issue for anything attempting to reach orbit? What am I missing here?
EDIT: A lot of replies here mentioning the fact that LEO spacecraft decay more quickly than higher orbits. Please note that not all LEO orbits are low enough to guarantee a quick decay without powered retrograde thrust. Stuff can hang up there in LEO a long, long time depending on the actual altitude.
This was a big concern for NASA, though. Their typical guideline was that commercial satellites have to have >90% reliability for deorbiting the satellite. But they want to increase this for the SpaceX satellites, because if 10% (~400) of the satellites end up failing, then they are effectively up there forever.
https://licensing.fcc.gov/myibfs/download.do?attachment_key=...
Part of the problem, though, is that said Xerox machine is traveling at 17,000 miles per hour.
We've already had a collision (https://en.wikipedia.org/wiki/2009_satellite_collision) between two of the ~3-4k satellites in orbit. SpaceX's constellation is planned to be double that number.
That single collision turned two satellites into two thousand high-velocity projectiles, too.
1. Satellites can kill themselves by dropping back into the atmosphere by their own propulsion system
2. Even if they fail, most of these are low enough that the time until they fall is reasonably short.
3. The objects are known and tracked so even if they are out of control, the others will avoid them.
4. We are moving into a world with cheap access to space and lots of satellites; we need servicing satellites anyway, and multiple are in development. This could become a legal requirement, but even without one it will develop, because SpaceX doesn't want to have debris flying around on 'their' plane.
https://www.quora.com/Why-dont-satellites-crash-into-each-ot...
It could make orbital planes unusable, but is generally thought not to pose a substantial problem for launching through them, so worst case we just have to start putting satellites a bit higher.
This number surprised me, much lower than I expected. Looked it up and I'm seeing varying numbers, but generally in the 1k-4k ballpark.
Launch approvals are a separate thing.
How? By shooting down satellites?
I was thinking of WW2 radio pinging when I asked. I'm sort of imagining a software simulation that has real-time updates of sat positions - I assume more than 3 in any useful domain - that checks delay/ping against where the sats actually are.
One way to do so requires user equipment that can transmit back to the satellites, but that will already be the case for SpaceX's customers.
You can also use Doppler measurements to get your location, although this tends to be less accurate, especially if you're moving quickly.
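Ranging to satellites at known positions is just trilateration. Here's a 2-D toy version; the positions, ranges, and the `trilaterate` helper are all made up for illustration (real systems solve in 3-D and add a receiver clock-bias term):

```python
import math

# Subtracting pairs of circle equations (x-xi)^2 + (y-yi)^2 = ri^2
# cancels the quadratic terms, leaving two linear equations in (x, y).
def trilaterate(p1, r1, p2, r2, p3, r3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Hypothetical receiver at (3, 4), measuring ranges to three known satellites:
sats = [(0, 0), (10, 0), (0, 10)]
ranges = [math.dist((3, 4), s) for s in sats]
print(trilaterate(sats[0], ranges[0], sats[1], ranges[1], sats[2], ranges[2]))
```

The printed position recovers the receiver's location from the three ranges. Doppler positioning works differently (it fits your position to the observed frequency shift of a passing satellite, as Transit did), which is why it degrades when you're moving.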
Just because someone doesn't own something doesn't mean you can just launch stuff up there with no regard for other such things to be launched/already in the air. Things can and do collide, and the ramifications of that are actually much more dangerous when you don't have an atmosphere to pull the pieces down in the event of a collision.
e.g. that's the law.
Edit: Did some more research on competitors. It seems ViaSat currently has $1.1 billion invested in its 2 active GEOs, plus whatever’s currently going into their next-generation constellation. Their bargain-bin plan goes for $70 a month (fun fact, they too pull the 3-month-deal BS landline ISPs do), up to $200 a month. Since it only has a single real competitor, suppose it services 3% of the US population: with their mid-plan of $150 a month across 9.75 million users, against a satellite constellation with a lifespan of 8 years, they are charging in the neighborhood of 12,700% over their satellite breakeven (~$1.46 billion monthly revenue versus ~$11.46 million per month payback).
Bringing that over to a $10 billion constellation with much more premium internet, yikes.
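Reconstructing that back-of-envelope (all figures are the parent comment's estimates, not audited data):

```python
investment = 1.1e9        # two active GEO satellites (comment's estimate)
lifespan_months = 8 * 12  # 8-year satellite lifespan
subscribers = 9.75e6      # ~3% of the US population
price = 150               # mid-tier plan, $/month

breakeven = investment / lifespan_months  # ~$11.5M/month to recoup the satellites
revenue = subscribers * price             # ~$1.46B/month
print(round(revenue / breakeven))         # revenue ~128x breakeven, i.e. ~12,700%
```

The ~128x multiple matches the "12,700% over breakeven" claim, though it ignores operating costs, ground infrastructure, and subscriber acquisition, so it's an upper bound on margin.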
Current US satellite adoption doesn't matter in this case because the new product would not suck. Satellite internet is not a viable competitor today for the various reasons others have stated here. Starlink however would compete directly with my DSL service and since I (and pretty much everyone else) hate our telco, the sky is the limit as they say.
Then, once the antennas are in proper mass production and cheap enough, they have an investment of a couple of $B worth of satellites to pay for. So long as they can provide something no other ISP can (low-latency access in the middle of nowhere), I very much doubt their prices will be lower than those ISPs.