> “We also have different tools so that any state that is persisted through the application is managed in a very particular place in memory. This lets us know it is being properly shared between the computers. What you don’t want is a situation where one of the computers takes a radiation hit, a bit flips, and it’s not in a shared memory with the other computers, and it can kind of run off on its own.”
They have one address range for shared (i.e. subject to syncing across all replicas) memory, and a separate one for non-shared (single-replica) memory.
Cross-replica data is presumably subject to their agreement algorithm, checking that the different computers reach the same (within some error bars) results; you want to arrange things so that there are frequent checkpoints at which the conflict resolution system can say "a bad write happened at this point, I should disregard whatever this computer said from that point until it recovers".
i.e. you want local memory to use as scratch space for performance reasons, but to make sure that there isn't a long runway for errors to happen and decisions to be made before the shared-memory checker notices a mistake. To ensure this happens, you want manual control over which memory allocator handles which data.
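A minimal sketch of that idea, assuming nothing about SpaceX's actual code: two bump arenas over fixed backing buffers, one standing in for the replicated ("shared") region the cross-replica checker compares, one for per-replica scratch. All names here are hypothetical; the point is just that the programmer, not the default allocator, decides which region each piece of state lands in, and allocation failure is loud rather than clever.

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical two-arena setup. Persistent state goes in the "shared"
// arena, where divergence between replicas gets caught at the next
// checkpoint; transient scratch goes in the per-replica "local" arena.
struct Arena {
    std::uint8_t* base;
    std::size_t   size;
    std::size_t   used = 0;

    void* alloc(std::size_t n, std::size_t align = alignof(std::max_align_t)) {
        std::size_t p = (used + align - 1) & ~(align - 1);  // align must be a power of two
        if (p + n > size) return nullptr;  // no fallback path: fail loudly, restart
        used = p + n;
        return base + p;
    }
};

// In a real system the shared backing would be a mapped, replicated
// region; here both are plain arrays so the sketch is self-contained.
static std::uint8_t shared_backing[1 << 20];
static std::uint8_t local_backing[1 << 20];

Arena shared_arena{shared_backing, sizeof shared_backing};
Arena local_arena{local_backing, sizeof local_backing};

// Callers state explicitly which arena (and so which memory regime)
// each object belongs to.
template <typename T>
T* alloc_in(Arena& a) { return static_cast<T*>(a.alloc(sizeof(T), alignof(T))); }
```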
Technically, all C++ programs control equipment. The differences are that such a program may run for weeks or even years, and is usually the only program running, or at least the only one on its core. This applies from microcontrollers up to servers with a TB of RAM running, say, high-frequency trading, and to networks of hundreds of those running weather simulations.
The program typically does all its heap allocation at startup. There is no reference-counting std::shared_ptr. You might have lots of std::vector<std::unique_ptr<T>>, std::string, the works, but they all get provisioned in the first second or two, and then just used thereafter. If anything goes wrong, you don't try to do anything clever or sophisticated; you just kill and restart, or even re-boot, and start over from scratch. That is fine if it doesn't happen too often, so you make sure it doesn't.
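A sketch of that provision-once pattern, with made-up names: every container gets its capacity in one startup pass, and the steady-state path never grows anything. If a pre-reserved limit is ever hit, the program dies so the supervisor can restart it, rather than attempting recovery.

```cpp
#include <cstdlib>
#include <string>
#include <vector>

// Hypothetical example of "all heap allocation at startup". The types
// and sizes are illustrative, not from any real flight or trading system.
struct Telemetry {
    std::string name;
    std::vector<double> samples;
};

struct App {
    std::vector<Telemetry> channels;

    // All heap allocation happens here, in the first second or two.
    void provision(std::size_t n_channels, std::size_t n_samples) {
        channels.resize(n_channels);
        for (auto& c : channels) {
            c.name.reserve(64);
            c.samples.reserve(n_samples);  // capacity fixed for the program's lifetime
        }
    }

    // Steady state: only consumes pre-reserved capacity, never reallocates.
    void record(std::size_t ch, double v) {
        auto& s = channels[ch].samples;
        if (s.size() == s.capacity()) std::abort();  // don't get clever: die and restart
        s.push_back(v);
    }
};
```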
For communication between programs, some of the memory set up is shared, with a header containing std::atomic<std::uint64_t> sequence counters that each process can watch and compare against its last copy to know when something changed. Most commonly, actual messages show up on a ring buffer, so you don't need to act on them immediately; as long as you pick them up before they get lapped, you're good. If you get lapped, you might need to reset the whole system; so you make sure not to get lapped, by making the ring buffers big enough and by picking up messages soon enough. With big enough ring buffers and careful scheduling, you can leave all the bulk data there and just use it before it gets overwritten, avoiding expensive copies.
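Here is a minimal single-producer/single-consumer version of such a ring, as a sketch rather than any particular codebase's design. The header's sequence counter only ever increases; the reader compares it against its own cursor to see how far behind it is, and can detect having been lapped, at which point the safe move is to reset rather than guess.

```cpp
#include <atomic>
#include <cstdint>

// One writer, one reader, fixed power-of-two capacity. In the setup
// described above this struct would live in shared memory.
struct Ring {
    static constexpr std::uint64_t N = 1024;
    std::atomic<std::uint64_t> seq{0};  // total messages ever published
    std::uint64_t slots[N];

    void publish(std::uint64_t msg) {
        std::uint64_t s = seq.load(std::memory_order_relaxed);
        slots[s % N] = msg;
        seq.store(s + 1, std::memory_order_release);  // reader sees slot before seq
    }
};

enum class Poll { Empty, Got, Lapped };

// The reader owns its cursor; comparing it against seq tells it how
// many unread messages there are, and whether the writer has wrapped
// past it (overwriting data it never read).
inline Poll poll(const Ring& r, std::uint64_t& cursor, std::uint64_t& out) {
    std::uint64_t s = r.seq.load(std::memory_order_acquire);
    if (s == cursor) return Poll::Empty;
    if (s - cursor > Ring::N) return Poll::Lapped;  // slot already overwritten
    out = r.slots[cursor % Ring::N];
    ++cursor;
    return Poll::Got;
}
```

Note the lapped check is conservative: in a live two-process setup the writer could still overwrite a slot mid-read, which is exactly why you size the buffer and schedule the reader so that never happens in practice.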
Often, once the program starts up, it does no more system calls at all, doing all its work by reading and writing shared memory, and maybe poking at hardware registers. On Linux one usually isolates cores doing this, with "isolcpus=..." on boot, and "nohz_full=...", "rcu_nocbs=...", "rcu_nocb_poll" etc. The ring buffers tend to live in hugepages ("hugepages=50000"), often just files opened in /dev/hugepages. This is all a simpler alternative to a unikernel/parakernel/demikernel/blatherkernel.
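Mapping such a buffer looks roughly like this; the path and size are illustrative, and since hugetlbfs may not be mounted (or sized) on a given machine, this sketch falls back to an ordinary shared anonymous mapping so it still runs.

```cpp
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstddef>

// Sketch: back a ring buffer with a file in /dev/hugepages. On a box
// booted with hugepages=... this gives the buffer hugepage backing;
// elsewhere we fall back to a normal shared mapping.
void* map_ring(const char* hugefile, std::size_t bytes) {
    int fd = open(hugefile, O_CREAT | O_RDWR, 0600);
    if (fd >= 0) {
        // On hugetlbfs, sizes must be a multiple of the hugepage size.
        if (ftruncate(fd, static_cast<off_t>(bytes)) == 0) {
            void* p = mmap(nullptr, bytes, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
            close(fd);
            if (p != MAP_FAILED) return p;
        } else {
            close(fd);
        }
    }
    // Fallback for machines without hugepages configured.
    return mmap(nullptr, bytes, PROT_READ | PROT_WRITE,
                MAP_SHARED | MAP_ANONYMOUS, -1, 0);
}
```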
You might also have ephemeral processes that run just long enough to do a job and then quit, running on their own pool of cores and using their own pool of memory. This is usually how you administer the system: ssh in, look around, exit.
These posts always have a thread about people wanting to work for spacex, but because of my pseudo-anonymity, I don’t feel bad about starting it: Does anyone know if spacex does fall or spring embedded software internships?
But more importantly, their propulsion lets them launch into a very low orbit. This is a very good approach for a constellation: the early failures (the beginning of the bathtub curve) happen while the satellites are still in very low orbit, which reduces the cost of ground testing.
Their orbital debris management strategy is one of the best I have seen.
Can we please stop using miles etc.? :( I get tired when people make mistakes like this.
$ units
You have: 340 mi
You want: light ms
* 1.8251859
/ 0.5478894
So, about 3.65 ms round-trip. In practice, the route will be slanted, so it could just exceed 5 ms. (The units tool: $ sudo apt install units)
Once the constellation is mature, certain very-high-paying subscribers will get their packets forwarded from one satellite to the next via laser links, across oceans, before being downlinked, arriving a few ms before packets dawdling along fiber links at below 0.7c, to trigger securities trades ahead of the crowd acting on now-ancient information. The time by fiber from Singapore to New York is on the order of 90 ms, where Starlink ought to get packets there in well under 70 ms, leaving a good 20 ms for arbitrage. In investment banking, they say "a microsecond is an eon, a millisecond is an eternity". Even just between New York and London, they can gain a few ms of headway, enough to dominate.
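The arithmetic behind those figures, as a back-of-the-envelope sketch; the distances, fiber speed, and satellite route overhead below are rough assumptions of mine, not measured values.

```cpp
// Rough latency comparison: light in glass over a real (longer-than-
// great-circle) cable route, versus vacuum-speed laser hops with some
// slant and zigzag overhead. All constants are illustrative estimates.
constexpr double c_km_per_ms      = 299.792458; // speed of light in vacuum
constexpr double fiber_speed      = 0.68;       // light in fiber, roughly 2/3 c
constexpr double great_circle_km  = 15300.0;    // ~Singapore to New York
constexpr double fiber_route_km   = 18500.0;    // cables wander well past the great circle
constexpr double sat_route_factor = 1.25;       // uplink/downlink slant plus hop zigzag

constexpr double fiber_ms = fiber_route_km / (c_km_per_ms * fiber_speed); // ~91 ms
constexpr double sat_ms   = great_circle_km * sat_route_factor / c_km_per_ms; // ~64 ms
```

With these assumptions the fiber path comes out around 90 ms one way and the satellite path in the low 60s, which is where the "well under 70 ms, a good 20 ms to arbitrage" claim comes from.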
It would not be surprising if the US military, and maybe some others, were to get access to satellite-to-satellite routing. (They have their own WGS, "Wideband Global SATCOM", but it is in GEO and thus high-latency.)
AFAIK, only the polar-orbit nodes have inter-satellite laser links, thus far, so this is a phenomenon of the near future, not the present. Other things to expect in the near future are lofting them with a few TB of storage, to minimize uplink bandwidth by edge-serving Disney and Netflix blockbusters; and multicast downlinks for real-time soccer games and maybe even time-binned shows.
The SpaceX edge here is making this kind of thing cheaper and scalable to the point that a global community of hundreds of millions of users can multiplex signals on it, but for a sufficiently well-funded organization with fewer users, it was already possible.
The broadband itself might be nice or it might be awful. It doesn’t actually matter because that was never the point.
If it does use up most of the low-orbit space, it is because SpaceX is the only company even capable of launching that many satellites and making money on it. Until there is another challenger, there is zero point in complaining, because competition is not even an option.
Since it is low orbit, it is a self-limiting problem. Any low-orbit satellite that loses power has a remaining lifetime of only a few years.
This reply brought to you over a robust, fast, low latency and reliable Starlink connection.
Because $99/month * 12 months * 10 years * 1,000,000,000 customers = $11.88T isn't the point to a man who literally needs a trillion dollars to pull off the grandest mega-project ever.
Anyways, I don't buy that SpaceX will end up consuming too much of a precious resource. Even if it were so, what would be a fair way to share that resource, and with whom?! There's only one announced competitor to Starlink at this time, and they're not even remotely close to being operational. What makes SpaceX able to put up Starlink at such low cost (compared to its earning potential) is that SpaceX has lowered launch costs for itself (and others) by a lot, and they're working to lower those costs even more. What is "fair" when a company works so hard to lower costs and increase availability? Is it to punish them so others get a chance to compete at higher costs?
"This is the goal of SpaceX’s Starlink program, which has set out to provide high-speed broadband internet to locations where access has been unreliable, expensive, or completely unavailable."
Can we be honest for once and be up-front about a corporation's real goals? As if <insert technology here> was going to do anything about the gaping inequality and other social problems that actually matter to people. That your comment gets down-voted is certainly telling about the audience here.