Since this didn't pass the smell test: the author is looking at nameplate capacity, which is a completely useless metric for variable electricity production sources (a solar panel in my sunless basement has the same nameplate capacity as the same panel installed in the Sahara desert).
Looking at actual yearly energy generation data, this is more like 1.5 times the generation of an average nuclear power plant (NL solar production in 2023: 21TWh, US nuclear production in 2021: 778TWh by 54 plants).
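A quick sanity check of that arithmetic, using only the figures above:

    # Figures quoted above: NL solar 2023, US nuclear 2021.
    nl_solar_twh = 21
    us_nuclear_twh = 778
    us_nuclear_plants = 54

    avg_plant_twh = us_nuclear_twh / us_nuclear_plants
    print(f"Average US nuclear plant: {avg_plant_twh:.1f} TWh/yr")      # ~14.4
    print(f"NL solar = {nl_solar_twh / avg_plant_twh:.2f} avg plants")  # ~1.46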
Which maybe puts more into perspective the actual risks involved here. I'm not saying there shouldn't be more regulations and significantly better security practices, but otoh you could likely drive a big truck into the right power poles and cause a similar sized outage.
At that moment, the attacker will not only blast the grid with the full output of the solar panels, but will also put any attached batteries into full discharge mode, bypassing any safeties built into the firmware by replacing the firmware itself. We must consider the worst case, which is that the attacker is trying not only to physically break the inverters, but also to wreck the batteries and solar panels, blow fuses, and burn out substations. (Consider that if the inverters burn out and start fires, that's a feature for the attacker rather than a bug!)
So yes, not only is it 25 medium sized nuclear power plants, it's probably much higher than that! And worse, that number is growing exponentially with each year of the renewable transition.
This was probably the scariest security exposé in a long time. It's much, much worse than some zero-day for iPhones.
A bad iPhone bug might kill a few people who can't call emergency services, and cause a couple billion of diffuse economic damage across the world. This set of bugs might kill tens of thousands by blowing up substations and causing outages at thousands to millions of homes, businesses, and factories during a heat wave. And the economic damage will not only be much higher, it will be concentrated.
The big risk is turning them all off at the same time, while under maximum load. That will cause a brown-out that no other power generator can pick up that quickly. If the grid frequency drops far enough big parts of the grid will disconnect and cause blackouts to industry or whole areas.
It will take a lot of time to recover from that situation. Especially if it's done to the neighbouring grids as well so they can't step in to pick up some of the load.
Power transformers have a loooooooot of thermal wiggle room before they fail in such a way and usually have non-computerized triggers for associated breakers, and (at least if done to code, which is not a given I'll admit) so do inverters and every other part. If you try to burn them out, the fuses will fail physically before they'll be a fire hazard.
Instead, they're going to get a few guys with guns to shoot some step-up transformers and drive away.
The problem with infosec people is they tend to wildly overestimate cyber attack potential and wildly underestimate the equivalent of the 5 dollar wrench attack.
For solar panels, the nameplate capacity is usually also the power generated at the peak production time, which is the moment when an attacker turning off all inverters at the same time would have the most impact.
That is: for an attack (or any other failure), the most important metric is not the total power produced, but the instantaneous power production, which is the amount which has to be absorbed by the "spinning reserve" of other power plants when one power plant suddenly goes offline.
The peak theoretical power output of a solar panel depends on where it's installed, inclination, temperature, elevation, and so on. The actual peak power is going to take weather and dirty panels into account.
1kW nameplate in Ireland (or the Netherlands) is never going to give you an instantaneous 1kW output -- you're going to be lucky to see 60% of that.
So in addition to the other stuff people mentioned, you might be off by another factor of 2 there. They also said “medium sized” so let’s call it 3.
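To make that ~60% figure concrete, here's a rough derating sketch; every factor is an assumption I'm pulling out of thin air for illustration, not measured data:

    # Illustrative derating from nameplate DC to a plausible instantaneous
    # AC peak at a high-latitude site. All factors are assumed, not measured.
    nameplate_kw = 1.0
    derate = {
        "inverter_and_wiring_losses": 0.96,  # assumed
        "cell_temperature": 0.90,            # assumed: hot panels lose output
        "soiling_and_degradation": 0.95,     # assumed: dirty, aging panels
        "irradiance_vs_stc": 0.75,           # assumed: NL rarely reaches the
                                             # 1000 W/m^2 STC rating condition
    }
    peak_kw = nameplate_kw
    for factor in derate.values():
        peak_kw *= factor
    print(f"~{peak_kw:.2f} kW peak per kW nameplate")  # ~0.62, i.e. ~60%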
The only new (non-Russian) European design built in the past 15 years is the EPR at 1600 MW. The only new American design built in the past 15 years is the AP1000 which as the name suggests is 1000 MW (technically 1100). AP1000 uses a massively simplified design to try and be much safer than other designs (NRC calculations say something like an order of magnitude) but is not cost competitive against most other forms of power generation. Which is why after Vogtle 3 and 4 there are no plans for more of them in the US.
It's not that the EPR is any better: they are actually doing worse in terms of money and schedule slippage than Vogtle did. Flamanville 3 had its first concrete poured in 2007 and still hasn't generated a single net watt!
It turns out that the pause in building nuclear reactors in the West from about 1995 to 2005 (in the US it was actually longer: after Three Mile Island in the early 1980s, plants already under construction were finished but nothing new was started, and Western Europe followed a similar path after Chernobyl) basically gutted the nuclear construction industries in both, and they haven't been built back up. The Russians kept at it, and the South Koreans have moved into the market (and China is building a huge number domestically, though I don't think they've built any internationally), but Western Europe and the US are far behind, and after Fukushima Daiichi I strongly suspect the Japanese are in the same boat. Without trained workers you can't build these in any predictable way, and when you pause construction for a decade you lose all of the trained workers, and it's really hard to build that workforce back up again.
Isn't latitude taken into account by grid operators when determining expected peak output? The owners would otherwise be installing bigger (more expensive) converters than needed, so they'd know this value at least roughly. Even smarter would be to include the angle etc., but I'm not sure what level of detail it goes into compared to latitude, which is a very well-known and exceedingly easy value to look up for an area.
I certainly see your point about it not being apples to apples, but on a cloudless summer day, the output afaik genuinely would be the stated figure (less degradation and capacity issues). The country is small enough that it's also not unlikely that we all have a cloudless day at the same time
One might well expect some sun in summer and put some of the used-in-winter gas works into maintenance or, in the future, count on summer production of hydrogen— although hacks are likely a transient issue so I wouldn't foresee significant problems there
The distinction is important, especially in the Netherlands, which has a capacity factor of only about 10%-15%, whereas most of the US will be at least 20%-25%, which is twice as high.
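For reference, capacity factor is just actual annual energy divided by what the nameplate would produce running flat out all year. A sketch using the 21 TWh figure from upthread; the installed-capacity number is my rough assumption:

    HOURS_PER_YEAR = 8760
    nameplate_gw = 23.0  # assumed: rough NL installed PV, end of 2023
    annual_twh = 21.0    # figure quoted upthread

    cf = (annual_twh * 1000) / (nameplate_gw * HOURS_PER_YEAR)
    print(f"Capacity factor: {cf:.1%}")  # ~10.4%, consistent with 10-15%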
I'm not sure of the typical number of reactors in the Netherlands, but using the US average of 1.6/power plant may not be the most representative comparison.
With too much production the frequency skyrockets; with too little, it plunges.
Classic grids are designed over large areas to average the load seen by big power plants, so each plant sees only small instantaneous changes in its output demand: say, a 50MW power plant sees 100-300kW of instantaneous change, which it can handle quickly enough. With massive PV, wind, etc., the demand seen by a big power plant might change MUCH more, like a 50MW plant suddenly needing to scale back or power up by 10MW, and that's way too much to sustain. When this happens, if demand is too high the frequency plunges and grid dispatchers have to cut off large areas to lower demand (so-called rolling blackouts); when demand drops too quickly the frequency skyrockets, large plants can't scale back fast enough, and they simply disconnect. Once they disconnect, generation falls and the frequency stabilizes. Unfortunately most PV is grid-tied: if a power plant disconnects, most PV inverters that saw the frequency spike disconnect as well, creating a cascading effect of quickly alternating too-low and too-high frequency and causing blackouts over vast areas.
Long story short, a potential attack is simply planting a command: "at solar noon on 26 June, stop injecting into the grid, and keep not injecting until solar noon + 5 minutes". Within just a second or so (allowing for time sync issues), all inverters of a certain brand might stop injecting, generation falls, there are a few rolling blackouts, and the large power plants compensate quickly. Then the 5-minute timer expires and all inverters resume injecting en masse while the large plants are still at full power; the frequency skyrockets, large plants disconnect, most grid-tied inverters follow them, and there is a large chance an entire geographic segment of the grid falls. Interconnection operators have no idea what to do in so little time, and the blackout might quickly grow even larger, with almost all interconnections going down to protect the still-active parts of the grid, causing more frequency instability and so more blackouts.
Such an attack might lead to days without power.
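To see why the time scale is so brutal, here's a toy rate-of-change-of-frequency estimate from the textbook swing equation. All numbers are assumed round figures, not real NL grid parameters:

    # df/dt ~= f0 * dP / (2 * H * S): frequency slew after a sudden imbalance.
    f0 = 50.0   # Hz, nominal grid frequency
    H = 4.0     # s, assumed aggregate inertia constant
    S = 30e9    # W, assumed total system size
    dP = -10e9  # W, assumed PV generation vanishing all at once

    rocof = f0 * dP / (2 * H * S)
    print(f"ROCOF: {rocof:.2f} Hz/s")                                  # ~ -2.1
    print(f"Seconds to 49 Hz load shedding: {(49 - f0) / rocof:.2f}")  # ~0.5

Half a second to the first load-shedding threshold is not something human operators can react to; only automatic protection acts at that speed.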
I get your comparison and the following is probably obvious to most people, but I feel like it really needs to be said: this requires being there, having a truck, and the willingness to risk almost certain jail time, whereas taking down all SolarEdge installations on the planet could probably be done by an anonymous teenager in a foreign country with nothing but a computer and a Tor client.
(I mention SolarEdge because I just had to deal with one of their systems and it pissed me off, I don't actually know of any vulns)
When it is sunny in the netherlands, it is likely sunny everywhere in NL because of how small the country is.
This is the situation where having so much solar power capacity (kW) is dangerous.
The risk scales with energy output, but I would not call nameplate capacity a "completely useless metric".
Every adult in Seattle eventually has to learn that if you have an activity planned on the other side of town, if you cancel it because it’s raining at your house you’re not going to get anything done. You have to phone a friend or just show up and then decide if you’re going to cancel due to weather.
Now to be fair, in the case of Seattle, there’s a mountain that multiplies this effect north versus south. NL doesn’t have that, but if you look at the weather satellite at the time of my writing, there are long narrow strips of precip over England that are taller but much narrower than NL.
Often friends of mine who live in my city report rain when I see none, or no rain when it's raining outside my window. That's to say nothing of a location 30km away, where basically anything can happen. Do we live on the same planet?
> It wasn’t necessary from a technical standpoint to let everything run through the manufacturer’s servers, but it was chosen to do it this way.
(emphasis from article)
I'm working on an IoT cloud system. It was chosen to be done this way because neither consumers nor installers have any expertise whatsoever to set up their own network or make any devices accessible from outside (and they want their panels to be accessible when they are outside their home). I can do it, most readers of HN could do it, but the typical consumer or installer can't. Sad but true.
This also makes it simpler from a programming point of view - instead of having separate cloud sync & local control protocols, you just have one local protocol and you merely tunnel it through the (dumb) cloud if you can't connect directly.
> This also makes it simpler from a programming point of view - instead of having separate cloud sync & local control protocols, you just have one local protocol and you merely tunnel it through the (dumb) cloud if you can't connect directly.
Having only a cloud protocol is even simpler; I've done all of the above (I do the backend and our firmware).
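For what it's worth, the "dumb cloud" relay described in the quote above can be genuinely tiny. A sketch with asyncio; the hello-line framing, port, and names are made up for illustration:

    import asyncio

    waiting = {}  # device_id -> (reader, writer) of an inverter awaiting its app

    async def pipe(src, dst):
        # Copy bytes one way; the relay never parses the device protocol.
        try:
            while data := await src.read(4096):
                dst.write(data)
                await dst.drain()
        finally:
            dst.close()

    async def handle(reader, writer):
        # Assumed hello line: "DEVICE <id>" or "APP <id>".
        role, dev_id = (await reader.readline()).decode().split()
        if role == "DEVICE":
            waiting[dev_id] = (reader, writer)
        elif role == "APP" and dev_id in waiting:
            dev_reader, dev_writer = waiting.pop(dev_id)
            # From here on it's protocol-agnostic: bytes in, bytes out.
            await asyncio.gather(pipe(reader, dev_writer), pipe(dev_reader, writer))
        else:
            writer.close()

    async def main():
        server = await asyncio.start_server(handle, "0.0.0.0", 8765)
        async with server:
            await server.serve_forever()

    asyncio.run(main())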
I have done a controller for a 40-foot container battery, and it wasn't like we received any API from Hitachi (the battery manufacturer). We had to write everything ourselves.
However, for purpose of the security of the nation's power grid, I don't just need my inverter to be secure, I need pretty much everyone's inverter to be secure. If an attack bricks 95% of solar inverters, the fact the nerdiest 5% of users have their inverters airgapped won't stop the grid having a lot of problems.
That said, Apple Homekit integration is local network based, so products that do that and the typical manufacturer cloud system have done both paths.
Homekit is a pain to use without Apple hardware/software, but there you go. (There's a plugin for HomeAssistant, but I'm still classifying that as a pain)
I can understand people wanting to be able to see the metering live, but remote control of the panels just seems like a security incident waiting to happen. I'm quite glad I have a non-internet-connected inverter.
The standard go-to Raspberry Pi solution will up and die every few months. And half the time you'll need physical access to get it back. It takes a lot of work to develop an embedded system that has enough reliability.
Like you wouldn't believe.
My most memorable case of insecure IoT devices: a wifi socket that was sending the wifi SSID and password of the network in cleartext in every ping packet to Chinese servers.
Large manufacturers would like you to think this. It would provide them a convenient excuse for not even trying to differentiate the market along these lines.
> We (the society) don't even want to change the default password on most things.
Actually.. I just want to use my device _first_ and not go through some manufacturer controlled song and dance of dark patterns.
In my experience, if you don't preload the user with this garbage, and instead wait for them to have an actual _need_ that depends on the feature, they're FAR more compliant with following even lengthy instructions to get it done.
It's more a problem of aligned benefits and timing than anything else.
The sole remark I have against them (besides the not-so-good software quality) is the impossibility for individual owners to do offline updates: we can upgrade via the VRM portal but not download the firmware and flash it locally, even though the needed device is on sale, because they offer firmware files only to registered vendors.
Fronius (to stay within the EU) has a local WebUI which needs a connection only for firmware updates. Unlike Victron it's not a Debian-based system with sources available but a closed-source one, and they unfortunately offer only a very limited REST API and a very slow Modbus, but still, anything can be done locally.
I'm not sure, since I don't own any myself, but SMA (Germany) and Enphase (USA) seem to be able to operate offline as well.
That said, yes, you are damn right in saying most installers have no competence. Thankfully, where I live self-installation is allowed (at least so far), but that simply demands better UIs and training for installers, perhaps avoiding the current state of the industry: an immense amount of CRAP at the OEM level, with most "state of the art" systems not at all designed to be used in good ways (see below), and prices to the customer so absurdly high that installing PV stops being interesting. Four years ago I paid 11,500€ for my 5kWp/8kWh LFP system; the smallest offer to have it designed and built by someone else was ~30,000€, the most expensive ~50,000€, and all 6 offers I tried showed some unpleasant issues and incompetence.
About OEMs, just observe how ABSURD it is that there is no direct DC-to-DC car charger. Most EVs now have 400V batteries, the same as stationary batteries, with the same BMS comms. Why the hell not sell a direct MPPT-to-CCS combo solution? OK, we don't ONLY charge from the Sun, but then it's perfectly possible to have a combo charging station with DC from the PV and AC from the grid, switching from one to the other as needed. The double conversion loses ~30% of the energy.
Why are there no high-power DC-to-DC appliances for devices that still run on DC internally (A/C, heat-pump hot-water heaters, etc.)?
Why not a modern standard protocol for integration of anything instead of building walled gardens?
Long story short, OEMs have chosen the cloud model partly because most installers are electricians who use a desktop holding the mouse with one hand and clicking with the other, but also because they have no intention of making solutions that are interesting to users in an open market...
Certainly this requires a lot of progress to secure the IoT space, but we can't allow the enshittification of clouds to continue.
I call bullshit. They've been conditioned to think that they want it, because all product brochures have it.
What kind of tangible benefit could there be to know how bright the sun is at your home while you're not there? A cool party trick to virtue signal or a break between doomscrolling, I suppose, but it's not like you're gonna jump up and drive back to... what even could you do if you knew?
You gave the reason WHY they want it. Maybe consumers were conditioned to want access, but they still want access. If you give them similar devices, they will choose the one that has an application or webpage to see how their big investment is actually performing. It's not about the current state of the device; it's all about historical data and month-by-month savings presented as a nice graph. They will check this maybe every week or month (later, every several months), but buyers still want to know what their installation did for them.
It reminds me of <https://xkcd.com/627/>, but when you're launching a product that isn't good enough.
It's hard enough to open up a port even with UPnP (typically disabled) and other made-for-purpose tech. Torrent clients end up trying to poke holes and such. Service discovery might work via local UDP broadcast, or it might not. LAN clients might live at 10.* or 192.* or be isolated by default. It's easier to just go onto the public internet and contact some mysterious server. Botnet by design.
Governments should have done the same thing as with digital TV transition(s) : first ban selling devices that can't do IPv6, then ban selling (most) devices advertising they can do IPv4.
I'm thinking about screwing it open and desoldering the wifi module but honestly I'll replace it in the next couple of years so I'd rather not kill myself by making a mistake.
It may be sufficient to just disconnect the antennas from the WiFi module, that will help prevent any network connections.
Compromising an individual one by getting close-range physical access will be a local annoyance but wouldn't scale to a level where it can threaten the grid, so it limits the pool of potential attackers to local vandals (which can achieve their goals easier by just throwing rocks at your panels).
Option 1: Artificially sell the thing as an ongoing cost.
Option 2: Artificially make the depreciation cycle faster. Get consumers to regularly replace it anyway with upgrades or trend changes.
Option 3: Make ongoing money from the item via a side channel (TVs are great at this one)
Option 4: Manufacture and sell a huge number of different goods across market segments and weather the slow depreciation cycle (Oxo does this).
Option 5: Sell some consumable good you can get recurring revenue from along side the item (Coffee pods, printer ink)
Option 6: Make up the money on maintenance, repairs, and financing. Become a bank.
Option 7: Make your money in some other sustainable profitable business and drop the product once you've gotten what you can for it.
All of these kinda suck and option 1 is easy to implement.
Be careful with this language, especially when you're involving politicians and the non-technical.
The current atrocity of criminally negligent IT infrastructure right now is mostly created and driven by people with diplomas, including from the most prestigious schools. (And a top HN story over the weekend was one of the most famous tech company execs, turned government advisor, advising students at Stanford to behave unethically, and then get enough money to pay lawyers to make the consequences go away.)
And most of the certificates we do have are individual certifications that are largely nonsense vendor training and lock-in, and these same people are then assembling and operating systems from the criminally negligent vendors. And our IT practices certifications are largely inadequate compliance theatre, letting people off the hook for actual sufficient competence.
My best guess for how to start to fix this is to hold companies accountable. For example, CrowdStrike (not the worst offender, but recent example): treat it as negligence, hold them liable for all costs, which I'd guess might destroy the stock, and make C-suite and upper parts of the org chart fear prison time as a very serious investigation proceeds. I'd guess seeing that the game has changed would start to align investors and executives at other companies. What could follow next (with growing pains) is a big shakeup of the rest of the org chart and practices -- as companies figure out that they have to kill off all the culture of job-hopping, resume-driven-development, Leetcode fratbro culture, IT vendor shop fiefdoms, etc. I'd guess some companies will be wiped out as they flail around, since they'll still have too many people wired to play the old game, who will see no career option other than to try to fake it till they make it at the new, responsible game (ironically, and self-defeatingly, taking the company down with them).
Punishment for mistakes is what led to the Chernobyl disaster.
But I'm concerned about the entire field of software, which doesn't have that sense of responsibility, and I don't see how it would get it. The software industry -- both companies and workers -- is guided almost entirely by money. To the point that in HN discussions it's often hard to explain to many people why it would be good to behave in any way other than complete mercenary self-interest. So I don't see any way to get alignment other than to link money to it. If people see that as punishment, so be it.
we see competition in cloud/IaaS providers because they actually need to build datacenters and networks, so there's some price floor, but when it comes to "antivirus", CrowdStrike was able to basically corner the market, and downstream from them not a lot of organizations/clients/customers can justify having actual independent hot-spare backups (or special procedures for updating CS signatures by only allowing it to phone home on a test env first)
the cultural symptoms you describe in so much detail are basically the froth (the economic inefficiencies afforded) on top of all the actual economic activity that's sloshing around various cost-benefit optimum points.
and it's very hard to move away from this, because in general IT is standardized enough that any business that needs some kind of IT-as-a-service will basically be forced to pick based on cost, and will basically pick whatever others in their sector pick -- and even if there are multiple providers they will usually converge on the same technology (because it's software) -- thus minimizing the financial risk for clients/customers/downstream, even as the actual global/systemic risk increases.
Knowledge without reasoning is how you get mired in bureaucracy.
BS gatekeeping rituals and compliance-for-sale theatre are arguably just symptoms -- of companies and individuals not being aligned with developing trustworthy systems.
This article shows that we inadvertently introduced new choke points. And of course the global security environment makes it more worrisome.
Renewable and decentralized are different axes.
That's why instead of pushing self-consumption and semi-autonomous systems we push grid-tied and cloud-tied crap, tied to someone else's service, a slave to it. The "in 2030 you'll own nothing" is already a reality in modern cars, which are connected to the OEM with far more access than the formal owner has, and in much modern IoT and cloud+mobile crap. People do not even understand that they do not own, until it's too late.
Another simple example: in most of the world, banks have open standards between them to automatically exchange transactions; in the EU that's the Open Banking APIs, with signed XML and JSON feeds. There is NO REASON to block customers from directly using such APIs from a personal desktop client. All banks I know block such usage. So you do not have all your transactions, signed by the bank, on your own iron; you have NOTHING in hand. In case of "serious issues" you have nothing to prove what you have in your bank account or what you have done with your money. In the past we had paper records as proof; we now have signed XML/JSON, which is even better than paper, being much harder to falsify. But no, we miss out, because the 99% must own nothing.
We have connected cars with a SIM inside, but instead of the car offering APIs and a client, or perhaps even a WebUI, directly to its formal owner, we have to pass through the OEM, the real substantive owner. And we can't even disconnect the car: in the EU it's even illegal for a new car to be disconnected, since the emergency eCall service must be active on all new cars.
And so on.
This focus on paper qualification to mitigate risk seems a very European approach. Not saying it is wrong - it is just not emphasized as strongly elsewhere. And while it seems like a good fit for a slow-moving industry with high expectations of safety, the solar/wind world is not a slow-moving industry.
From experience in this sector though, I think the real issue is a lack of technical awareness and competency with enough breadth to extend into the "digital" domain - often products like these are developed by people from the "power" domain (who don't necessarily recognise off the top of their head that 512-bit RSA is a #badthing and not enough to use to protect aggregated energy systems that are controllable from a single location).
Clearly formal diplomas/certificates are not needed for that - some practical hands-on knowledge and experience would help a lot there.
When a product gets a network interface on it, or runs programmable firmware, we should hear discussions about A/B boot, signatures, key revocation, crypto agility to enable post quantum cryptography algorithms, etc. Instead, the focus will be on low-cost development of a mobile app, controlled via the lowest-possible-cost vendor server back-end API that gets the product shipped to market quickly.
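Even the signature check alone is not much code once you use a real library; the hard parts are key management, rollback protection, and revocation, which this sketch (Ed25519 via the Python cryptography package) completely ignores:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def firmware_is_authentic(image: bytes, sig: bytes, vendor_pubkey: bytes) -> bool:
        # Verify the firmware image against the vendor's public key before
        # flashing. Key distribution and revocation are not handled here.
        try:
            Ed25519PublicKey.from_public_bytes(vendor_pubkey).verify(sig, image)
            return True
        except InvalidSignature:
            return False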
Let's not even go near the "embedded system" mindset of not patching and staying up to date - embedded systems are a good place to meet Linux 2.4 or 2.6, even today... Vendors ship whatever their CPU chipset vendor gives them as a board support package, generally as a "tossed over the wall" lump of code.
I doubt many of these issues (which seem to be commercial/price driven) will be resolved through paperwork, as you say.
How someone would wave a 20 year old piece of paper as evidence that they know how to use solar tech that was developed last year, I don’t know.
I do not allow any system into my environments (at home and at work) that requires a third party data connection function.
There are way too many incidents where a provider, cloud or otherwise which required connection failed for various reasons.
(e.g., Cisco Spark Board, Xerox ConnectKey, Google Cloud Print, WeWork's Connected devices, Lattice Engines, MS Groove Music Pass, Shyp, Adobe Business Catalyst, Samsara, Zune, FuelBand, Anki Vector Robot, Google Stadia, Pebble)
Despite this, I am very leery of regulating solar power specifically.
You can set up a local network that has no WAN connectivity on it. Beyond that, it's difficult to verify even the most basic security properties. Certifying is another step up (although you could argue certifying is just a third party saying something passed a finite list of tests) - the real challenge is defining a meaningful certification scheme.
There has been some good work towards consumer IoT device security (i.e. the 13 steps approach from the UK), that covers some of the lowest hanging fruit - https://www.gov.uk/government/publications/code-of-practice-...
The trouble is that these set out principles, but it's hard to validate those principles without having about the same amount of knowledge as required to build an equivalent system in the first place.
If you at least know the system is not connected to a WAN, you can limit the assurance required (look for WiFi functionality, new SSIDs, and attempts to connect to open networks), but at a certain point you need to be able to trust the vendor (else they could put a hard-coded "time bomb" into the code for the solutions they develop).
I don't see much value in the academic/theoretical approaches to verification (for a consumer or stakeholder concerned by issues like these), as they tend to operate on an unrealistic set of assumptions (i.e. source code or similar levels of unrealistic access) - the reality is it could take a few days for a good embedded device hacker to even get binary firmware extracted from a device, and source code is likely a dream for products built to the lowest price overseas and imported.
Are you suggesting using smart phones should count in "not allowing it in"? Then yes, I try to where possible. I do not depend on a smart phone. All functionality that are operationally necessary can be done elsewhere without major delays or impact.
Anyway, I agree that there should be a regulation that forbids remote management, so that you can only consult data remotely, in a read-only manner (you could air-gap the inverter from the internet gateway using a one-way RS232 connection on which the inverter just writes continuously). And if grid operators need to be able to turn solar off, they should install relays controlled by their own infrastructure.
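The gateway side of that one-way RS232 idea is almost trivial. A sketch using pyserial, with a made-up one-JSON-record-per-line format; with the inverter's RX pin physically unconnected, nothing this process does can send a command back:

    import json
    import serial  # pip install pyserial

    port = serial.Serial("/dev/ttyUSB0", 9600, timeout=5)

    while True:
        line = port.readline().decode(errors="replace").strip()
        if not line:
            continue  # read timeout: inverter idle, or it's dark out
        try:
            sample = json.loads(line)  # assumed format: {"w": 1532, "v": 231.4}
        except json.JSONDecodeError:
            continue  # line noise; a one-way link can't ask for a resend
        print(sample)  # hand off to local logging / dashboard here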
As always, the vulnerability of enabling remote updates. When will people learn? Updates should only be possible if there's a physical switch (not a software switch) on the device. If it's "off", no updates are possible.
Isn't the most devastating attack vector remotely installing malware? With a hardware switch, none of that malware will survive a reboot of the device.
I remember when hard disk drives came with a write-enable jumper. Then, once you've made a backup, the jumper is removed. Then it is impossible to accidentally or maliciously write over your precious backup.
Seems to me that this is very similar to the situation with IoT only with higher stakes. I appreciate this article's presentation of inverter and grid trust.
Beyond trusting customer inverters to do the right thing, I wonder if there is a method for safing a grid at the hardware level. Naive question: could there be a grid provider device that prevents overcurrent or incorrectly clocked cycles?
There isn't really any practical way to prevent overvoltage though. So a rogue controller in charge of all the solar systems in a street might be able to do quite a lot of damage to consumer devices.
A problem from the utility point of view is that they can no longer guarantee that the 240 V side of the distribution system is safe to work on just by tripping a breaker on either side of the distribution transformer. So all work on the 240 V distribution system has to be done with the assumption that the system is live.
Eventually regulations will be updated, if necessary, to deal with large numbers of solar installations on domestic buildings.
To put it more simply: if the phase is wrong, the effect is the same as a short circuit, which fuses and circuit breakers protect against. If the frequency is wrong, the phase will become wrong after a number of cycles.
> There isn't really any practical way to prevent overvoltage though. So a rogue controller in charge of all the solar systems in a street might be able to do quite a lot of damage to consumer devices.
There is, it's called a surge protector or surge protective device (SPD). It converts any overvoltage above a certain level into a short to ground, which then trips the fuse or circuit breaker. It's often used as a protection against lightning-induced currents.
> A problem from the utility point of view is that they can no longer guarantee that the 240 V side of the distribution system is safe to work on just by tripping a breaker on either side of the distribution transformer. So all work on the 240 V distribution system has to be done with the assumption that the system is live.
From what I've seen, the utility workers usually ground the wiring when working on it (they have a special-purpose device for that). Once it's safely connected to ground, it's no longer live.
https://github.com/veganmosfet/Balcony_in_the_cloud
https://github.com/veganmosfet/SolarFlareSec
https://github.com/veganmosfet/CyberEclipse
https://github.com/veganmosfet/SecureWatt
A big mess, but it's getting slowly better...
There is a more important part: while with PV we can still go offline, with cars we can't. My car is connected and I can do NOTHING to manage it; it's managed by its OEM behind my back, and that's a much bigger threat, since individual cars can paralyze the nation if properly blocked at critical points of the road network.
At the largest scale, that's the reason we can't have a national smart grid but only individual smart microgrids, meaning PV should be used only for self-consumption, NOT grid-tied like in California.
This results in almost all of the situations threads here address.
In a better timeline, everyone has stable and secure OSs on all their devices, and the default is for everything to be locally networked, with optional monitoring from the outside via a data diode.
They measure the PV panels anyway for MPPT.
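Right; for anyone unfamiliar, MPPT already implies a continuous measure-and-adjust loop. A toy perturb-and-observe step; the read_*/set_duty hooks are hypothetical hardware bindings:

    def mppt_step(state, read_voltage, read_current, set_duty, step=0.005):
        # Perturb-and-observe: nudge the converter duty cycle and keep the
        # direction that increased power; reverse when power drops.
        # state starts as {"duty": 0.5, "direction": 1, "last_power": 0.0}
        power = read_voltage() * read_current()
        if power < state["last_power"]:
            state["direction"] *= -1
        state["duty"] = min(max(state["duty"] + state["direction"] * step, 0.1), 0.9)
        state["last_power"] = power
        set_duty(state["duty"])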
It would be interesting to use a solar panel as a sensor for moonbounce of an IR laser, though. You'd have to try at the almost new moon, with a window of a few hours/month ;-)
A data diode applies to a specific connection. It's easy to have a serial port that goes one way.
Preventing any possible input to an already compromised device is much harder. But if your device isn't already compromised then it won't be looking at the input light levels for commands.
This seems likely to be more fruitful than attempting to regulate 400 Chinese panel manufacturers.
What am I missing?
The "plants" in this context are homes and businesses. The junction points between plants and grid are the inverters sold by the panel manufacturers.
What does harm at scale is the variable output, especially from small PV installations built out of incentives, NOT personal power plants. The grid is sized around some large power plants serving a large set of customers; their absorption varies, but if the grid is vast enough (and not too vast), variation tends to be slow on average: say, a 50MW power plant experiences 100-200kW of demand variation in a very short time. It can compensate easily, keeping the grid frequency stable. With a significant amount of grid-injecting PV, variation might be MUCH bigger, creating significant stability issues: when injection rises too quickly the frequency skyrockets, large power plants can't decrease their output fast enough and risk disconnection, which in turn might suddenly put large PV plants offline, creating a cascading effect of large blackouts.
That's the real issue with grid-connected and tied renewables and another reason why we need to go toward self-consumption NOT injection.
Owners of small-scale solar panel installations are paid a fixed price per kWh in many EU countries, regardless of the market price. The taxpayers pick up the tab, I guess.
this incentivizes better capacity and availability forecasting for solar installations, and preserves the usual dynamics of the open energy market.
..
the problem is with these super small ones, where initially states just let people connect it, because it's green, yey. (but now DSOs started to make connecting waay harder. and regulators are investigating, eg. in Spain. [0])
of course the non-residential installations already usually need aFRR capability. (eg. this is the case in Hungary.)
and there's already a market for "reserves" in the EU. (but the interconnection rate is below the target 15% as far as I know. but still, there are intra-state markets, etc.) and we can see that when solar is high the reserve prices are surging. [1]
[0] https://caneurope.org/content/uploads/2024/04/Rooftop-Solar-...
[1] https://gemenergyanalytics.substack.com/p/european-power-res...
The inverter itself functions perfectly fine without an internet connection, and will display instantaneous power output on the screen. I could just be content with that and look at my monthly power bill to see how much I generated and how much I used each month and never connect it to the internet.
To get any kind of data logging & history from the inverter, it must be internet connected (wifi or ethernet). And all of that is through the manufacturer's website, which constantly nags me to "upgrade to pro" for some obscure feature that I'll never use.
A micro inverter for each panel would be very costly. In a 1 MW plant you will have around 4000 panels; communicating with that many electronic devices would be a headache.
Perhaps that's a component, but one really doesn't need to think about it too hard to identify better explanations for why this particular energy source is held to unusually high regulatory standards.
I don't have an opinion as to whether other large-scale sources of energy should be held to similar standards, but to suggest that solar energy's failure modes are comparable to nuclear energy seems intentionally misleading.
> In the Netherlands alone, these solar panels generate a power output equivalent to at least 25 medium sized nuclear power plants.
> Because everything runs through the manufacturer, they are able to turn all panels on and off. Or install software on the inverters so that the wrong current flows into the grid. Now, a manufacturer won’t do this intentionally, but it is easy enough to mess this up.
> As an interim step, we might need to demand that control panels stick to providing pretty graphs, and make it impossible to remotely switch panels/loaders/batteries on or off.
Basically, if a hacker were to make all batteries (or panels) suddenly switch between full discharge and full charge every second or so, it would tear down the electric grid. Voltage and frequency would swing rapidly, and whatever plants are riding load would struggle.
This could create a massive power outage; worse, there is a huge risk that it could damage power plants and other infrastructure.
> There are also people that claim that the many Chinese companies managing our power panels for us might intentionally want to harm us. Who knows.
Wait, seriously? The European power system relies on Chinese companies not messing it up remotely? And the debate is over whether the companies will stay nice? For heaven's sake, isn't it obvious that during a war the Chinese government could force them to just destroy the continent's power system remotely? How is this not seen as an extreme continental security risk?
Idaho National Lab is one of those places that researches this. https://inl.gov - their domains are energy (primarily nuclear and integrated) and national security ... and securing the grid is the intersection of that.
And some time back... https://www.wired.com/story/how-30-lines-of-code-blew-up-27-... ( https://web.archive.org/web/20201101002448/https://www.wired... ) . The story is from 2020. The event is from 2007.
The test footage linked in the article is on YouTube - https://youtu.be/LM8kLaJ2NDU
The wikipedia article on the test: https://en.wikipedia.org/wiki/Aurora_Generator_Test
From the wired article the key part of how it broke:
> A protective relay attached to that generator was designed to prevent it from connecting to the rest of the power system without first syncing to that exact rhythm: 60 hertz. But Assante’s hacker in Idaho Falls had just reprogrammed that safeguard device, flipping its logic on its head.
> At 11:33 am and 23 seconds, the protective relay observed that the generator was perfectly synced. But then its corrupted brain did the opposite of what it was meant to do: It opened a circuit breaker to disconnect the machine.
> When the generator was detached from the larger circuit of Idaho National Laboratory’s electrical grid and relieved of the burden of sharing its energy with that vast system, it instantly began to accelerate, spinning faster, like a pack of horses that had been let loose from its carriage. As soon as the protective relay observed that the generator’s rotation had sped up to be fully out of sync with the rest of the grid, its maliciously flipped logic immediately reconnected it to the grid’s machinery.
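Stripped of the hardware, the logic flip described above is depressingly small. A paraphrase for illustration, not the actual relay code:

    def breaker_should_close(in_sync: bool, corrupted: bool) -> bool:
        # Normal rule: connect the generator only when in sync with the grid.
        # Flipped rule: open when in sync, reconnect once the freed generator
        # has accelerated out of phase -- a torque shock on every reclose.
        return in_sync if not corrupted else not in_sync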
I'm not sure everyone is really thinking clearly here.
Don't get me wrong, they should get rid of this practice of cloud monitoring. A consumer should be able to access monitoring over the internet without an intermediary. They should, of course, be allowed to contract with an intermediary if that is their desire.
But the security argument?
Yeah, that ship has sailed. Total war, means total war. Your power grid, your internet, your communications, and your fossil fuel deliveries will all see material disruption. I wouldn't count on being able to stop those disruptions by banning a few web sites. (And frankly, during total war, those disruptions would be the least of your problems in any case.)
Best bet for places like Europe, China, the US and Russia is, just don't do total war with each other. If you choose to do it anyway, then you can see what you can expect from that in the documents filed under "Play stupid games, win stupid prizes."
Germany was an enemy once as well
On top of that, PV installed at homes is too insignificant to cause such trouble. As the size of an installation increases, you get a different connection agreement and additional requirements. You can't install 15 MW and connect through the kind of inverters used at home, which top out around 100 kW. Even 15 MW is an insignificant change for a grid.
There have been plenty of botnets in the past. Some even in the millions of computers. If such a botnet decided to make every node's power draw fluctuate per above, wouldn't this cause the same type of problem? Is there a reason we've never seen this happen despite large enough networks of hacked machines existing?
And that presumes an attacker can switch all 8.4 million computers on and off in a small timeframe. 100% of them would need to be on, online, and hacked.
I don't think this is a realistic problem.
Tesla f-ing up an OTA update so that it suddenly switches off all charging Teslas is probably a worse theoretical scenario.
I found a figure on Wikipedia saying that the NL's 4.7GW worth of offshore wind capacity is 16% of their total electricity demand nationwide. 4.7/.16 = 30GW total, so this theorized computer load attack would represent about 10% of their grid's total capacity. Can their grid add and shed that much load that quickly? That's the part I doubt.
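Back-of-the-envelope on where a ~10% figure could come from (the per-machine swing is my assumption; the bot count and wind figures are from upthread):

    bots = 8.4e6             # hacked machines, figure from upthread
    swing_w = 350            # assumed idle-to-full-load swing per machine
    offshore_wind_gw = 4.7   # Wikipedia figure quoted above
    share_of_demand = 0.16

    grid_gw = offshore_wind_gw / share_of_demand   # ~29 GW total
    attack_gw = bots * swing_w / 1e9               # ~2.9 GW
    print(f"{attack_gw:.1f} GW swing on a {grid_gw:.0f} GW grid "
          f"= {attack_gw / grid_gw:.0%}")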
and sounds like something that could easily occur
The hardware is modular and the software is above any competition. Choose the responsible vendors.
This is because of the market for carbon credits. When you installed your PV panels, someone estimated how much electricity they would generate over the next 10-15 years. Tradable carbon credits were created based on that estimate and went into the marketplace. And for the next 10-15 years they have to verify that the electricity was actually generated, or else someone has to pay back some money. Did you read the fine print on your contract? It is probably you that has to pay it back. You didn't know that one of the "rebates" you got was actually a pre-payment for those credits?!? Should have read the fine print ;)
Oh yeah, BTW: that "rebate" was only your portion of the credits. The installer got some of it (and doesn't have to pay back anything), the person that filled out the paperwork you didn't know existed got some (and doesn't have to pay back anything)...
Cloud-based management platforms should not oversee inverters directly.
Only deep inspection of the silicon and code can improve the situation.
Perhaps Western blocs could develop provably secure silicon IP and code, formally verified, and perform continuous random sampling on imported goods, including full multilayer silicon inspection; publish it for free and refuse to import products that don't cooperate.
Edit: did some research, and apparently it varies - many modern inverters can be remotely controlled by manufacturers, if they're set up to allow it and are internet connected.
The article is still sensationalist about the risks though
>In the increasingly interconnected global economy, the reliance on Cloud Services raises questions about the national security implications of data centers. As these critical economic infrastructure sites, often strategically located underground, underwater, or in remote-cold locales, play a pivotal role, considerations arise regarding the role of military forces in safeguarding their security. While physical security measures and location obscurity provide some protection, the integration of AI into various aspects of daily life and the pervasive influence of cloud-based technologies on devices, as evident in CES GPT-enabled products, further accentuates the importance of these infrastructure sites.
>Notably, instances such as the seizure of a college thesis mapping communication lines in the U.S. underscore the sensitivity of disclosing key communications infrastructure.
>Companies like AWS, running data centers for the Department of Defense (DoD) and Intelligence Community (IC), demonstrate close collaboration between private entities and defense agencies. The question remains: are major cloud service providers actively involved in a national security strategy to protect the private internet infrastructure that underpins the global economy, or does the responsibility solely rest with individual companies?
---
And then I posted this, based on an HNers post about mapping out Nuclear Power Plants:
https://news.ycombinator.com/item?id=41189056
[We can easily map the infrastructure of the cloud and AI -- and their supply chains - and these are increasingly of National Security Concern:]
((Not to mention the actual powerplants being built to exclusively provide datacenter power))
Now, if we add the layers of the SubmarineCableMap [0] and DataCenterMap [1] - and we begin to track shipments
And
https://i.imgur.com/zO0yz6J.png -- Left is nuke, top = cables, bottom = datacenters. I went to ImportYeti to look into the NVIDIA shipments: https://i.imgur.com/k9018EC.png
And you look at the suppliers that are coming from Taiwan, such as the water coolers and power cables, to sus out where they may be shipping to, https://i.imgur.com/B5iWFQ1.png -- but instead, it would be better to find shipping labels for datacenters that are receiving containers from Taiwan, and the same suppliers as NVIDIA for things such as power cables. While the free data on ImportYeti is out of date, it gives a good idea of NVIDIA's supply lines... with the goal of finding out which datacenters are getting such shipments, you can begin to measure the footprint of AI as it grows, and which nuke plants they are likely powered from.
Then, looking into whatever reporting one may access for the consumption/util of the nuke's capacity in various regions, we can estimate the power footprint of growing Global Compute.
DataCenterNews and all sorts of datasets are available - and the ability to create this crawler/tracker is now likely fully implementable.
https://i.imgur.com/gsM75dz.png https://i.imgur.com/a7nGGKh.png