I can imagine essentially a grid with a moat of computationally very difficult structures to send commands through without help from inside the moat. I can see how you'd, like, send a password hash through and something would come out and "get you" for lack of a better explanation.
But that seems like it would be about as secure as a real moat... which is to say, it's only as safe as your expectations about your opponent's capabilities are accurate. So you might have some ideas about how malware must do computation, and you can have your little computational immune system working to make that stochastically impossible. But then the malware writers will figure out weirder ways to do computation. Maybe they give up on penetrating the moat and have the immune system itself do the nefarious computation. Or whatever. My point is that you still end up in an arms race, except it's an arms race within a complex system, so it's even more difficult to understand your opponent's capabilities than it is inside a von Neumann machine.
the system is clear about its capabilities regarding 'correctness'
which seems to imply the programmer's job is to optimise around those built-in inaccuracies
but what those inaccuracies afford is what ackley is calling robustness and indefinite scalability
ackley addresses your question directly with a sorting comparison(o) from a later video
the graph doesn't show where maxwell's demon(i) horde sort would fall, but sorting under corruption is addressed in the paper(ii)
"The demon horde sort's performance may be just adequate, by that measure, but its robustness seems quite impressive. Figure 23 shows results of one experiment in which we randomly corrupted site memory with simulated bit errors at a range of probabilities. Each error occurrence selects a random site and then flips from one to eight of its 64 atomic bits. We can see that while channel length helps performance, it does not help robustness against this system perturbation—but the system is strikingly robust anyway, tolerating upward of 10 multibit corruptions per million events with essentially no visible performance degradation, regardless of channel length. Above about 50 errors/Mevent the system reliably falls apart—and the pathology appears to run a reliable course."
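the corruption model in that quoted experiment is simple enough to sketch. this is a toy of my own that mirrors the described fault injection (random site, 1 to 8 of its 64 bits flipped per event), not the paper's actual harness:

```python
import random

def corrupt(sites, n_errors, rng=random):
    """apply n_errors corruption events to a list of 64-bit site words"""
    for _ in range(n_errors):
        i = rng.randrange(len(sites))        # pick a random site
        for _ in range(rng.randint(1, 8)):   # flip 1 to 8 of its 64 atomic bits
            sites[i] ^= 1 << rng.randrange(64)
    return sites

# e.g. 10 corruption events scattered across 1000 sites
sites = corrupt([0] * 1000, 10)
```

the interesting part is then how many events per million the running computation absorbs before it falls apart, which is what figure 23 measures.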
unintended performance appears to be the reason for the advent of this system, but it goes further, also dealing with unintended performance of hardware
if you have a multi-core system running an incorrectly programmed sort and one core fails, the whole thing shuts down; an incorrectly programmed sorter in the demon horde will keep functioning even with failed cores, affording the opportunity to adjust while it keeps running
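for a rough feel of why failed cells degrade the result instead of halting it, here's a toy sketch (my own construction, not Ackley's actual demon horde sort): each cell owns one local compare-and-swap with its right neighbour, and "failed cores" are cells that simply never fire. the real system is asynchronous; deterministic passes just keep this toy checkable:

```python
import random

def sweep(values, alive):
    """one pass: each live cell i orders the pair (i, i+1); dead cells do nothing"""
    for i in range(len(values) - 1):
        if alive[i] and values[i] > values[i + 1]:
            values[i], values[i + 1] = values[i + 1], values[i]

rng = random.Random(42)
values = [rng.randrange(100) for _ in range(32)]
alive = [True] * 31
for k in rng.sample(range(31), 5):  # "fail" 5 of the 31 cells
    alive[k] = False

for _ in range(len(values)):        # enough passes for the live segments to settle
    sweep(values, alive)

# out-of-order pairs can survive only at dead cells, so leftover disorder
# is bounded by the failure count rather than being total
inversions = sum(values[i] > values[i + 1] for i in range(31))
```

with all cells alive this is just bubble sort; with 5 dead cells the array ends up as sorted segments separated by at most 5 out-of-order boundaries, still useful output rather than a shutdown.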
(o) https://youtu.be/7hwO8Q_TyCA?t=688
(i) https://en.wikipedia.org/wiki/Maxwell%27s_demon
(ii) http://comjnl.oxfordjournals.org/content/56/12/1450.full.pdf...
in regard to the demon horde, the paper(o) states 'channel size increases performance'
but this should be uncontroversial
more resources give better results
it will be an optimisation problem to determine the necessary resource allocation for desired results
the shape of the simulation was stated(i) to be a simplified representation of the functionality
the paper discusses 'a movable membrane whose contents cannot diffuse out'; the figure is an almost organic shape, much more adaptable than the rectangular simulation demonstration
(o) comjnl.oxfordjournals.org/content/56/12/1450.full.pdf+html
* larger numbers -- for example large numbers of tiny processor / memory cells on a single chip (thousands to millions?)
* more dimensions, either symmetric (lattice) or asymmetric (hyper-pyramid that gets more sparse as you go up)
* more complex cells -- more memory, processor power, bandwidth
* specialization -- heterogeneous cells