e.g.:
- https://gfycat.com/JaggedIdealFrillneckedlizard
demo - http://hipku.gabrielmartin.net
explanation - http://gabrielmartin.net/projects/hipku/
Throw in some ambiguous adjectives and you should have a namespace large enough to match up with common image contents.
Anyway, people would probably just start saving the bits on their own computers after the first job or two. That would be an amusing result for what's just a convoluted interface to a remote hard drive, but it's conceptually less interesting than actually using distributed human memory as a digital storage medium...
Email attachments used to be a great option, but nowadays using multiple gdrive/dropbox/onedrive accounts is much easier.
They are easy to create in large numbers (especially if your ISP has dynamic IPs) and, as long as you're even a little bit careful, nearly impossible to ban. Add some redundancy across different services, plus a $2 VPS that gives you tons of upload bandwidth, and you've got yourself as many TBs of free, fast, and reliable online storage as you want.
I spent so much time as a teenager with no money and some python skills coding storage solutions like that. I'd say it was to store movies and tv shows for myself but in retrospect I mostly did it because it was so much fun to develop.
Combine that with the fact that data which is encrypted looks practically like static, and you could potentially overlay it on top of an existing video of something mundane.
You'd need to use strong ECC to get past the lossy encoding, but as things like QR codes show, that is not so hard.
The audio channel is also usable...
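To survive a lossy encoding you need error correction, as the comment says. A minimal sketch of the idea (my own toy example, not from the project) is a Hamming(7,4) code, which turns 4 data bits into 7 and corrects any single flipped bit; real systems like QR codes use the much stronger Reed-Solomon family, but the principle is the same:

```python
# Toy Hamming(7,4) ECC: 4 data bits -> 7-bit codeword, corrects 1 bit flip.

def hamming74_encode(d):
    # d: list of 4 bits [d1, d2, d3, d4]
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4  # parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4  # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4  # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    # c: list of 7 bits; locates and fixes a single-bit error.
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the bad bit, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]
```

With a bit of interleaving on top, a burst of corrupted pixels from the video codec turns into scattered single-bit errors each codeword can absorb.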
Probably not the most efficient, but easy and fast, and the resulting images would look... interesting. For large files, the decoding was difficult mostly because reading an image with that many pixels into memory is expensive. So I began fixing the images to a smaller size and generating multiple images, which I would later convert to 60fps video. I could then use ffmpeg to convert the images to frames and the frames back to images.
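The bytes-as-pixels scheme described above can be sketched with nothing but the standard library by writing grayscale PGM frames (one payload byte per pixel, with a small length header); the fixed 64x64 frame size and the idea of feeding the frames to ffmpeg are the commenter's, the PGM format and helper names are my assumption:

```python
import struct

WIDTH, HEIGHT = 64, 64  # fixed frame size, as in the comment

def bytes_to_pgm(data: bytes) -> bytes:
    """Pack raw bytes into one grayscale PGM frame:
    a 4-byte big-endian length, then one payload byte per pixel,
    zero-padded to fill the frame."""
    capacity = WIDTH * HEIGHT - 4
    if len(data) > capacity:
        raise ValueError("payload too large for one frame")
    pixels = struct.pack(">I", len(data)) + data
    pixels += b"\x00" * (WIDTH * HEIGHT - len(pixels))
    return b"P5 %d %d 255\n" % (WIDTH, HEIGHT) + pixels

def pgm_to_bytes(pgm: bytes) -> bytes:
    """Recover the payload from a frame produced by bytes_to_pgm."""
    header, pixels = pgm.split(b"\n", 1)  # split off the PGM header only
    (length,) = struct.unpack(">I", pixels[:4])
    return pixels[4:4 + length]
```

Each frame could then be muxed into a video with something like `ffmpeg -framerate 60 -i frame_%06d.pgm out.mkv`; note that a lossy codec would mangle the bytes, so you'd want a lossless codec or the ECC discussed above.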
I had no practical use for this, but it was a fun project on a rainy afternoon.
", he became convinced of something he came to call the Dust Theory, which holds that there is no difference, even in principle, between physics and mathematics, and that all mathematically possible structures exist, among them our physics and therefore our spacetime. These structures are being computed, in the manner of a program on a universal Turing machine, using something Durham refers to as "dust" which is a generic, vague term describing anything which can be interpreted to represent information; and therefore, that the only thing that matters is that a mathematical structure be self-consistent and, as such, computable. As long as a mathematical structure is possibly computable, then it is being computed on some dust, though it does not matter what dust actually is, only that there be a possible interpretation where such a computation is taking place somehow. The dust theory implies, as such, that all possible universes exist and are equally real, emerging spontaneously from their own mathematical self-consistency."
Great book!
http://www.askamathematician.com/2009/11/since-pi-is-infinit...
Also, assuming that it is, if 'start at position X and read Y bits from pi' produced an illegal image (top secret document, abuse images, etc.), what would be the legality of trading that information?
See http://jthuraisamy.github.io/markovTextStego.js/ and https://github.com/hmoraldo/markovTextStego
EDIT: also, Wikipedia never deletes anything. Even if your "edits" get reverted, you can still find them via the history page. Hmmm.
Deleted pages are not visible to people with less than sysop rights (on enwp), and multiple methods are available to deal with troublesome people, ranging from revision deletion to blocks and eventually ISP contact.
For Usenet you could depend on widespread resilient distribution + reasonably long retention periods for a lot of groups (but you risked having messages killed by admins if they looked too obviously like spam).
For e-mail, anything reflecting your e-mail back can be used to juggle data: send messages with an attachment, refuse to accept the inbound reflected messages for a couple of days so the other party stores the data for you while they retry, then accept the message and instantly send it back out again.
Then there's the old Linus Torvalds quote:
"Backups are for wimps. Real men upload their data to an FTP site and have everyone else mirror it."
https://github.com/alfg/jot with demo.
Works perfectly for small pieces of immutable data!
That is, for a given storage medium, all you have to do is implement methods for "write key-value pair" and "read value at key", and you get to piggyback off that medium for your storage.
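That put/get interface can be made concrete in a few lines; this is my own hedged sketch (the class and method names are made up, not from any of the projects discussed), with an in-memory dict standing in for whatever medium you're piggybacking on:

```python
from abc import ABC, abstractmethod

class KVBackend(ABC):
    """Any medium that can round-trip a value by key can serve as storage."""

    @abstractmethod
    def put(self, key: str, value: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class DictBackend(KVBackend):
    """In-memory stand-in for 'reddit post', 'DNS TXT record',
    'calendar entry', or any other abusable medium."""

    def __init__(self):
        self._store = {}

    def put(self, key: str, value: bytes) -> None:
        self._store[key] = value

    def get(self, key: str) -> bytes:
        return self._store[key]
```

Everything else (chunking, encryption, integrity checks) can then be written once against the interface and reused across media.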
What's interesting about DNS stores is that they save a round trip, so it's not just a weird abuse of the protocol to store content; it's also potentially a performance optimisation.
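One wrinkle with DNS as a store: each string in a TXT record is capped at 255 bytes (RFC 1035), so payloads have to be encoded and chunked. A hedged sketch of just the packing side (no actual DNS calls, and the helper names are mine):

```python
import base64

TXT_MAX = 255  # per-string size limit in a TXT record (RFC 1035)

def to_txt_chunks(data: bytes) -> list:
    """Base64-encode a payload and split it into TXT-sized strings."""
    b64 = base64.b64encode(data).decode("ascii")
    return [b64[i:i + TXT_MAX] for i in range(0, len(b64), TXT_MAX)]

def from_txt_chunks(chunks: list) -> bytes:
    """Reassemble the payload from its TXT strings, in order."""
    return base64.b64decode("".join(chunks))
```

In a real deployment each chunk would be published as (or within) a TXT record and fetched back with an ordinary resolver query, which is where the saved round trip comes from.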
I think this was from an old BOFH.
OT: Let archive.org save your pages, people!
http://en.wikipedia.org/wiki/Delay_line_memory#Mercury_delay...
I hadn't thought of reddit, as the abuse would be clearly visible, but back then I had used the GMail Drive someone had implemented using emails for storage, and it led me to think that a lot of Google's systems had non-obvious "unlimited" storage options.
For instance, I don't know if that's still the case, but Google Calendar surely seemed pretty fit for abuse: while calendar entries were limited in size, you could have as many as you wanted. And calendars can be private, so it's even better.
The problem with such systems is the integrity of your data once you're forced to chunk things up. If they change one thing under your feet, you're a bit screwed. You also have to discover all the undocumented pitfalls (e.g. forbidden characters in an edit field).
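The "changed under your feet" failure mode is at least detectable: keep a digest per chunk in a manifest at upload time and re-check on download. A minimal sketch of that idea (my own illustration, function names hypothetical):

```python
import hashlib

def make_manifest(chunks: list) -> list:
    """Record a SHA-256 digest for each chunk at upload time."""
    return [hashlib.sha256(c).hexdigest() for c in chunks]

def verify_chunks(chunks: list, manifest: list) -> list:
    """Return the indices of chunks that no longer match the manifest,
    i.e. chunks the hosting service silently altered or mangled."""
    return [i for i, (c, h) in enumerate(zip(chunks, manifest))
            if hashlib.sha256(c).hexdigest() != h]
```

This tells you *which* chunks went bad; recovering them needs redundancy (mirrored copies on another service, or erasure coding across chunks).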
Everyone who used IRC as a command server in years past. It turns out that things useful for human communication tend to be useful for computer communication.
Usenet, Email... hell... I'm sure BBS would have been used if modems were popular enough back in the day.
tl;dr: the researchers discovered that MediaWiki instances were good soft targets.
[0] https://www.cs.cmu.edu/~pavlo/static/slides/graffiti-dc401-o...
> Concluding Remarks
> Off probation at the end of this semester!
http://www.reddit.com/r/programming/comments/38kn2g/redditst...
Yeah, minus the encryption (well, that's not 100% true: you could post encrypted files, and people do, but it's less a part of the "protocol" than it is in this example). The beauty of newsgroups is that they are replicated to other NNTP servers. Distributed file stores fascinate me (I know this reddit protocol is not distributed, or rather it's wholly owned by one entity even if the data is spread across datacenters), and I'm very excited to see how things like IPFS [0], freenet [1], internet2 [2], etc. turn out.
[0] http://ipfs.io/
First, I suspect it's lacking a secure integrity check (MAC), so is weak against chosen ciphertext attacks.
    def encrypt(self, plaintext):
        plaintext = self.pad(plaintext)
        iv = Random.new().read(AES.block_size)
        cipher = AES.new(self.key, AES.MODE_CBC, iv)
        return iv + cipher.encrypt(plaintext)
I'm also not sure about his padding with zeros to attain the AES block size - was there a more secure padding?

    def pad(self, s):
        return s + b"\0" * (AES.block_size - len(s) % AES.block_size)

For example, here's a base64'd tiny jpeg of me: http://pastebin.com/VTLBG3Ji
Key is derived from a single SHA-256 (can be brute-forced very rapidly), the ciphertext isn't authenticated (can be tampered with or corrupted without anything noticing), and the padding function is broken (it strips trailing NULs, so it's no good for binary files).
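For reference, the two fixable pieces here can be sketched with the standard library alone: PKCS#7 padding is reversible even for binary data ending in NULs, and an encrypt-then-MAC HMAC over the ciphertext catches tampering. This is my own illustrative sketch, not a patch to the project (the AES layer itself is omitted, and in practice you'd also derive keys with a proper KDF like PBKDF2 rather than a single SHA-256):

```python
import hmac
import hashlib

BLOCK = 16  # AES block size in bytes

def pkcs7_pad(data: bytes) -> bytes:
    # Always appends 1..16 bytes, each equal to the pad length,
    # so trailing NULs in the plaintext survive unpadding.
    n = BLOCK - len(data) % BLOCK
    return data + bytes([n]) * n

def pkcs7_unpad(data: bytes) -> bytes:
    n = data[-1]
    if not 1 <= n <= BLOCK or data[-n:] != bytes([n]) * n:
        raise ValueError("bad padding")
    return data[:-n]

def seal(mac_key: bytes, ciphertext: bytes) -> bytes:
    # Encrypt-then-MAC: append an HMAC-SHA256 tag over the ciphertext.
    return ciphertext + hmac.new(mac_key, ciphertext, hashlib.sha256).digest()

def open_sealed(mac_key: bytes, blob: bytes) -> bytes:
    ciphertext, tag = blob[:-32], blob[-32:]
    expected = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("MAC check failed")
    return ciphertext
```

With this shape, a flipped bit anywhere in the stored blob fails the MAC check before any decryption or unpadding happens, which also closes off padding-oracle style chosen-ciphertext attacks.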
(Ideally, it would be slightly more elegant than just renaming a zip file.)
[0] https://help.imgur.com/hc/en-us/articles/201424706-How-does-...
[0] http://a858.soulsphere.org/
Edit: [1] shows that this is most likely a false positive.
[1] https://www.reddit.com/r/Solving_A858/comments/24vml1/mime_t...
Now if ISP's would start offering their own cached usable versions of reddit we would be getting somewhere :)
I used to run the rust servers sub. I would have people post JSON posts, which I would then spider and generate a JSON DB from, and I created a UI (see the gh-pages branch) to grab the JSON and present a searchable/filterable way of finding servers relevant to you.
Another improvement might be not to send base64 abracadabra, but instead to send readable texts (autogenerated, or fragments from Wikipedia) and encode the message as slight deviations (typos, etc.) using steganography. It would require a lot of messages to transmit much data, though.
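A toy version of that idea, to make the low bandwidth concrete: hide one bit per cover sentence by choosing between two synonyms (my own illustration; the linked markov-chain stego projects do something far more sophisticated):

```python
# One bit per sentence: index 0 of the pair encodes 0, index 1 encodes 1.
PAIRS = [("big", "large"), ("quick", "fast"), ("tiny", "small")]

def embed(bits: list) -> list:
    """Produce one innocuous-looking cover sentence per payload bit."""
    return [f"A {PAIRS[i % len(PAIRS)][b]} example."
            for i, b in enumerate(bits)]

def extract(sentences: list) -> list:
    """Recover the bits by checking which synonym each sentence used."""
    bits = []
    for i, s in enumerate(sentences):
        _, one_word = PAIRS[i % len(PAIRS)]
        bits.append(1 if one_word in s else 0)
    return bits
```

At one bit per sentence, even a short message needs hundreds of posts, which is exactly the bandwidth problem the comment points out.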
Neat proof-of-concept though
The first was a new business where we would go to trade shows, conventions, hell even fast food places, and just collect as many free beverages, condiments, napkins et cetera as possible. Then we'd sell them online.
The other one didn't do much better. We'd go to a Lowes Tool Rental, and just rent a bunch of tools and then re-rent them out of our truck in the parking lot. They had to have them back an hour before Lowes closed for the night.
Our current business model is, we go to bars and hit on people, and if we get their phone numbers, we add it to a subscription service where other people can have access to it.
Honestly, I feel we're no more in the wrong than RedditStorage is.. /s
Some people still don't know what a password is? =D
Nice little piece of engineering though. Kudos.