People Keep Inventing Prolly Trees (dolthub.com)
compressedgas 2 days ago
This article does not mention Jumbostore (Kave Eshghi, Mark Lillibridge, Lawrence Wilcock, Guillaume Belrose, and Rycharde Hawkes), which applied content-defined chunking recursively to the chunk list of a content-defined-chunked file back in 2007. That is exactly what a Prolly Tree is.
aboodman 3 hours ago
I was aware of this kind of structure when I coined 'prolly tree'. It's the same thing bup was doing, which I referenced in our design docs:

https://github.com/attic-labs/noms/blob/master/doc/intro.md#...

The reason I thought a new name was warranted is that a prolly tree stores structured data (a sorted set of k/v pairs, like a b-tree), not blob data. And it has the same interface and utility as a b-tree.

Is it a huge difference? No. A pretty minor adaptation of an existing idea. But still different enough to warrant a different name IMO.

lawlessone 4 hours ago
Amazing! All these people reinvented my SuperMegaTree!
ChadNauseam 3 hours ago
Haha, this is funny. I've been obsessed with rolling-hash-based chunking since I read about it in the Dat paper. I didn't realize there was a tree version, but it's a natural extension.
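
For anyone who hasn't seen it, here's a minimal sketch of content-defined chunking in Python. It hashes a sliding window with SHA-256 as a stand-in for a true rolling hash (Rabin, buzhash), which real implementations use for speed; the window size and mask are arbitrary:

    import hashlib

    MASK = (1 << 12) - 1  # boundary pattern: average chunk ~4 KiB

    def chunks(data: bytes, window: int = 16):
        # Cut wherever a hash of the trailing window matches a fixed
        # bit pattern. Boundaries depend only on nearby bytes, so an
        # insertion early in the file doesn't shift every later chunk
        # the way fixed-size blocks would.
        start = 0
        for i in range(window, len(data)):
            h = int.from_bytes(hashlib.sha256(data[i - window:i]).digest()[:4], "big")
            if h & MASK == MASK:
                yield data[start:i]
                start = i
        if start < len(data):
            yield data[start:]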

I have a related cryptosystem that I came up with, but it's so obvious I'm sure someone else has invented it first. The idea is to back up a file like so: first, do a rolling-hash-based chunking, then encrypt each chunk with a key that is the hash of that chunk. Then upload the chunks to the server, along with a file (encrypted by your personal key) that contains the information needed to decrypt each chunk and reassemble them. If multiple users used this strategy, any files they have in common would result in the same chunks being uploaded. This would let the server provider deduplicate those files (saving space) without giving the server provider the ability to read them. (Unless they already know exactly which file they're looking for, and just want to test whether you're storing it.)
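
The literature calls this convergent encryption. A minimal sketch of the per-chunk step, assuming SHA-256 for the chunk key and AES-GCM from the Python `cryptography` package:

    import hashlib
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    NONCE = b"\x00" * 12  # fixed nonce is OK: each key encrypts one message

    def encrypt_chunk(chunk: bytes) -> tuple[bytes, bytes]:
        # The key is the hash of the plaintext, so identical chunks
        # produce identical ciphertexts and the server can deduplicate
        # them without being able to read them.
        key = hashlib.sha256(chunk).digest()
        return key, AESGCM(key).encrypt(NONCE, chunk, None)

    def decrypt_chunk(key: bytes, ciphertext: bytes) -> bytes:
        return AESGCM(key).decrypt(NONCE, ciphertext, None)

The per-chunk keys, plus chunk order, are what goes in the manifest you encrypt under your personal key.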

Tangent: why is downloading a large file such a bad experience on the internet? If you lose internet halfway through, the connection is closed and you're just screwed. I don't think it should be a requirement, but it would be nice if there were some protocol, understood by browsers and web servers, that could break up and reassemble a download as a prolly tree, so I could pick up downloading where I left off, or only download what changed since the last time I downloaded something.

RainyDayTmrw 42 minutes ago
AES-GCM-SIV[1] does something similar to your per-chunk derived key, except that AES-GCM-SIV expects the key to be user-provided, and the IV is synthetic (hence Synthetic IV mode).

What's your threat model? This has "interesting"[3] properties. For example, given a file, the provider can figure out who has the file. Or, given a file, an arbitrary user can figure out if some other user already has the file. Users may even be able to "teleport" files to each other, like the infamous Dropbox Dropship[2].

I suspect the reasons no one has tried this are many-fold: (1) Most providers want to store plaintext. The few providers who don't want to store plaintext, whether for secrecy or deniability reasons, don't want to store anything else correlatable either. (2) Space is cheap. (3) Providers like being able to charge for space. Since providers sell space at a markup, they almost want you to use more space, not less.

[1]: https://en.wikipedia.org/wiki/AES-GCM-SIV
[2]: https://en.wikipedia.org/wiki/Dropship_(software)
[3]: "Interesting" is not a word you want associated with your cryptography usage, to say the least.

Retr0id 2 hours ago
> If you lose internet halfway through, the connection is closed and you're just screwed. [...] it would be nice if there was some protocol understood by browsers and web servers

HTTP Range Requests solve this without any clever logic, if mutually supported.
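
For example, a client-side resume might look like this in Python with `requests` (the `url` and `path` names are just placeholders):

    import os
    import requests

    def resume_download(url: str, path: str) -> None:
        # Ask the server to send only the bytes we don't already have.
        have = os.path.getsize(path) if os.path.exists(path) else 0
        r = requests.get(url, headers={"Range": f"bytes={have}-"}, stream=True)
        # 206 Partial Content means the Range header was honored;
        # a plain 200 would mean the server is resending the whole file.
        if r.status_code != 206:
            raise RuntimeError("server ignored the Range request")
        with open(path, "ab") as f:
            for block in r.iter_content(chunk_size=64 * 1024):
                f.write(block)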

nicoburns 2 hours ago
BitTorrent is the protocol you're looking for. Unfortunately, it's not widely adopted for the use cases you're describing.
theLiminator 2 hours ago
Sounds similar to IPFS.
wakawaka28 3 hours ago
I think the cost of processing downloads that way would far exceed the cost of downloading the entire file again. You can already resume a download from a byte offset if the server supports it, and that probably covers 99% of the cases where you would actually want to resume a single file. Partial updates are rarely possible for large files anyway, as they are often compressed. If the host wants partial updates to make sense, they could serve the file over rsync.
iamwil 2 hours ago
Does anyone know whether editing a prolly tree requires reconstructing the entire tree from the leaves again? All the examples I've ever seen in the wild reconstruct from the bottom up. Presumably you can leave the untouched leaves intact and reconstruct only the parent nodes whose hashes have changed due to the changed leaves. I ended up doing an implementation of this, and wondered if it's of any interest or value to others?
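
To illustrate why the incremental approach works, here's a toy sketch of the level-building step (the boundary test is an assumption, not any particular implementation's rule). Because each split is a pure function of the node hashes, an edit can only change the group containing the edited leaf, plus possibly a neighbor if a boundary moves, and that group's ancestors:

    import hashlib

    def is_boundary(h: bytes) -> bool:
        # Split after any node whose hash matches the pattern (~1 in 16),
        # so grouping depends only on the nodes themselves.
        return h[0] & 0x0F == 0x0F

    def build_level(hashes: list[bytes]) -> list[bytes]:
        parents, group = [], []
        for h in hashes:
            group.append(h)
            if is_boundary(h):
                parents.append(hashlib.sha256(b"".join(group)).digest())
                group = []
        if group:  # trailing nodes with no boundary yet
            parents.append(hashlib.sha256(b"".join(group)).digest())
        return parents

    def root(leaf_hashes: list[bytes]) -> bytes:
        # Assumes at least one leaf.
        level = leaf_hashes
        while len(level) > 1:
            level = build_level(level)
        return level[0]

Rebuilding after an edit means rerunning this only over the affected groups at each level; untouched subtrees keep their old hashes and are shared with the previous tree.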