• Is there a way to get higher redundancy than that:

    "The uploading process is as follows. First, the file is striped into chunks of 40MB. Reed-Solomon erasure coding is then applied on each chunk, expanding them into 30 pieces of 4MB. Erasure coding is like an M-of-N multisig protocol, but for data: out of N total pieces, only M are needed to recover the full 40MB chunk. This ensures a high level of redundancy, much greater than traditional replication."
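    To get a feel for why M-of-N erasure coding gives such high durability, here's a quick back-of-the-envelope sketch in Python. It assumes the scheme quoted above (a 40MB chunk split into 10 data pieces of 4MB, expanded to 30 total pieces, any 10 of which recover the chunk) and models each piece as independently available with some probability — the independence assumption and the per-piece availability number are mine, not from their papers:

    ```python
    from math import comb

    # Assumed parameters from the quoted scheme: 10-of-30 Reed-Solomon.
    # A 40MB chunk = 10 data pieces x 4MB, expanded to 30 pieces total;
    # any M of the N pieces are enough to rebuild the chunk.
    M, N = 10, 30

    def chunk_survival(p):
        """Probability the chunk is recoverable when each piece is
        independently available with probability p (binomial tail:
        at least M of N pieces survive)."""
        return sum(comb(N, k) * p**k * (1 - p)**(N - k)
                   for k in range(M, N + 1))

    # Even with each host only 90% available, the chunk is recoverable
    # with overwhelming probability, since up to 20 of 30 pieces can be lost.
    print(f"{chunk_survival(0.9):.12f}")
    ```

    Compare that to plain 3x replication, which dies as soon as all 3 copies are gone — that's where the "much greater than traditional replication" claim comes from.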


  • AFAIK this is hardcoded today, and making it configurable is not on the roadmap anytime soon. Why would you need more? They claim durability of 99.99999% or higher (it's in their papers somewhere).

    Also did you check these links?