Question for the developers on the generation of Skylinks

I just finished listening to the podcast with Matt Sevey. In it he brings up the fact that if a portal were to block a file, simply re-uploading the file would not work, since you would just get back the same Skylink that was blocked.

So this got me thinking about how this could tie into network usage efficiency by reducing duplicates within the network. Am I correct in thinking that if I were to upload an image to a portal, then e-mail the image to a friend who then uploads it from their own computer, we would both end up with the same Skylink? Thus, despite the file being uploaded by two separate people, only one copy would exist on the network because the data is exactly the same… Please someone tell me I am right, it would be so friggin cool!
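The property being described is content addressing: the link is a pure function of the bytes, so identical uploads yield identical links. Here is a toy sketch of the idea in Python. This is not Sia's actual derivation (real Skylinks encode a bitfield plus a Merkle root of the data, not a flat hash), just an illustration of why two independent uploads of the same file would collide on the same link:

```python
import base64
import hashlib

def skylink_sketch(data: bytes) -> str:
    """Toy content-addressed link: a hash of the raw bytes,
    base64url-encoded. Real Skylinks are derived differently,
    but share the key property: same content, same link."""
    digest = hashlib.blake2b(data, digest_size=32).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

# Two independent "uploads" of identical bytes yield the same link.
link_a = skylink_sketch(b"cat.jpg contents")
link_b = skylink_sketch(b"cat.jpg contents")
assert link_a == link_b
```

Because the link depends only on the content, this is also why re-uploading a blocked file gets you the same (still-blocked) Skylink.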

Well, not really. I’m pretty sure the current portal stack isn’t smart enough to know whether a file has already been uploaded, so it’ll just upload another copy under the same set of contracts. On top of that, if you upload to a separate portal, it will most definitely upload its own copy of the same data in order to ensure uptime.

Though on the host side there is tooling for duplicate data called “virtual sectors”: when duplicate data is uploaded to a host, the host still takes payment for the upload, but it only stores one copy.
