Constant checking for new chunk compression methods while seeding


allG


This would use a lot of CPU but less bandwidth in the long run. It would also need to be optional for users who want minimal CPU usage.

My suggestion is to use open-source compression, selecting among multiple algorithms depending on a nominated compression-to-CPU-time ratio.
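To make that tradeoff concrete, here is a minimal sketch of how a client might benchmark candidates on a sample chunk and pick the best ratio that fits a CPU budget. The Python standard-library codecs stand in for the open-source algorithms, and the `max_seconds` budget is an invented knob:

```python
import bz2, lzma, time, zlib

# Stand-in candidates; all from the Python standard library.
CODECS = {
    "zlib": lambda data: zlib.compress(data, 9),
    "bz2":  lambda data: bz2.compress(data, 9),
    "lzma": lambda data: lzma.compress(data),
}

def pick_codec(sample: bytes, max_seconds: float = 0.5) -> str:
    """Return the codec with the best ratio on a sample chunk
    whose CPU cost stays inside the nominated budget."""
    best_name, best_ratio = "zlib", 0.0  # fall back to the cheapest codec
    for name, compress in CODECS.items():
        start = time.process_time()
        out = compress(sample)
        elapsed = time.process_time() - start
        if elapsed > max_seconds:
            continue  # too expensive for this user's CPU setting
        ratio = len(sample) / max(len(out), 1)
        if ratio > best_ratio:
            best_name, best_ratio = name, ratio
    return best_name
```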

There would have to be a default, predetermined list of popular compression algorithms that work well for the majority of torrent file types and that can be shared and updated via the .torrent itself.
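As one way to picture that list riding along in the metadata, here is a hypothetical extension of the .torrent dictionary. The key names are invented for illustration; no existing BitTorrent extension defines them:

```python
# Hypothetical extra keys in the .torrent metadata; these would be
# bencoded alongside the usual "info" dictionary. Names are invented.
default_methods = {
    "compression methods": [          # the shared, predetermined list
        {"id": 0, "name": "zlib",  "level": 9},
        {"id": 1, "name": "bzip2", "level": 9},
        {"id": 2, "name": "xz",    "preset": 6},
    ],
    "chunk methods": {                # chunk index -> method id,
        "0": 2,                       # updated as better fits are found
        "1": 0,
    },
}
```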

As the default list of compression algorithms for each chunk is shared, each seeder (leechers too?) attempts new compression methods (random? AI-guided?) on each chunk, or the same algorithm across multiple chunks in some cases. If the size of the compression method plus the compressed size is less than the original uncompressed size, the seeder shares the new algorithm and its numbered chunk location.
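The acceptance test itself is simple to sketch. Assuming each announcement carries a small method identifier (a fixed 4-byte id here; in the full proposal it could be a whole algorithm description), a seeder would only share a result that genuinely beats the raw chunk:

```python
import bz2, lzma, zlib

# Numbered stand-in methods; a real shared list would be far larger.
CODECS = {0: zlib.compress, 1: bz2.compress, 2: lzma.compress}

def try_chunk(chunk: bytes) -> tuple[int, bytes] | None:
    """Try each method on a chunk and return (method id, compressed
    bytes) for the smallest result that beats sending the raw chunk."""
    best = None
    overhead = 4  # bytes to announce the method id (assumed size)
    for method_id, compress in CODECS.items():
        out = compress(chunk)
        if len(out) + overhead < len(chunk):
            if best is None or len(out) < len(best[1]):
                best = (method_id, out)
    return best  # None means nothing was worth sharing
```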

A method to share algorithms that have already been tried among the hive would waste a huge amount of bandwidth and is susceptible to torrent poisoning, so this part is not recommended; but if it could be worked out efficiently, it could save a lot of redundant CPU time. Perhaps it could work with a huge predetermined, numbered list shared via the hive: peers download only the specific algorithm needed to compress/decompress, perform data validation, and disregard/flag bad algorithms.
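Validation, at least, is cheap to describe: the .torrent already carries a SHA-1 hash per piece, so a downloader can check any peer-announced method by decompressing and re-hashing, flagging whatever fails. A minimal sketch, assuming `decompress` is the function fetched by its number from the shared list:

```python
import hashlib

def validate_method(compressed: bytes, decompress, piece_hash: bytes) -> bool:
    """Decompress a peer-announced chunk and verify it against the
    piece hash already in the .torrent; failures get flagged."""
    try:
        restored = decompress(compressed)
    except Exception:
        return False  # corrupt payload or a poisoned/bad algorithm
    return hashlib.sha1(restored).digest() == piece_hash
```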

I've seen .iso files with a lot of empty padding, so this would cut down a lot of unnecessary bandwidth when downloading such files, and it has some potential to compress already-compressed files further.
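One way to limit wasted CPU on chunks that are already compressed would be a cheap pre-check before trying the expensive methods. This sketch uses a fast zlib pass as a rough compressibility probe; the 0.95 threshold is an arbitrary guess:

```python
import zlib

def looks_compressible(chunk: bytes, threshold: float = 0.95) -> bool:
    """Fast pre-check: a level-1 zlib pass barely shrinks data that is
    already compressed, so skip the expensive codecs in that case."""
    quick = zlib.compress(chunk, 1)
    return len(quick) < threshold * len(chunk)
```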

