I’m zipping very large numbers of files (easily in the tens of thousands), with total data generally in the multi-gigabyte range. Since the files are scattered across different directories, I’m having to use file.CopyTo to copy each file to a zippedFile object in the archive. I also need to preserve the files’ original directory paths; I’m working with a file listing, with full pathnames, returned by a search query.
I’m also using a “using (new AutoBatchUpdate(archive))” block around my zipping code to speed things up. The problem is two-fold:
1. It stores files temporarily in the user’s temp directory, which leaves open the possibility of running out of disk space partway through the zipping process.
2. At the end of the process there is an incredibly long wait while the archive runs through EndUpdate().
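For reference, my zipping loop looks roughly like this (simplified; “filePaths” stands in for my search-query results, and the output path is made up for illustration):

```csharp
using Xceed.FileSystem;
using Xceed.Zip;

ZipArchive archive = new ZipArchive( new DiskFile( @"C:\output\results.zip" ) );

// Batch all updates so the archive isn't rewritten after every file.
using( new AutoBatchUpdate( archive ) )
{
  foreach( string path in filePaths )
  {
    // Copy each scattered source file into the archive; I rebuild the
    // original directory structure separately from the full pathname.
    new DiskFile( path ).CopyTo( archive, true );
  }
} // <-- the implicit EndUpdate here is where the long wait happens
```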
So my question is this: is there a good methodology for zipping up extremely large quantities of files?
I was thinking that it might be possible to set the archive temp directory to a MemoryFolder and then have the EndUpdate run after a certain number of bytes are processed, but I’m not sure if this makes sense or is possible. Would the MemoryFolder get cleared out, freeing up the used RAM, after the EndUpdate call?
Any other ideas or suggestions?
Imported from legacy forums. Posted by steve (had 2098 views)