So I was thinking about this a little today, and I’m wondering what effect an error has on a real-time zip operation. It is conceivable that, with a potentially unreliable source such as FTP or HTTP, something could cause the download to pause or terminate completely midway through downloading a stream and writing it into a real-time zip operation. Let’s say you’ve got two separate source streams (FTP1 and FTP2) being streamed into a single real-time zip stream. Let’s say FTP1 fails midway through, while FTP2 keeps on going. But let’s also say that, because we are using the real-time zip feature, we are then streaming that zip somewhere else, like uploading it via FTP3. If some of the bytes from FTP1 were already written, how would it tell FTP3 to hold on, back up, and wait for a new stream from FTP1?
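To make the concern concrete, here’s a rough Python sketch of the situation (this is not the component’s actual API; `UploadStream` and `flaky_source` are made-up placeholders standing in for FTP3 and FTP1). Once the zip bytes have been handed to the upload, there’s no taking them back:

```python
import io
import zipfile

class UploadStream(io.RawIOBase):
    """Stand-in for the FTP3 upload: every write() represents bytes
    that have already gone over the wire and cannot be recalled."""
    def __init__(self):
        self.bytes_sent = 0

    def writable(self):
        return True

    def write(self, b):
        self.bytes_sent += len(b)  # pretend these bytes left the building
        return len(b)

def flaky_source(total_chunks, fail_at=None):
    """Stand-in for FTP1/FTP2: yields 1 KB chunks, optionally dying midway."""
    for i in range(total_chunks):
        if i == fail_at:
            raise ConnectionError("source dropped mid-transfer")
        yield b"x" * 1024

upload = UploadStream()
try:
    # zipfile can write to a non-seekable stream (it emits data descriptors)
    with zipfile.ZipFile(upload, "w", zipfile.ZIP_STORED) as zf:
        with zf.open("ftp1.dat", "w") as entry:
            for chunk in flaky_source(10, fail_at=5):  # FTP1 dies halfway
                entry.write(chunk)
        # never reached: ftp2.dat would have been added here
except ConnectionError as exc:
    # By now, several KB of ftp1.dat are already downstream at "FTP3".
    print(f"failed after sending {upload.bytes_sent} bytes downstream: {exc}")
```

The point of the sketch is that by the time the failure surfaces, a partial `ftp1.dat` entry has already been streamed to the destination, and the archive can’t simply be patched up from that point.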
It just seems to me that one potential con of a real-time zip operation is that if something goes wrong mid-process, the output is potentially a total loss and has to start over, negating any initial benefit of having (and paying for) a real-time zip component. The benefit of the traditional “non-real-time zip” seems to be that it only zips once the complete files (or streams) are already available.
I really do find this component interesting, but I’m a little hesitant to purchase it and put it into our production environment given the concerns I just mentioned. Can someone please chime in here, make me a little more comfortable about this, and maybe explain why I’m totally off in my assumptions?
Imported from legacy forums. Posted by Chris (had 662 views)