Hey Bartosz,
Thanks very much. No luck with the change to zlib.output_compression unfortunately, but it did help a lot in at least knowing where to look in the code to debug. It turned out the server was only allocating 67 MB of memory to PHP, so as soon as a file was larger than that, the script failed. Rather than raising the memory allocation (since there would always be some ceiling), we changed the download call slightly in the da_download_attachment function. Instead of using fopen and reading the entire file, we used a single header call:
header( 'Location: http://nameofsite.com/wp-content/uploads/' . $attachment );
Obviously hardcoding the URL of the site is sloppy, but it's good enough for a quick fix. I wonder, is there a benefit to reading the whole file as you currently do? Or could you do something similar to the above and just send a Location header pointing to the file's URL, so you don't run into memory constraints?
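In case it's useful, here's a rough sketch of the direction I mean (the function name da_download_attachment is yours; $attachment holding the file name relative to the uploads folder is just my assumption), using WordPress's wp_upload_dir() so the site URL isn't hardcoded:

function da_download_attachment( $attachment ) {
    // Build the uploads base URL dynamically instead of hardcoding the site.
    $uploads = wp_upload_dir();
    $url     = trailingslashit( $uploads['baseurl'] ) . $attachment;

    // Redirect the browser to the file so PHP never loads it into memory.
    header( 'Location: ' . $url );
    exit;
}

The trade-off, as far as I can tell, is that a redirect exposes the direct file URL, so it only really works if the attachments don't need any access control on top of the download.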
Thanks again for being so responsive and helping me work out a solution. Really appreciate it!