Here you will find my updated scripts for detaching attachments from email (detachments). They should work with any mail server that can run filters or milters. The detach script takes mail on STDIN and writes the result to STDOUT. My examples are for Postfix. You can read a similar writeup at developertoolshed.com.
After looking at a few different detachment options, I modified Ryan Hamilton's detach scripts.
detach.pl - 13K
detachit.pl - 4.1K
hardlink.py - 22K
index.cgi - 3.4K
delete.cgi - 458B
Detailed patch information
detach.pl
index.cgi and delete.cgi
hardlink.py
For a simple Postfix setup these directions should be enough.
Aaron C. de Bruyn (from the link above) says:
In master.cf, I add '-o content_filter=detach' to the SMTP service, and add the detach service further down in master.cf:

detach    unix  -       n       n       -       -       pipe
  flags=Rq user=list argv=/usr/local/bin/detachit $(sender) $(recipient)
/usr/local/bin/detachit
#!/bin/sh
#
# detachit: Pipe postfix messages through detach
#
sender=$1
shift
recip="$@"
if [ "$#" -eq 1 ]; then
  /usr/local/bin/detach -d /var/www/webmail/detach --aggressive -w https://enamel.welovesmiles.com/detach
else
  /usr/local/bin/detach -d /var/www/webmail/detach --aggressive -w https://enamel.welovesmiles.com/detach
fi | /usr/sbin/sendmail -i -f $sender -- $recip
exit $?
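You can exercise a script like this by hand before wiring it into Postfix. A sketch (the message file and addresses are placeholders, and note that the script really does hand the result to sendmail, so use a test recipient):

/usr/local/bin/detachit sender@example.com testuser@example.com < sample.eml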
I had a more complex setup and spent some time reading the Postfix filter docs.
Those docs tell you to set the content filter in main.cf. Do not do this: a filter set in main.cf also applies to the mail your filter re-injects, so every message loops through the filter until it is rejected with 'Too Many Hops'. This post by mouss explains why and how to fix it. The short version: leave main.cf alone and attach the filter to individual services in master.cf.
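For reference, the pattern to avoid looks like this (a minimal sketch; 'detach' is the transport name used later in this writeup):

# main.cf - do NOT do this. The filter also applies to the mail
# that detachit re-injects, so every message loops:
content_filter = detach:dummy

The filter belongs on specific services in master.cf instead, as shown below.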
The requirements I was given:
So, my detachit script has:
My script will not work for you as-is. You will need to edit it, possibly heavily. At the top you will need to change:
# $weburi      = URI prefix, ex: https://example.com/files
# $webdir      = file path, ex: /var/www/files
# @domains     = list of local domains (header check)
# @attachlist  = list of recipients to never detach (command line args check - match from /etc/aliases)
# @detachlist  = list of recipients to always detach (command line args check - match from /etc/aliases)
# $attach_size = minimum attachment size to detach (in bytes)
An example config:
my $weburi      = 'https://files.example.com';
my $webdir      = '/home/example/files';
my @domains     = qw(example.org example.com example.net);
my @attachlist  = qw(externalclient-example.com);
my @detachlist  = qw(entire.company);
my $attach_size = 3145728;
You also need to search for this and change it or comment it out. All incoming email goes through our Barracuda, so flagging incoming mail was easy.
# Check for incoming mail (from barracuda)
if ($header =~ /\[10\.0\.0\.25\]/) {
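If your incoming mail does not arrive through a single gateway, comment the check out entirely; if it does, match your gateway's address instead. A sketch with a hypothetical relay IP:

# Check for incoming mail (192.0.2.10 is a hypothetical relay address)
if ($header =~ /\[192\.0\.2\.10\]/) {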
Now that I had the scripts ready, it was time to make Postfix use them. All email goes through ClamAV, and I definitely want to scan files before removing them, so I needed to insert detachit after the virus scanner. To do this I added a section for detachit at the bottom of master.cf:
detach    unix  -       n       n       -       -       pipe
  flags=Rq user=list null_sender=
  argv=/usr/local/bin/detachit -f ${sender} -- ${recipient}
The ClamAV filter was already defined in the smtp service.
smtp      inet  n       -       -       -       -       smtpd
  [snip]
  -o content_filter=scan:127.0.0.1:10026
  [snip]
I left that alone. There was another section defined for the mail coming back.
# For injecting mail back into postfix from the filter
127.0.0.1:10025 inet  n       -       n       -       16      smtpd
  [snip]
  -o content_filter=
  [snip]
I had to make two changes to insert the filter without causing a loop. For the 127.0.0.1:10025 service I changed the content filter to:
-o content_filter=detach:dummy
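Putting it together, the re-injection service ends up looking like this (a sketch; the snipped options stay exactly as they were):

# For injecting mail back into postfix from the filter
127.0.0.1:10025 inet  n       -       n       -       16      smtpd
  [snip]
  -o content_filter=detach:dummy
  [snip]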
For the pickup service I changed:
pickup    fifo  n       -       -       60      1       pickup
to:
pickup    fifo  n       -       -       60      1       pickup
  -o content_filter=
  -o receive_override_options=no_header_body_checks
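After editing master.cf, reload Postfix so the new and changed services take effect:

postfix reload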
This setup means that mail submitted through the local 'sendmail' binary will not have its attachments detached. There are no shell users on the mail server, so that was not a problem. If you use webmail, make sure it connects via SMTP and does not call a local sendmail binary.
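To verify the whole path, send yourself a message with an attachment larger than $attach_size over SMTP and confirm it arrives as a link. A sketch using swaks, if you have it installed (addresses and paths are placeholders):

# create a 4 MB test file, larger than the 3 MiB threshold above
dd if=/dev/urandom of=/tmp/big.bin bs=1M count=4
# submit over SMTP so the message passes through the filter chain
swaks --to user@example.com --server 127.0.0.1 --attach /tmp/big.bin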
There are two additional things I do to keep the attachment space under control: I remove files more than 30 days old, and I hardlink duplicate files. I use my modified hardlink script so that new duplicate files are not prematurely deleted. The original script came from here.
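Hard-linking a duplicate replaces it with a second directory entry for the same inode, so the bytes are stored only once. A minimal illustration with hypothetical file names (hardlink.py automates finding the duplicates and linking them):

# before: two identical files occupy space twice
ln -f /home/example/files/abc1/report.pdf /home/example/files/def2/report.pdf
# after: both names share one inode; 'ls -li' shows a link count of 2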
cleanup.sh
#!/bin/bash
echo

# Delete old files
COUNT=$(find /home/example/files -mindepth 2 -mtime +30 -print -delete | wc -l)
echo "find deleted $COUNT files"

# Hardlink duplicate files
python /usr/local/bin/hardlink.py -v 0 -t /home/example/files/
echo
I run that script once per night and pipe the output to a file. If you use index.cgi and delete.cgi, you will want to set their permissions so that this script does not delete them after 30 days.
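I trigger it from cron. A sketch of the entry (the paths and schedule are placeholders):

# /etc/cron.d/detach-cleanup - hypothetical; runs nightly as user 'list'
30 2 * * * list /usr/local/bin/cleanup.sh >> /var/log/detach-cleanup.log 2>&1

One way to protect the CGI files, assuming cleanup runs as a non-root user: deleting a file requires write access to its parent directory, so make that directory root-owned and read-only to everyone else:

chown root:root /home/example/files/index
chmod 755 /home/example/files/index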
April 7 2012 output:
find: cannot delete /home/example/files/index/index.cgi: Permission denied
find: cannot delete /home/example/files/index/.htaccess: Permission denied
find: cannot delete /home/example/files/index/delete.cgi: Permission denied
[snip]
find deleted 109 files
Hard linking Statistics:
Files Hardlinked this run:
[snip]
Directories          : 2780
Regular files        : 1023
Comparisons          : 1619
Hardlinked this run  : 6
Total hardlinks      : 68
Bytes saved this run : 31371571 (29.918 mebibytes)
Total bytes saved    : 420746864 (401.255 mebibytes)
Total run time       : 8.47399711609 seconds
It does not take much time and does its job.