All this is open for discussion; I’m not saying it’s the right or wrong way to do something. It just happens to be the way I do things.
Something that came out of developing Datasentiment was the need to send huge amounts of email from the system. The original plan was to use a Java Message Service (JMS) bridge to scale up and down as needed while emails were sent.
All well in theory, but it causes a problem if the server dies or you need to restart it. Running a huge enterprise server like Glassfish also comes with another problem: memory usage.
So I scaled everything back to Tomcat and relied on two very simple tools: the database and a cron job.
When a lot of email is generated from a web app, I try to ensure it’s saved to the database before anything thinks of sending it. If any process dies while sending, I can restart it where it left off. In all my email-type tables (my SMS and Apple Push Notification tables work the same way) there’s a sendflag, which is essentially a timestamp recording when the message was sent.
The email table can be as simple as a name, email address, subject, body and when it was sent. If you want to get more elaborate (attachments, inline images and so on), then it’s going to take more designing.
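A minimal sketch of that table and the “insert, don’t send” step, written in Python with sqlite3 purely for illustration (the original system is Java on Tomcat, and all names here — `email_queue`, `enqueue_email` — are my own inventions, not from the post):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE email_queue (
        id       INTEGER PRIMARY KEY,
        name     TEXT NOT NULL,
        email    TEXT NOT NULL,
        subject  TEXT NOT NULL,
        body     TEXT NOT NULL,
        sendflag INTEGER NOT NULL DEFAULT -1  -- -1 = unsent; otherwise the send timestamp
    )
""")

def enqueue_email(conn, name, email, subject, body):
    """The web app only inserts a row; nothing is actually sent here."""
    conn.execute(
        "INSERT INTO email_queue (name, email, subject, body) VALUES (?, ?, ?, ?)",
        (name, email, subject, body),
    )
    conn.commit()

enqueue_email(conn, "Jane", "jane@example.com", "Welcome", "Hello!")
unsent = conn.execute(
    "SELECT COUNT(*) FROM email_queue WHERE sendflag = -1"
).fetchone()[0]
print(unsent)  # 1
```

The point is that the request handler finishes as soon as the row is committed; delivery happens elsewhere.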
The cron job. I have a COMPLETELY separate program run by the cron job; I could have opted to have it trigger the web app system and run from there. It all depends on your view of disaster recovery: I don’t want to reboot a whole web server just because emails aren’t sending out. The cron job program gathers 100 emails to be sent where the sendflag is -1 (i.e. there’s no timestamp to say the message has been sent).
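The batch step described above might look something like this — again a Python/sqlite3 sketch under my own assumed schema, with a stubbed send function where a real program would use SMTP (smtplib, or JavaMail in the original stack):

```python
import sqlite3
import time

# Assumed setup so the sketch is self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE email_queue (
        id INTEGER PRIMARY KEY, email TEXT, subject TEXT, body TEXT,
        sendflag INTEGER NOT NULL DEFAULT -1
    )
""")
conn.executemany(
    "INSERT INTO email_queue (email, subject, body) VALUES (?, ?, ?)",
    [("a@example.com", "Hi", "first"), ("b@example.com", "Hi", "second")],
)

def send_pending(conn, send_fn, batch_size=100):
    """Gather up to batch_size rows with sendflag = -1, send each, stamp the time."""
    rows = conn.execute(
        "SELECT id, email, subject, body FROM email_queue "
        "WHERE sendflag = -1 LIMIT ?",
        (batch_size,),
    ).fetchall()
    for row_id, email, subject, body in rows:
        send_fn(email, subject, body)  # real delivery would go here
        conn.execute(
            "UPDATE email_queue SET sendflag = ? WHERE id = ?",
            (int(time.time()), row_id),
        )
        # Commit per message, so a crash mid-batch resumes where it left off.
        conn.commit()
    return len(rows)

sent = send_pending(conn, lambda *args: None)
print(sent)  # 2
```

Committing after every message is what makes the restart story work: anything still flagged -1 is simply picked up on the next cron run.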
Lastly, the other win is that this system is a lot easier to test than piddling around with mock objects or quick servers that mimic email server behaviour. The results are sitting in the database already.
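That testing win can be shown concretely: instead of mocking an SMTP server, a test just asserts on the queued row. This is my own illustrative sketch (the `signup` action and table name are hypothetical), again in Python/sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE email_queue (
        id INTEGER PRIMARY KEY, email TEXT, subject TEXT,
        sendflag INTEGER NOT NULL DEFAULT -1
    )
""")

def signup(conn, email):
    """Hypothetical web-app action that queues a welcome email."""
    conn.execute(
        "INSERT INTO email_queue (email, subject) VALUES (?, 'Welcome')",
        (email,),
    )
    conn.commit()

# The "test": no SMTP mock, just a query against the queue.
signup(conn, "user@example.com")
row = conn.execute(
    "SELECT subject, sendflag FROM email_queue WHERE email = ?",
    ("user@example.com",),
).fetchone()
assert row == ("Welcome", -1)  # queued but not yet sent
```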
If there’s one thing that hacks a user off, it’s pressing the big button on a form only to have to wait for the next page to load because an email is waiting to send.
One response to “[Trenches tip] Try not to send email from within web applications.”
Yep, definitely worth deferring any time-consuming task until later.

I’m currently working on a system which makes *many* requests to an external API and then stores the results in a db for reporting purposes. Each request can take 30s or more to complete. Instead of having one monolithic process doing all the extract-transform-load stuff, I’m using a simple queuing system built on top of the db to defer the fetching until whenever. Now it doesn’t matter if the main process falls over, as I have a record of the outstanding and failed jobs and can just start the worker up again. Another win is that I can spin up multiple workers and process the jobs in parallel.
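One way such a db-backed queue can stay safe with multiple workers — my own sketch, not the commenter’s code — is to claim a job with an UPDATE guarded by the job’s current status, so only one worker wins each row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE jobs (id INTEGER PRIMARY KEY, url TEXT, "
    "status TEXT NOT NULL DEFAULT 'pending')"
)
conn.executemany(
    "INSERT INTO jobs (url) VALUES (?)",
    [("https://api.example.com/1",), ("https://api.example.com/2",)],
)

def claim_job(conn):
    """Return the id of a claimed job, or None if nothing is pending."""
    row = conn.execute(
        "SELECT id FROM jobs WHERE status = 'pending' LIMIT 1"
    ).fetchone()
    if row is None:
        return None
    # The WHERE clause re-checks the status, so a concurrent worker
    # that already claimed this row makes our UPDATE affect 0 rows.
    claimed = conn.execute(
        "UPDATE jobs SET status = 'running' WHERE id = ? AND status = 'pending'",
        (row[0],),
    ).rowcount == 1
    conn.commit()
    return row[0] if claimed else claim_job(conn)

first = claim_job(conn)
second = claim_job(conn)
third = claim_job(conn)
print(first, second, third)  # 1 2 None
```

Failed jobs can then be retried by flipping their status back to 'pending', which is the restart property both the post and the comment are after.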