How connection pooling helped us cut delivery time in half, offer opportunistic TLS

This post was written by Russell Jones, a software developer at Mailgun responsible for open-sourcing our mime-parsing library flanker. Today he’s going to blog about how we optimized outbound connections and reduced sending time while implementing opportunistic TLS. The same technique can be used to optimize everything from web crawling to high-throughput external APIs, but we’ll discuss SMTP as an example.


In January of 2014, we decided that we needed to refocus on our core sending pipeline to reduce downtime and increase performance. That meant scaling and refactoring portions of Mailgun to achieve our goals. I've focused on mail delivery, the last step an email takes in the Mailgun sending pipeline, where we transmit the actual message to the intended recipient, and that's what I'll be talking about today.

Our objective was simple: stay ahead of our growth so that, as we continue to add customers and send more mail, our customers don't experience any downtime or slowdown in delivery speed.

We had a couple of concrete goals:

  • Reduce the delivery time of an email.

  • Reduce throttling we experience from recipient Email Service Providers (ESPs).

  • Improve security by encrypting email delivery whenever possible (opportunistic TLS).

  • Use monitoring to gain insight into the new SMTP engine so we can better track delivery time and throttling.

Solution

Our original SMTP engine was simple: for every email we pulled out of our delivery queue, we would open a connection to the server we were trying to deliver to, send the message, and then close the connection.
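
To make that concrete, here is a minimal sketch of one-connection-per-message delivery using Go's standard net/smtp package. This isn't Mailgun's actual code (the post doesn't show the engine's source); it just illustrates the open/send/close pattern described above, assuming the recipient's MX host has already been resolved. The later sketches in this post assume they share this same package.

```go
package delivery // hypothetical package name for these sketches

import "net/smtp"

// deliverOnce mirrors the original engine: a fresh TCP connection and SMTP
// handshake for every single message, torn down right after delivery.
// mxHost is assumed to be an already-resolved MX hostname.
func deliverOnce(mxHost, from, to string, msg []byte) error {
	c, err := smtp.Dial(mxHost + ":25") // TCP handshake + SMTP greeting, every time
	if err != nil {
		return err
	}
	defer c.Quit() // the connection is thrown away after one message

	if err := c.Mail(from); err != nil {
		return err
	}
	if err := c.Rcpt(to); err != nil {
		return err
	}
	w, err := c.Data()
	if err != nil {
		return err
	}
	if _, err := w.Write(msg); err != nil {
		return err
	}
	return w.Close() // finishes the DATA command
}
```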

While incredibly simple and effective, this technique was wasteful at the scale Mailgun operates now. For every message we sent, we had the overhead of a TCP handshake and an SMTP handshake, and if we were delivering over TLS, a TLS handshake on top of that. To give you some data, it would often take us over a minute and a half to deliver a message over TLS. This is one of the reasons we had not rolled out opportunistic TLS earlier – it was just too costly.
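
For reference, opportunistic TLS at the SMTP level is just a conditional upgrade: if the receiving server advertises the STARTTLS extension after EHLO, the client performs a TLS handshake before sending mail; if not, the message goes out over plaintext. A rough sketch of that connection setup (again an illustration, not our production code):

```go
import (
	"crypto/tls"
	"net/smtp"
)

// connectOpportunistic dials an MX host and upgrades the session to TLS only
// when the server advertises STARTTLS; otherwise it stays on plaintext.
func connectOpportunistic(mxHost string) (*smtp.Client, error) {
	c, err := smtp.Dial(mxHost + ":25") // TCP handshake + SMTP greeting
	if err != nil {
		return nil, err
	}
	// Extension sends EHLO first if it hasn't been sent yet.
	if ok, _ := c.Extension("STARTTLS"); ok {
		// The TLS handshake is the expensive step we want to pay as rarely as possible.
		if err := c.StartTLS(&tls.Config{ServerName: mxHost}); err != nil {
			c.Close()
			return nil, err
		}
	}
	return c, nil
}
```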

When we sat down and started thinking about improving delivery, we wanted to reduce the time it took to send a message as well as provide opportunistic TLS to our customers. Our solution was to send multiple messages per connection and use connection pooling to reuse already existing connections. Because Mailgun sends so much mail, finding a connection that is already open wasn’t a problem, and it allowed us to eliminate the connection establishment overhead.
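
A stripped-down version of that idea looks something like the sketch below: keep idle clients keyed by destination host, hand one out when a delivery needs it, and return it to the pool (after an RSET) when the delivery finishes. The real engine also has to cap idle connections, limit messages per connection, handle timeouts, and detect dead sessions; none of that is shown here. The sketch reuses the connectOpportunistic helper from above.

```go
import (
	"net/smtp"
	"sync"
)

// pool keeps idle SMTP clients per destination host so later deliveries can
// skip the TCP, SMTP, and TLS handshakes entirely.
type pool struct {
	mu   sync.Mutex
	idle map[string][]*smtp.Client // keyed by MX host
}

func newPool() *pool {
	return &pool{idle: make(map[string][]*smtp.Client)}
}

// get returns an already-open client for host if one is idle, otherwise it
// dials a fresh one, paying the handshake cost only on a pool miss.
func (p *pool) get(host string) (*smtp.Client, error) {
	p.mu.Lock()
	if conns := p.idle[host]; len(conns) > 0 {
		c := conns[len(conns)-1]
		p.idle[host] = conns[:len(conns)-1]
		p.mu.Unlock()
		return c, nil
	}
	p.mu.Unlock()
	return connectOpportunistic(host) // helper from the earlier sketch
}

// put returns a client to the pool after a delivery. RSET clears the session
// so the same connection can carry the next message; if it fails, the
// connection is assumed dead and dropped.
func (p *pool) put(host string, c *smtp.Client) {
	if err := c.Reset(); err != nil {
		c.Close()
		return
	}
	p.mu.Lock()
	p.idle[host] = append(p.idle[host], c)
	p.mu.Unlock()
}
```

A delivery then becomes get, MAIL/RCPT/DATA, put, with the handshake cost amortized across however many messages a connection ends up carrying.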

What does that let us do?

This allows us to amortize the cost of the TCP and TLS handshakes over multiple messages, driving down their cost while increasing delivery speed. Messages that could take a minute and a half or more to deliver now go out in roughly 600 ms or less. It also allows us to fine-tune sending rates from each of our IPs to each recipient ESP, which is critically important in email delivery, and increase overall delivery while reducing the load on ourselves and on ESPs. Being a better citizen in the email world leads to lower throttling, less resource utilization, and better delivery for customers.
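
Rate tuning itself can be modeled as one token bucket per sending-IP/ESP pair. The sketch below uses the golang.org/x/time/rate package; the type names and the fixed messagesPerSecond argument are hypothetical, and real limits are driven by feedback such as throttling responses from ESPs rather than static values.

```go
import (
	"context"
	"sync"

	"golang.org/x/time/rate"
)

// laneKey identifies a sending lane: one of our IPs paired with a recipient ESP.
type laneKey struct{ sendingIP, esp string }

// throttle holds one token-bucket limiter per lane so each IP/ESP pair can be
// tuned independently of the others.
type throttle struct {
	mu       sync.Mutex
	limiters map[laneKey]*rate.Limiter
}

func newThrottle() *throttle {
	return &throttle{limiters: make(map[laneKey]*rate.Limiter)}
}

// wait blocks until the lane has capacity for one more message.
func (t *throttle) wait(ctx context.Context, sendingIP, esp string, messagesPerSecond float64) error {
	t.mu.Lock()
	k := laneKey{sendingIP, esp}
	lim, ok := t.limiters[k]
	if !ok {
		lim = rate.NewLimiter(rate.Limit(messagesPerSecond), 1)
		t.limiters[k] = lim
	}
	t.mu.Unlock()
	return lim.Wait(ctx) // blocks until a token is available or ctx is cancelled
}
```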

We also monitor everything from delivery rate and memory usage to delivery time, which has helped us stay on top of the health of the SMTP engine. We can now detect problems before they impact customers. This new data also helps us fight spammers, a never-ending battle, and it helps narrow down where throttling is occurring so that we can improve Mailgun in other areas to reduce throttling and decrease delivery time.
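
That instrumentation largely boils down to timing every delivery attempt and counting outcomes. In the sketch below, Recorder is a hypothetical stand-in for whatever metrics backend is in use, and the metric names are made up for illustration.

```go
import "time"

// Recorder is a stand-in for a metrics backend (statsd, Graphite, and so on).
type Recorder interface {
	Timing(name string, d time.Duration)
	Incr(name string)
}

// timedDeliver wraps a delivery attempt so every send reports its duration
// and outcome, which is the raw data behind delivery-time and throttling
// dashboards.
func timedDeliver(r Recorder, deliver func() error) error {
	start := time.Now()
	err := deliver()
	r.Timing("smtp.delivery_time", time.Since(start))
	if err != nil {
		r.Incr("smtp.delivery_errors")
		return err
	}
	r.Incr("smtp.delivered")
	return nil
}
```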

Where we are going from here

The next logical step is to revamp our sending rate algorithms. Now that we have better delivery and monitoring, we can see when and where throttling occurs and which changes affect it. However, no amount of algorithmic change on our end can beat having good traffic. High-quality traffic trumps everything where email delivery is concerned. That is why our other big investment in 2014 is improving our reputation system to make it more accurate and to provide more data to our customers.

More to come on that front. Till then…

Happy Sending!
