

What happened yesterday and what we are doing about it

An overview of a sending delay at Mailgun in 2013.



Reported on September 19, 2013.

Yesterday was a bad day for Mailgun customers. We experienced significant delays and duplication of emails from approximately 20:40 UTC to 04:40 UTC. While we have no evidence of lost messages, the delays in sending emails were significant. This is unacceptable, so we want to give you an outline of what happened and what we are doing to protect against it happening again.

What happened?

At 20:40 UTC, we received a spike in messages that triggered a series of cascading failures in our system. Spikes like this are usually handled by Mailgun without problems, but this one triggered a bug that slowed down our Riak clusters by overloading them with unnecessary requests and consuming excessive storage. As we introduced more load on the cluster, it triggered garbage collection on nodes, which made the situation worse. Riak survived (thanks to the Basho team for writing robust software), but the overload resulted in significant message delays. To recover from the immediate issue, we restarted several processes, which in some cases caused duplicate messages to be sent.

Message delays

As the overall performance of the system degraded, messages were queued but not delivered at normal speed. We identified the bug and rolled out a fix, but it took a while to return the clusters to their normal state and clear the backlog of queued messages.

Duplicate messages

As our delivery nodes slowed down, our monit scripts began killing and restarting them as part of our emergency recovery procedure. This ungraceful shutdown and restart caused duplicate messages to be delivered to a small number of customers: some messages had been sent but not yet marked as delivered in our system, so they were retried after the process restarted.
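The failure mode above (send first, then mark as delivered, with a crash in between) can be sketched with a small hypothetical delivery loop. The names and queue shape here are illustrative, not Mailgun's actual code:

```python
# Hypothetical delivery worker: each message is handed to SMTP first and
# only marked as delivered afterwards. If the process is killed between
# those two steps, the message still looks "pending" on restart and is
# sent a second time.

def deliver_pending(queue, smtp_send, crash_after_send=False):
    """Drain the queue; return the number of SMTP sends performed."""
    sends = 0
    for msg in queue:
        if msg["delivered"]:
            continue                 # already acknowledged, skip
        smtp_send(msg)               # step 1: hand the message to SMTP
        sends += 1
        if crash_after_send:
            # simulate monit killing the slow process ungracefully
            raise RuntimeError("process killed before ack")
        msg["delivered"] = True      # step 2: never reached on a crash
    return sends

queue = [{"id": 1, "delivered": False}]
sent = []

try:
    deliver_pending(queue, sent.append, crash_after_send=True)
except RuntimeError:
    pass                             # monit restarts the worker...

deliver_pending(queue, sent.append)  # ...and the same message is retried

assert len(sent) == 2                # the recipient receives a duplicate
```

This is the classic at-least-once delivery trade-off: retrying anything not yet acknowledged guarantees no message is lost, at the cost of occasional duplicates when a process dies between sending and acknowledging.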

What we are doing about it

This level of performance degradation is unacceptable for Mailgun. Our customers trust us to deliver their mission-critical emails, and we let them down. As a result of this outage, we are going to implement some changes.

More resilient architecture

First of all, we have identified the bug in our system that caused the slowdown and rolled out a fix. In addition, we are re-architecting our core storage and routing processes so that they are more fault tolerant and perform better in situations like this.

Better communication of issues

This event has made it clear that Mailgun's status page is not always an accurate reflection of Mailgun's status. Though our API and SMTP services were technically “available” yesterday, significant email delays are, in practice, a service-impacting event, and we should be transparent about that. As a result, we have already moved our status page off of Pingdom to a dedicated status provider, so that we can offer a single place for incident alerts. You will be able to subscribe to alerts via SMS, webhook, Twitter, or email, so you know the moment Mailgun is experiencing issues. Longer term, we will add information about Mailgun's queue size and other metrics that better describe performance. In addition, we will add details about each customer's own queue size and performance to their Mailgun control panel.

Making it right

We believe that Mailgun should always be available and performant, and significant email delays do not meet that standard. We do offer an SLA, and while this incident technically did not qualify as an outage, if it affected your business, we'd like to make it right. Send us an email and we can discuss an appropriate credit as compensation for this issue.

All in all, yesterday was a tough day for our customers and for us. We are very sorry for this issue and we are determined to do everything in our power to make sure a situation like this does not happen again.

The Mailgunners
