Pseudonymization and You – Optimizing Data Protection

Written by Natalie Hays

Categories: Security

4 minute read time

 

Everyone hated the privacy policy email armageddon, businesses included. Not just because some of their emails were going straight to spam, but because a lot of businesses had to take a hard second look at their security measures. Revamping security measures can suck – but losing a ton of personal information is the absolute worst.

 

What can we do?

Businesses can reduce their risk of having information stolen by implementing the right practices for data protection – and some are pretty simple to get behind. One method we’re pretty fond of is pseudonymization, which is the fancy way of saying sensitive data camouflage. Pseudonymization replaces identifying information in a data record with fake identifiers (pseudonyms), which makes it difficult to trace any given data point back to a real person. If you think about it, it’s kind of like that fake MySpace you had in 2007.

 

Sounds complicated, why do this?

The great thing about pseudonymization is that it’s the same data, just under an assumed name – or in this instance, a very long string of characters. While no one protection is enough on its own, combining pseudonymization with other practices like encryption, hashing, or tokenization helps reduce the risk of re-identification.
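To see the idea in miniature, here’s a sketch of our own (not code from the Elastic repo – the field names and key are illustrative): each sensitive value is replaced with a keyed hash, and the pseudonym-to-value pair is stashed in a separate lookup table so authorized folks can still reverse the mapping.

```ruby
require 'openssl'

# Illustrative secret key -- without it, an attacker can't brute-force
# the mapping for low-entropy fields like usernames.
KEY = 'changeme'

# Deterministic pseudonym: the same input always yields the same hash,
# so joins and aggregations on the pseudonymized field still work.
def pseudonymize(value)
  OpenSSL::HMAC.hexdigest('SHA256', KEY, value)
end

record = { 'username' => 'jdoe', 'ip' => '174.145.248.21', 'city' => 'Tashkent' }

identities = {} # the pseudonym => original-value lookup table
%w[username ip].each do |field|
  pseudonym = pseudonymize(record[field])
  identities[pseudonym] = record[field]
  record[field] = pseudonym
end
# record now carries 64-character hex pseudonyms instead of the
# real username and IP; non-identifying fields are left alone.
```

Same data, assumed name – and the lookup table is the only way back.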

Applying pseudonymization to your data is relatively simple, and there is more than one way to accomplish it. In this example, we’ll be looking at the Logstash Fingerprint filter plugin, but you can also try a generic script-based approach using the Ruby filter plugin if that doesn’t work out for you. Both methods mask the username and IP fields, so keep that in mind!
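For flavor, the fingerprint approach boils down to a filter stanza along these lines – a simplified sketch, not the full pipeline (the `logstash_fingerprint.conf` you’ll download below is the authoritative version, and the environment-variable key name here is our own placeholder):

```
filter {
  fingerprint {
    source => ["username"]
    target => "username"
    method => "SHA256"
    key    => "${FINGERPRINT_KEY}"
  }
}
```

The plugin hashes the `source` field with the keyed SHA256 method and writes the result to `target` – here it overwrites `username` in place, which is exactly the pseudonym swap described above.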

 

Implementing Pseudonymization

Before we get started, grab some Mountain Dew because nothing makes you feel more like a computer mastermind than questionable soda choices. Once you’ve cracked open that cold one, download the files in the repository to a local directory. Here is some code from GitHub that makes it easier to download the files individually:

curl -O https://raw.githubusercontent.com/elastic/examples/master/Miscellaneous/gdpr/pseudonymization/docker-compose.yml

curl -O https://raw.githubusercontent.com/elastic/examples/master/Miscellaneous/gdpr/pseudonymization/logstash_fingerprint.conf

curl -O https://raw.githubusercontent.com/elastic/examples/master/Miscellaneous/gdpr/pseudonymization/logstash_script_fingerprint.conf

curl -O https://raw.githubusercontent.com/elastic/examples/master/Miscellaneous/gdpr/pseudonymization/pipelines.yml

curl -O https://raw.githubusercontent.com/elastic/examples/master/Miscellaneous/gdpr/pseudonymization/pseudonymise.rb

curl -O https://raw.githubusercontent.com/elastic/examples/master/Miscellaneous/gdpr/pseudonymization/Dockerfile

curl -O https://raw.githubusercontent.com/elastic/examples/master/Miscellaneous/gdpr/pseudonymization/sample_docs

 

Check that the directory with the downloaded files is shared with Docker. Then go into the directory and execute the following command:

ELASTIC_PASSWORD=changeme TAG=6.2.2 docker-compose up

 

Look for the logline below. This will tell you Logstash has started and can now accept data.

logstash_1 | [2018-03-20T12:40:33,638][INFO ][logstash.agent ] Pipelines running {:count=>2, :pipelines=>["fingerprint_filter", "ruby_filter"]}

 

Now take a sip, babes. We’re almost there.

 

For the Fingerprint filter plugin, execute the following command:

cat sample_docs | nc localhost 5000

 

Ta-da! You can now inspect and use the data! The pseudonymized information will be indexed to an events index which you can access through the following query:

curl "http://localhost:9200/events/_search?pretty" -u elastic:changeme

 

It should look a little something like this:

{
  "_index": "events",
  "_type": "doc",
  "_id": "tQOjQ2IBED8Jv9YVVDxs",
  "_score": 1,
  "_source": {
    "host": "gateway",
    "user_agent": "Mozilla/5.0 (Macintosh; PPC Mac OS X 10_6_7) AppleWebKit/535.1 (KHTML, like Gecko) Chrome/14.0.790.0 Safari/535.1",
    "job_title": "Electrical Engineer",
    "username": "95b88d8d477e18a8acca833e7bcbd2c5d5f646b29e2d1c9604a1d930e2f63313",
    "@timestamp": "2018-03-20T13:39:59.799Z",
    "ip": "e85022a9801b356dd8c3ed6b2e02f0061a3aeea5bbad15a9ff4aed35b5bb3a42",
    "source": "ruby_pipeline",
    "city": "Komsomol’skiy",
    "title": "Mr",
    "country_code": "UZ",
    "@version": "1",
    "gender": "Female",
    "country": "Uzbekistan",
    "port": 41126
  }
}

 

Cool, right? But wait, there’s more! The key-value pair lookups live in an identities index, which you can access with this query:

curl "http://localhost:9200/identities/_search?pretty" -u elastic:changeme

 

If you’re wondering what that looks like, take a peek below:

{
  "_index": "identities",
  "_type": "doc",
  "_id": "1924d02bd98a46c795cb2a925b98a22ae59c563e0de49f4ba4aa49e6cab072ad",
  "_score": 1,
  "_source": {
    "key": "1924d02bd98a46c795cb2a925b98a22ae59c563e0de49f4ba4aa49e6cab072ad",
    "value": "174.145.248.21",
    "tags": [
      "identities"
    ],
    "@timestamp": "2018-03-20T13:39:59.957Z",
    "@version": "1",
    "source": "ruby_pipeline"
  }
}

 

You should always have 200 documents in the identities index no matter how many times you index the sample data. There is one document for each unique value, which in this case means the username and IP address fields. Need to re-identify a value? Look it up by ID in the identities index.
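The count stays flat because the fingerprint is deterministic: hashing the same value always yields the same document ID, so re-indexing overwrites instead of duplicating. A tiny Ruby sketch of that property (ours, not from the repo, using a plain unkeyed hash just to show the mechanic):

```ruby
require 'digest'

# Simulate indexing the same values twice into an identities table
# keyed by pseudonym (the way Elasticsearch keys documents by _id).
identities = {}
2.times do
  ['jdoe', '174.145.248.21'].each do |value|
    id = Digest::SHA256.hexdigest(value) # deterministic pseudonym
    identities[id] = value               # same _id => overwrite, not duplicate
  end
end
# Two unique values in, two documents out -- no matter how many passes.
```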

 

ICYMI – this is what a pseudonymized value looks like:

6efda88d5338599ef1cc29df5dad8da681984580dc1f7f495dcf17ebcf7191f8

 

If you need the original value, you can get it with this command:

curl "http://localhost:9200/identities/doc/6efda88d5338599ef1cc29df5dad8da681984580dc1f7f495dcf17ebcf7191f8?pretty" -u elastic:changeme

 

Conclusion

BAM! Pseudonymization! It’s like the witness protection program for data – we’re big fans. All that pseudonymized data is difficult for bad actors to exploit, even if they’re good at what they do. Pair it with a solid data retention policy and your risk of data theft drops drastically – and who doesn’t love that? If you’re curious about Mailgun’s data processing, check out our website! We get real technical with email, real fast.

Modified on: August 31, 2018
