Papertrail Knowledge Base

Permanent log archives


Each night, Papertrail automatically uploads your log messages and metadata to Amazon's cloud storage service, S3. Papertrail stores one copy in our S3 bucket, and optionally, also stores a copy in a bucket that you provide. You have full control of this archive - it's tied to your AWS account.

Already use S3? Jump to "Create and share an S3 bucket."


For most services, Papertrail creates one file per day in tab-separated value format, gzip compressed. For higher-volume plans (above about 50 GB/month of logs, though the specifics vary), Papertrail creates one file per hour so the files are of a manageable size.

Each file is named under a path (key prefix) provided to Papertrail, typically papertrail/logs/<xxx> where <xxx> is an ID. For example, February 25, 2011 is:


Days are from midnight to midnight UTC. Alternatively, an hourly archive file for 3 PM UTC would be:


Each line contains one message. The fields are ordered:

id generated_at received_at source_id 
source_name source_ip facility_name severity_name program 
message

Here's an example (tabs converted to linebreaks for readability):

50342052
2011-02-10 00:19:36 -0800
2011-02-10 00:19:36 -0800
42424
mysystem
208.122.34.202
User
Info
testprogram
Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor

Fields are delimited by tabs, so an actual line looks like this:

50342052\t2011-02-10 00:19:36 -0800\t2011-02-10 00:19:36 -0800\t42424\tmysystem\t208.122.34.202\tUser\tInfo\ttestprogram\tLorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor

To learn more about the meaning of each column, see the response field descriptions in the HTTP API documentation.

The tab-separated value (TSV) format is easy to parse, and the directory-per-day structure makes it easy to load and analyze a single day's records.
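Because the files are plain TSV, standard Unix tools can slice them directly. Here's a minimal sketch; the sample line and file name are fabricated for illustration, using the field layout described above:

```shell
# Build a hypothetical one-line archive in the layout described above
printf '50342052\t2011-02-10 00:19:36 -0800\t2011-02-10 00:19:36 -0800\t42424\tmysystem\t208.122.34.202\tUser\tInfo\ttestprogram\thello world\n' \
    | gzip > sample.tsv.gz

# Count messages per source (field 5 is source_name)
zcat sample.tsv.gz | cut -f5 | sort | uniq -c | sort -rn
```

The same pattern scales to a real day's archive: swap in the downloaded file name and pick whichever field number you need.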

Usage example


You can retrieve download links to the Papertrail S3 bucket archives using your Papertrail HTTP API key. The URL format is simple and predictable.

Downloading a single archive

On Linux, you can download yesterday's archive using:

curl --silent --no-include -o `date -u --date='1 day ago' +%Y-%m-%d`.tsv.gz -L \
    -H "X-Papertrail-Token: YOUR HTTP API KEY" \
    https://papertrailapp.com/api/v1/archives/`date -u --date='1 day ago' +%Y-%m-%d`/download

As you can see, there's quite a lot going on in that one line. The main parts are:

-o `date -u --date='1 day ago' +%Y-%m-%d`.tsv.gz - Downloads the archive to a file named with yesterday's date (UTC) in the format YYYY-MM-DD.tsv.gz

-H "X-Papertrail-Token: YOUR HTTP API KEY" - Authenticates the request with your API token, which can be found under your profile.
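The backtick command substitution is the only moving part; you can run it on its own to see what it expands to:

```shell
# Prints yesterday's UTC date (e.g. 2013-08-17), which becomes both the
# output file name and the date segment of the download URL
date -u --date='1 day ago' +%Y-%m-%d
```

Note that --date is a GNU date extension; on BSD/macOS the equivalent would be something like date -u -v-1d +%Y-%m-%d.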

Downloading multiple archives

To download multiple daily archives in one go, use:

seq 0 X | xargs -I {} date -u --date='{} day ago' +%Y-%m-%d | \
    xargs -I {} curl --progress-bar -f --no-include -o {}.tsv.gz \
    -L -H "X-Papertrail-Token: YOUR HTTP API KEY" \
    https://papertrailapp.com/api/v1/archives/{}/download

Where X is one less than the number of days you wish to download (seq 0 X produces X + 1 dates, from today back to X days ago).

To specify a start date, for example: 10th August 2013, change date -u --date='{} day ago' +%Y-%m-%d to date -u --date='2013-08-10 {} day ago' +%Y-%m-%d.
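Before running the full download, it can help to preview the date list the first half of the pipeline will generate. For example, with X set to 2:

```shell
# Emits three dates in YYYY-MM-DD form: today, yesterday,
# and the day before (all UTC)
seq 0 2 | xargs -I {} date -u --date='{} day ago' +%Y-%m-%d
```

Each emitted date is then fed to curl as both the output file name and the date segment of the download URL.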

Your API token can be found under your profile.

Presuming that the downloaded files have file names such as 2013-08-18.tsv.gz, multiple archives can be searched through using:

gzcat 2013-08-* | grep SEARCH_TERM

On some distributions, you may need to substitute zcat for gzcat.

More information is available in the HTTP API documentation.


To find an entry in a particular archive, use commands such as:

gzcat 2011-02-25.tsv.gz | grep Something

gzcat 2011-02-25.tsv.gz | grep Something | awk -F '\t' '{print $5 " " $9 " " $10 }'

The files are generic gzipped TSV files, so after un-gzipping them, anything capable of working with a text file can work with them.
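For instance, once un-gzipped, the plain TSV can be projected column-by-column with awk. This is a sketch using a fabricated one-line file in the archive layout:

```shell
# Fabricated sample line in the archive layout, gzipped like a real archive
printf '50342052\t2011-02-10 00:19:36 -0800\t2011-02-10 00:19:36 -0800\t42424\tmysystem\t208.122.34.202\tUser\tInfo\ttestprogram\tLorem ipsum\n' \
    | gzip > 2011-02-10.tsv.gz

gunzip -f 2011-02-10.tsv.gz                    # leaves plain 2011-02-10.tsv
awk -F '\t' '{print $3, $10}' 2011-02-10.tsv   # received_at and message
```

Quoting the tab as '\t' matters: an unquoted \t would reach awk as a literal t.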


To transfer multiple archives from Papertrail's S3 bucket to a custom bucket, use the download command mentioned above, and then upload them to another bucket using:

s3cmd put --recursive path/to/archives/ s3://BUCKET/PATH/

where path/to/archives/ is the local directory where all the archives are stored, and s3://BUCKET/PATH/ is the bucket and path of the target S3 storage location.


Here's how to sign up for Amazon Web Services, create a bucket for log archives, and share write-only access to Papertrail for nightly uploads.

Sign up for Amazon Web Services

Skip this step if you already have an AWS account, like for Amazon EC2, S3, or another AWS product.

Activate Amazon S3

Skip this step if your AWS account is already activated for S3.

Create and share an S3 bucket

Note: After submission, Amazon's management console may change the grantee name to aws or another label different from what was entered. This is expected.

Amazon also has instructions for editing bucket permissions.

Tell Papertrail the bucket name

On the Account page, enable S3 archive copies and provide the S3 bucket name.

Papertrail will perform a test upload as part of saving the bucket name (and will then delete the test file).


Sharing bucket access in AWS Management Console (the bucket name and existing bucket user have been obscured):

Papertrail S3 archive copy settings:




Why does Papertrail support S3 but not Glacier?

Papertrail supports S3 rather than Glacier because: