Import Messages from Logstash

This solution describes how to import logs from Logstash into Scalyr.


1. To set up Logstash to send data to Scalyr, first install the Scalyr output plugin:

cd /usr/share/logstash
bin/logstash-plugin install logstash-output-scalyr

2. Add the Scalyr output plugin configuration to your Logstash config file (`logstash-simple.conf` if you are following the Logstash documentation). You will need a Scalyr Write Logs API key. If you do not already have one, you can provision one on the Scalyr website.

output {
  scalyr {
    api_write_token => "<your API token here>"
  }
}

If the CA bundle is not located at the default `/etc/ssl/certs/ca-bundle.crt`, it may be necessary to configure `ssl_ca_bundle_path`.
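For example, a sketch of the output block with a custom bundle path (the path shown is illustrative; substitute your system's actual bundle location):

```
output {
  scalyr {
    api_write_token => "<your API token here>"
    ssl_ca_bundle_path => "/etc/ssl/certs/ca-certificates.crt"
  }
}
```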

3. Restart Logstash.
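On a systemd-managed package install, the restart might look like this (the service name assumes the standard Logstash package):

```shell
sudo systemctl restart logstash
```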

Plugin Configuration

These are options you may want to configure:

| Field | Type | Default Value | Description |
|---|---|---|---|
| `api_write_token` | string | None | A log write API token for your Scalyr account. This field is required. |
| `scalyr_server` | string | | Which server events will be uploaded to. For EU customers this should be changed to ``. |
| `ssl_ca_bundle_path` | string | `/etc/ssl/certs/ca-bundle.crt` | Path to the SSL bundle file. |
| `use_hostname_for_serverhost` | boolean | false | If true, a `serverHost` field is added to each upload request with the value of the hostname the plugin is running on. |
| `flatten_nested_values` | boolean | false | If true, nested values are flattened, which changes keys to a delimiter-separated concatenation of all nested keys. |
| `flatten_tags` | boolean | false | If true, the `tags` field is flattened into key-values where each key is a tag and each value is set to `flat_tag_value`. |
| `flat_tag_value` | any | 1 | See the `flatten_tags` description. |
| `flat_tag_prefix` | string | `tag_` | If `flatten_tags` is true, the flattened keys are prefixed with this value. |
| `compression_type` | string | deflate | What compression to use for the Scalyr request. Valid options are bz2, deflate, or None. |
| `compression_level` | int | 6 | Compression level; higher means smaller messages but more processing. |
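Putting several of these options together, a sketch of a fuller output block (all values here are illustrative):

```
output {
  scalyr {
    api_write_token => "<your API token here>"
    use_hostname_for_serverhost => true
    flatten_nested_values => true
    compression_type => "deflate"
    compression_level => 9
  }
}
```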

Logstash Configuration

The Scalyr server generally expects events to be sent to it in order; sending them out of order may result in out-of-order or duplicate log lines in Scalyr. To avoid this, run your Logstash pipeline with `pipeline.workers: 1`, or `-w 1` if configuring from the command line.

Small batch sizes negatively affect the throughput of this plugin, so it is recommended to increase `pipeline.batch.size` above the default of `125`.
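Both pipeline settings can be set together in `logstash.yml`; a sketch (the batch size of 500 is an illustrative value, not a recommendation from the plugin documentation):

```
# logstash.yml
pipeline.workers: 1       # preserve event ordering on the Scalyr side
pipeline.batch.size: 500  # larger batches improve this plugin's throughput
```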

There are several fields in an event that are important to Scalyr that you may want to populate with a filter.

1. `message` should contain the main information or the raw data of the event; it is displayed prominently in the Scalyr UI.

2. `serverHost` and `logfile` should be the hostname and the filename of the originating log file. Populate these if possible; they have their own search widgets in the Scalyr UI.

3. `parser` tells Scalyr which parser to use when processing the event, for example to parse extra fields out of the `message`. Similar effects can be achieved with Logstash filters, but you can save CPU by letting Scalyr handle the processing!

You can use the `mutate` filter to add these fields or rename existing fields to them. Here is an example of a filter configuration you can use to add these fields:

filter {
    mutate {
        add_field => { "parser" => "logstash_parser" }
        add_field => { "serverHost" => "my hostname" }
        rename => { "path" => "logfile" }
        rename => { "data" => "message" }
    }
}