Dashboards

A dashboard is a page containing one or more graphs and/or panels. Graphs include line, area, bar and stacked bar charts; pie charts; and breakdown graphs. Panels include tables, number panels (a single number returned from a query), and markdown panels.

Dashboards are used to group together and save useful information for viewing at a glance. For instance, you might create a dashboard that shows high-level health information for all of your servers, and an additional dashboard per subsystem showing additional details.

Quick Reference

The Dashboards dropdown in the navigation bar provides access to all of your dashboards, arranged alphabetically by dashboard title. Clicking Dashboards > View All will take you to the Dashboards overview, a searchable page that lists each dashboard and its contents, based on the titles of your graphs and panels.

(1) The title of your dashboard. The Dashboards dropdown in the navigation bar is arranged alphabetically by these titles. A title is specified when you add a new dashboard. See (9) below if you wish to rename the dashboard.

(2) The time range for all graphs on your dashboard. The text to the left of this button displays the specific time range and the time zone. By default, the last four hours are displayed. You can customize this default via the duration JSON property, explained below.

Click the button to change the time range:

You can select a preset to quickly search a range, or enter a Custom range via the Start and End boxes. You can enter a time (e.g. 14:30 or 5:05 AM), a date (May 23), or a date and time (5/14/2016 2:00 PM), using a wide variety of formats. Shortcuts like 5d, 5h, 5m, and 5s indicate five days, hours, minutes, or seconds. The End time assumes NOW, so entering 5m for the Start time and hitting Enter will search the last five minutes. Using the + shortcut for the End time, for example +24h or +1d, will search from the Start time to one day later.

See the Date/Time Reference for a complete list of options.

(3) Each graph on a dashboard can be repositioned. Hover your cursor near the top of the graph and a hand will appear. Click to grab and reposition the graph. Other graphs or tables will automatically adjust their positioning as you move your selection. See (9) to adjust and lock the layout, as well as set the number of columns per page.

(4) Each graph on a dashboard is resizable. Move your cursor to the bottom, right, or bottom-right edge of the graph, and resize arrows will appear. Click and drag to resize.

(5) Additional options for each graph are available in the upper-right corner of each graph. You can enlarge the graph to full screen on the page, and you can click the spyglass to view individual plots in Search, where you can explore and modify the plot.

Click More to access the following options:

  • Edit Graph/Panel will take you to the dialog for each graph type, described in (8). For Graphs (area/line/bar) you can change the Title, Type (area/line/bar), and Lines (straight or smooth). You can also Add, Edit and Delete individual plots. For Panels (pie chart/table panel/number panel/markdown panel) you can change the Title, Type (pie/table/number/markdown), and the PowerQuery Filter used to select data appropriate for the panel type.
  • Download PNG to export a PNG image of an area/line/bar/pie graph.
  • Edit JSON to view the JSON snippet for the graph/panel of interest. See (9) if you wish to view the JSON for the entire dashboard page.
  • Clone will create a clone of the graph or panel. This is useful for side-by-side comparison plots, or to edit the Filter of a graph/panel without altering the original.
  • Delete to delete a graph or panel.

(6) As you move your cursor over the graph, point-level information is displayed. For example, the stacked area graph in the above image shows the mean value for each plot. (Most plots involve many events per second, so we present the mean value of nearby events, rather than a single value. For bar charts, we present the mean value over the time-span of the bar.)

(7) Clicking on each plot allows you to Hide/Show and Edit the plot. Hint: if your line graphs are too busy, we suggest switching to area charts. They are additively stacked with multiple plots, and the Y-axis displays the cumulative value of all plots. This allows for clear viewing and quick visual comparisons across plots.

(8) Click + to Add a Panel/Graph. Scalyr has six types that you can add:

  • Graph. Select this to create a line, area, bar or stacked bar chart.
  • Breakdown. Select this to graph event volume broken down by a field, or a field broken down by another field. For example, when graphing data from a web access log, you could break it down by URL or user-agent. See Breakdown Graphs for more information.
  • Pie/Donut Chart. Select this to create a pie or donut chart using PowerQueries.
  • Table. Select this to create a table using PowerQueries.
  • Number. A number panel displays a single number from a PowerQuery. For example, you may wish to monitor the 95th percentile of your MySQL query time.
  • Markdown. Select this type to create a panel where you can add and format text in markdown.

For more information on adding each particular graph/panel type to your dashboard see Adding Graphs from Dashboards below.

(9) Click ... to access More Actions:

  • View Full Screen. This shows your dashboard as a full screen. There is a Refresh button to the upper-right where you can set a refresh interval for the dashboard. Click the X to exit full screen.
  • Edit JSON. Here you can view, write and manage the JSON for the entire dashboard, as opposed to the JSON snippet for each graph in (5).
  • Copy Link. Creates and copies a short URL for sharing.
  • Help. Click and you will be taken to this documentation page for reference.
  • Reset Layout. Click here to reset your layout, selecting how many columns per page-width you wish the graphs/panels to snap to.
  • Lock Layout. Click here to lock your layout. If you wish to completely disable drag-and-drop capability for a dashboard page, which hides the locking/unlocking options from this menu, see the options JSON property below.
  • Rename Dashboard. Click to rename the dashboard page.
  • Delete Dashboard. Click to delete the dashboard page.

Built-in Dashboards

Scalyr comes with many built-in dashboards. Some are available depending upon the configuration of your system. For example, if you are using an Apache server, a built-in dashboard will summarize data from the web server's access log, plus metrics collected using the apache_monitor monitor plugin. Similarly, there are configuration-specific dashboards for Docker events and per-process resource metrics; Kubernetes events and pod metrics; MySQL metrics; NGINX access logs and metrics; PostgreSQL metrics; and Windows system and process metrics.

Note that Scalyr places the values of built-in metrics in a value field, and the corresponding names in a metric field.
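
For example, a built-in free disk space metric can be selected with a filter such as source='tsdb' metric='df.1kblocks.free' (a metric used in the examples later on this page), and graphed by applying a function to its value field, such as mean(value).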

The following dashboards are universal:

System Dashboard — displays metrics for an individual host, such as CPU and RAM usage, free disk space, and network bandwidth. This dashboard will contain data for each host on which you've installed the Scalyr Agent.

Web Server Dashboard — displays metrics for a web (HTTP) server, such as request rate, status codes, and response times and sizes. To use this dashboard, you must configure the Scalyr Agent to upload your web access logs (see the Analyze Access Logs solution page).

Many servers do not log response times by default. This is easily fixed by adjusting the log pattern. For Apache, use a log format of "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %D". For Tomcat, edit the AccessLogValve directive in the server.xml configuration file — it should look something like this:

    <Valve className="org.apache.catalina.valves.AccessLogValve"
           directory="logs" prefix="access" suffix=".log"
           pattern='%h %l %u %t "%r" %s %b "%{Referer}i" "%{User-Agent}i" %D'
           resolveHosts="false" />

After updating the server configuration, you will usually need to restart your web server.

Paths Dashboard — shows an overview of all traffic to your web servers, organized by request page. You can view HTTP requests and HTTP status codes (2xx through 5xx), as well as the average and total request size in bytes. To use this dashboard, you must configure the Scalyr Agent to upload your web access logs. See the Analyze Access Logs page for more information.

Servers Dashboard — lists all servers on which you have installed the Scalyr Agent, with some basic system metrics for each: CPU load, Free Root Disk and /mnt, Network in and out (bytes per second), and web hits per hour.

Monitors Dashboard — lists all of your HTTP Monitors. You can view HTTP status codes (2xx through 5xx), requests that have timed out, and response sizes. You can also view the average and maximum completion times per monitor. See the Monitors reference for details.

Linux Process Metrics — displays metrics for an individual process or application. Metrics for the agent are enabled by default, and there are a number of options regarding configuration. See Linux Process Metrics for more information.

Adding Graphs

There are several ways to add a graph to a dashboard. The easiest way is from the Search view page. We recommend this approach because you can easily explore and adjust your filtering to identify the specific data you wish to visualize. You can also explore and adjust graph options to identify the specific graph you wish to plot. See Graphs for more information on this process.

Alternatively, you can add graphs from Dashboards, either through the GUI or by scripting your graph in JSON. In general, the GUI expects you to know beforehand how to filter your data to achieve the graph you want. (You can re-edit the graph through the GUI, but it is faster to use Search view).

For advanced users, authoring your graph in JSON can be a very efficient way to add and edit graphs, power queries, data tables and reports. See Editing Dashboards in JSON below for a thorough explanation.

To add a graph from Dashboards see (8) in the Quick Reference. Detailed instructions for the following six panel types are presented below.

Line, Area, Bar and Stacked Bar Graphs

Select this type to create a line, area, bar or stacked bar chart.

Enter the Title of your graph and select its Type. Note that area and bar charts are additively stacked with multiple plots, thus the Y-axis displays the cumulative value of all plots. This allows for clear viewing and quick visual comparisons across plots. Stacked bar charts are simple bar graphs for single plots, and stacked when multiple plots are added. For area and line graphs you can choose between straight or smoothed lines.

Then select Add Plot:

This panel is separated into two tabs: Basic Function, and JSON Editor. The basic function tab allows you to select a Function to apply to a Field. The Graph Functions section below explains each function in the dropdown. The Filter allows you to further narrow the data being plotted.

For example, when graphing the 95th percentile latency of MySQL query time, you would select 95th %ile for the Function, timeMs for the Field, and (QUERY) (serverHost contains 'appserver') for the Filter. This filter will select only the timeMs values for QUERY events on the application servers.

You can additionally specify a Label and a Color for the plot.

The JSON Editor tab allows you to script each plot in JSON. For example, the JSON for the 95th percentile MySQL query time plot above is:

{
	"filter": "(QUERY) (serverHost contains 'appserver')",
	"facet": "p95(timeMs)",
	"label": "95th %tile"
}

Note that the filter property is the same as the Filter entered via the Basic Function tab. Likewise, the facet property applies the p95() function to the field of interest, timeMs.

Authoring graphs in JSON lets you define more involved queries, for example applying a data transformation to a custom filter:

count(logfile contains 'access' status >= 500) / count(logfile contains 'access') * 100
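
For example, a plot object might carry the whole expression in its filter property. This is a minimal sketch (the label text is illustrative):

{
  "filter": "count(logfile contains 'access' status >= 500) / count(logfile contains 'access') * 100",
  "label": "5xx errors (% of requests)"
}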

See the sections below, beginning with Editing Dashboards in JSON, for a thorough guide to authoring dashboards in JSON. For more details on the Scalyr query language, including complex expressions like the above example, see Query Language.

Breakdown Graphs

Select this type to graph event volume broken down by a field, or a field broken down by another field. See (10) in Graphs for more information on this graph type.

Enter a Title for your graph and select the Type and Lines. Then select the Breakdown field via the dropdown. If you want to break down event volume by the breakdown field, leave the Field dropdown empty. Alternatively, if you want to break down a field by another field, select it via the Field dropdown. Filter allows you to specify a search filter to extract the events you wish to plot.

For example, selecting serverHost for Breakdown, leaving Field empty, and entering status=='failure' for Filter will graph the volume of events containing "status=='failure'", broken down by each server in serverHost. Alternatively, selecting serverHost for Breakdown, time for Field, and QUERY for Filter will graph time broken down by serverHost for all events containing "QUERY".

Breakdown graphs are often slow to load, so we recommend them mainly for research and data exploration. For Dashboard and Alert use, we recommend standard graphs whenever possible. Note that if you are interested in a specific value or set of values of the breakdown field, you can plot these as individually labeled plots in a standard graph. For example, in the configuration above, you could use a standard graph with a plot for each host of interest. See the discussion of breakdown graphs in Timeout Tips for more information.

See the sections below, beginning with Editing Dashboards in JSON for a thorough guide to authoring dashboards in JSON, including Breakdown Graphs in JSON.

Pie and Donut Charts

Pie and donut charts are created using PowerQueries. A donut chart is a pie chart with the center removed.

Enter a Title for your chart and then a PowerQueries Filter that returns a text column and a numeric column. For example status >= 0 status <= 999 | group count() by status will display status codes in a text column, and the count of events for each status code in a numeric column.

See the sections below, beginning with Editing Dashboards in JSON for a thorough guide to authoring dashboards in JSON, including Pie and Donut Charts in JSON.

Tables

Select this type to create a table using PowerQueries.

Enter a Title and then a PowerQueries Filter to generate the table. For example, a table of Activity by IP address would have the Filter:

serverHost contains 'web' logfile='/var/log/nginx/access.log'
| group "Requests"=count(), "Login Attempts"=count(uriPath=='/login'), "Average Time (ms)"=average(time), "Bandwidth Consumed"=sum(bytes) by ip
| sort -Requests

This filters for access logs on your 'web' servers, then uses the group by command to group a count of requests, a count of login attempts, the average of the time field, and a sum of the bytes field. These are all grouped by the ip field to generate a table of requests, login attempts, average time (ms), and bandwidth consumed for each IP address, sorted from the highest number of requests to the lowest.

See the sections below, beginning with Editing Dashboards in JSON, for a thorough guide to authoring dashboards in JSON, including Tables. You can also consult PowerQueries.

Number Panels

Select this panel type to display a single number from a PowerQuery.

Enter a Title and then a PowerQueries Filter to generate a single numeric value. For example, (QUERY) (serverHost contains 'appserver') | group p95(timeMs) will filter for events containing the word "QUERY", and where the serverHost field contains 'appserver'. The group command is then used with the p95() function to extract the 95th percentile of the timeMs field from the filtered events. The result is the 95th percentile latency of MySQL query time (ms).

Numbers are rounded to the nearest integer by default. You can specify decimal precision, as well as the units displayed, via the JSON file. You can also define a suffix (text displayed to the right of the number), and change the colors of the text and background, via the JSON.

See the sections below, beginning with Editing Dashboards in JSON, for a thorough guide to authoring dashboards in JSON. You can also consult the PowerQueries documentation.

Markdown Panels

Select this type to create a panel where you can add and format text in markdown, a lightweight styling language that is easily converted into HTML.

Enter a Title and then enter your GitHub flavored markdown. See the GitHub Guide to Mastering Markdown for more information.

Editing Dashboards in JSON

A dashboard is specified by a simple configuration file in an augmented JSON format. This topic describes the configuration syntax. See Configuration Files for more information on Scalyr configuration files.

When authoring dashboards in JSON we recommend using the design pattern explained below in the Hidden Parameters section. Using this pattern facilitates correct, clean, mutable dashboard design.

Dashboard Syntax

Here is an example of a dashboard configuration file:

{
  duration: "4h",
  description: "This is visible below the title of the dashboard page",
  graphs: [
      {
        title: "Free Disk Space",
        graphStyle: "line",
        filter: "min(value where source='tsdb' metric='df.1kblocks.free' host='host1')"
      },
      {
        title: "Free Disk Space",
        graphStyle: "line",
        facet: "min(value)",
        filter: "source='tsdb' metric='df.1kblocks.free' host='host1'"
      },
      {
        title: "CPU Usage",
        graphStyle: "stacked",
        plots: [
          {
            label: "user",
            filter: "mean(value where source='tsdb' metric='proc.stat.cpu_rate' type='user')"
          }, {
            label: "system",
            filter: "mean(value where source='tsdb' metric='proc.stat.cpu_rate' type='system')"
          }, {
            label: "I/O",
            filter: "mean(value where source='tsdb' metric='proc.stat.cpu_rate' type='iowait')"
          }
        ]
      }
  ]
}

As explained in (5) and (9) above in the Quick Reference, you can view the JSON for the entire dashboard page, or for each graph object enclosed in curly brackets.

Top-level properties affecting the overall page are:

  • duration. This property specifies the default time range displayed in the dashboard. You can enter a range in seconds, minutes, hours, days, or weeks, with or without abbreviation. For example, "30m", "30 minutes", "4 hours", "1 day", etc.
  • description. This property displays a description of the dashboard page just below the dashboard title.
  • graphs. Each graph or panel on the dashboard page is an object nested within this property. In the above example there are three graph objects, each enclosed in curly brackets. Note how the third object is a graph with multiple plots.
  • options. To disable drag-and-drop capability for the page and hide locking/unlocking options from the menu, add options: {"layout": {"fixed": 1}}. When inspecting the JSON file for a dashboard page you will see additional properties under options such as {"layout": {"columns": 3}}. These properties govern the positioning of each graph/panel on the dashboard page, and are best adjusted via the GUI. See (3), (4) and (9) above in the Quick Reference.
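
For example, a minimal dashboard that sets each of these top-level properties, including a locked layout via options, might look like this (the filter is taken from the example above):

{
  duration: "4h",
  description: "Disk space overview",
  options: {"layout": {"fixed": 1}},
  graphs: [
    {
      title: "Free Disk Space",
      graphStyle: "line",
      filter: "min(value where source='tsdb' metric='df.1kblocks.free' host='host1')"
    }
  ]
}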

Second-level properties, nested within the graphs property, include:

  • title. The title of the graph, visible in the upper-right corner of each graph.
  • graphStyle. Set this to "line" for a line graph; "stacked" for a stacked area graph; "stacked-bar" for a bar graph or stacked bar graph with multiple plots; "pie" for a pie chart; "donut" for a donut chart; "table" for a table panel; "number" for a number panel; "distribution" for a distribution graph; and "markdown" for a markdown panel. When omitted or empty ("") in graphs, the default is "line". When omitted or empty ("") in panels utilizing PowerQueries via the query property (described below), the default is "table". It's good practice to include this property for all graphs/panels.
  • plots. Use this property to display multiple plots in a graph object. In the above example there are three graph objects, each enclosed in curly brackets. Note how the third object is a graph with multiple plots.
  • lineSmoothing. This governs the behavior of line graphs and area graphs. It can be set to "straightLines" or "smoothCurves". The property is optional.
  • barWidth and numBars. These optional properties govern the bar width for bar or stacked bar graphs. You can easily edit these through the GUI - see 5 above in the Quick Reference. barWidth is expressed as a unit of time, for example, "30m", "30 minutes", "4h", "1 day", etc. numBars is expressed as an integer ranging from 1 to 200. Only one property should be added per graph: if both are set, barWidth takes precedence. Leaving either empty ("") defaults to 24 bars per time range.
  • ymin. A minimum value for the Y axis. Set "ymin": 0 for a zero-based graph. This property is optional.
  • ymax. A maximum value for the Y axis. This property is optional.
  • layout. This property is automatically added to each graph/panel, along with subproperties h, w, x, and y, to govern the size and position of each graph/panel on the dashboard page. You can easily size and position a graph/panel through the GUI - see 3 and 4 above in the Quick Reference. You can also adjust the h and w properties via the JSON to size your graph as desired. w governs the graph/panel width, and Scalyr uses a grid system that is 60 units wide. For example, w: 60 will size your graph width to 100% of the dashboard page, and w: 30 will size your graph to 50% of the dashboard page. The h property governs the graph/panel height and is expressed in grid units, with 14 being approximately half of a full page height. Note that x and y should only be adjusted through the GUI.

Properties nested within plots (third-level) for multiple plots, or within graphs (second-level) for single plots, include:

  • label. The name of each plot, visible at the bottom of each graph.
  • facet. The Function(Field) to be graphed, for example "min(value)" in the second graph above. When set to facet: "rate", event volume (events per second) is graphed. When a field is specified instead of function(field), the default function is mean(); for example, setting "value" in the second graph above is equivalent to "mean(value)". Note that a function can be applied to a field in the filter property, in which case facet is optional. For example, in the configuration file above, the first two graphs ("Free Disk Space") are effectively identical. When facet is not specified and a function is not applied via the filter property, facet defaults to "rate". See Graph Functions below for a list of functions you can apply.
  • filter. A search filter to extract the events of interest for graphing. The facet property can be omitted when a Function is applied to the field of interest in filter. For example, the first two graphs ("Free Disk Space") in the above configuration file are effectively identical. See the Query Language reference for more on the syntax you can employ in filter.
  • color. The color for each plot is automatic when not specified. If you wish to define a color for a plot, specify it in standard #RRGGBB hex syntax, for example #FF0000 for bright red.
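
For example, a sketch of a half-width line graph combining several of the optional properties above (the filter and color are illustrative; x and y are left for the GUI to manage):

{
  title: "Free Disk Space",
  graphStyle: "line",
  lineSmoothing: "smoothCurves",
  ymin: 0,
  layout: { h: 14, w: 30 },
  plots: [
    {
      label: "host1",
      filter: "min(value where source='tsdb' metric='df.1kblocks.free' host='host1')",
      color: "#FF0000"
    }
  ]
}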

This is not an exhaustive list of properties. Additional properties are explained below in the context of the graph, panel, parameter, or table where the property applies.

Graph Functions

Functions can be applied via the facet or filter properties, explained in the section above.

The following functions are supported:

  • rate. Setting facet: "rate" graphs event volume (events per second, or per bar-span for bar graphs) over time.
  • count(field). The number of events matching the filter.
  • mean(field). Average (this is the default function if none is specified).
  • min(field). Smallest value.
  • max(field). Largest value.
  • sumPerSecond(field). The "smoothed" sum of all values per second. For instance, if you have a field responseSize which records the number of bytes returned by some operation, then sumPerSecond(responseSize) will graph the bandwidth consumed by this operation, in bytes per second; see the sketch after this list. (We divide the time period of your graph into a number of time spans, sum all values per time span, and then divide by the time span in seconds to get an average sum per second, per time span. Note that graphed values are exact over brief time periods (100 seconds, for example), and effectively smoothed over longer time periods.)
  • median(field). The median (50th percentile) value.
  • p10(field). The 10th percentile value.
  • p50(field). The 50th percentile value.
  • p90(field). The 90th percentile value.
  • p95(field). The 95th percentile value.
  • p99(field). The 99th percentile value.
  • p999(field). The 99.9th percentile value.
  • p(field, n). The Nth percentile value. For instance, p(value, 80) gives the 80th percentile.
  • fraction(expr). The fraction (from 0 to 1) of events which match the given expression. For instance, fraction(status >= 500 status <= 599) is the fraction of requests which have a status in the 5xx range. You can use any query expression, as documented in the Query Language reference.
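
For example, applying sumPerSecond via the filter property graphs the outgoing bandwidth recorded in a web access log (a sketch; the bytes field and accesslog dataset are illustrative):

{
  "label": "Outgoing bandwidth (bytes/sec)",
  "filter": "sumPerSecond(bytes where dataset='accesslog')"
}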

The Timeshift Operator

In addition to the graph functions above you can utilize the timeshift operator with the filter property to graph values prior to the time range of your graph. This facilitates comparisons across time and is equivalent to the Compare button in Graph view (see 15 in Graphs). For example, you can monitor your current disk usage over the past four hours against disk usage over four hours from one week ago, to look for trends in capacity.

The syntax has the following format:

Function(Field [timeshift TimeInterval] where Filter)
  • Function is one of the Graph Functions listed above.
  • Field is the name of a numeric field you wish to graph.
  • TimeInterval is the amount of time you wish to back-plot your graph. For example, 1d to plot values 24 hours prior to the time range of your graph. (The unit can be 'seconds', 'minutes', 'hours', 'days', 'weeks', or their abbreviations.)
  • Filter is a filter expression, in Scalyr query language.

The first graph in the example below plots the mean of the time field for access logs, and the second plots the mean time from one day ago:

{
  "graphStyle": "line",
  "title": "Timeshift Operator Example",
  "plots": [
    {
      "filter": "mean(time where dataset='accesslog')",
      "color": "",
      "label": "mean",
    },
    {
      "filter": "mean(time timeshift 1d where dataset='accesslog')",
      "color": "",
      "label": "timeshift mean"
    },
  ]
}

Line, Bar and Area Graphs in JSON

The graphStyle property determines whether your graph is a line, area, bar or stacked bar chart. Set this property to "line" for a line graph, and "stacked" for an area chart. Setting this property to "stacked_bar" yields a bar graph for single plots, and a stacked bar graph for multiple plots. When omitted or empty ("") graphStyle defaults to "line".

Note that area and bar charts are additively stacked with multiple plots. The Y-axis will display the cumulative value of all plots. This allows for clear viewing of each plot and quick visual comparisons across plots.

The example below displays a stacked area graph of user, system, nice, and iowait CPU usage:

{
  "graphStyle": "stacked",
  "label": "CPU usage",
  "facet": "mean(value)", // "value" is the field storing the numeric value of "metric"
  "plots": [
    {
      "filter": "source='tsdb' serverHost=#serverHost# metric='proc.stat.cpu_rate' type='user'",
      "label": "user"
    },
    {
      "filter": "source='tsdb' serverHost=#serverHost# metric='proc.stat.cpu_rate' type='system'",
      "label": "system"
    },
    {
      "filter": "source='tsdb' serverHost=#serverHost# metric='proc.stat.cpu_rate' type='nice'",
      "label": "nice"
    },
    {
      "filter": "source='tsdb' serverHost=#serverHost# metric='proc.stat.cpu_rate' type='iowait'",
      "label": "iowait"
    }
  ]
}
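
A bar graph can be scripted in the same way. The sketch below assumes the "stacked_bar" style described above and a 30-minute barWidth (the filter is illustrative):

{
  "graphStyle": "stacked_bar",
  "title": "Web Requests per 30 Minutes",
  "barWidth": "30m",
  "plots": [
    {
      "label": "requests",
      "filter": "dataset='accesslog'",
      "facet": "rate"
    }
  ]
}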

Pie and Donut Charts in JSON

To create a pie or donut chart in JSON, set the graphStyle property to "pie" or "donut". Then write the query property to specify a PowerQuery that returns a text column and a numeric column. For example, the JSON snippet below creates a pie chart of HTTP status codes:

{
  "graphStyle": "pie",
  "query": "status >= 0 status <= 999 | group count() by status",
  "title": "HTTP Status"
}

See PowerQueries for more information on PowerQueries syntax.

Tables in JSON

To create a table in JSON, either omit the graphStyle property, or leave it empty. Then write the query property to specify a PowerQuery that creates a table. In the example below, query filters for access logs on your 'web' servers, then pipes matching events into the group by command to group a count of requests, a count of login attempts, the average of the time field, and a sum of the bytes field. These are grouped by the ip field to generate a table of requests, login attempts, average time (ms), and bandwidth consumed for each IP address, sorted from the highest number of requests to the lowest.

{
  "query": "serverHost contains 'web' logfile='/var/log/nginx/access.log' | group \"Requests\"=count(), \"Login Attempts\"=count(uriPath=='/login'), \"Average Time (ms)\"=average(time), \"Bandwidth Consumed\"=sum(bytes) by ip | sort -Requests",
  "title": "Activity by IP address"
}

See PowerQueries for more information on PowerQueries syntax.

Number Panels in JSON

To create a Number panel displaying a single number from a PowerQuery, set the graphStyle property to "number". Then write the query property to specify a PowerQuery that returns a single value. For example, the JSON snippet below returns the 95th percentile latency for MySQL query times:

{
  "graphStyle": "number",
  "title": "MySQL Query Time",
  "query": "(QUERY) (serverHost contains 'appserver') | group p95(timeMs)",
  "options": {
    "format": "K",
    "precision": "3",
    "suffix": "this is my suffix",
    "color": "blue",
    "backgroundColor": "red"
  }
}

You can set additional, optional properties via options:

  • format allows you to specify the units ("KB" or "K", "M", "G", "T", and "P") for your number. Defaults to "auto", which examines the number and picks the best fit.
  • precision specifies the decimal precision for your number. The default is "0" unless the number is single digit, in which case precision is set to "1".
  • suffix adds text to the right of your displayed number.
  • color defaults to the dashboard text color. You can set another color using HTML Color Names, or #RRGGBB hex syntax, for example #FF0000 for bright red.
  • backgroundColor defaults to the dashboard background color. You can set another color using HTML Color Names, or #RRGGBB hex syntax, for example #FF0000 for bright red.

See PowerQueries for more information on PowerQueries syntax.

Breakdown Graphs in JSON

This graph type breaks down event volume by a field, or a field by another field. The breakdownFacet property is required, specifying the field to be broken down by. If you set the facet property to empty (""), you will break down event volume by breakdownFacet. If you specify a field in facet you will break down a plot of that field by breakdownFacet.

The example below contains two breakdown graphs. The first plots the volume of events containing "status=='failure'", broken down by each server in serverHost. The second graphs time broken down by serverHost for all events containing "QUERY".

{
  "duration": "4h"

  "graphs": [

    {
      "graphStyle": "line",
      "lineSmoothing": "straightLines",
      "breakdownFacet": "serverHost",
      "title": "Failure by Server",
      "plots": [
        {
          "filter": "status=='failure'",
          "facet": ""
        }
      ]
    },

    {
      "graphStyle": "line",
      "lineSmoothing": "straightLines",
      "breakdownFacet": "serverHost",
      "title": "Query Time by Server",
      "plots": [
        {
          "filter": "QUERY",
          "facet": "time"
        }
      ]
    }
  ]
}

Breakdown graphs are often slow to load, so we recommend them mainly for research and data exploration. For Dashboard and Alert use, we recommend standard graphs whenever possible. Note that if you are interested in a specific value or set of values in the breakdownFacet, you can add these as individually labeled plots in a standard graph. For example, in the above configuration file, you could use a standard graph with a plot for each host of interest. See the discussion of breakdown graphs in Timeout Tips for more information.

Distribution Graphs in JSON

Distributions of numerical fields can be viewed in Graph view and saved to your Dashboard via the Save button.

You can script these graphs in JSON by setting the graphStyle property to "distribution". Set the facet property to identify the field you wish to graph a distribution of. The filter property is optional. In the example below we filter for MySQL QUERY events on the application servers, then graph a distribution of the timeMs field. The result is a distribution of MySQL query time.

{
  graphs: [
    {
      "title": "Distribution of MySQL Query Time",
      "graphStyle": "distribution",
      "filter": "(QUERY) (serverHost contains 'appserver')",
      "facet": "timeMs"
    },
  ]
}

Dashboard Parameters

Sometimes you may want to use the same dashboard to view different sets of data. For instance, you might build a dashboard that shows information for a specific server or data center, and then want to use it for other servers or data centers. Dashboard Parameters are a simple mechanism for applying a dashboard to multiple data sets.

To use dashboard parameters, add a "parameters" section to the dashboard definition file, and reference those parameters in your graphs. For example:

{
  parameters: [
    { name: "region", values: ["westCoast", "eastCoast"] },
    { name: "host", defaultValue: "host1" }
  ],

  graphs: [
    {
      label: "Free Disk Space on #region# / #host#",
      filter: "min(value where source='tsdb' metric='df.1kblocks.free' region='#region#' host='#host#')"
    }, {
      label: "Free Disk Space on #region# / #host#",
      facet: "value",
      filter: "source='tsdb' metric='df.1kblocks.free' region='#region#' host='#host#'"
    }, {
      label: "CPU Usage on #region# / #host#",
      plots: [
        {
          label: "user",
          filter: "mean(value where source='tsdb' metric='proc.stat.cpu_rate' type='user' region='#region#' host='#host#')"
        }, {
          label: "system",
          filter: "mean(value where source='tsdb' metric='proc.stat.cpu_rate' type='system' region='#region#' host='#host#')"
        }, {
          label: "I/O",
          filter: "mean(value where source='tsdb' metric='proc.stat.cpu_rate' type='iowait' region='#region#' host='#host#')"
        }
      ]
    }
  ]
}

In this dashboard, we have defined two parameters, "region" and "host". Those parameters are then substituted into graph titles and filter expressions using the syntax #parameter#, e.g. #region#. You can use parameters in a graph label, plot label, and filter expression.

When viewing this dashboard, there will be "region" and "host" fields to fill out. The "region" field, since it defines a values list, will have a dropdown element where you can toggle between values. The element will default to the first value specified in the configuration file ("westCoast" in this example). The "host" field, since it has a defaultValue field, will have a free form text input element. It will be set to the value in defaultValue ("host1" in this example) unless another value is typed into the input field.

Sometimes you may want to give a parameter option a label that is different than the internal value used in queries. You can do this by turning each option into a dictionary with "label" and "value" fields:

  parameters: [
    { name: "region", values:
        [
          { label: "East Coast", value: "us-east-1"},
          { label: "West Coast", value: "us-west-1"}
        ]
    }
  ],

Hidden Parameters (and Better Dashboard Design)

Parameters can also be used as placeholders for complex query syntax. When used in this way the parameter takes on a single value and a dropdown element for toggling is unnecessary. You can manage the per-parameter visibility of this element through a nested options property for each parameter, which defaults to "visible". To hide a parameter, set options: {display: "hidden"}.

We encourage the use of hidden parameters as a design pattern because they make dashboards easier to write and mutate. In the example below, note how four parameters are used as placeholders for syntax and thus hidden. Also note how the parameter named "filter" has the defaultValue field set, allowing you to interactively refine the filter field for each plot referencing this parameter via text input on the Dashboards page.

{
  parameters: [
    // Add interactivity through free form text input
    { name : "filter", defaultValue: "" },
    // Typing savers: write query syntax once and reuse by "name" where needed
    { "name": "Success",        options: {display: "hidden"},  values: [ { label: "placeholder" , value: "status >= 200 status <= 299" } ] } ,
    { "name": "Redirect",       options: {display: "hidden"},  values: [ { label: "placeholder" , value: "status >= 300 status <= 399" } ] } ,
    { "name": "Client Errors",  options: {display: "hidden"},  values: [ { label: "placeholder" , value: "status >= 400 status <= 499" } ] } ,
    { "name": "Server Errors",  options: {display: "hidden"},  values: [ { label: "placeholder" , value: "status >= 500 status <= 599" } ] } ,
  ],

  graphs: [

    // sumPerSecond of each HTTP Status Category

    { label: "HTTP Status (sumPerSecond)", plots: [
    { label: "200s",  filter: "sumPerSecond(status where #filter# #Successful#)"    } ,
    { label: "300s",  filter: "sumPerSecond(status where #filter# #Redirects#)"     } ,
    { label: "400s",  filter: "sumPerSecond(status where #filter# #Client Errors#)" } ,
    { label: "500s",  filter: "sumPerSecond(status where #filter# #Server Errors#)" } , ] },
  ],
}

Defining Parameters Using Data Tables

You can place a list of parameter values in a separate file, called a "data table". This allows you to use the same list in multiple dashboards.

To create a data table:

  1. Go to the User menu on the right and choose Config Files.
  2. Click Create New File.
  3. Name the file /datatables/TABLENAME. For the table name, choose a simple identifier (no spaces or punctuation).
  4. Type or paste the table content (see below).
  5. Click Update File.

The file should look something like this:

{
  values: [
    { label: "value 1", value: "value for label 1" },
    { label: "value 2", value: "value for label 2" },
    { label: "value 3", value: "value for label 3" }
  ]
}

As a shortcut, if the label and value are the same, you can just enter a string:

{
  values: [
    "value 1",
    "value 2",
    { label: "value 3", value: "value for label 3" }
  ]
}

You can use this parameter list in a dashboard as follows:

parameters: [
  { name: "Parameter 1", values: ["__datatable(TABLENAME)"] }
],

The table name here should match the name you used when creating the file.

Per-Server Dashboards

You can make a dashboard to show data from any selected server. To do this, define a dashboard parameter with the special value __serverHosts. This will automatically be replaced by a list of all servers which have sent logs in the last 24 hours. For example:

{
  parameters: [
    {
      name: "host",
      values: ["__serverHosts"]
    }
  ],

  graphs: [
    {
      label: "CPU load average",
      plots: [
        {
          filter: "mean(value where source='tsdb' host='#host#' metric='proc.loadavg.1min')",
          label: "1 min avg"
        }
      ]
    }
  ]
}

If your dashboard is only applicable to certain servers, use a filter expression to restrict the servers listed in that dashboard. Some examples:

    // List all servers whose hostname contains "frontend"
    values: ["__serverHosts[host contains 'frontend']"]

    // List all servers whose agent configuration includes a server-level field
    // named "scope", with value "staging".
    values: ["__serverHosts[scope == 'staging']"]

    // List all servers having logs tagged with parser name "xxx".
    values: ["__serverHosts['parser:xxx']"]

    // List all servers where "xxx" appears anywhere in the file name
    // or parser name of any log.
    values: ["__serverHosts['xxx']"]

You can use the full Scalyr query language to select servers. Your filter expression can reference host (the server's hostname), serverIP (the server's IP address), and any server-level fields defined in the Scalyr Agent configuration. In addition, you can select based on log files and log parsers, using the text search syntax. When you use a text search filter, each server is treated as having the following text:

[parser:xxx] [parser:yyy] [log:aaa] [log:bbb] ...

listing each log file for that server, and any parsers associated with those log files in the Scalyr Agent configuration.

You can replace __serverHosts with __serverHostsQ to wrap the server names in single quotes. This makes it syntactically possible to write a dashboard that can show data from any selected server, but can also aggregate data across servers:

{
  parameters: [
    {
      name: "host",
      values: [
        { label: "Average (all servers)", value: "*"},
        "__serverHostsQ"
      ]
    }
  ],

  graphs: [
    {
      label: "CPU load average",
      plots: [
        {
          filter: "mean(value where source='tsdb' host=#host# metric='proc.loadavg.1min')",
          label: "1 min avg"
        }
      ]
    }
  ]
}

Reports

We now recommend using PowerQueries for report generation, as they are much faster and more powerful. However, linking is not yet enabled in PowerQueries. Below we discuss how to generate reports via dashboard JSON if you wish.

Reports allow you to summarize data about a collection of entities and present it in a table. The entities can be anything that is mentioned in a log - servers, URLs, error messages, IP addresses, etc.

Reports can use embedded parameters just like any other part of a dashboard. This allows you to define a single report, and use it to view data from different servers, data centers, or other choices. See the Dashboard Parameters section for details.

To create a report, follow these steps:

1. Make a new dashboard. (From the Dashboards menu, select New Dashboard, and enter a name.)

2. Create a report specification in the dashboard. Here is a simple example:

{
  graphs: [
    {
      title: "HTTP requests, by path",
      keys: [
        { label: "Path", attribute: "uriPath" }
      ],
      columns: [
        {
          label: "Count",
          filter: "dataset='accesslog'",
          function: "count"
        }, {
          label: "Average Size",
          filter: "dataset='accesslog'",
          attribute: "bytes",
          function: "mean"
        }, {
          label: "Total Size",
          filter: "dataset='accesslog'",
          attribute: "bytes",
          function: "sum"
        }
      ],
      sort: [ "-Count" ]
    }
  ]
}

3. To view the report, click the "View Dashboard" link in the dashboard editor.

The sample report summarizes all unique URLs served by a web site. For each URL, it shows the number of requests, the average response size, and the total size of all responses. It assumes that you are importing web access logs and using our standard access log parser. For further examples, look at the Servers Dashboard or Paths Dashboard, and click the Edit Dashboard link to view the source code.

The keys clause indicates how the report data should be grouped and organized - similar to the GROUP BY clause in an SQL query. In this example, there is a single key. It is labeled "Path" in the report view, and comes from a parsed field named "uriPath" in the logs. You can specify more than one key; for instance, servers could be grouped by data center and hostname.
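
For example, a sketch of a keys clause that groups servers by data center and hostname (the dataCenter and host field names are illustrative):

keys: [
  { label: "Data Center", attribute: "dataCenter" },
  { label: "Host", attribute: "host" }
],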

The report will display up to 1000 distinct rows. If you would like to reduce the number of rows (for instance, to make room for other elements in your dashboard), specify a maxRows setting. For instance:

      title: "20 most common HTTP request paths",
      maxRows: 20,
      keys: [
        { label: "Path", attribute: "uriPath" }
      ],
      ...

The columns clause specifies the data in the report. Each entry creates one column in the report table. (Each key also gets a column.) A column can have the following fields:

  • label - the label for this column in the report table.
  • filter - a log query used to select data for this column. Uses the same syntax as used in Search view.
  • function - how to summarize the data for each table cell. Functions are listed below.
  • attribute - which log field to apply the function to. No field is used if the function is count.
  • href - optional; allows you to attach a custom link to cells in this column, as described below.
  • maxDisplayLength - optional; limits the number of characters displayed in this column.

The following functions are supported:

  • count - the total number of log messages matching the filter
  • latest - the field value from the most recent log message matching the filter
  • min, max - the smallest or largest value
  • mean - the average value
  • sum - the total of all values
  • sumPerSecond - the sum of all values, per second. For instance, sumPerSecond of HTTP response sizes gives the outgoing bandwidth in bytes per second. (We divide the time period of your graph into a number of time spans (one per graphed point), sum all values per time span, and then divide by the time span in seconds to get an average sum per second, per time span. Note that graphed values are exact over brief time periods (100 seconds, for example), and effectively smoothed over longer time periods as each graphed point may represent many seconds worth of data).
  • slopePerSecond - the difference between the oldest and newest matching log message, divided by the number of seconds between them. This shows the rate of change; it's useful for metrics like "free disk space".
  • breakdown - generates a breakdown of the values in a field (see below)

The sort clause specifies the order in which rows are displayed. You can specify one or more columns to sort on. Each entry is a column label, optionally preceded by a minus sign for descending sort. You can omit the sort clause, in which case the report is sorted on the key columns.
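
For example, to sort the sample report above by descending request count, breaking ties by average response size (both are column labels from that example):

sort: [ "-Count", "Average Size" ]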

Currently, reports always summarize the last hour's worth of logs. If you would like other options, let us know.

Complex Expressions

You can perform arithmetic computations in a report, using the +, -, *, and / operators. Uses include scaling values to their most natural units (e.g. disk space as gigabytes), and computing ratios such as "errors as a fraction of all web requests". To use this feature, replace the "filter", "attribute", and "function" fields of a column specification with a single "expression" field. An example:

  expression: "latest(value where source='tsdb' metric='df.1kblocks.free' mount='/') / 1024"

You can perform arithmetic on queries (such as the "latest" query in the example), constants like 1024, or a combination of the two. All report functions are supported: count, latest, min, max, mean, sum, sumPerSecond, and slopePerSecond.
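
For example, a sketch of a complete column specification using this expression (the label is illustrative; df.1kblocks.free reports 1 KB blocks, so dividing by 1024 yields megabytes):

{
  label: "Free Root Disk (MB)",
  expression: "latest(value where source='tsdb' metric='df.1kblocks.free' mount='/') / 1024"
}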

Breakdowns

The breakdown function generates multiple columns, one for each unique value in the attribute. For instance, when applied to the HTTP status in a web access log, this generates a breakdown of traffic by status code. Here is a simple report definition using a breakdown:

{
  title: "HTTP requests, by path and status",
  keys: [
    { label: "Path", attribute: "uriPath", maxDisplayLength: 100 }
  ],
  columns: [
    {
      filter: "dataset='accesslog'",
      attribute: "status",
      function: "breakdown",
      includeTotal: true
    }
  ]
}

This will generate a table looking something like this:

Path 200 201 502 Total
/index.html 361 11 29 401
/foo 11 1 12

The following options can be included with a breakdown column:

  • displayPercentages - instead of displaying the number of matches for each value, shows the percentage within that row.
  • includeTotal - adds a "Total" column, showing the total number of matches for that row.
  • maxDisplayedValues - The maximum number of distinct values to show. Defaults to 10; can range from 1 to 20. Any additional values will be grouped into an "Other" column.
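
For example, a breakdown column that shows percentages for up to five status codes might look like this (a sketch based on the report above):

{
  filter: "dataset='accesslog'",
  attribute: "status",
  function: "breakdown",
  displayPercentages: true,
  maxDisplayedValues: 5
}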

Note: reports using the breakdown operator are currently limited to 24 hours of data, and 1000 output rows.

Links

You can click on any cell in a table (except for columns defined using "expression", or static columns, unless they also provide an href field). This will lead to a graph or log view of the data summarized in that cell.

You can override the normal click behavior for any column by adding an href field to the column specification. (You can even attach an href to a key column; key columns are not otherwise clickable.) The href field contains a standard URL specifying the page to load when clicking on cells in that column. The href can contain variables to be filled from the row data, using the syntax #attr#. For example, an href can be added to the Path column to link to a log search for requests with that path:

{
  label: "Path",
  attribute: "uriPath",
  href: "events?filter=uriPath%3D%27#uriPath#%27"
}

If you are referencing a field in the row key, specify the field name, as it appears in the attribute field of an entry in the keys list. If you are referencing a computed column, specify the column label, as it appears in the label field of an entry in the columns list.

You can also use the special fields #startTime# and #endTime# to reference the time period covered by the report. For instance:

{
  label: "description",
  attribute: "description",
  href: "events?filter=uriPath%3D%27#uriPath#%27&startTime%3D#startTime#&endTime%3D#endTime#"
}

Static Columns

A column can have a staticValue instead of the filter, function, and attribute fields:

{
  label: "Dashboard",
  staticValue: "link",
  href: "dash?page=system&param_serverHost=%27#host#%27"
}

This column will always contain the text "link", and will link to the System dashboard for the host whose name is in the host key for this row.

Data tables in static columns

You can use a data table to define mappings for use in static columns. For instance, you can map a status code to a meaningful message.

Data tables were discussed earlier for use in dashboard parameters. To create a data table for use in a static column:

  1. Go to the User menu on the right and choose Config Files.
  2. Click Create New File.
  3. Name the file /datatables/TABLENAME. For the table name, choose a simple identifier (no spaces or punctuation).
  4. Type or paste the table content (see below).
  5. Click Update File.

The file should look something like this:

{
  "200": "OK",
  "404": "Not Found",
  ...
}

To use this in a report, create a static column with a staticValue like this:

{
  label: "Status",
  staticValue: "#datatable(TABLENAME,status)#",
},

This example will take the value of the "status" field (which must be one of the report's key fields), look it up in the data table, and display the value.