
Transports

A "transport" for Pino is supplementary tool which consumes Pino logs.

Consider the following example:

const split = require('split2')
const pump = require('pump')
const through = require('through2')

const myTransport = through.obj(function (chunk, enc, cb) {
  // process each parsed log object as needed
  console.log(chunk)
  cb()
})

pump(process.stdin, split(JSON.parse), myTransport)

The above defines our "transport", saved as the file my-transport-process.js.

Logs can now be consumed using shell piping:

node my-app-which-logs-stuff-to-stdout.js | node my-transport-process.js
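
For completeness, here is a minimal sketch of the application side (the filename above is just an example); any app that logs with Pino to stdout works the same way:

const pino = require('pino')
const logger = pino()

// each call writes one NDJSON line to stdout,
// which the transport process then consumes
logger.info('hello world')
logger.error('an error-level log')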

Ideally, a transport should consume logs in a separate process from the application. Using transports in the same process adds unnecessary load and slows down Node's single-threaded event loop.

In-process transports

Pino does not natively support in-process transports.

Pino does not support in-process transports because Node processes are single-threaded (ignoring some technical details). Given this restriction, one of the methods Pino employs to achieve its speed is to purposefully offload the handling of logs, and their ultimate destination, to external processes, so that the threading capabilities of the OS (or other CPUs) can be used.

One consequence of this methodology is that "error" logs do not get written to stderr. However, since Pino logs are in a parseable format, tools like pino-tee or jq can be used to work with them. For example, to view only "error" logs (level 50 under Pino's default levels):

$ node an-app.js | jq 'select(.level == 50)'
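
pino-tee can be used in a similar way. As a sketch (consult the pino-tee readme for exact usage; the file names here are only examples), the following would tee logs at warn level and above into a separate file while still passing every log line through to stdout:

$ node an-app.js | pino-tee warn ./warn-and-above.log > ./all-logs.log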

In short, the way Pino generates logs:

  1. Reduces the impact of logging on an application to the absolute minimum.
  2. Gives greater flexibility in how logs are processed and stored.

Given all of the above, Pino recommends out-of-process log processing.

However, it is possible to wrap Pino and perform processing in-process. For an example of this, see pino-multi-stream.
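
As a minimal sketch of that approach (the file path is illustrative), pino-multi-stream can write each log to several in-process destinations, filtered by level:

const pinoms = require('pino-multi-stream')
const fs = require('fs')

const logger = pinoms({
  streams: [
    // receives every log at or above the default level
    {stream: fs.createWriteStream('/tmp/app.log')},
    // receives only error-level logs and above
    {level: 'error', stream: process.stderr}
  ]
})

logger.info('written to the file')
logger.error('written to the file and to stderr')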

Known Transports

PRs to this document are welcome for any new transports!

pino-applicationinsights

The pino-applicationinsights module is a transport that will forward logs to Azure Application Insights.

Given an application foo that logs via pino, you would use pino-applicationinsights like so:

$ node foo | pino-applicationinsights --key blablabla

For full documentation of command line switches, read the readme.

pino-azuretable

The pino-azuretable module is a transport that will forward logs to Azure Table Storage.

Given an application foo that logs via pino, you would use pino-azuretable like so:

$ node foo | pino-azuretable --account storageaccount --key blablabla

For full documentation of command line switches, read the readme.

pino-cloudwatch

pino-cloudwatch is a transport that buffers and forwards logs to Amazon CloudWatch.

$ node app.js | pino-cloudwatch --group my-log-group

pino-couch

pino-couch uploads each log line as a CouchDB document.

$ node app.js | pino-couch -U https://couch-server -d mylogs

pino-datadog

The pino-datadog module is a transport that will forward logs to DataDog through its API.

Given an application foo that logs via pino, you would use pino-datadog like so:

$ node foo | pino-datadog --key blablabla

For full documentation of command line switches, read the readme.

pino-elasticsearch

pino-elasticsearch uploads the log lines in bulk to Elasticsearch, to be displayed in Kibana.

It is extremely simple to use and set up:

$ node app.js | pino-elasticsearch

Assuming Elasticsearch is running on localhost.

To connect to an external Elasticsearch instance (recommended for production):

$ node app.js | pino-elasticsearch --node http://192.168.1.42:9200

Assuming Elasticsearch is running on 192.168.1.42.

To connect to AWS Elasticsearch:

$ node app.js | pino-elasticsearch --node https://es-url.us-east-1.es.amazonaws.com --es-version 6

Then create an index pattern on 'pino' (the default index key for pino-elasticsearch) on the Kibana instance.

pino-mq

The pino-mq transport will take all messages received on process.stdin and send them over a message bus using JSON serialization.

This is useful for:

  • moving backpressure from the application to the broker
  • transferring message pressure to another component

node app.js | pino-mq -u "amqp://guest:guest@localhost/" -q "pino-logs"

Alternatively a configuration file can be used:

node app.js | pino-mq -c pino-mq.json

A base configuration file can be initialized with:

pino-mq -g

For full documentation of command line switches and configuration, see the pino-mq readme.

pino-papertrail

pino-papertrail is a transport that will forward logs to the Papertrail log service through a UDPv4 socket.

Given an application foo that logs via pino, and a Papertrail destination that collects logs on UDP port 12345 at address bar.papertrailapp.com, you would use pino-papertrail like so:

node yourapp.js | pino-papertrail --host bar.papertrailapp.com --port 12345 --appname foo

For full documentation of command line switches, read the readme.

pino-mysql

pino-mysql loads pino logs into MySQL and MariaDB.

$ node app.js | pino-mysql -c db-configuration.json

pino-mysql can extract and save log fields into corresponding database fields and/or save the entire log stream as a JSON data type.

For full documentation and command line switches, read the readme.

pino-redis

pino-redis loads pino logs into Redis.

$ node app.js | pino-redis -U redis://username:password@localhost:6379

pino-sentry

pino-sentry loads pino logs into Sentry.

$ node app.js | pino-sentry --dsn=https://******@sentry.io/12345

For full documentation of command line switches, see the pino-sentry readme.

pino-socket

pino-socket is a transport that will forward logs to an IPv4 UDP or TCP socket.

As an example, use socat to fake a listener:

$ socat -v udp4-recvfrom:6000,fork exec:'/bin/cat'

Then run an application that uses pino for logging:

$ node app.js | pino-socket -p 6000

Logs from the application should be observed on both consoles.

Logstash

The pino-socket module can also be used to upload logs to Logstash via:

$ node app.js | pino-socket -a 127.0.0.1 -p 5000 -m tcp

Assuming Logstash is running on the same host and configured as follows:

input {
  tcp {
    port => 5000
  }
}

filter {
  json {
    source => "message"
  }
}

output {
  elasticsearch {
    hosts => "127.0.0.1:9200"
  }
}

See https://www.elastic.co/guide/en/kibana/current/setup.html to learn how to set up Kibana.

For Docker users, see https://github.com/deviantony/docker-elk to set up an ELK stack.

pino-stackdriver

The pino-stackdriver module is a transport that will forward logs to the Google Stackdriver log service through its API.

Given an application foo that logs via pino, a Stackdriver log project bar, and credentials in the file /credentials.json, you would use pino-stackdriver like so:

$ node foo | pino-stackdriver --project bar --credentials /credentials.json

For full documentation of command line switches, read the readme.

pino-syslog

pino-syslog is a transforming transport that converts pino NDJSON logs to RFC3164-compatible log messages. The pino-syslog module does not forward the logs anywhere; it merely re-writes them to stdout. When used in combination with pino-socket, the log messages can be relayed to a syslog server:

$ node app.js | pino-syslog | pino-socket -a syslog.example.com

Example output for the "hello world" log:

<134>Apr  1 16:44:58 MacBook-Pro-3 none[94473]: {"pid":94473,"hostname":"MacBook-Pro-3","level":30,"msg":"hello world","time":1459529098958,"v":1}

pino-websocket

pino-websocket is a transport that will forward each log line to a WebSocket server.

$ node app.js | pino-websocket -a my-websocket-server.example.com -p 3004

For full documentation of command line switches, read the readme.

pino-http-send

pino-http-send is a configurable and low-overhead transport that will batch logs and send them to a specified URL.

$ node app.js | pino-http-send -u http://localhost:8080/logs