This commit is contained in:
Dawid Dziurla 2020-03-26 15:37:35 +01:00
parent d9becc67b6
commit 9308795b8b
No known key found for this signature in database
GPG key ID: 7B6D8368172E9B0B
964 changed files with 104265 additions and 16 deletions

node_modules/pino/docs/api.md generated vendored Normal file

@@ -0,0 +1,868 @@
# API
* [pino() => logger](#export)
* [options](#options)
* [destination](#destination)
* [destination\[Symbol.for('pino.metadata')\]](#metadata)
* [Logger Instance](#logger)
* [logger.trace()](#trace)
* [logger.debug()](#debug)
* [logger.info()](#info)
* [logger.warn()](#warn)
* [logger.error()](#error)
* [logger.fatal()](#fatal)
* [logger.child()](#child)
* [logger.bindings()](#bindings)
* [logger.flush()](#flush)
* [logger.level](#level)
* [logger.isLevelEnabled()](#islevelenabled)
* [logger.levels](#levels)
* [logger\[Symbol.for('pino.serializers')\]](#serializers)
* [Event: 'level-change'](#level-change)
* [logger.version](#version)
* [logger.LOG_VERSION](#log_version)
* [Statics](#statics)
* [pino.destination()](#pino-destination)
* [pino.extreme()](#pino-extreme)
* [pino.final()](#pino-final)
* [pino.stdSerializers](#pino-stdserializers)
* [pino.stdTimeFunctions](#pino-stdtimefunctions)
* [pino.symbols](#pino-symbols)
* [pino.version](#pino-version)
* [pino.LOG_VERSION](#pino-LOG_VERSION)
<a id="export"></a>
## `pino([options], [destination]) => logger`
The exported `pino` function takes two optional arguments,
[`options`](#options) and [`destination`](#destination), and
returns a [logger instance](#logger).
<a id=options></a>
### `options` (Object)
#### `name` (String)
Default: `undefined`
The name of the logger. When set, adds a `name` field to every JSON line logged.
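For example (a minimal sketch; the timestamp, `pid`, and `hostname` values will vary):
```js
const logger = pino({ name: 'myapp' })
logger.info('hello')
// {"level":30,"time":1531171074631,"pid":657,"hostname":"x","name":"myapp","msg":"hello","v":1}
```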
#### `level` (String)
Default: `'info'`
One of `'fatal'`, `'error'`, `'warn'`, `'info'`, `'debug'`, `'trace'` or `'silent'`.
Additional levels can be added to the instance via the `customLevels` option.
* See [`customLevels` option](#opt-customlevels)
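A minimal sketch of setting the level at instantiation:
```js
const logger = pino({ level: 'debug' })
logger.debug('written: debug is now the minimum level')
logger.trace('suppressed: trace is below debug')
```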
<a id=opt-customlevels></a>
#### `customLevels` (Object)
Default: `undefined`
Use this option to define additional logging levels.
The keys of the object correspond to the namespace of the log level,
and the values should be the numerical value of the level.
```js
const logger = pino({
  customLevels: {
    foo: 35
  }
})
logger.foo('hi')
```
<a id=opt-useOnlyCustomLevels></a>
#### `useOnlyCustomLevels` (Boolean)
Default: `false`
Use this option to only use defined `customLevels` and omit Pino's levels.
The logger's default `level` must be changed to a value in `customLevels` in order to use `useOnlyCustomLevels`.
Warning: this option may not be supported by downstream transports.
```js
const logger = pino({
  customLevels: {
    foo: 35
  },
  useOnlyCustomLevels: true,
  level: 'foo'
})
logger.foo('hi')
logger.info('hello') // Will throw an error saying info is not found in logger object
```
#### `mixin` (Function):
Default: `undefined`
If provided, the `mixin` function is called each time one of the active
logging methods is called. The function must synchronously return an
object. The properties of the returned object will be added to the
logged JSON.
```js
let n = 0
const logger = pino({
  mixin () {
    return { line: ++n }
  }
})
logger.info('hello')
// {"level":30,"time":1573664685466,"pid":78742,"hostname":"x","line":1,"msg":"hello","v":1}
logger.info('world')
// {"level":30,"time":1573664685469,"pid":78742,"hostname":"x","line":2,"msg":"world","v":1}
```
#### `redact` (Array | Object):
Default: `undefined`
As an array, the `redact` option specifies paths that should
have their values redacted from any log output.
Each path must be a string using a syntax which corresponds to JavaScript dot and bracket notation.
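A minimal sketch of the array form (the field names here are illustrative only):
```js
const logger = pino({
  redact: ['password', 'user.token']
})
logger.info({ user: { token: 'abc123' }, password: 'hunter2' })
// both values appear in the output as the default censor "[Redacted]"
```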
If an object is supplied, three options can be specified:
* `paths` (array): Required. An array of paths. See [redaction - Path Syntax ⇗](/docs/redaction.md#paths) for specifics.
* `censor` (String|Function|Undefined): Optional. When supplied as a String, the `censor` option will overwrite keys which are to be redacted. When set to `undefined`, the key will be removed entirely from the object.
The `censor` option may also be a mapping function. The (synchronous) mapping function is called with the unredacted value, and the value it returns becomes the applied censor value. Default: `'[Redacted]'`
* `remove` (Boolean): Optional. Instead of censoring the value, remove both the key and the value. Default: `false`
**WARNING**: Never allow user input to define redacted paths.
* See the [redaction ⇗](/docs/redaction.md) documentation.
* See [fast-redact#caveat ⇗](http://github.com/davidmarkclements/fast-redact#caveat)
<a id=opt-serializers></a>
#### `serializers` (Object)
Default: `{err: pino.stdSerializers.err}`
An object containing functions for custom serialization of objects.
These functions should return a JSONifiable object and they
should never throw. When logging an object, each top-level property
matching the exact key of a serializer will be serialized using the defined serializer.
* See [pino.stdSerializers](#pino-stdserializers)
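A minimal sketch (the `user` key and its serializer are illustrative assumptions):
```js
const logger = pino({
  serializers: {
    user: (user) => ({ id: user.id }) // drop everything except the id
  }
})
logger.info({ user: { id: 42, password: 'secret' } })
// the logged user object contains only {"id":42}
```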
##### `serializers[Symbol.for('pino.*')]` (Function)
Default: `undefined`
The `serializers` object may contain a key which is the global symbol: `Symbol.for('pino.*')`.
This will act upon the complete log object rather than corresponding to a particular key.
#### `base` (Object)
Default: `{pid: process.pid, hostname: os.hostname}`
Key-value object added, as child logger bindings, to each log line.
Set to `null` to avoid adding `pid`, `hostname` and `name` properties to each log.
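For instance, to drop the default `pid` and `hostname` bindings:
```js
const logger = pino({ base: null })
logger.info('no pid or hostname on this line')
```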
#### `enabled` (Boolean)
Default: `true`
Set to `false` to disable logging.
#### `crlf` (Boolean)
Default: `false`
Set to `true` to log newline delimited JSON with `\r\n` instead of `\n`.
<a id=opt-timestamp></a>
#### `timestamp` (Boolean | Function)
Default: `true`
Enables or disables the inclusion of a timestamp in the
log message. If a function is supplied, it must synchronously return a JSON string
representation of the time, e.g. `,"time":1493426328206` (which is the default).
If set to `false`, no timestamp will be included in the output.
See [stdTimeFunctions](#pino-stdtimefunctions) for a set of available functions
for passing in as a value for this option.
**Caution**: attempting to format time in-process will significantly impact logging performance.
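A sketch of supplying a custom time function (note the caution above about in-process formatting):
```js
const logger = pino({
  timestamp: () => ',"time":"' + (new Date()).toISOString() + '"'
})
```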
<a id=opt-messagekey></a>
#### `messageKey` (String)
Default: `'msg'`
The string key for the 'message' in the JSON object.
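For example:
```js
const logger = pino({ messageKey: 'message' })
logger.info('hello world')
// the log line contains "message":"hello world" instead of "msg":"hello world"
```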
<a id=opt-nestedkey></a>
#### `nestedKey` (String)
Default: `null`
If there's a chance that objects being logged have properties that conflict with those from pino itself (`level`, `timestamp`, `v`, `pid`, etc)
and duplicate keys in your log records are undesirable, pino can be configured with a `nestedKey` option that causes any `object`s that are logged
to be placed under a key whose name is the value of `nestedKey`.
This way, when searching something like Kibana for values, one can consistently search under the configured `nestedKey` value instead of the root log record keys.
For example,
```js
const logger = require('pino')({
  nestedKey: 'payload'
})
const thing = { level: 'hi', time: 'never', foo: 'bar'} // has pino-conflicting properties!
logger.info(thing)
// logs the following:
// {"level":30,"time":1578357790020,"pid":91736,"hostname":"x","payload":{"level":"hi","time":"never","foo":"bar"},"v":1}
```
In this way, logged objects' properties don't conflict with pino's standard logging properties,
and searching for logged objects can start from a consistent path.
<a id=prettyPrint></a>
#### `prettyPrint` (Boolean | Object)
Default: `false`
Enables pretty printing of logs. This is intended for non-production
configurations. This may be set to a configuration object as outlined in the
[`pino-pretty` documentation](https://github.com/pinojs/pino-pretty).
The options object may additionally contain a `prettifier` property to define
which prettifier module to use. When not present, `prettifier` defaults to
`'pino-pretty'`. Regardless of the value, the specified prettifier module
must be installed as a separate dependency:
```sh
npm install pino-pretty
```
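A minimal sketch; `colorize` is one of the options accepted by `pino-pretty` (assumed here for illustration):
```js
const logger = pino({
  prettyPrint: { colorize: true }
})
logger.info('pretty printed')
```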
<a id="useLevelLabels"></a>
#### `useLevelLabels` (Boolean)
Default: `false`
Enables printing of level labels instead of level values in the printed logs.
Warning: this option may not be supported by downstream transports.
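For example:
```js
const logger = pino({ useLevelLabels: true })
logger.info('hello')
// the line contains "level":"info" instead of "level":30
```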
<a id="changeLevelName"></a>
#### `changeLevelName` (String) - DEPRECATED
Use `levelKey` instead. This will be removed in v7.
<a id="levelKey"></a>
#### `levelKey` (String)
Default: `'level'`
Changes the property `level` to any string value you pass in:
```js
const logger = pino({
  levelKey: 'priority'
})
logger.info('hello world')
// {"priority":30,"time":1531257112193,"msg":"hello world","pid":55956,"hostname":"x","v":1}
```
#### `browser` (Object)
Browser only, may have `asObject` and `write` keys. This option is separately
documented in the [Browser API ⇗](/docs/browser.md) documentation.
* See [Browser API ⇗](/docs/browser.md)
<a id="destination"></a>
### `destination` (SonicBoom | WritableStream | String)
Default: `pino.destination(1)` (STDOUT)
The `destination` parameter must, at a minimum, be an object with a `write` method.
An ordinary Node.js `stream` can be passed as the destination (such as the result
of `fs.createWriteStream`) but for peak log writing performance it is strongly
recommended to use `pino.destination` or `pino.extreme` to create the destination stream.
```js
const pino = require('pino')
// pino.destination(1) by default
const stdoutLogger = pino()
// destination param may be in first position when no options:
const fileLogger = pino(pino.destination('/log/path'))
// use the stderr file handle to log to stderr:
const opts = {name: 'my-logger'}
const stderrLogger = pino(opts, pino.destination(2))
// automatic wrapping in pino.destination:
const wrappedFileLogger = pino('/log/path')
```
However, there are some special instances where `pino.destination` is not used as the default:
+ When something, e.g. a process manager, has monkey-patched `process.stdout.write`.
In these cases `process.stdout` is used instead.
* See [`pino.destination`](#pino-destination)
* See [`pino.extreme`](#pino-extreme)
<a id="metadata"></a>
#### `destination[Symbol.for('pino.metadata')]`
Default: `false`
Using the global symbol `Symbol.for('pino.metadata')` as a key on the `destination` parameter and
setting it to `true` indicates that the following properties should be
set on the `destination` object after each log line is written:
* the last logging level as `destination.lastLevel`
* the last logging message as `destination.lastMsg`
* the last logging object as `destination.lastObj`
* the last time as `destination.lastTime`, which will be the partial string returned
by the time function.
* the last logger instance as `destination.lastLogger` (to support child
loggers)
For a full reference for using `Symbol.for('pino.metadata')`, see the [`pino-multi-stream` ⇗](https://github.com/pinojs/pino-multi-stream)
module.
The following is a succinct usage example:
```js
const dest = pino.destination('/dev/null')
dest[Symbol.for('pino.metadata')] = true
const logger = pino(dest)
logger.info({a: 1}, 'hi')
const { lastMsg, lastLevel, lastObj, lastTime } = dest
console.log(
  'Logged message "%s" at level %d with object %o at time %s',
  lastMsg, lastLevel, lastObj, lastTime
) // Logged message "hi" at level 30 with object { a: 1 } at time 1531590545089
```
* See [`pino-multi-stream` ⇗](https://github.com/pinojs/pino-multi-stream)
<a id="logger"></a>
## Logger Instance
The logger instance is the object returned by the main exported
[`pino`](#export) function.
The primary purpose of the logger instance is to provide logging methods.
The default logging methods are `trace`, `debug`, `info`, `warn`, `error`, and `fatal`.
Each logging method has the following signature:
`([mergingObject], [message], [...interpolationValues])`.
The parameters are explained below using the `logger.info` method but the same applies to all logging methods.
### Logging Method Parameters
<a id=mergingobject></a>
#### `mergingObject` (Object)
An object can optionally be supplied as the first parameter. Each enumerable key and value
of the `mergingObject` is copied into the JSON log line.
```js
logger.info({MIX: {IN: true}})
// {"level":30,"time":1531254555820,"pid":55956,"hostname":"x","MIX":{"IN":true},"v":1}
```
<a id=message></a>
#### `message` (String)
A `message` string can optionally be supplied as the first parameter, or
as the second parameter after supplying a `mergingObject`.
By default, the contents of the `message` parameter will be merged into the
JSON log line under the `msg` key:
```js
logger.info('hello world')
// {"level":30,"time":1531257112193,"msg":"hello world","pid":55956,"hostname":"x","v":1}
```
The `message` parameter takes precedence over the `mergingObject`.
That is, if a `mergingObject` contains a `msg` property, and a `message` parameter
is supplied in addition, the `msg` property in the output log will be the value of
the `message` parameter, not the value of the `msg` property on the `mergingObject`.
The `messageKey` option can be used at instantiation time to change the namespace
from `msg` to another string as preferred.
The `message` string may contain a printf style string with support for
the following placeholders:
* `%s` string placeholder
* `%d` digit placeholder
* `%O`, `%o` and `%j` object placeholder
Values supplied as additional arguments to the logger method will
then be interpolated accordingly.
* See [`messageKey` pino option](#opt-messagekey)
* See [`...interpolationValues` log method parameter](#interpolationvalues)
<a id=interpolationvalues></a>
#### `...interpolationValues` (Any)
All arguments supplied after `message` are serialized and interpolated according
to any supplied printf-style placeholders (`%s`, `%d`, `%o`|`%O`|`%j`)
or else concatenated together with the `message` string to form the final
output `msg` value for the JSON log line.
```js
logger.info('hello', 'world')
// {"level":30,"time":1531257618044,"msg":"hello world","pid":55956,"hostname":"x","v":1}
```
```js
logger.info('hello', {worldly: 1})
// {"level":30,"time":1531257797727,"msg":"hello {\"worldly\":1}","pid":55956,"hostname":"x","v":1}
```
```js
logger.info('%o hello', {worldly: 1})
// {"level":30,"time":1531257826880,"msg":"{\"worldly\":1} hello","pid":55956,"hostname":"x","v":1}
```
* See [`message` log method parameter](#message)
<a id="trace"></a>
### `logger.trace([mergingObject], [message], [...interpolationValues])`
Write a `'trace'` level log, if the configured [`level`](#level) allows for it.
* See [`mergingObject` log method parameter](#mergingobject)
* See [`message` log method parameter](#message)
* See [`...interpolationValues` log method parameter](#interpolationvalues)
<a id="debug"></a>
### `logger.debug([mergingObject], [message], [...interpolationValues])`
Write a `'debug'` level log, if the configured `level` allows for it.
* See [`mergingObject` log method parameter](#mergingobject)
* See [`message` log method parameter](#message)
* See [`...interpolationValues` log method parameter](#interpolationvalues)
<a id="info"></a>
### `logger.info([mergingObject], [message], [...interpolationValues])`
Write an `'info'` level log, if the configured `level` allows for it.
* See [`mergingObject` log method parameter](#mergingobject)
* See [`message` log method parameter](#message)
* See [`...interpolationValues` log method parameter](#interpolationvalues)
<a id="warn"></a>
### `logger.warn([mergingObject], [message], [...interpolationValues])`
Write a `'warn'` level log, if the configured `level` allows for it.
* See [`mergingObject` log method parameter](#mergingobject)
* See [`message` log method parameter](#message)
* See [`...interpolationValues` log method parameter](#interpolationvalues)
<a id="error"></a>
### `logger.error([mergingObject], [message], [...interpolationValues])`
Write an `'error'` level log, if the configured `level` allows for it.
* See [`mergingObject` log method parameter](#mergingobject)
* See [`message` log method parameter](#message)
* See [`...interpolationValues` log method parameter](#interpolationvalues)
<a id="fatal"></a>
### `logger.fatal([mergingObject], [message], [...interpolationValues])`
Write a `'fatal'` level log, if the configured `level` allows for it.
Since `'fatal'` level messages are intended to be logged just prior to the process exiting,
the `fatal` method will always synchronously flush the destination.
Therefore it's important not to misuse `fatal`: it will cause
performance overhead if used for any purpose other than writing
final log messages before the process crashes or exits.
* See [`mergingObject` log method parameter](#mergingobject)
* See [`message` log method parameter](#message)
* See [`...interpolationValues` log method parameter](#interpolationvalues)
<a id="child"></a>
### `logger.child(bindings) => logger`
The `logger.child` method allows for the creation of stateful loggers,
where key-value pairs can be pinned to a logger causing them to be output
on every log line.
Child loggers use the same output stream as the parent and inherit
the current log level of the parent at the time they are spawned.
The log level of a child is mutable. It can be set independently
of the parent either by setting the [`level`](#level) accessor after creating
the child logger or using the reserved [`bindings.level`](#bindingslevel-string) key.
#### `bindings` (Object)
An object of key-value pairs to include in every log line output
via the returned child logger.
```js
const child = logger.child({ MIX: {IN: 'always'} })
child.info('hello')
// {"level":30,"time":1531258616689,"msg":"hello","pid":64849,"hostname":"x","MIX":{"IN":"always"},"v":1}
child.info('child!')
// {"level":30,"time":1531258617401,"msg":"child!","pid":64849,"hostname":"x","MIX":{"IN":"always"},"v":1}
```
The `bindings` object may contain any key except for reserved configuration keys `level` and `serializers`.
##### `bindings.level` (String)
If a `level` property is present in the `bindings` object passed to `logger.child`
it will override the child logger level.
```js
const logger = pino()
logger.debug('nope') // will not log, since default level is info
const child = logger.child({foo: 'bar', level: 'debug'})
child.debug('debug!') // will log as the `level` property set the level to debug
```
##### `bindings.serializers` (Object)
Child loggers inherit the [serializers](#opt-serializers) from the parent logger.
Setting the `serializers` key of the `bindings` object will override
any configured parent serializers.
```js
const logger = require('pino')()
logger.info({test: 'will appear'})
// {"level":30,"time":1531259759482,"pid":67930,"hostname":"x","test":"will appear","v":1}
const child = logger.child({serializers: {test: () => `child-only serializer`}})
child.info({test: 'will be overwritten'})
// {"level":30,"time":1531259784008,"pid":67930,"hostname":"x","test":"child-only serializer","v":1}
```
* See [`serializers` option](#opt-serializers)
* See [pino.stdSerializers](#pino-stdSerializers)
<a id="bindings"></a>
### `logger.bindings()`
Returns an object containing all the current bindings, cloned from the ones passed in via `logger.child()`.
```js
const child = logger.child({ foo: 'bar' })
console.log(child.bindings())
// { foo: 'bar' }
const anotherChild = child.child({ MIX: { IN: 'always' } })
console.log(anotherChild.bindings())
// { foo: 'bar', MIX: { IN: 'always' } }
```
<a id="flush"></a>
### `logger.flush()`
Flushes the content of the buffer when using a `pino.extreme` destination.
This is an asynchronous, fire-and-forget operation.
The use case is primarily for Extreme mode logging, which may hold up to
4KiB of logs. The `logger.flush` method can be used to flush the logs
on a long interval, say ten seconds. Such a strategy can provide an
optimum balance between extremely efficient logging at high demand periods
and safer logging at low demand periods.
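A sketch of such an interval-based strategy:
```js
const dest = pino.extreme()
const logger = pino(dest)
// flush the buffer every ten seconds during periods of low activity
setInterval(() => logger.flush(), 10000).unref()
```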
* See [`pino.extreme`](#pino-extreme)
* See [`destination` parameter](#destination)
* See [Extreme mode ⇗](/docs/extreme.md)
<a id="level"></a>
### `logger.level` (String) [Getter/Setter]
Set this property to the desired logging level.
The core levels and their values are as follows:
| **Level** | trace | debug | info | warn | error | fatal | silent   |
|:----------|-------|-------|------|------|-------|-------|---------:|
| **Value** | 10    | 20    | 30   | 40   | 50    | 60    | Infinity |
The logging level is a *minimum* level based on the associated value of that level.
For instance if `logger.level` is `info` *(30)* then `info` *(30)*, `warn` *(40)*, `error` *(50)* and `fatal` *(60)* log methods will be enabled but the `trace` *(10)* and `debug` *(20)* methods, being less than 30, will not.
The `silent` logging level is a specialized level which disables all logging;
there is no `silent` log method.
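The level can be changed at runtime through the setter:
```js
const logger = pino() // default level: info
logger.debug('suppressed')
logger.level = 'debug'
logger.debug('now written to the destination')
```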
<a id="islevelenabled"></a>
### `logger.isLevelEnabled(level)`
A utility method for determining if a given log level will write to the destination.
#### `level` (String)
The given level to check against:
```js
if (logger.isLevelEnabled('debug')) logger.debug('conditional log')
```
#### `levelLabel` (String)
Defines the method name of the new level.
* See [`logger.level`](#level)
#### `levelValue` (Number)
Defines the associated minimum threshold value for the level, and
therefore where it sits in order of priority among other levels.
* See [`logger.level`](#level)
<a id="levelVal"></a>
### `logger.levelVal` (Number)
Supplies the integer value for the current logging level.
```js
if (logger.levelVal === 30) {
  console.log('logger level is `info`')
}
```
<a id="levels"></a>
### `logger.levels` (Object)
Levels are mapped to values to determine the minimum threshold that a
logging method should be enabled at (see [`logger.level`](#level)).
The `logger.levels` property holds the mappings between levels and values,
and vice versa.
```sh
$ node -p "require('pino')().levels"
```
```js
{
  labels: {
    '10': 'trace',
    '20': 'debug',
    '30': 'info',
    '40': 'warn',
    '50': 'error',
    '60': 'fatal'
  },
  values: {
    fatal: 60, error: 50, warn: 40, info: 30, debug: 20, trace: 10
  }
}
```
* See [`logger.level`](#level)
<a id="serializers"></a>
### logger\[Symbol.for('pino.serializers')\]
Returns the serializers as applied to the current logger instance. If a child logger did not
register its own serializers upon instantiation, the serializers of the parent will be returned.
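A minimal sketch (the `test` serializer is illustrative):
```js
const logger = pino({ serializers: { test: () => 'serialized' } })
const child = logger.child({})
console.log(child[Symbol.for('pino.serializers')].test) // [Function]
```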
<a id="level-change"></a>
### Event: 'level-change'
The logger instance is also an [`EventEmitter` ⇗](https://nodejs.org/dist/latest/docs/api/events.html#events_class_eventemitter).
A listener function can be attached to a logger via the `level-change` event.
The listener is passed four arguments:
* `levelLabel` the new level string, e.g. `trace`
* `levelValue` the new level number, e.g. `10`
* `previousLevelLabel` the prior level string, e.g. `info`
* `previousLevelValue` the prior level number, e.g. `30`
```js
const logger = require('pino')()
logger.on('level-change', (lvl, val, prevLvl, prevVal) => {
  console.log('%s (%d) was changed from %s (%d)', lvl, val, prevLvl, prevVal)
})
logger.level = 'trace' // trigger event
```
<a id="version"></a>
### `logger.version` (String)
Exposes the Pino package version. Also available on the exported `pino` function.
* See [`pino.version`](#pino-version)
<a id="log_version"></a>
### `logger.LOG_VERSION` (Number)
Holds the current log format version as output in the `v` property of each log record.
Also available on the exported `pino` function.
* See [`pino.LOG_VERSION`](#pino-LOG_VERSION)
## Statics
<a id="pino-destination"></a>
### `pino.destination([target]) => SonicBoom`
Create a Pino Destination instance: a stream-like object with
significantly more throughput (over 30%) than a standard Node.js stream.
```js
const pino = require('pino')
const logger = pino(pino.destination('./my-file'))
const logger2 = pino(pino.destination())
```
The `pino.destination` method may be passed a file path or a numerical file descriptor.
By default, `pino.destination` will use `process.stdout.fd` (1) as the file descriptor.
`pino.destination` is implemented using [`sonic-boom` ⇗](https://github.com/mcollina/sonic-boom).
A `pino.destination` instance can also be used to reopen closed files
(for example, for some log rotation scenarios), see [Reopening log files](/docs/help.md#reopening).
* See [`destination` parameter](#destination)
* See [`sonic-boom` ⇗](https://github.com/mcollina/sonic-boom)
* See [Reopening log files](/docs/help.md#reopening)
<a id="pino-extreme"></a>
### `pino.extreme([target]) => SonicBoom`
Create an extreme mode destination. This yields an additional 60% performance boost.
There are trade-offs that should be understood before usage.
```js
const pino = require('pino')
const logger = pino(pino.extreme('./my-file'))
const logger2 = pino(pino.extreme())
```
The `pino.extreme` method may be passed a file path or a numerical file descriptor.
By default, `pino.extreme` will use `process.stdout.fd` (1) as the file descriptor.
`pino.extreme` is implemented with the [`sonic-boom` ⇗](https://github.com/mcollina/sonic-boom)
module.
A `pino.extreme` instance can also be used to reopen closed files
(for example, for some log rotation scenarios), see [Reopening log files](/docs/help.md#reopening).
On AWS Lambda we recommend calling `extreme.flushSync()` at the end
of each function execution to avoid losing data.
* See [`destination` parameter](#destination)
* See [`sonic-boom` ⇗](https://github.com/mcollina/sonic-boom)
* See [Extreme mode ⇗](/docs/extreme.md)
* See [Reopening log files](/docs/help.md#reopening)
<a id="pino-final"></a>
### `pino.final(logger, [handler]) => Function | FinalLogger`
The `pino.final` method can be used to acquire a final logger instance
or create an exit listener function.
The `finalLogger` is a specialist logger that synchronously flushes
on every write. This is important to guarantee final log writes
when using a `pino.extreme` destination.
Since final log writes cannot be guaranteed with normal Node.js streams,
if the `destination` parameter of the `logger` supplied to `pino.final`
is a Node.js stream `pino.final` will throw.
The use of `pino.final` with `pino.destination` is not needed, as
`pino.destination` writes synchronously.
#### `pino.final(logger, handler) => Function`
In this case the `pino.final` method supplies an exit listener function that can be
supplied to process exit events such as `exit`, `uncaughtException`,
`SIGHUP` and so on.
The exit listener function will call the supplied `handler` function
with an error object (or else `null`), a `finalLogger` instance followed
by any additional arguments the `handler` may be called with.
```js
process.on('uncaughtException', pino.final(logger, (err, finalLogger) => {
  finalLogger.error(err, 'uncaughtException')
  process.exit(1)
}))
```
#### `pino.final(logger) => FinalLogger`
In this case the `pino.final` method returns a finalLogger instance.
```js
const finalLogger = pino.final(logger)
finalLogger.info('exiting...')
```
* See [`destination` parameter](#destination)
* See [Exit logging help](/docs/help.md#exit-logging)
* See [Extreme mode ⇗](/docs/extreme.md)
* See [Log loss prevention ⇗](/docs/extreme.md#log-loss-prevention)
<a id="pino-stdserializers"></a>
### `pino.stdSerializers` (Object)
The `pino.stdSerializers` object provides functions for serializing objects common to many projects. The standard serializers are directly imported from [pino-std-serializers](https://github.com/pinojs/pino-std-serializers).
* See [pino-std-serializers ⇗](https://github.com/pinojs/pino-std-serializers)
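For instance, the standard error serializer (enabled by default under the `err` key) can also be applied to other keys; a minimal sketch:
```js
const logger = pino({
  serializers: { error: pino.stdSerializers.err }
})
logger.info({ error: new Error('boom') })
// the error is serialized with its type, message and stack
```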
<a id="pino-stdtimefunctions"></a>
### `pino.stdTimeFunctions` (Object)
The [`timestamp`](#opt-timestamp) option can accept a function which determines the
`timestamp` value in a log line.
The `pino.stdTimeFunctions` object provides a very small set of common functions for generating the
`timestamp` property. These consist of the following:
* `pino.stdTimeFunctions.epochTime`: Milliseconds since Unix epoch (Default)
* `pino.stdTimeFunctions.unixTime`: Seconds since Unix epoch
* `pino.stdTimeFunctions.nullTime`: Clears timestamp property (Used when `timestamp: false`)
* `pino.stdTimeFunctions.isoTime`: ISO 8601-formatted time in UTC
* See [`timestamp` option](#opt-timestamp)
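For example, to log ISO 8601 timestamps:
```js
const logger = pino({ timestamp: pino.stdTimeFunctions.isoTime })
logger.info('time logged as an ISO 8601 string')
```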
<a id="pino-symbols"></a>
### `pino.symbols` (Object)
For integration purposes with ecosystem and third party libraries `pino.symbols`
exposes the symbols used to hold non-public state and methods on the logger instance.
Access to the symbols allows logger state to be adjusted, and methods to be overridden or
proxied for performant integration where necessary.
The `pino.symbols` object is intended for library implementers and shouldn't be utilized
for general use.
<a id="pino-version"></a>
### `pino.version` (String)
Exposes the Pino package version. Also available on the logger instance.
* See [`logger.version`](#version)
<a id="pino-log_version"></a>
### `pino.LOG_VERSION` (Number)
Holds the current log format version as output in the `v` property of each log record. Also available on the logger instance.
* See [`logger.LOG_VERSION`](#log_version)

node_modules/pino/docs/benchmarks.md generated vendored Normal file

@@ -0,0 +1,58 @@
# Benchmarks
`pino.info('hello world')`:
```
BASIC benchmark averages
Bunyan average: 549.042ms
Winston average: 467.873ms
Bole average: 201.529ms
Debug average: 253.724ms
LogLevel average: 282.653ms
Pino average: 188.956ms
PinoExtreme average: 108.809ms
```
`pino.info({'hello': 'world'})`:
```
OBJECT benchmark averages
BunyanObj average: 564.363ms
WinstonObj average: 464.824ms
BoleObj average: 230.220ms
LogLevelObject average: 474.857ms
PinoObj average: 201.442ms
PinoUnsafeObj average: 202.687ms
PinoExtremeObj average: 108.689ms
PinoUnsafeExtremeObj average: 106.718ms
```
`pino.info(aBigDeeplyNestedObject)`:
```
DEEPOBJECT benchmark averages
BunyanDeepObj average: 5293.279ms
WinstonDeepObj average: 9020.292ms
BoleDeepObj average: 9169.043ms
LogLevelDeepObj average: 15260.917ms
PinoDeepObj average: 8467.807ms
PinoUnsafeDeepObj average: 6159.227ms
PinoExtremeDeepObj average: 8354.557ms
PinoUnsafeExtremeDeepObj average: 6214.073ms
```
`pino.info('hello %s %j %d', 'world', {obj: true}, 4, {another: 'obj'})`:
```
BunyanInterpolateExtra average: 778.408ms
WinstonInterpolateExtra average: 627.956ms
BoleInterpolateExtra average: 429.757ms
PinoInterpolateExtra average: 316.043ms
PinoUnsafeInterpolateExtra average: 316.809ms
PinoExtremeInterpolateExtra average: 218.468ms
PinoUnsafeExtremeInterpolateExtra average: 215.040ms
```
For a fair comparison, [LogLevel](http://npm.im/loglevel) was extended
to include a timestamp and [bole](http://npm.im/bole) had
`fastTime` mode switched on.

node_modules/pino/docs/browser.md generated vendored Normal file

@@ -0,0 +1,199 @@
# Browser API
Pino is compatible with [`browserify`](http://npm.im/browserify) for browser-side usage.
This can be useful with isomorphic/universal JavaScript code.
By default, in the browser,
`pino` uses the corresponding [Log4j](https://en.wikipedia.org/wiki/Log4j)-style `console` methods (`console.error`, `console.warn`, `console.info`, `console.debug`, `console.trace`) and uses `console.error` for any `fatal` level logs.
## Options
Pino can be passed a `browser` object in the options object,
which can have the following properties:
### `asObject` (Boolean)
```js
const pino = require('pino')({browser: {asObject: true}})
```
The `asObject` option will create a pino-like log object instead of
passing all arguments to a console method, for instance:
```js
pino.info('hi') // creates and logs {msg: 'hi', level: 30, time: <ts>}
```
When `write` is set, `asObject` will always be `true`.
### `write` (Function | Object)
Instead of passing log messages to `console.log` they can be passed to
a supplied function.
If `write` is set to a single function, all logging objects are passed
to this function.
```js
const pino = require('pino')({
  browser: {
    write: (o) => {
      // do something with o
    }
  }
})
```
If `write` is an object, it can have methods that correspond to the
levels. When a message is logged at a given level, the corresponding
method is called. If a method isn't present, the logging falls back
to using the `console`.
```js
const pino = require('pino')({
  browser: {
    write: {
      info: function (o) {
        // process info log object
      },
      error: function (o) {
        // process error log object
      }
    }
  }
})
```
### `serialize` (Boolean | Array)
The serializers provided to `pino` are ignored by default in the browser, including
the standard serializers provided with Pino. Since the default destination for log
messages is the console, values such as `Error` objects are enhanced for inspection,
which they otherwise wouldn't be if the Error serializer was enabled.
We can turn all serializers on,
```js
const pino = require('pino')({
  browser: {
    serialize: true
  }
})
```
Or we can selectively enable them via an array:
```js
const pino = require('pino')({
  serializers: {
    custom: myCustomSerializer,
    another: anotherSerializer
  },
  browser: {
    serialize: ['custom']
  }
})
// following will apply myCustomSerializer to the custom property,
// but will not apply anotherSerializer to another key
pino.info({custom: 'a', another: 'b'})
```
When `serialize` is `true` the standard error serializer is also enabled (see https://github.com/pinojs/pino/blob/master/docs/api.md#stdSerializers).
This is a global serializer which will apply to any `Error` objects passed to the logger methods.
If `serialize` is an array, the standard error serializer is also automatically enabled. It can
be explicitly disabled by including the string `'!stdSerializers.err'` in the serialize array, like so:
```js
const pino = require('pino')({
  serializers: {
    custom: myCustomSerializer,
    another: anotherSerializer
  },
  browser: {
    serialize: ['!stdSerializers.err', 'custom'] // will not serialize Errors, will serialize `custom` keys
  }
})
```
The `serialize` array also applies to any child logger serializers (see https://github.com/pinojs/pino/blob/master/docs/api.md#discussion-2
for how to set child-bound serializers).
Unlike server-side Pino, the serializers apply to every object passed to the logger method.
If the `asObject` option is `true`, the serializers apply only to the
first object (as in server-side Pino).
For more info on serializers see https://github.com/pinojs/pino/blob/master/docs/api.md#parameters.
### `transmit` (Object)
An object with `send` and `level` properties.
The `transmit.level` property specifies the minimum level (inclusive) at which the `send` function
should be called. If not supplied, the `send` function will be called based on the main logging `level`
(set via `options.level`, defaulting to `info`).
The `transmit` object must have a `send` function which will be called after
writing the log message. The `send` function is passed the level of the log
message and a `logEvent` object.
The `logEvent` object is a data structure representing a log message. It captures
the arguments passed to a logger statement, the level
at which they were logged, and the hierarchy of child bindings.
The `logEvent` format is structured like so:
```js
{
  ts: Number,
  messages: Array,
  bindings: Array,
  level: { label: String, value: Number }
}
```
The `ts` property is a Unix epoch timestamp in milliseconds, taken at the moment the
logger method is called.
The `messages` array contains all arguments passed to the logger method (for instance `logger.info('a', 'b', 'c')`
would result in a `messages` array of `['a', 'b', 'c']`).
The `bindings` array represents each child logger (if any), and the relevant bindings.
For instance given `logger.child({a: 1}).child({b: 2}).info({c: 3})`, the bindings array
would hold `[{a: 1}, {b: 2}]` and the `messages` array would be `[{c: 3}]`. The `bindings`
are ordered according to their position in the child logger hierarchy, with the lowest index
being the top of the hierarchy.
By default serializers are not applied to log output in the browser, but they will *always* be
applied to `messages` and `bindings` in the `logEvent` object. This allows us to ensure a consistent
format for all values between server and client.
The `level` holds the label (for instance `info`), and the corresponding numerical value
(for instance `30`). This could be important in cases where client side level values and
labels differ from server side.
The point of the `send` function is to remotely record log messages:
```js
const pino = require('pino')({
  browser: {
    transmit: {
      level: 'warn',
      send: function (level, logEvent) {
        if (level === 'warn') {
          // maybe send the logEvent to a separate endpoint
          // or maybe analyse the messages further before sending
        }
        // we could also use the `logEvent.level.value` property to determine
        // numerical value
        if (logEvent.level.value >= 50) { // covers error and fatal
          // send the logEvent somewhere
        }
      }
    }
  }
})
```

node_modules/pino/docs/child-loggers.md generated vendored Normal file

@@ -0,0 +1,95 @@
# Child loggers
Let's assume we want to have `"module":"foo"` added to every log within a
module `foo.js`.
To accomplish this, simply use a child logger:
```js
'use strict'
// imports a pino logger instance of `require('pino')()`
const parentLogger = require('./lib/logger')
const log = parentLogger.child({module: 'foo'})

function doSomething () {
  log.info('doSomething invoked')
}

module.exports = {
  doSomething
}
```
## Cost of child logging
Child logger creation is fast:
```
benchBunyanCreation*10000: 564.514ms
benchBoleCreation*10000: 283.276ms
benchPinoCreation*10000: 258.745ms
benchPinoExtremeCreation*10000: 150.506ms
```
Logging through a child logger has little performance penalty:
```
benchBunyanChild*10000: 556.275ms
benchBoleChild*10000: 288.124ms
benchPinoChild*10000: 231.695ms
benchPinoExtremeChild*10000: 122.117ms
```
Logging via the child logger of a child logger also has negligible overhead:
```
benchBunyanChildChild*10000: 559.082ms
benchPinoChildChild*10000: 229.264ms
benchPinoExtremeChildChild*10000: 127.753ms
```
## Duplicate keys caveat
It's possible for naming conflicts to arise between child loggers and
children of child loggers.
This isn't as bad as it sounds: even if the same keys are used in both
parent and child loggers, Pino resolves the conflict in the sanest way.
For example, consider the following:
```js
const pino = require('pino')
pino(pino.destination('./my-log'))
  .child({a: 'property'})
  .child({a: 'prop'})
  .info('howdy')
```
```sh
$ cat my-log
{"pid":95469,"hostname":"MacBook-Pro-3.home","level":30,"msg":"howdy","time":1459534114473,"a":"property","a":"prop","v":1}
```
Notice how there are two keys named `a` in the JSON output. The sub-child's properties
appear after the parent child's properties.
At some point the logs will most likely be processed (for instance with a [transport](transports.md)),
and this generally involves parsing. `JSON.parse` will return an object where the conflicting
namespace holds the final value assigned to it:
```sh
$ cat my-log | node -e "process.stdin.once('data', (line) => console.log(JSON.stringify(JSON.parse(line))))"
{"pid":95469,"hostname":"MacBook-Pro-3.home","level":30,"msg":"howdy","time":"2016-04-01T18:08:34.473Z","a":"prop","v":1}
```
Ultimately the conflict is resolved by taking the last value, which aligns with Bunyan's child logging
behavior.
This edge case may become problematic if a JSON parser with alternative behavior
is used to process the logs. It's recommended to be conscious of namespace conflicts with child loggers,
in light of the expected log processing approach.
One of Pino's performance tricks is to avoid building objects and stringifying
them, so we're building strings instead. This is why duplicate keys between
parents and children will end up in log output.

node_modules/pino/docs/ecosystem.md generated vendored Normal file

@@ -0,0 +1,72 @@
# Pino Ecosystem
This is a list of ecosystem modules that integrate with `pino`.
Modules listed under [Core](#core) are maintained by the Pino team. Modules
listed under [Community](#community) are maintained by independent community
members.
Please send a PR to add new modules!
<a id="core"></a>
## Core
+ [`express-pino-logger`](https://github.com/pinojs/express-pino-logger): use
Pino to log requests within [express](https://expressjs.com/).
+ [`koa-pino-logger`](https://github.com/pinojs/koa-pino-logger): use Pino to
log requests within [Koa](http://koajs.com/).
+ [`pino-arborsculpture`](https://github.com/pinojs/pino-arborsculpture): change
log levels at runtime.
+ [`pino-caller`](https://github.com/pinojs/pino-caller): add callsite to the log line.
+ [`pino-clf`](https://github.com/pinojs/pino-clf): reformat Pino logs into
Common Log Format.
+ [`pino-debug`](https://github.com/pinojs/pino-debug): use Pino to interpret
[`debug`](https://npm.im/debug) logs.
+ [`pino-elasticsearch`](https://github.com/pinojs/pino-elasticsearch): send
Pino logs to an Elasticsearch instance.
+ [`pino-eventhub`](https://github.com/pinojs/pino-eventhub): send Pino logs
to an [Event Hub](https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-what-is-event-hubs).
+ [`pino-filter`](https://github.com/pinojs/pino-filter): filter Pino logs in
the same fashion as the [`debug`](https://npm.im/debug) module.
+ [`pino-gelf`](https://github.com/pinojs/pino-gelf): reformat Pino logs into
GELF format for Graylog.
+ [`pino-hapi`](https://github.com/pinojs/hapi-pino): use Pino as the logger
for [Hapi](https://hapijs.com/).
+ [`pino-http`](https://github.com/pinojs/pino-http): easily use Pino to log
requests with the core `http` module.
+ [`pino-http-print`](https://github.com/pinojs/pino-http-print): reformat Pino
logs into traditional [HTTPD](https://httpd.apache.org/) style request logs.
+ [`pino-multi-stream`](https://github.com/pinojs/pino-multi-stream): send
logs to multiple destination streams (slow!).
+ [`pino-mongodb`](https://github.com/pinojs/pino-mongodb): store Pino logs
in a MongoDB database.
+ [`pino-noir`](https://github.com/pinojs/pino-noir): redact sensitive information
in logs.
+ [`pino-pretty`](https://github.com/pinojs/pino-pretty): basic prettifier to
make log lines human readable.
+ [`pino-socket`](https://github.com/pinojs/pino-socket): send logs to TCP or UDP
destinations.
+ [`pino-std-serializers`](https://github.com/pinojs/pino-std-serializers): the
core object serializers used within Pino.
+ [`pino-syslog`](https://github.com/pinojs/pino-syslog): reformat Pino logs
to standard syslog format.
+ [`pino-tee`](https://github.com/pinojs/pino-tee): pipe Pino logs into files
based upon log levels.
+ [`pino-toke`](https://github.com/pinojs/pino-toke): reformat Pino logs
according to a given format string.
+ [`restify-pino-logger`](https://github.com/pinojs/restify-pino-logger): use
Pino to log requests within [restify](http://restify.com/).
+ [`rill-pino-logger`](https://github.com/pinojs/rill-pino-logger): use Pino as
the logger for the [Rill framework](https://rill.site/).
<a id="community"></a>
## Community
+ [`pino-colada`](https://github.com/lrlna/pino-colada): cute ndjson formatter for pino.
+ [`pino-fluentd`](https://github.com/davidedantonio/pino-fluentd): send Pino logs to Elasticsearch,
MongoDB and many [others](https://www.fluentd.org/dataoutputs) via Fluentd.
+ [`pino-pretty-min`](https://github.com/unjello/pino-pretty-min): a minimal
prettifier inspired by the [logrus](https://github.com/sirupsen/logrus) logger.
+ [`pino-rotating-file`](https://github.com/homeaway/pino-rotating-file): a hapi-pino log transport for splitting logs into separate, automatically rotating files.
+ [`cls-proxify`](https://github.com/keenondrums/cls-proxify): integration of pino and [CLS](https://github.com/jeff-lewis/cls-hooked). Useful for creating dynamically configured child loggers (e.g. with added trace ID) for each request.

node_modules/pino/docs/extreme.md generated vendored Normal file

@@ -0,0 +1,95 @@
# Extreme Mode
In essence, extreme mode enables even faster performance from Pino.
In Pino's standard mode of operation log messages are directly written to the
output stream as the messages are generated. Extreme mode works by buffering
log messages and writing them in larger chunks.
## Caveats
This comes with some important caveats:
* 4KB of spare RAM will be needed for logging
* As opposed to the default mode, there is not a one-to-one relationship between
calls to logging methods (e.g. `logger.info`) and writes to a log file
* There is a possibility of the most recently buffered log messages being lost
(up to 4KB of logs)
* For instance, a power cut will mean up to 4KB of buffered logs will be lost
So in summary, only use extreme mode when performing an extreme amount of
logging and it is acceptable to potentially lose the most recent logs.
* Pino will register handlers for the following process events/signals so that
Pino can flush the extreme mode buffer:
+ `beforeExit`
+ `exit`
+ `uncaughtException`
+ `SIGHUP`
+ `SIGINT`
+ `SIGQUIT`
+ `SIGTERM`
In all of these cases, except `SIGHUP`, the process is in a state that it
*must* terminate. Thus, if an `onTerminated` function isn't registered when
constructing a Pino instance (see [pino#constructor](api.md#constructor)),
then Pino will invoke `process.exit(0)` when no error has occurred, or
`process.exit(1)` otherwise. If an `onTerminated` function is supplied, it
is the responsibility of the `onTerminated` function to manually exit the process.
In the case of `SIGHUP`, we will look to see if any other handlers are
registered for the event. If not, we will proceed as we do with all other
signals. If there are more handlers registered than just our own, we will
simply flush the extreme mode buffer.
## Usage
The `pino.extreme()` method will provide an Extreme Mode destination.
```js
const pino = require('pino')
const dest = pino.extreme() // logs to stdout with no args
const logger = pino(dest)
```
<a id='log-loss-prevention'></a>
## Log loss prevention
The following strategy can be used to minimize log loss:
```js
const pino = require('pino')
const dest = pino.extreme() // no arguments
const logger = pino(dest)

// asynchronously flush every 10 seconds to keep the buffer empty
// in periods of low activity
setInterval(function () {
  logger.flush()
}, 10000).unref()

// use pino.final to create a special logger that
// guarantees final tick writes
const handler = pino.final(logger, (err, finalLogger, evt) => {
  finalLogger.info(`${evt} caught`)
  if (err) finalLogger.error(err, 'error caused exit')
  process.exit(err ? 1 : 0)
})

// catch all the ways node might exit
process.on('beforeExit', () => handler(null, 'beforeExit'))
process.on('exit', () => handler(null, 'exit'))
process.on('uncaughtException', (err) => handler(err, 'uncaughtException'))
process.on('SIGINT', () => handler(null, 'SIGINT'))
process.on('SIGQUIT', () => handler(null, 'SIGQUIT'))
process.on('SIGTERM', () => handler(null, 'SIGTERM'))
```
An extreme destination is an instance of
[`SonicBoom`](https://github.com/mcollina/sonic-boom) with a `4096` byte
buffer.
* See [`pino.extreme` api](/docs/api.md#pino-extreme)
* See [`pino.final` api](/docs/api.md#pino-final)
* See [`destination` parameter](/docs/api.md#destination)

node_modules/pino/docs/help.md generated vendored Normal file

@@ -0,0 +1,215 @@
# Help
* [Exit logging](#exit-logging)
* [Log rotation](#rotate)
* [Reopening log files](#reopening)
* [Saving to multiple files](#multiple)
* [Log filtering](#filter-logs)
* [Transports and systemd](#transport-systemd)
* [Duplicate keys](#dupe-keys)
* [Log levels as labels instead of numbers](#level-string)
* [Pino with `debug`](#debug)
* [Unicode and Windows terminal](#windows)
<a id="exit-logging"></a>
## Exit logging
When a Node process crashes from an uncaught exception, exits due to a signal,
or exits of its own accord, we may want to write some final logs, particularly
in cases of error.
Writing to a Node.js stream on exit is not necessarily guaranteed, and naively writing
to an Extreme Mode logger on exit will definitely lead to lost logs.
To write logs in an exit handler, create the handler with [`pino.final`](/docs/api.md#pino-final):
```js
process.on('uncaughtException', pino.final(logger, (err, finalLogger) => {
  finalLogger.error(err, 'uncaughtException')
  process.exit(1)
}))

process.on('unhandledRejection', pino.final(logger, (err, finalLogger) => {
  finalLogger.error(err, 'unhandledRejection')
  process.exit(1)
}))
```
The `finalLogger` is a special logger instance that will synchronously and reliably
flush every log line. This is important in exit handlers, since no more asynchronous
activity may be scheduled.
<a id="rotate"></a>
## Log rotation
Use a separate tool for log rotation; we recommend [logrotate](https://github.com/logrotate/logrotate).
Consider we output our logs to `/var/log/myapp.log` like so:
```
$ node server.js > /var/log/myapp.log
```
We would rotate our log files with logrotate, by adding the following to `/etc/logrotate.d/myapp`:
```
/var/log/myapp.log {
  su root
  daily
  rotate 7
  delaycompress
  compress
  notifempty
  missingok
  copytruncate
}
```
The `copytruncate` configuration has a very slight possibility of lost log lines due
to a gap between copying and truncating - the truncate may occur after additional lines
have been written. To perform log rotation without `copytruncate`, see the [Reopening log files](#reopening)
help.
<a id="reopening"></a>
## Reopening log files
In cases where a log rotation tool doesn't offer copy-truncate capabilities,
or where using them is deemed inappropriate, `pino.destination` and `pino.extreme`
destinations are able to reopen file paths after a file has been moved away.
One way to use this is to set up a `SIGUSR2` or `SIGHUP` signal handler that
reopens the log file destination, making sure to write the process PID out
somewhere so the log rotation tool knows where to send the signal.
```js
// write the process pid to a well known location for later
const pino = require('pino')
const fs = require('fs')
fs.writeFileSync('/var/run/myapp.pid', process.pid.toString())
const dest = pino.destination('/log/file') // pino.extreme will also work
const logger = pino(dest)
process.on('SIGHUP', () => dest.reopen())
```
The log rotation tool can then be configured to send this signal to the process
after a log rotation event has occurred.
Given a similar scenario as in the [Log rotation](#rotate) section a basic
`logrotate` config that aligns with this strategy would look similar to the following:
```
/var/log/myapp.log {
  su root
  daily
  rotate 7
  delaycompress
  compress
  notifempty
  missingok
  postrotate
    kill -HUP `cat /var/run/myapp.pid`
  endscript
}
```
<a id="multiple"></a>
## Saving to multiple files
Let's assume we want to store all error messages to a separate log file.
Install [pino-tee](http://npm.im/pino-tee) with:
```bash
npm i pino-tee -g
```
The following writes the log output of `app.js` to `./all-logs`, while
writing only warnings and errors to `./warn-logs`:
```bash
node app.js | pino-tee warn ./warn-logs > ./all-logs
```
<a id="filter-logs"></a>
## Log Filtering
The Pino philosophy advocates common, pre-existing, system utilities.
Some recommendations in line with this philosophy are:
1. Use [`grep`](https://linux.die.net/man/1/grep):
```sh
$ # View all "INFO" level logs
$ node app.js | grep '"level":30'
```
1. Use [`jq`](https://stedolan.github.io/jq/):
```sh
$ # View all "ERROR" level logs
$ node app.js | jq 'select(.level == 50)'
```
<a id="transport-systemd"></a>
## Transports and systemd
`systemd` makes it complicated to use pipes in services. One method for overcoming
this challenge is to use a subshell:
```
ExecStart=/bin/sh -c '/path/to/node app.js | pino-transport'
```
<a id="dupe-keys"></a>
## How Pino handles duplicate keys
Duplicate keys are possible when a child logger logs an object with a key that
collides with a key in the child logger's bindings.
See the [child logger duplicate keys caveat](/docs/child-loggers.md#duplicate-keys-caveat)
for information on how this is handled.
<a id="level-string"></a>
## Log levels as labels instead of numbers
Pino log lines are meant to be parseable. Thus, Pino's default mode of operation
is to print the level value instead of the string name. However, while it is
possible to set the `useLevelLabels` option, we recommend using one of these
options instead if you are able:
1. If the only change desired is the name then a transport can be used. One such
transport is [`pino-text-level-transport`](https://npm.im/pino-text-level-transport).
1. Use a prettifier like [`pino-pretty`](https://npm.im/pino-pretty) to make
the logs human friendly.
<a id="debug"></a>
## Pino with `debug`
The popular [`debug`](http://npm.im/debug) module is used in many packages across the ecosystem.
The [`pino-debug`](http://github.com/pinojs/pino-debug) module
can capture calls to `debug` loggers and run them
through `pino` instead. This results in a 10x (20x in extreme mode)
performance improvement - even though `pino-debug` is logging additional
data and wrapping it in JSON.
To quickly enable this install [`pino-debug`](http://github.com/pinojs/pino-debug)
and preload it with the `-r` flag, enabling any `debug` logs with the
`DEBUG` environment variable:
```sh
$ npm i pino-debug
$ DEBUG=* node -r pino-debug app.js
```
[`pino-debug`](http://github.com/pinojs/pino-debug) also offers fine-grained control to map specific `debug`
namespaces to `pino` log levels. See [`pino-debug`](http://github.com/pinojs/pino-debug)
for more.
<a id="windows"></a>
## Unicode and Windows terminal
Pino uses [sonic-boom](https://github.com/mcollina/sonic-boom) to speed
up logging. Internally, it uses [`fs.write`](https://nodejs.org/dist/latest-v10.x/docs/api/fs.html#fs_fs_write_fd_string_position_encoding_callback) to write log lines directly to a file
descriptor. On Windows, Unicode output is not handled properly in the
terminal (both `cmd.exe` and PowerShell), and as such the output could
be visualized incorrectly if the log lines include utf8 characters. It
is possible to configure the terminal to visualize those characters
correctly with the use of [`chcp`](https://ss64.com/nt/chcp.html) by
executing in the terminal `chcp 65001`. This is a known limitation of
Node.js.

node_modules/pino/docs/legacy.md generated vendored Normal file

@@ -0,0 +1,167 @@
# Legacy
## Legacy Node Support
### Node v4
Node v4 is supported on the [Pino v4](#pino-v4-documentation) line.
### Node v0.10-v0.12
Node v0.10 or Node v0.12 is supported on the [Pino v2](#pino-v2-documentation) line.
## Documentation
### Pino v4 Documentation
<https://github.com/pinojs/pino/tree/v4.x.x/docs>
### Pino v3 Documentation
<https://github.com/pinojs/pino/tree/v3.x.x/docs>
### Pino v2 Documentation
<https://github.com/pinojs/pino/tree/v2.x.x/docs>
## Migration
### Pino v4 to Pino v5
#### Logging Destination
In Pino v4 the destination could be set by passing a stream as the
second parameter to the exported `pino` function. This is still the
case in v5. However it's strongly recommended to use `pino.destination`
which will write logs ~30% faster.
##### v4
```js
const stdoutLogger = require('pino')()
const stderrLogger = require('pino')(process.stderr)
const fileLogger = require('pino')(fs.createWriteStream('/log/path'))
```
##### v5
```js
const stdoutLogger = require('pino')() // pino.destination by default
const stderrLogger = require('pino')(pino.destination(2))
const fileLogger = require('pino')(pino.destination('/log/path'))
```
Note: This is not a breaking change, `WritableStream` instances are still
supported, but are slower than `pino.destination` which
uses the high speed [`sonic-boom` ⇗](https://github.com/mcollina/sonic-boom) library.
* See [`destination` parameter](/docs/api.md#destination)
#### Extreme Mode
The `extreme` setting does not exist as an option in Pino v5, instead use
a `pino.extreme` destination.
##### v4
```js
const stdoutLogger = require('pino')({extreme: true})
const stderrLogger = require('pino')({extreme: true}, process.stderr)
const fileLogger = require('pino')({extreme: true}, fs.createWriteStream('/log/path'))
```
##### v5
```js
const stdoutLogger = require('pino')(pino.extreme())
const stderrLogger = require('pino')(pino.extreme(2))
const fileLogger = require('pino')(pino.extreme('/log/path'))
```
* See [pino.extreme](/docs/api.md#pino-extreme)
* See [Extreme mode ⇗](/docs/extreme.md)
#### Pino CLI is now pino-pretty CLI
The Pino CLI was provided with Pino v4 for basic log prettification.
From Pino v5 the CLI is installed separately as `pino-pretty`.
##### v4
```sh
$ npm install -g pino
$ node app.js | pino
```
##### v5
```sh
$ npm install -g pino-pretty
$ node app.js | pino-pretty
```
* See [Pretty Printing documentation](/docs/pretty.md)
#### Programmatic Pretty Printing
The [`pino.pretty()`](https://github.com/pinojs/pino/blob/v4.x.x/docs/API.md#prettyoptions)
method has also been removed from Pino v5.
##### v4
```js
var pino = require('pino')
var pretty = pino.pretty()
pretty.pipe(process.stdout)
```
##### v5
Instead use the `prettyPrint` option (also available in v4):
```js
const logger = require('pino')({
prettyPrint: process.env.NODE_ENV !== 'production'
})
```
In v5 the `pino-pretty` module must be installed to use the `prettyPrint` option:
```sh
npm install --save-dev pino-pretty
```
* See [prettyPrint option](/docs/api.md#prettyPrint)
* See [Pretty Printing documentation](/docs/pretty.md)
#### Slowtime
In Pino v4 a `slowtime` option was supplied, which allowed for full ISO dates
in the timestamps instead of milliseconds since the Epoch. In Pino v5 this
has been completely removed, along with the `pino.stdTimeFunctions.slowTime`
function. In order to achieve the equivalent in v5, a custom
time function should be supplied:
##### v4
```js
const pino = require('pino')
const logger = pino({slowtime: true})
// following avoids deprecation warning in v4:
const loggerAlt = pino({timestamp: pino.stdTimeFunctions.slowTime})
```
##### v5
```js
const logger = require('pino')({
timestamp: () => ',"time":"' + (new Date()).toISOString() + '"'
})
```
Creating ISO dates in-process for logging purposes is strongly
discouraged. Instead, consider post-processing the logs or using a transport
to convert the timestamps, as in the sketch below.
* See [timestamp option](/docs/api.md#timestamp)
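For example, a minimal post-processing sketch (the filename and stream wiring are illustrative) that converts epoch timestamps to ISO strings in a separate process:
```js
// iso-time.js: pipe NDJSON logs through this process,
// e.g. `node app.js | node iso-time.js`
const split = require('split2')
const { Transform } = require('stream')

const toIso = new Transform({
  transform (line, enc, cb) {
    try {
      const log = JSON.parse(line)
      log.time = new Date(log.time).toISOString() // rewrite epoch ms as ISO
      cb(null, JSON.stringify(log) + '\n')
    } catch (_) {
      cb(null, line + '\n') // pass non-JSON lines through untouched
    }
  }
})

process.stdin.pipe(split()).pipe(toIso).pipe(process.stdout)
```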

node_modules/pino/docs/pretty.md generated vendored Normal file
# Pretty Printing
By default, Pino log lines are newline delimited JSON (NDJSON). This is perfect
for production usage and long-term storage. It's not so great for development
environments. Thus, Pino logs can be prettified by using a Pino prettifier
module like [`pino-pretty`][pp]:
```sh
$ cat app.log | pino-pretty
```
For almost all situations, this is the recommended way to prettify logs. The
programmatic API, described in the next section, is primarily for integration
purposes with other CLI-based prettifiers.
## Prettifier API
Pino prettifier modules are extra modules that provide a CLI for parsing NDJSON
log lines piped via `stdin` and expose an API which conforms to the Pino
[metadata streams](api.md#metadata) API.
The API requires that modules provide a factory function which returns a prettifier
function. This prettifier function must accept either a string of NDJSON or
a Pino log object. The uninitialized Pino instance is passed as `this` into the
prettifier factory function, so it can be accessed via closure by the returned
prettifier function. A pseudo-example of such a prettifier is:
```js
module.exports = function myPrettifier (options) {
// `this` is bound to the pino instance
// Deal with whatever options are supplied.
return function prettifier (inputData) {
let logObject
if (typeof inputData === 'string') {
const parsedData = someJsonParser(inputData)
logObject = (isPinoLog(parsedData)) ? parsedData : undefined
} else if (isObject(inputData) && isPinoLog(inputData)) {
logObject = inputData
}
if (!logObject) return inputData
// implement prettification
}
function isObject (input) {
return Object.prototype.toString.apply(input) === '[object Object]'
}
function isPinoLog (log) {
return log && (log.hasOwnProperty('v') && log.v === 1)
}
}
```
The reference implementation of such a module is the [`pino-pretty`][pp] module.
To learn more about creating a custom prettifier module, refer to the
`pino-pretty` source code.
Note: if the prettifier returns `undefined` instead of a formatted line, nothing
will be written to the destination stream.
### API Example
> #### NOTE:
> For general usage, it is highly recommended that logs are piped into
> the prettifier rather than prettified in-process. Prettified logs are not easily parsed and cannot
> be easily investigated at a later date.
1. Install a prettifier module as a separate dependency, e.g. `npm install pino-pretty`.
1. Instantiate the logger with pretty printing enabled:
```js
const pino = require('pino')
const log = pino({
prettyPrint: {
levelFirst: true
},
prettifier: require('pino-pretty')
})
```
Note: the default prettifier module is `pino-pretty`, so the preceding
example could be:
```js
const pino = require('pino')
const log = pino({
prettyPrint: {
levelFirst: true
}
})
```
See the [`pino-pretty` documentation][pp] for more information on the options
that can be passed via `prettyPrint`.
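For instance, a few commonly used options (a sketch; the option names come from `pino-pretty` and the values are illustrative):
```js
const pino = require('pino')
const log = pino({
  prettyPrint: {
    colorize: true,                // color the level label
    translateTime: 'SYS:standard', // render epoch time as a local timestamp
    ignore: 'pid,hostname'         // keys to omit from the output
  }
})
log.info('pretty printed with custom options')
```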
[pp]: https://github.com/pinojs/pino-pretty

node_modules/pino/docs/redaction.md generated vendored Normal file
# Redaction
> Redaction is not supported in the browser [#670](https://github.com/pinojs/pino/issues/670)
To redact sensitive information, supply paths to keys that hold sensitive data
using the `redact` option:
```js
const logger = require('pino')({
redact: ['key', 'path.to.key', 'stuff.thats[*].secret']
})
logger.info({
key: 'will be redacted',
path: {
to: {key: 'sensitive', another: 'thing'}
},
stuff: {
thats: [
{secret: 'will be redacted', logme: 'will be logged'},
{secret: 'as will this', logme: 'as will this'}
]
}
})
```
This will output:
```JSON
{"level":30,"time":1527777350011,"pid":3186,"hostname":"Davids-MacBook-Pro-3.local","key":"[Redacted]","path":{"to":{"key":"[Redacted]","another":"thing"}},"stuff":{"thats":[{"secret":"[Redacted]","logme":"will be logged"},{"secret":"[Redacted]","logme":"as will this"}]},"v":1}
```
The `redact` option can take an array (as shown in the above example) or
an object. This allows control over *how* information is redacted.
For instance, setting the censor:
```js
const logger = require('pino')({
redact: {
paths: ['key', 'path.to.key', 'stuff.thats[*].secret'],
censor: '**GDPR COMPLIANT**'
}
})
logger.info({
key: 'will be redacted',
path: {
to: {key: 'sensitive', another: 'thing'}
},
stuff: {
thats: [
{secret: 'will be redacted', logme: 'will be logged'},
{secret: 'as will this', logme: 'as will this'}
]
}
})
```
This will output:
```JSON
{"level":30,"time":1527778563934,"pid":3847,"hostname":"Davids-MacBook-Pro-3.local","key":"**GDPR COMPLIANT**","path":{"to":{"key":"**GDPR COMPLIANT**","another":"thing"}},"stuff":{"thats":[{"secret":"**GDPR COMPLIANT**","logme":"will be logged"},{"secret":"**GDPR COMPLIANT**","logme":"as will this"}]},"v":1}
```
The `redact.remove` option also allows for the key and value to be removed from output:
```js
const logger = require('pino')({
redact: {
paths: ['key', 'path.to.key', 'stuff.thats[*].secret'],
remove: true
}
})
logger.info({
key: 'will be redacted',
path: {
to: {key: 'sensitive', another: 'thing'}
},
stuff: {
thats: [
{secret: 'will be redacted', logme: 'will be logged'},
{secret: 'as will this', logme: 'as will this'}
]
}
})
```
This will output
```JSON
{"level":30,"time":1527782356751,"pid":5758,"hostname":"Davids-MacBook-Pro-3.local","path":{"to":{"another":"thing"}},"stuff":{"thats":[{"logme":"will be logged"},{"logme":"as will this"}]},"v":1}
```
See [pino options in API](/docs/api.md#redact-array-object) for `redact` API details.
<a name="paths"></a>
## Path Syntax
The syntax for paths supplied to the `redact` option conforms to the syntax of path lookups
in standard ECMAScript, with two additions:
* paths may start with bracket notation
* paths may contain the asterisk `*` to denote a wildcard
By way of example, the following are all valid paths, each exercised in the sketch below:
* `a.b.c`
* `a["b-c"].d`
* `["a-b"].c`
* `a.b.*`
* `a[*].b`
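A small sketch exercising these forms (all keys and values are illustrative):
```js
const logger = require('pino')({
  redact: ['a.b.c', 'a["b-c"].d', '["x-y"].z', 'users[*].password']
})
logger.info({
  a: { b: { c: 'secret' }, 'b-c': { d: 'secret' } },
  'x-y': { z: 'secret' },
  users: [{ name: 'alice', password: 'hunter2' }]
})
// every matched value above is logged as "[Redacted]"
```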
## Overhead
Pino's redaction functionality is built on top of [`fast-redact`](http://github.com/davidmarkclements/fast-redact)
which adds about 2% overhead to `JSON.stringify` when using paths without wildcards.
When used with the pino logger with a single redacted path, any overhead is within noise;
a way to deterministically measure its effect has not been found, because it is not a bottleneck.
However, wildcard redaction does carry a non-trivial cost relative to explicitly declaring the keys
(50% in a case where four keys are redacted across two objects). See
the [`fast-redact` benchmarks](https://github.com/davidmarkclements/fast-redact#benchmarks) for details.
## Safety
The `redact` option is intended as an initialization time configuration option.
It's extremely important that path strings do not originate from user input.
The `fast-redact` module uses a VM context to syntax check the paths; user input
should never be combined with such an approach. See the [`fast-redact` Caveat](https://github.com/davidmarkclements/fast-redact#caveat)
and the [`fast-redact` Approach](https://github.com/davidmarkclements/fast-redact#approach) for in-depth information.

node_modules/pino/docs/transports.md generated vendored Normal file
# Transports
A "transport" for Pino is a supplementary tool that consumes Pino logs.
Consider the following example:
```js
const split = require('split2')
const pump = require('pump')
const through = require('through2')
const myTransport = through.obj(function (chunk, enc, cb) {
// do the necessary
console.log(chunk)
cb()
})
pump(process.stdin, split(JSON.parse), myTransport)
```
The above defines our "transport" as the file `my-transport-process.js`.
Logs can now be consumed using shell piping:
```sh
node my-app-which-logs-stuff-to-stdout.js | node my-transport-process.js
```
Ideally, a transport should consume logs in a separate process from the application.
Using transports in the same process causes unnecessary load and slows down
Node's single-threaded event loop.
## In-process transports
> **Pino *does not* natively support in-process transports.**
Pino does not support in-process transports because Node processes are
single-threaded (ignoring some technical details). Given this
restriction, one of the methods Pino employs to achieve its speed is to
purposefully offload the handling of logs, and their ultimate destination, to
external processes so that the threading capabilities of the OS (or of
other CPUs) can be used.
One consequence of this methodology is that "error" logs do not get written to
`stderr`. However, since Pino logs are in a parseable format, it is possible to
use tools like [pino-tee][pino-tee] or [jq][jq] to work with the logs. For
example, to view only logs marked as "error" logs:
```sh
$ node an-app.js | jq 'select(.level == 50)'
```
In short, the way Pino generates logs:
1. Reduces the impact of logging on an application to the absolute minimum.
2. Gives greater flexibility in how logs are processed and stored.
Given all of the above, Pino recommends out-of-process log processing.
However, it is possible to wrap Pino and perform processing in-process.
For an example of this, see [pino-multi-stream][pinoms].
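A minimal sketch of such in-process handling with `pino-multi-stream` (the stream targets are illustrative):
```js
const pinoms = require('pino-multi-stream')
const fs = require('fs')

const logger = pinoms({
  streams: [
    { stream: process.stdout },                 // `info` and above to stdout
    { level: 'error', stream: process.stderr }, // errors also to stderr
    { level: 'info', stream: fs.createWriteStream('/tmp/app.log') }
  ]
})

logger.error('written to stdout, stderr and /tmp/app.log')
```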
[pino-tee]: https://npm.im/pino-tee
[jq]: https://stedolan.github.io/jq/
[pinoms]: https://npm.im/pino-multi-stream
## Known Transports
PRs to this document are welcome for any new transports!
+ [pino-applicationinsights](#pino-applicationinsights)
+ [pino-azuretable](#pino-azuretable)
+ [pino-cloudwatch](#pino-cloudwatch)
+ [pino-couch](#pino-couch)
+ [pino-datadog](#pino-datadog)
+ [pino-elasticsearch](#pino-elasticsearch)
+ [pino-mq](#pino-mq)
+ [pino-mysql](#pino-mysql)
+ [pino-papertrail](#pino-papertrail)
+ [pino-redis](#pino-redis)
+ [pino-sentry](#pino-sentry)
+ [pino-socket](#pino-socket)
+ [pino-stackdriver](#pino-stackdriver)
+ [pino-syslog](#pino-syslog)
+ [pino-websocket](#pino-websocket)
+ [pino-http-send](#pino-http-send)
<a id="pino-applicationinsights"></a>
### pino-applicationinsights
The [pino-applicationinsights](https://www.npmjs.com/package/pino-applicationinsights) module is a transport that will forward logs to [Azure Application Insights](https://docs.microsoft.com/en-us/azure/azure-monitor/app/app-insights-overview).
Given an application `foo` that logs via pino, you would use `pino-applicationinsights` like so:
``` sh
$ node foo | pino-applicationinsights --key blablabla
```
For full documentation of command line switches read [readme](https://github.com/ovhemert/pino-applicationinsights#readme)
<a id="pino-azuretable"></a>
### pino-azuretable
The [pino-azuretable](https://www.npmjs.com/package/pino-azuretable) module is a transport that will forward logs to the [Azure Table Storage](https://azure.microsoft.com/en-us/services/storage/tables/).
Given an application `foo` that logs via pino, you would use `pino-azuretable` like so:
``` sh
$ node foo | pino-azuretable --account storageaccount --key blablabla
```
For full documentation of command line switches read [readme](https://github.com/ovhemert/pino-azuretable#readme)
<a id="pino-cloudwatch"></a>
### pino-cloudwatch
[pino-cloudwatch][pino-cloudwatch] is a transport that buffers and forwards logs to [Amazon CloudWatch][].
```sh
$ node app.js | pino-cloudwatch --group my-log-group
```
[pino-cloudwatch]: https://github.com/dbhowell/pino-cloudwatch
[Amazon CloudWatch]: https://aws.amazon.com/cloudwatch/
<a id="pino-couch"></a>
### pino-couch
[pino-couch][pino-couch] uploads each log line as a [CouchDB][CouchDB] document.
```sh
$ node app.js | pino-couch -U https://couch-server -d mylogs
```
[pino-couch]: https://github.com/IBM/pino-couch
[CouchDB]: https://couchdb.apache.org
<a id="pino-datadog"></a>
### pino-datadog
The [pino-datadog](https://www.npmjs.com/package/pino-datadog) module is a transport that will forward logs to [DataDog](https://www.datadoghq.com/) through its API.
Given an application `foo` that logs via pino, you would use `pino-datadog` like so:
``` sh
$ node foo | pino-datadog --key blablabla
```
For full documentation of command line switches read [readme](https://github.com/ovhemert/pino-datadog#readme)
<a id="pino-elasticsearch"></a>
### pino-elasticsearch
[pino-elasticsearch][pino-elasticsearch] uploads the log lines in bulk
to [Elasticsearch][elasticsearch], to be displayed in [Kibana][kibana].
It is extremely simple to use and set up:
```sh
$ node app.js | pino-elasticsearch
```
Assuming Elasticsearch is running on localhost.
To connect to an external elasticsearch instance (recommended for production):
* Check that `network.host` is defined in the `elasticsearch.yml` configuration file. See [elasticsearch Network Settings documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html#common-network-settings) for more details.
* Launch:
```sh
$ node app.js | pino-elasticsearch --node http://192.168.1.42:9200
```
Assuming Elasticsearch is running on `192.168.1.42`.
To connect to AWS Elasticsearch:
```sh
$ node app.js | pino-elasticsearch --node https://es-url.us-east-1.es.amazonaws.com --es-version 6
```
Then [create an index pattern](https://www.elastic.co/guide/en/kibana/current/setup.html) on `'pino'` (the default index key for `pino-elasticsearch`) on the Kibana instance.
[pino-elasticsearch]: https://github.com/pinojs/pino-elasticsearch
[elasticsearch]: https://www.elastic.co/products/elasticsearch
[kibana]: https://www.elastic.co/products/kibana
<a id="pino-mq"></a>
### pino-mq
The `pino-mq` transport will take all messages received on `process.stdin` and send them over a message bus using JSON serialization.
This is useful for:
* moving backpressure from the application to the broker
* transferring message pressure to another component
```sh
node app.js | pino-mq -u "amqp://guest:guest@localhost/" -q "pino-logs"
```
Alternatively a configuration file can be used:
```sh
node app.js | pino-mq -c pino-mq.json
```
A base configuration file can be initialized with:
```sh
pino-mq -g
```
For full documentation of command line switches and configuration see [the `pino-mq` readme](https://github.com/itavy/pino-mq#readme)
<a id="pino-papertrail"></a>
### pino-papertrail
pino-papertrail is a transport that will forward logs to the [papertrail](https://papertrailapp.com) log service through a UDPv4 socket.
Given an application `foo` that logs via pino, and a papertrail destination that collects logs on UDP port `12345` at address `bar.papertrailapp.com`, you would use `pino-papertrail`
like so:
```sh
node yourapp.js | pino-papertrail --host bar.papertrailapp.com --port 12345 --appname foo
```
For full documentation of command line switches read [readme](https://github.com/ovhemert/pino-papertrail#readme)
<a id="pino-mysql"></a>
### pino-mysql
[pino-mysql][pino-mysql] loads pino logs into [MySQL][MySQL] and [MariaDB][MariaDB].
```sh
$ node app.js | pino-mysql -c db-configuration.json
```
`pino-mysql` can extract and save log fields into corresponding database fields
and/or save the entire log stream as a [JSON Data Type][JSONDT].
For full documentation and command line switches read the [readme][pino-mysql].
[pino-mysql]: https://www.npmjs.com/package/pino-mysql
[MySQL]: https://www.mysql.com/
[MariaDB]: https://mariadb.org/
[JSONDT]: https://dev.mysql.com/doc/refman/8.0/en/json.html
<a id="pino-redis"></a>
### pino-redis
[pino-redis][pino-redis] loads pino logs into [Redis][Redis].
```sh
$ node app.js | pino-redis -U redis://username:password@localhost:6379
```
[pino-redis]: https://github.com/buianhthang/pino-redis
[Redis]: https://redis.io/
<a id="pino-sentry"></a>
### pino-sentry
[pino-sentry][pino-sentry] loads pino logs into [Sentry][Sentry].
```sh
$ node app.js | pino-sentry --dsn=https://******@sentry.io/12345
```
For full documentation of command line switches see the [pino-sentry readme](https://github.com/aandrewww/pino-sentry/blob/master/README.md)
[pino-sentry]: https://www.npmjs.com/package/pino-sentry
[Sentry]: https://sentry.io/
<a id="pino-socket"></a>
### pino-socket
[pino-socket][pino-socket] is a transport that will forward logs to an IPv4
UDP or TCP socket.
As an example, use `socat` to fake a listener:
```sh
$ socat -v udp4-recvfrom:6000,fork exec:'/bin/cat'
```
Then run an application that uses `pino` for logging:
```sh
$ node app.js | pino-socket -p 6000
```
Logs from the application should be observed on both consoles.
[pino-socket]: https://www.npmjs.com/package/pino-socket
#### Logstash
The [pino-socket][pino-socket] module can also be used to upload logs to
[Logstash][logstash] via:
```sh
$ node app.js | pino-socket -a 127.0.0.1 -p 5000 -m tcp
```
Assuming logstash is running on the same host and configured as
follows:
```
input {
tcp {
port => 5000
}
}
filter {
json {
source => "message"
}
}
output {
elasticsearch {
hosts => "127.0.0.1:9200"
}
}
```
See <https://www.elastic.co/guide/en/kibana/current/setup.html> to learn
how to set up [Kibana][kibana].
For Docker users, see
https://github.com/deviantony/docker-elk to set up an ELK stack.
<a id="pino-stackdriver"></a>
### pino-stackdriver
The [pino-stackdriver](https://www.npmjs.com/package/pino-stackdriver) module is a transport that will forward logs to the [Google Stackdriver](https://cloud.google.com/logging/) log service through its API.
Given an application `foo` that logs via pino, a stackdriver log project `bar` and credentials in the file `/credentials.json`, you would use `pino-stackdriver`
like so:
``` sh
$ node foo | pino-stackdriver --project bar --credentials /credentials.json
```
For full documentation of command line switches read [readme](https://github.com/ovhemert/pino-stackdriver#readme)
<a id="pino-syslog"></a>
### pino-syslog
[pino-syslog][pino-syslog] is a transforming transport that converts
`pino` NDJSON logs to [RFC3164][rfc3164] compatible log messages. The `pino-syslog` module does not
forward the logs anywhere; it merely re-writes the messages to `stdout`. But
when used in combination with `pino-socket` the log messages can be relayed to a syslog server:
```sh
$ node app.js | pino-syslog | pino-socket -a syslog.example.com
```
Example output for the "hello world" log:
```
<134>Apr 1 16:44:58 MacBook-Pro-3 none[94473]: {"pid":94473,"hostname":"MacBook-Pro-3","level":30,"msg":"hello world","time":1459529098958,"v":1}
```
[pino-syslog]: https://www.npmjs.com/package/pino-syslog
[rfc3164]: https://tools.ietf.org/html/rfc3164
[logstash]: https://www.elastic.co/products/logstash
<a id="pino-websocket"></a>
### pino-websocket
[pino-websocket](https://www.npmjs.com/package/@abeai/pino-websocket) is a transport that will forward each log line to a websocket server.
```sh
$ node app.js | pino-websocket -a my-websocket-server.example.com -p 3004
```
For full documentation of command line switches read [readme](https://github.com/abeai/pino-webscoket#README)
<a id="pino-http-send"></a>
### pino-http-send
[pino-http-send](https://npmjs.com/package/pino-http-send) is a configurable and low-overhead
transport that will batch logs and send them to a specified URL.
```console
$ node app.js | pino-http-send -u http://localhost:8080/logs
```

node_modules/pino/docs/web.md generated vendored Normal file
# Web Frameworks
Since HTTP logging is a primary use case, Pino has first-class support for the Node.js
web framework ecosystem.
+ [Pino with Fastify](#fastify)
+ [Pino with Express](#express)
+ [Pino with Hapi](#hapi)
+ [Pino with Restify](#restify)
+ [Pino with Koa](#koa)
+ [Pino with Node core `http`](#http)
+ [Pino with Nest](#nest)
<a id="fastify"></a>
## Pino with Fastify
The Fastify web framework comes bundled with Pino by default; simply set Fastify's
`logger` option to `true` and use `request.log` or `reply.log` for log messages that correspond
to each individual request:
```js
const fastify = require('fastify')({
logger: true
})
fastify.get('/', async (request, reply) => {
request.log.info('something')
return { hello: 'world' }
})
```
The `logger` option can also be set to an object, which will be passed through directly
as the [`pino` options object](/docs/api.md#options-object).
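For example, a small sketch passing standard pino options through (the values are illustrative):
```js
const fastify = require('fastify')({
  logger: {
    level: 'warn',
    redact: ['req.headers.authorization'] // see /docs/redaction.md
  }
})
```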
See the [fastify documentation](https://www.fastify.io/docs/latest/Logging/) for more information.
<a id="express"></a>
## Pino with Express
```sh
npm install express-pino-logger
```
```js
const app = require('express')()
const pino = require('express-pino-logger')()
app.use(pino)
app.get('/', function (req, res) {
req.log.info('something')
res.send('hello world')
})
app.listen(3000)
```
See the [express-pino-logger readme](http://npm.im/express-pino-logger) for more info.
<a id="hapi"></a>
## Pino with Hapi
```sh
npm install hapi-pino
```
```js
'use strict'
require('make-promises-safe')
const Hapi = require('hapi')
async function start () {
// Create a server with a host and port
const server = Hapi.server({
host: 'localhost',
port: 3000
})
// Add the route
server.route({
method: 'GET',
path: '/',
handler: async function (request, h) {
// request.log is HAPI standard way of logging
request.log(['a', 'b'], 'Request into hello world')
// a pino instance can also be used, which will be faster
request.logger.info('In handler %s', request.path)
return 'hello world'
}
})
await server.register({
    plugin: require('hapi-pino'),
options: {
prettyPrint: process.env.NODE_ENV !== 'production'
}
})
// also as a decorated API
server.logger().info('another way for accessing it')
// and through Hapi standard logging system
server.log(['subsystem'], 'third way for accessing it')
await server.start()
return server
}
start().catch((err) => {
console.log(err)
process.exit(1)
})
```
See the [hapi-pino readme](http://npm.im/hapi-pino) for more info.
<a id="restify"></a>
## Pino with Restify
```sh
npm install restify-pino-logger
```
```js
const server = require('restify').createServer({name: 'server'})
const pino = require('restify-pino-logger')()
server.use(pino)
server.get('/', function (req, res) {
req.log.info('something')
res.send('hello world')
})
server.listen(3000)
```
See the [restify-pino-logger readme](http://npm.im/restify-pino-logger) for more info.
<a id="koa"></a>
## Pino with Koa
### Koa
```sh
npm install koa-pino-logger
```
```js
const Koa = require('koa')
const app = new Koa()
const pino = require('koa-pino-logger')()
app.use(pino)
app.use((ctx) => {
ctx.log.info('something else')
ctx.body = 'hello world'
})
app.listen(3000)
```
See the [koa-pino-logger readme](https://github.com/pinojs/koa-pino-logger) for more info.
<a id="http"></a>
## Pino with Node core `http`
```sh
npm install pino-http
```
```js
const http = require('http')
const server = http.createServer(handle)
const logger = require('pino-http')()
function handle (req, res) {
logger(req, res)
req.log.info('something else')
res.end('hello world')
}
server.listen(3000)
```
See the [pino-http readme](http://npm.im/pino-http) for more info.
<a id="nest"></a>
## Pino with Nest
```sh
npm install nestjs-pino
```
```ts
import { NestFactory } from '@nestjs/core'
import { Controller, Get, Module } from '@nestjs/common'
import { LoggerModule, Logger } from 'nestjs-pino'
@Controller()
export class AppController {
constructor(private readonly logger: Logger) {}
@Get()
getHello() {
this.logger.log('something')
return `Hello world`
}
}
@Module({
controllers: [AppController],
imports: [LoggerModule.forRoot()]
})
class MyModule {}
async function bootstrap() {
const app = await NestFactory.create(MyModule)
await app.listen(3000)
}
bootstrap()
```
See the [nestjs-pino readme](http://npm.im/nestjs-pino) for more info.