# Reporting Results
While TAP is intended to be both machine parseable and human intelligible, and the raw TAP content is often a good way to see exactly what is happening with a test, it tends to be too noisy for regular ergonomic human consumption.

For this reason, tap comes with an Ink-based reporter system, and additional reporters can be added as well.
You can specify the reporter to use with the `--reporter` config option. Custom reporters can be an Ink-based React component, a Stream class, or a CLI program.
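For example, to run your tests with the `terse` reporter (`-R` is the short form of `--reporter`, as used in the examples below):

```
$ tap --reporter=terse
$ # or, using the short flag
$ tap -R terse
```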
## Included Reporters
The `base` reporter is the one that tap uses by default. It shows information about tests as they are running in parallel, and aims to be verbose enough to show you what's going on, without showing more information than is useful.
If the `base` reporter is too noisy for your liking, you can use the `terse` reporter, which is similar, but prints much less information; or `min` or `dot`, which are even terser; or `silent`, which is as terse as it gets.
If the `base` reporter is not noisy enough, try running with `--passes` to show all passing assertions, or `-R tap` to print out the raw TAP data. If that's still not enough, you can run with `--debug` to see all the inner workings of tap's machinery. (This is really only useful for debugging tap itself.)
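For example:

```
$ # include every passing assertion in the report
$ tap --passes
$ # print the raw TAP stream instead
$ tap -R tap
```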
All the reporters are designed to be as accessible as possible, featuring diff and syntax highlighting color choices that are amenable to any level of color sensitivity.
```
PASS docs/foo.test.js 2 OK 392ms

                  🌈 TEST COMPLETE 🌈

Asserts:  2 pass  0 fail  2 of 2 complete
Suites:   1 pass  0 fail  1 of 1 complete

# { total: 2, pass: 2 }
# time=426.889ms
```
An example with a bit more going on:
```
FAIL test/tap.test.js 3 failed  6 todo  6 skip of 30 325ms
~ a test that is entirely skipped
~ skipped with a message
~ stringOrNull > a failure skipped
~ stringOrNull > a pass skipped
~ stringOrNull > a failure skipped with message message
~ stringOrNull > a pass skipped with message message
☐ a test marked todo
☐ todo with a message
☐ stringOrNull > a failure marked todo
☐ stringOrNull > a pass marked todo
☐ stringOrNull > todo failure with message message
☐ stringOrNull > todo pass with message message
✖ suite of tests that fail > uhoh, this one throws > Invalid time value
  lib/index.mjs:11:43
✖ suite of tests that fail > failer > should be equal
  test/tap.test.js:51:7
✖ suite of tests that fail > failer > should be equal
  test/tap.test.js:53:7

-----------|---------|----------|---------|---------|-------------------
File       | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s
-----------|---------|----------|---------|---------|-------------------
All files  |   89.47 |      100 |      80 |   89.47 |
 index.mjs |   89.47 |      100 |      80 |   89.47 | 18-19
-----------|---------|----------|---------|---------|-------------------

                  🌈 TEST COMPLETE 🌈

FAIL test/tap.test.js 3 failed  6 todo  6 skip of 30 325ms
~ a test that is entirely skipped
~ skipped with a message
~ stringOrNull > a failure skipped
~ stringOrNull > a pass skipped
~ stringOrNull > a failure skipped with message message
~ stringOrNull > a pass skipped with message message
☐ a test marked todo
☐ todo with a message
☐ stringOrNull > a failure marked todo
☐ stringOrNull > a pass marked todo
☐ stringOrNull > todo failure with message message
☐ stringOrNull > todo pass with message message

✖ suite of tests that fail > uhoh, this one throws > Invalid time value
  lib/index.mjs
   8
   9 // This is a function that throws, to show how both
  10 // handle errors.
  11 export const thrower = (n) => new Date(n).toISOString()
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
  12
  13 // one that fails, to show how failures are handled
  14 export const failer = (n) => String(n + 1)
  15
  type: RangeError
  tapCaught: testFunctionThrow
  Date.toISOString (<anonymous>)
  thrower (lib/index.mjs:11:43)
  Test.<anonymous> (test/tap.test.js:43:13)

✖ suite of tests that fail > failer > should be equal
  test/tap.test.js
  48   t.equal(failer(1), '2')
  49   t.equal(failer(-1), '0')
  50   // expect to convert string numbers to Number, but doesn't
  51   t.equal(failer('1'), '2')
       ━━━━━━━━━┛
  52   // expect to convert non-numerics to 0, but it doesn't
  53   t.equal(failer({}), '1')
  54   t.end()
  55 })
  --- expected
  +++ actual
  @@ -1,1 +1,1 @@
  -2
  +11
  compare: ===
  Test.<anonymous> (test/tap.test.js:51:7)
  Test.<anonymous> (test/tap.test.js:47:5)
  test/tap.test.js:39:3

✖ suite of tests that fail > failer > should be equal
  test/tap.test.js
  50   // expect to convert string numbers to Number, but doesn't
  51   t.equal(failer('1'), '2')
  52   // expect to convert non-numerics to 0, but it doesn't
  53   t.equal(failer({}), '1')
       ━━━━━━━━━┛
  54   t.end()
  55 })
  56
  57 t.end()
  --- expected
  +++ actual
  @@ -1,1 +1,1 @@
  -1
  +[object Object]1
  compare: ===
  Test.<anonymous> (test/tap.test.js:53:7)
  Test.<anonymous> (test/tap.test.js:47:5)
  test/tap.test.js:39:3

Asserts:  15 pass  3 fail  6 skip  6 todo  18 of 30 complete
Suites:    0 pass  1 fail  0 skip  1 of 1 complete

# { total: 30, pass: 15, fail: 3, todo: 6, skip: 6 }
# time=437.285ms
```
The included reporters are:

- `base` - Shown above. A moderate level of reporting, clear indicators of where the test summary is starting, and live updates as tests are run and completed.
- `terse` - Much less extraneous decorative output. It shows the `Asserts` and `Suites` summary, but no live-updating indicators of which tests are running and how long they take.
- `min` - Even more terse than `terse`. Nothing at all is printed unless a failure occurs.
- `silent` - Literally as terse as it is possible to be, no output at all.
- `dot` - Similar to `min`, but prints a dot for each assertion, colored appropriately for its pass/fail/skip/todo status. (Or, if colors are disabled, just a regular dot.)
- `junit` - JUnit style XML results.
- `json` - Output the results of the test run as a single JSON object.
- `jsonstream` - Line-delimited JSON, printing an array message for each suite and assertion.
- `markdown` - Similar to `jsonstream`, but markdown instead of JSON.
- `tap` - Just the raw TAP stream.
Those are just the built-in reporters. You can write your own using the `@tapjs/reporter` library.
## Reporting to a File
Particularly for the JSON, XML, or Markdown reporters, it can be useful to pipe to a file.
To do this, you can set the `--reporter-file` option (shorthand: `-f`) to a path on disk. For example:
```
$ # write the xml to rspec.xml
$ tap -R junit --reporter-file rspec.xml
```
You can also use the `replay` command in this case to output a human-friendly report only on test failure:
```
$ # will create xml file, only print verbose report on failure
$ tap -R junit --reporter-file rspec.xml || tap replay
```
## Ink-Based Reporters
To use a custom reporter written in Ink, set the `--reporter` config option to a module which default-exports a React component taking a `TAP` object as the `tap` attribute, and a `LoadedConfig` object as the `config` attribute.
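As a rough illustration, here is a minimal sketch of such a component, written with `React.createElement` so it runs without a JSX build step. The module name and the use of the root test's `'end'` event are assumptions for the example, not part of tap's documented API:

```js
// my-ink-reporter.mjs (hypothetical module name)
import React, { useEffect, useState } from 'react'
import { Text } from 'ink'
const h = React.createElement

// Default export: a component receiving the root TAP object and
// the LoadedConfig as props.
export default function MyReporter({ tap, config }) {
  const [done, setDone] = useState(false)
  useEffect(() => {
    // Assumption: the root TAP test is a stream that emits 'end'
    // when the test run is complete.
    const onEnd = () => setDone(true)
    tap.on('end', onEnd)
    return () => tap.removeListener('end', onEnd)
  }, [tap])
  return h(
    Text,
    { color: done ? 'green' : 'yellow' },
    done ? 'test run complete' : 'tests running...'
  )
}
```

You could then run something like `tap --reporter=./my-ink-reporter.mjs`.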
## Stream-Based Reporters
Alternatively, you can set `--reporter` to a module that default-exports a Writable Stream class. (That is, a class with `write` and `end` methods on its prototype.)

In this case, the class will be instantiated with no arguments, and the root `TAP` test will be piped into it.
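For instance, here is a minimal sketch of a stream reporter that just counts the lines of TAP it receives (the module name is hypothetical):

```js
// line-count-reporter.mjs (hypothetical module name)
import { Writable } from 'node:stream'

// Default export: a Writable Stream class. tap instantiates it
// with no arguments and pipes the raw TAP stream into it.
export default class LineCountReporter extends Writable {
  constructor() {
    super()
    this.lines = 0
  }
  _write(chunk, encoding, callback) {
    // chunk is raw TAP content
    this.lines += String(chunk).split('\n').length - 1
    callback()
  }
  _final(callback) {
    console.log(`received ${this.lines} lines of TAP`)
    callback()
  }
}
```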
## CLI-Based Reporters
Lastly, you can provide the name of an executable program, which will receive the TAP content on its standard input.

In this case, the `--reporter-arg` config option may be used to set the arguments to the reporter program.
For example:
```
$ npm install --save-dev tap-mocha-reporter
$ tap --color -R tap-mocha-reporter -r nyan
 871 _-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-__,------,
 0   _-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-__|  /\_/\
 0   _-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_~|_( ^ .^)
     _-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_ ""  ""

  871 passing (2s)
```