Halt execution of the entire test suite.
BAIL_OUT("can't connect to the database!")
Output a comment to Console.err. This is intended to be visible to users even when running the test under a summarizing harness.
diag("Testing with Scala " + util.Properties.versionString)
Assert that the given block of code throws an exception.
dies_ok[MyException] { myObj.explode }
Runs a block of code, returning the exception that it throws, or None if no exception was thrown. Not an assertion on its own, but can be used to create more complicated assertions about exceptions.
is(exception { myObj.explode }, None)
An assertion that always fails, with a reason.
fail("we should never get here")
An assertion that always fails.
fail()
A helper method which should be used to wrap test utility methods. Normally, when tests fail, a message is printed giving the file and line number of the call to the test method. If you write your own test methods, they will typically use the existing methods to generate assertions, and so the file and line numbers will likely be much less useful. Wrapping the body of your method in this method will ensure that the file and line number that is reported is the line where your helper method is called instead.
def testFixtures = hideTestMethod { ??? }
Assert that two objects are equal (using ==), and describe the assertion.
is(response.status, 200, "we got a 200 OK response")
Assert that two objects are equal (using ==).
is(response.status, 200)
Assert that two objects are not equal (using !=), and describe the assertion.
isnt(response.body, "", "we got a response body")
Assert that two objects are not equal (using !=).
isnt(response.body, "")
Assert that a string matches a regular expression, and describe the assertion.
like(response.header("Content-Type"), """text/x?html""".r, "we got an html content type")
Assert that a string matches a regular expression.
like(response.header("Content-Type"), """text/x?html""".r)
Assert that the given block of code doesn't throw an exception.
lives_ok { myObj.explode }
Output a comment to Console.out. This is intended to be visible only when viewing the raw TAP stream.
note("Starting the response tests")
Assert that a condition is true, and describe the assertion.
ok(response.isSuccess, "the response succeeded")
Assert that a condition is true.
ok(response.isSuccess)
An assertion that always succeeds, with a reason.
pass("this line of code should be executed")
An assertion that always succeeds.
pass()
Runs the test. The TAP stream will be written to Console.out and Console.err, so you can swap these out as required in order to parse it.
The exit code that the test produced. Success is indicated by 0, failure to run the correct number of tests by 255, and any other failure by the number of tests that failed. This should be used by reporters which run a single test, which can call sys.exit(exitCode).
Runs the test just like run, but in a way that makes sense when test results are being summarized rather than directly displayed.
Summarizing test reporters tend to repeatedly update the same line on the terminal, so this method makes calls to diag (which sends messages to stderr, where they are typically displayed as-is) prefix the message with a newline, to ensure that the output starts on its own line.
Mark a block of tests that should not be run at all. They are treated as always passing.
skip(3, "too dangerous to run for now") { ??? }
Declare a logical group of assertions, to be run as a single test. This is effectively an entirely separate test, which is run, and the result of that test is reported as a single assertion in the test that contains it. The subtest can specify its own plan in the same way that the overall test is allowed to. The name will be used as the description for the single assertion that the overall test sees.
subtest("response tests") { ??? }
Assert that the given block of code throws an exception, and that the exception that it throws is equal to the passed exception.
throws_ok(new MyException("foo")) { myObj.explode }
Mark a block of tests as expected to fail. If the tests which run in the todo block fail, they will not be treated as test failures, and if they succeed, the user will be notified.
todo("waiting on fixes elsewhere") { ??? }
Assert that a string doesn't match a regular expression, and describe the assertion.
unlike(response.header("Authorization"), """^Digest.*""".r, "we don't support digest authentication")
Assert that a string doesn't match a regular expression.
unlike(response.header("Authorization"), """^Digest.*""".r)
This class is an implementation of the excellent Test::More testing library for Perl. It provides a simple assertion-based testing API, which produces TAP, which can be parsed by any TAP consumer. This library includes several TAP-consuming harnesses to use with tests using this class, including one that supports testing via sbt test.

Basics
The most basic test looks like this:
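For illustration, such a test might be a sketch like the following, using the TestMore class and the ok assertion documented in this reference (the surrounding package and imports are omitted here):

```scala
// A minimal test: a class extending TestMore whose body runs one assertion.
class BasicTest extends TestMore {
  ok(1 + 1 == 2, "addition works")
}
```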
This runs a test containing a single assertion. This will generate a TAP stream that looks like this:
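A sketch of the TAP stream for a single passing assertion (the exact placement of the plan line is an assumption):

```
ok 1 - addition works
1..1
```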
which can be parsed by one of the test harnesses provided by this library.
Running tests
The simplest way to run tests is through sbt. You can register this framework with sbt by adding a line to your build.sbt file. Then, any classes in your test directory which extend TestMore will be automatically detected and run.

Assertions
This class contains many more assertion methods than just ok. Here is a more extensive example (borrowed from Test::More's documentation):

The difference between the simple ok method and the more specific methods like is and like is in how failures are reported. If you write this, the output will look like this:
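As a sketch, a failing ok assertion:

```scala
ok(response.status == 200, "we got a 200 OK response")
```

might produce output along these lines, reporting only that the assertion failed, not why (the exact format is an assumption):

```
not ok 1 - we got a 200 OK response
```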
On the other hand, a more specific assertion such as is will produce more useful output:
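A sketch of the equivalent is assertion and the kind of diagnostics it might emit (the got/expected format follows Test::More's conventions, and is an assumption here):

```scala
is(response.status, 200, "we got a 200 OK response")
```

```
not ok 1 - we got a 200 OK response
#          got: 404
#     expected: 200
```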
In addition to assertions, there are also several methods which take a block of code to run, to modify the assertions contained in that block.
The todo method runs tests which are expected to fail. If they do fail, the failure is reported to the test harness as a normal succeeding test, and nothing happens. If they succeed, they are still reported as a succeeding test, but a message is displayed to the user indicating that the todo indication can be removed.

The skip method takes a block which should not be run at all. This is similar to todo, except that it is useful for tests which could cause problems if they were to actually run. Since the tests are never run, it's not possible to count how many tests there should be, so this must be specified as a parameter.

The subtest method runs a block of assertions as though they were an entirely separate test, and then reports the result of that test as a single assertion in the test that called subtest.

Test plans
Normally, you can run any number of assertions within your class body, and the framework will assume that if no exceptions were thrown, all of the assertions that were meant to be run were actually run. Sometimes, however, that may not be a safe assumption, especially with heavily callback-driven code. In this case, you can specify exactly how many tests you intend to run, and the number of tests actually run will be checked against this at the end. To declare this, give a number to the TestMore constructor. In addition, if the entire test should be skipped, you can give a plan of SkipAll().
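For illustration, both plan styles might be declared like this (the exact constructor signatures are assumptions based on the description above):

```scala
// Declare that exactly 5 assertions will run.
class ResponseTest extends TestMore(5) {
  ??? // assertions go here
}

// Skip the entire test.
class DangerousTest extends TestMore(SkipAll()) {
  ???
}
```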
Extensions
These assertion methods are written with the intention of being composable. You can write your own test methods which call is or ok on more specific bits of data. The one issue here is that, as shown above, test failure messages refer to the file and line where the is or ok call was made. If you want this to instead point at the line where your assertion helper method was called, you can use the hideTestMethod method. This way, the test failure for a helper such as notok will be reported from the line where notok was called, not from the call to ok in the notok method.
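A sketch of such a helper, following the hideTestMethod pattern shown earlier (the notok helper itself is hypothetical):

```scala
// A helper asserting that a condition is false, built on ok.
// Wrapping the body in hideTestMethod makes a failure point at the
// line where notok was called, not at this internal ok call.
def notok(condition: Boolean, description: String) =
  hideTestMethod {
    ok(!condition, description)
  }
```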