test_server
This module provides support for test suite authors.
The test_server
module aids the test suite author by providing
various support functions. The supported functionality includes:
- Logging and timestamping
- Capturing output to stdout
- Retrieving and flushing the message queue of a process
- Watchdog timers, process sleep, time measurement and unit conversion
- Private scratch directory for all test suites
- Start and stop of slave or peer nodes
For more information on how to write test cases and for examples, please see the Test Server User's Guide.
TEST SUITE SUPPORT FUNCTIONS
The following functions are supposed to be used inside a test suite.
Functions
os_type() -> OSType
OSType = term()
This function is equivalent to os:type/0. It is kept
for backwards compatibility.
fail()
fail(Reason)
Reason = term()
This will make the test suite fail with a given reason, or
with suite_failed
if no reason was given. Use this
function if you want to terminate a test case, as this will
make it easier to read the log and HTML files. Reason
will appear in the comment field in the HTML log.
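For example, a test case might end a failing branch like this (the port number and the failure reason are made up for illustration):
case gen_tcp:connect("localhost", 8080, []) of
    {ok, Sock}      -> ok = gen_tcp:close(Sock);
    {error, Reason} -> test_server:fail({could_not_connect, Reason})
end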
timetrap(Timeout) -> Handle
Timeout = integer() | {hours,H} | {minutes,M} | {seconds,S}
H = M = S = integer()
Sets up a time trap for the current process. An expired
timetrap kills the process with reason timetrap_timeout.
The returned handle is to be given as argument to
timetrap_cancel before the timetrap expires. If Timeout
is an integer, it is expected to be in milliseconds.
Note!
If the current process is trapping exits, it will not be killed
by the exit signal with reason timetrap_timeout
.
If this happens, the process will be sent an exit signal
with reason kill 10 seconds later, which will kill the
process. In this case, information about the timetrap timeout
will not be found in the test logs. However, a warning will
be sent to the error_logger.
timetrap_cancel(Handle) -> ok
Handle = term()
This function cancels a timetrap. This must be done before the timetrap expires.
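A minimal sketch of how the two functions are typically combined in a test case (the two-minute timeout is arbitrary, and the sleep stands in for the real test work):
Dog = test_server:timetrap(test_server:minutes(2)),
test_server:sleep(test_server:seconds(10)),   %% stands in for the actual test work
ok = test_server:timetrap_cancel(Dog)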
timetrap_scale_factor() -> ScaleFactor
ScaleFactor = integer()
This function returns the scale factor by which all timetraps
are scaled. It is normally 1, but can be greater than 1 if
the test_server is running cover, is using more scheduler
threads than there are logical processors on the system, or is
running under purify, valgrind or in a debug-compiled
emulator. The scale factor can be used if you need to scale your
own timeouts in test cases with the same factor as the
test_server uses.
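For instance, a test-case-local timeout could be scaled like this (the base value of 5000 ms and the expected message are only illustrative):
Timeout = 5000 * test_server:timetrap_scale_factor(),
receive
    the_expected_message -> ok                %% hypothetical message from the tested code
after Timeout ->
    test_server:fail(no_message_received)
end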
sleep(MSecs) -> ok
MSecs = integer() | float() | infinity
This function suspends the calling process for at least the
supplied number of milliseconds. There are two major reasons
why you should use this function instead of
timer:sleep: first, the timer module may be unavailable
at the time the test suite is run, and second, this function
also accepts floating point numbers.
adjusted_sleep(MSecs) -> ok
MSecs = integer() | float() | infinity
This function suspends the calling process for at least the
supplied number of milliseconds. The function behaves the same
way as test_server:sleep/1
, except that MSecs
will be multiplied by the 'multiply_timetraps' value, if set,
and also automatically scaled up if 'scale_timetraps' is set
to true (which it is by default).
hours(N) -> MSecs
minutes(N) -> MSecs
seconds(N) -> MSecs
N = integer()
These functions convert N hours, minutes
or seconds into milliseconds.
Use them when you want to call
test_server:sleep/1 for a number of seconds, minutes or
hours(!).
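A small usage sketch combining the conversion functions with sleep/1 (the durations are arbitrary):
test_server:sleep(test_server:seconds(30)),
test_server:sleep(test_server:minutes(5))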
format(Format) -> ok
format(Format, Args)
format(Pri, Format)
format(Pri, Format, Args)
Format = string()
Args = list()
Formats output just like io:format but sends the
formatted string to a logfile. If the urgency value,
Pri, is lower than some threshold value, it will also
be written to the test person's console. The default urgency is
50, and the default threshold for display on the console is 1.
Typically, the test person does not want to see everything a
test suite outputs, but is merely interested in whether the test
cases succeeded or not, which the test server reports. To see
more, the threshold values can be changed manually with the
test_server_ctrl:set_levels/3 function.
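For example (the variables are assumed to be bound earlier in the test case, and the urgency value 1 only illustrates the Pri argument):
test_server:format("read ~p bytes from ~s", [byte_size(Bin), FileName]),
test_server:format(1, "fatal: ~p", [Reason])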
capture_start() -> ok
capture_stop() -> ok
capture_get() -> list()
These functions make it possible to capture all output to
stdout from a process started by the test suite. The list of
characters captured can be retrieved and purged by using
capture_get.
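A minimal sketch, assuming the spawned process inherits the test case's group leader so that its output is captured (the printed text and the 100 ms grace period are illustrative):
test_server:capture_start(),
spawn(fun() -> io:format("output from the spawned process~n") end),
test_server:sleep(100),                %% give the spawned process time to print
Captured = test_server:capture_get(),  %% Captured now holds the printed characters
test_server:capture_stop()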
messages_get() -> list()
This function will empty and return all the messages currently in the calling process' message queue.
timecall(M, F, A) -> {Time, Value}
M = atom()
F = atom()
A = list()
Time = integer()
Value = term()
This function measures the time (in seconds) it takes to call a certain function. The function call is not caught within a catch.
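For example, to check that a call stays within a time budget (the two-second budget and the sorting workload are arbitrary):
{Time, _Sorted} = test_server:timecall(lists, sort, [lists:seq(100000, 1, -1)]),
case Time > 2 of
    true  -> test_server:fail({too_slow, Time});
    false -> ok
end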
do_times(N, M, F, A) -> ok
do_times(N, Fun)
N = integer()
M = atom()
F = atom()
A = list()
Calls MFA or Fun N times. Useful for extensive testing of a sensitive function.
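For example (mymod:checksum/1 is a hypothetical function under test):
ok = test_server:do_times(1000, mymod, checksum, [<<1,2,3>>]),
ok = test_server:do_times(1000, fun() -> mymod:checksum(<<1,2,3>>) end)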
m_out_of_n(M, N, Fun) -> ok | exit({m_out_of_n_failed, {R,left_to_do}})
N = integer()
M = integer()
Repeatedly evaluates the given function until it succeeds (does not crash) M times. If, after N attempts, M successful attempts have not been accomplished, the process crashes with reason {m_out_of_n_failed, {R,left_to_do}}, where R indicates how many attempts were still left to be successfully completed.
For example:
m_out_of_n(1,4,fun() -> tricky_test_case() end)
Tries to run tricky_test_case() up to 4 times, and is
happy if it succeeds once.
m_out_of_n(7,8,fun() -> clock_sanity_check() end)
Tries running clock_sanity_check() up to 8 times, and
allows the function to fail once. This might be useful if
clock_sanity_check/0 is known to fail if the clock crosses an
hour boundary during the test (and the up to 8 test runs could
never cross 2 boundaries).
call_crash(M, F, A) -> Result
call_crash(Time, M, F, A) -> Result
call_crash(Time, Crash, M, F, A) -> Result
Result = ok | exit(call_crash_timeout) | exit({wrong_crash_reason, Reason})
Crash = term()
Time = integer()
M = atom()
F = atom()
A = list()
Spawns a new process that calls MFA. The call is considered
successful if the call crashes with the given reason
(Crash), or with any reason if Crash is not specified. The call
must terminate within the given time (default infinity), or
it is considered a failure.
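A small sketch (mymod:parse/1 is a hypothetical function assumed to crash on garbage input; the 1000 ms limit and the bad_input reason are assumptions):
%% require only that the call crashes, for any reason
ok = test_server:call_crash(mymod, parse, [<<"garbage">>]),
%% as above, but also require the exit reason bad_input and a 1 second time limit
ok = test_server:call_crash(1000, bad_input, mymod, parse, [<<"garbage">>])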
temp_name(Stem) -> Name
Stem = string()
Returns a unique filename starting with Stem, with
enough extra characters appended to make the name unique. The
filename returned is guaranteed not to exist in the filesystem
at the time of the call.
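For example, to create a scratch file in the private directory of the suite (using the config macro described under TEST SUITE SUPPORT MACROS below; the stem "dump_" and the file contents are illustrative):
PrivDir = ?config(priv_dir, Config),
ScratchFile = test_server:temp_name(filename:join(PrivDir, "dump_")),
ok = file:write_file(ScratchFile, <<"some scratch data">>)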
break(Comment) -> ok
Comment = string()
Comment
is a string which will be written in
the shell, e.g. explaining what to do.
This function will cancel all timetraps and pause the
execution of the test case until the user executes the
continue/0
function. It gives the user the opportunity
to interact with the erlang node running the tests, e.g. for
debugging purposes or for manually executing a part of the
test case.
When the break/1
function is called, the shell will
look something like this:
--- SEMIAUTOMATIC TESTING ---
The test case executes on process <0.51.0>

"Here is a comment, it could e.g. instruct to pull out a card"

-----------------------------
Continue with --> test_server:continue().
The user can now interact with the erlang node, and when
ready call test_server:continue().
Note that this function cannot be used if the test is
executed with ts:run/0/1/2/3/4 in batch mode.
continue() -> ok
This function must be called in order to continue after a
test case has called break/1
.
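A typical use is a semi-automatic test step, e.g. (the comment text and the follow-up check are only illustrative):
test_server:break("Unplug the network cable, then run test_server:continue()."),
check_disconnected_behaviour()     %% hypothetical check performed after the pause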
run_on_shielded_node(Fun, CArgs) -> term()
Fun = function() (arity 0)
CArgs = string()
Fun
is executed in a process on a temporarily created
hidden node with a proxy for communication with the test server
node. The node is called a shielded node (should have been called
a shield node). If Fun
is successfully executed, the result
is returned. A peer node (see start_node/3
) started from
the shielded node will be shielded from the test server node, i.e.
they will not be aware of each other. This is useful when you want
to start nodes from earlier OTP releases than the OTP release of
the test server node.
Nodes from an earlier OTP release can normally not be started
if the test server hasn't been started in compatibility mode
(see the +R
flag in the erl(1)
documentation) of
an earlier release. If a shielded node is started in compatibility
mode of an earlier OTP release than the OTP release of the test
server node, the shielded node can start nodes of an earlier OTP
release.
Note!
You must make sure that nodes started by the shielded node never communicate directly with the test server node.
Note!
Slave nodes always communicate with the test server node; therefore, never start slave nodes from the shielded node, always start peer nodes.
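A minimal sketch, assuming an empty command line argument string is acceptable for CArgs (the fun body is only illustrative):
Result = test_server:run_on_shielded_node(
             fun() -> erlang:system_info(otp_release) end,
             "")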
start_node(Name, Type, Options) -> {ok, Node} | {error, Reason}
Name = atom() | string()
Type = slave | peer
Options = [{atom(), term()}]
This function starts a node, possibly on a remote machine,
and guarantees cross architecture transparency. Type is set to
either slave or peer.
slave
means that the new node will have a master,
i.e. the slave node will terminate if the master terminates,
TTY output produced on the slave will be sent back to the
master node and file I/O is done via the master. The master is
normally the target node unless the target is itself a slave.
peer
means that the new node is an independent node
with no master.
Options is a list of tuples which can contain one or more
of the following:
- {remote, true} - Start the new node on a remote host rather than on the local host.
- {args, Arguments} - Arguments passed directly on the command line to the new node.
- {wait, false} - Do not wait for the node to be started. Only valid for peer nodes.
- {fail_on_error, false} - Returns {error, Reason} rather than failing the test case. Only valid for peer nodes. Note that slave nodes always act as if they had fail_on_error=false.
- {erl, ReleaseList} - When specifying this option to run a previous release, use the is_release_available/1 function to test if the given release is available and skip the test case if not. In order to avoid compatibility problems (which may not appear right away), use a shielded node (see run_on_shielded_node/2) when starting nodes from a different OTP release than the test server.
- {cleanup, false} - The test server will not automatically kill (or warn about) this node after the test case; see stop_node/1 below.
- {env, Env} - Env should be a list of tuples {Name, Val}, where Name is the name of an environment variable, and Val is the value it is to have in the started node. Both Name and Val must be strings. The one exception is Val being the atom false (in analogy with os:getenv/1), which removes the environment variable. Only valid for peer nodes. Not available on VxWorks.
- {start_cover, false} - Tells the test server not to start cover on this node. This can be necessary if the connection to the node at some point will be broken but the node is expected to stay alive. The reason is that a remote cover node cannot continue to run without its main node. Another solution would be to explicitly stop cover on the node before breaking the connection, but in some situations (if old code resides in one or more processes) this is not possible.
stop_node(NodeName) -> bool()
NodeName = term()
This function stops a node previously started with
start_node/3. Use this function to stop any node you
start, or the test server will produce a warning message in
the test logs and kill the node automatically, unless it was
started with the {cleanup, false} option.
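A minimal sketch of starting and stopping a peer node (the node name is arbitrary):
{ok, Node} = test_server:start_node(my_peer_node, peer, []),
pong = net_adm:ping(Node),
true = test_server:stop_node(Node)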
is_commercial() -> bool()
This function tests whether the emulator is a commercially supported emulator. The tests for a commercially supported emulator could be more stringent (for instance, a commercial release should always contain documentation for all applications).
is_release_available(Release) -> bool()
Release = string() | atom()
This function tests whether the release given by
Release (for instance, "r12b_patched") is available
on the computer that the test_server controller is running on.
Typically, you should skip the test case if it is not.
Caution: This function may not be called from the suite
clause of a test case, as the test_server will deadlock.
is_native(Mod) -> bool()
Mod = atom()
Checks whether the module is natively compiled or not.
app_test(App) -> ok | test_server:fail()
app_test(App,Mode)
App = term()
Mode = pedantic | tolerant
Checks an application's .app file for obvious errors. The following is checked:
- required fields
- that all modules specified actually exist
- that all required applications exist
- that no module included in the application has export_all
- that all modules in the ebin/ dir are included (if Mode == tolerant this only produces a warning, as all modules do not have to be included)
appup_test(App) -> ok | test_server:fail()
App = term()
Checks an application's .appup file for obvious errors. The following is checked:
- syntax
- that .app file version and .appup file version match
- for non-library applications: validity of high-level upgrade instructions, specifying no instructions is explicitly allowed (in this case the application is not upgradeable)
- for library applications: that there is exactly one wildcard regexp clause restarting the application when upgrading or downgrading from any version
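Both checks are typically one-liners in a test case, e.g. (myapp is a placeholder application name):
ok = test_server:app_test(myapp),
ok = test_server:appup_test(myapp)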
comment(Comment) -> ok
Comment = string()
The given Comment will occur in the comment field of the table on the HTML result page. If called several times, only the last comment is printed. comment/1 is also overwritten by the return value {comment,Comment} from a test case or by fail/1 (which prints Reason as a comment).
TEST SUITE EXPORTS
The following functions must be exported from a test suite module.
Functions
all(suite) -> TestSpec | {skip, Comment}
TestSpec = list()
Comment = string()
This function must return the test specification for the test suite module. The syntax of a test specification is described in the Test Server User's Guide.
init_per_suite(Config0) -> Config1 | {skip, Comment}
Config0 = Config1 = [tuple()]
Comment = string()
This function is called before all other test cases in the
suite. Config
is the configuration which can be modified
here. Whatever is returned from this function is given as
Config
to the test cases.
If this function fails, all test cases in the suite will be skipped.
end_per_suite(Config) -> void()
Config = [tuple()]
This function is called after the last test case in the suite, and can be used to clean up whatever the test cases have done. The return value is ignored.
init_per_testcase(Case, Config0) -> Config1 | {skip, Comment}
Case = atom()
Config0 = Config1 = [tuple()]
Comment = string()
This function is called before each test case. The
Case
argument is the name of the test case, and
Config
is the configuration which can be modified
here. Whatever is returned from this function is given as
Config
to the test case.
end_per_testcase(Case, Config) -> void()
Case = atom()
Config = [tuple()]
This function is called after each test case, and can be used to clean up whatever the test case has done. The return value is ignored.
Case(doc) -> [Description]
Case(suite) -> [] | TestSpec | {skip, Comment}
Case(Config) -> {skip, Comment} | {comment, Comment} | Ok
Description = string()
TestSpec = list()
Comment = string()
Ok = term()
Config = [tuple()]
The documentation clause (argument doc
) can
be used for automatic generation of test documentation or test
descriptions.
The specification clause (argument suite)
shall return an empty list, the test specification for the
test case, or {skip,Comment}. The syntax of a test
specification is described in the Test Server User's Guide.
The execution clause (argument Config
) is
only called if the specification clause returns an empty list.
The execution clause is the real test case. Here you must call
the functions you want to test, and do whatever you need to
check the result. If something fails, make sure the process
crashes or call test_server:fail/0/1
(which also will
cause the process to crash).
You can return {skip,Comment}
if you decide not to
run the test case after all, e.g. if it is not applicable on
this platform.
You can return {comment,Comment}
if you wish to
print some information in the 'Comment' field on the HTML
result page.
If the execution clause returns anything else, it is
considered a success, unless it is {'EXIT',Reason}
or
{'EXIT',Pid,Reason}
which can't be distinguished from a
crash, and thus will be considered a failure.
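A minimal sketch of a test case with all three clauses, using the line macro from test_server.hrl (see TEST SUITE SUPPORT MACROS below); mymod:double/1 and the expected result are invented for illustration:
my_case(doc) -> ["Checks that mymod:double/1 doubles an integer"];
my_case(suite) -> [];
my_case(Config) when is_list(Config) ->
    ?line 4 = mymod:double(2),
    ok.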
A conf test case is a group of test cases with an init and a cleanup function. The init and cleanup functions are also test cases, but they have special rules:
- They do not need a specification clause.
- They must always have the execution clause.
- They must return the Config parameter, a modified version of it, or {skip,Comment} from the execution clause.
- The cleanup function may also return a tuple {return_group_result,Status}, which is used to return the status of the conf case to Test Server and/or to a conf case on a higher level. (Status = ok | skipped | failed).
- init_per_testcase and end_per_testcase are not called before and after these functions.
TEST SUITE LINE NUMBERS
If a test case fails, the test server can report the exact line
number at which it failed. There are two ways of doing this,
either by using the line
macro or by using the
test_server_line
parse transform.
The line
macro is described under TEST SUITE SUPPORT
MACROS below. The line
macro will only report the last line
executed when a test case failed.
The test_server_line
parse transform is activated by
including the header file test_server_line.hrl in the test
in the test
suite. When doing this, it is important that the
test_server_line
module is in the code path of the erlang
node compiling the test suite. The parse transform will report a
history of a maximum of 10 lines when a test case
fails. Consecutive lines in the same function are not shown.
The attribute -no_lines(FuncList).
can be used in the
test suite to exclude specific functions from the parse
transform. This is necessary e.g. for functions that are executed
on old (i.e. <R10B) OTP releases. FuncList = [{Func,Arity}]
.
If both the line macro and the parse transform are used in
the same module, the parse transform will overrule the macro.
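A sketch of how a suite might activate the parse transform and exclude one function from it (the include path and the excluded function are assumptions; adjust to your installation):
-include("test_server_line.hrl").       %% activates the parse transform
-no_lines([{setup_on_old_node,1}]).     %% hypothetical function executed on a pre-R10B node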
TEST SUITE SUPPORT MACROS
There are some macros defined in the test_server.hrl
that are quite useful for test suite programmers:
The line macro is quite essential when writing test cases. It tells the test server exactly which line of code is being executed, so that it can report this line back if the test case fails. Use this macro at the beginning of every test case line of code.
The config macro is used to
retrieve information from the Config
variable sent to all
test cases. It is used with two arguments, where the first is the
name of the configuration variable you wish to retrieve, and the
second is the Config
variable supplied to the test case
from the test server.
Possible configuration variables include:
- data_dir - Data file directory.
- priv_dir - Scratch file directory.
- nodes - Nodes specified in the spec file.
- nodenames - Generated nodenames.
- Whatever is added by conf test cases or init_per_testcase/2.
Examples of the line
and config
macros can be
seen in the Examples chapter in the user's guide.
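A short sketch of both macros inside a test case (the data file name is invented):
read_input(Config) when is_list(Config) ->
    ?line DataDir = ?config(data_dir, Config),
    ?line {ok, Bin} = file:read_file(filename:join(DataDir, "input.txt")),
    ?line true = byte_size(Bin) > 0,
    ok.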
If the line_trace
macro is defined, you will get a
timestamp (erlang:now()
) in your minor log for each
line
macro in your suite. This way you can at any time see
which line is currently being executed, and when the line was
called.
The line_trace
macro can also be used together with the
test_server_line
parse transform described above. A
timestamp will then be written for each line in the suite, except
for functions stated in the -no_lines
attribute.
The line_trace
macro can e.g. be defined as a compile
option, like this:
erlc -W -Dline_trace my_SUITE.erl