The test package contains all regression tests for Python as well as the modules test.test_support and test.regrtest. test.test_support is used to enhance your tests while test.regrtest drives the testing suite.
Each module in the test package whose name starts with test_ is a testing suite for a specific module or feature. All new tests should be written using the unittest or doctest module. Some older tests are written using a “traditional” testing style that compares output printed to sys.stdout; this style of test is considered deprecated.
It is preferred that tests that use the unittest module follow a few guidelines. One is to name the test module by starting it with test_ and ending it with the name of the module being tested. The test methods in the test module should start with test_ and end with a description of what the method is testing. This is needed so that the methods are recognized by the test driver as test methods. Also, no documentation string should be included for test methods. A comment (such as # Tests function returns only True or False) should be used instead to document them. This is done because documentation strings, if they exist, get printed out instead of the test name, and thus what test is being run is not stated.
A basic boilerplate is often used:
import unittest
from test import test_support

class MyTestCase1(unittest.TestCase):

    # Only use setUp() and tearDown() if necessary

    def setUp(self):
        ... code to execute in preparation for tests ...

    def tearDown(self):
        ... code to execute to clean up after tests ...

    def test_feature_one(self):
        # Test feature one.
        ... testing code ...

    def test_feature_two(self):
        # Test feature two.
        ... testing code ...

    ... more test methods ...

class MyTestCase2(unittest.TestCase):
    ... same structure as MyTestCase1 ...

... more test classes ...

def test_main():
    test_support.run_unittest(MyTestCase1,
                              MyTestCase2,
                              ... list other tests ...
                             )

if __name__ == '__main__':
    test_main()
This boilerplate code allows the testing suite to be run by test.regrtest as well as on its own as a script.
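For tests that exercise a module's doctests rather than unittest test cases, a similar skeleton can delegate to test_support.run_doctest(). The following is only a sketch; spam is a made-up stand-in for the module under test and is assumed to carry doctests in its docstrings:

import spam                     # hypothetical module under test
from test import test_support

def test_main():
    # run_doctest() runs the module's doctests and reports any failures
    # through the regression test machinery.
    test_support.run_doctest(spam)

if __name__ == '__main__':
    test_main()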
The goal for regression testing is to try to break code. This leads to a few guidelines to be followed:
The testing suite should exercise all classes, functions, and constants. This includes not just the external API that is to be presented to the outside world but also “private” code.
Whitebox testing (examining the code being tested when the tests are being written) is preferred. Blackbox testing (testing only the published user interface) is not complete enough to make sure all boundary and edge cases are tested.
Make sure all possible values are tested, including invalid ones. This makes sure that not only all valid values are acceptable but also that improper values are handled correctly (a short sketch illustrating this, together with test cleanup and platform checks, appears after the code-reuse example below).
Exhaust as many code paths as possible. Test where branching occurs, and tailor the input so that as many different paths through the code as possible are taken.
Add an explicit test for any bugs discovered for the tested code. This will make sure that the error does not crop up again if the code is changed in the future.
Make sure to clean up after your tests (such as close and remove all temporary files).
If a test is dependent on a specific condition of the operating system then verify the condition already exists before attempting the test.
Import as few modules as possible and do it as soon as possible. This minimizes external dependencies of tests and also minimizes possible anomalous behavior from side-effects of importing a module.
Try to maximize code reuse. On occasion, tests will vary by something as small as what type of input is used. Minimize code duplication by subclassing a basic test class with a class that specifies the input:
class TestFuncAcceptsSequences(unittest.TestCase):

    # staticmethod() keeps mySuperWhammyFunction from being turned into a
    # bound method when it is looked up through self.
    func = staticmethod(mySuperWhammyFunction)

    def test_func(self):
        self.func(self.arg)

class AcceptLists(TestFuncAcceptsSequences):
    arg = [1, 2, 3]

class AcceptStrings(TestFuncAcceptsSequences):
    arg = 'abc'

class AcceptTuples(TestFuncAcceptsSequences):
    arg = (1, 2, 3)
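The following hypothetical test case sketches the earlier guidelines on invalid values, cleaning up, and operating-system conditions (it assumes a unittest version with skip decorators; the invalid open() mode and the Windows-only check are illustrative only):

import sys
import unittest
from test import test_support

class ExampleTests(unittest.TestCase):

    def setUp(self):
        # test_support.TESTFN is a file name safe to use for temporary files.
        self.filename = test_support.TESTFN

    def tearDown(self):
        # Clean up even if a test failed; unlink() ignores a missing file.
        test_support.unlink(self.filename)

    def test_invalid_mode_rejected(self):
        # An improper value should raise an error, not be silently accepted.
        self.assertRaises(ValueError, open, self.filename, 'q')

    @unittest.skipUnless(sys.platform.startswith('win'), 'requires Windows')
    def test_windows_only_behaviour(self):
        # Platform-specific checks go here, guarded by the skip decorator.
        pass

def test_main():
    test_support.run_unittest(ExampleTests)

if __name__ == '__main__':
    test_main()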
test.regrtest can be used as a script to drive Python’s regression test suite. Running the script by itself automatically starts running all regression tests in the test package. It does this by finding all modules in the package whose name starts with test_, importing them, and executing the function test_main() if present. The names of tests to execute may also be passed to the script. Specifying a single regression test (python regrtest.py test_spam.py) minimizes the output, printing only whether the test passed or failed.
Running test.regrtest directly lets you set which resources are available for tests to use, via the -u command-line option. Run python regrtest.py -uall to enable all possible resources. If all but one resource is desired (a more common case), a comma-separated list of the resources that are not desired may be listed after all. The command python regrtest.py -uall,-audio,-largefile will run test.regrtest with all resources except the audio and largefile resources. For a list of all resources and more command-line options, run python regrtest.py -h.
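From inside a test module, the counterpart of the -u option is test_support.requires(), which raises ResourceDenied when the named resource has not been enabled. A minimal sketch, guarding on the audio resource mentioned above:

from test import test_support

# Under regrtest without "-uaudio" this raises ResourceDenied and the test
# is reported as skipped; when the module is run directly as a script, the
# resource check is not enforced.
test_support.requires('audio')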
Some other ways to execute the regression tests depend on what platform the tests are being executed on. On Unix, you can run make test at the top-level directory where Python was built. On Windows, executing rt.bat from your PCBuild directory will run all regression tests.
The test.support module provides support for Python’s regression tests.
This module defines the following exceptions:
The test.support module defines the following constants:
The test.support module defines the following functions:
run_unittest() executes the unittest.TestCase subclasses passed to it. The function scans the classes for methods starting with the prefix test_ and executes the tests individually.
It is also legal to pass strings as parameters; these should be keys in sys.modules. Each associated module will be scanned by unittest.TestLoader.loadTestsFromModule(). This is usually seen in the following test_main() function:
def test_main():
    test_support.run_unittest(__name__)
This will run all tests defined in the named module.
check_warnings() is a convenience wrapper for warnings.catch_warnings() that makes it easier to test that a warning was correctly raised with a single assertion. It is approximately equivalent to calling warnings.catch_warnings(record=True).
The main difference is that on entry to the context manager, a WarningRecorder instance is returned instead of a simple list. The underlying warnings list is available via the recorder object’s warnings attribute, while the attributes of the last raised warning are also accessible directly on the object. If no warning has been raised, then the latter attributes will all be None.
A reset() method is also provided on the recorder object. This method simply clears the warning list.
The context manager is used like this:
with check_warnings() as w:
    warnings.simplefilter("always")
    warnings.warn("foo")
    assert str(w.message) == "foo"
    warnings.warn("bar")
    assert str(w.message) == "bar"
    assert str(w.warnings[0].message) == "foo"
    assert str(w.warnings[1].message) == "bar"
    w.reset()
    assert len(w.warnings) == 0
captured_stdout() is a context manager that runs the with statement body using a StringIO.StringIO object as sys.stdout. That object can be retrieved using the as clause of the with statement.
Example use:
with captured_stdout() as s:
    print("hello")
assert s.getvalue() == "hello\n"
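Within a unittest-based test the captured value is then checked with an ordinary assertion. A short sketch, where greet() is a made-up function standing in for the code under test:

import unittest
from test import test_support

def greet():
    # Hypothetical code under test.
    print("hello")

class GreetOutputTest(unittest.TestCase):

    def test_greet_prints_hello(self):
        with test_support.captured_stdout() as stdout:
            greet()
        # print() appends a newline, so compare against "hello\n".
        self.assertEqual(stdout.getvalue(), "hello\n")

if __name__ == '__main__':
    test_support.run_unittest(GreetOutputTest)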
The test.support module defines the following classes: