We currently have no record of which tests were executed, the platforms and environments they were run on, or what code they covered. The only indirect evidence is in open bugs, when people care to fill in those details. :-(
- Unify running all the tests, whether in-tree or out-of-tree
- Try switching the feature tests and non-automated unit tests in test/ from our runner scripts to Python's native unittest discovery mechanism.
- We have a stale gramps/test/regrtest.py runner, notable for its logging initialization. Should we revive it, or perhaps integrate it into the setup.py test runner?
- Coverage analysis
- Continuous test status reports, coverage, and automatic deployment into win/mac/linux VMs (needs server capacity to be hosted online). (I can dream, can't I?)
- Automated regression tests for our GUI. The following links look interesting:
Currently used tests and frameworks
testing of reports
test/runtest.sh - Report test for Gramps: generates every report in every format. Runs all possible reports through the report CLI interface, based on the example.gramps database. This test is not fully self-contained: it depends on various environment settings, such as your locale, your preferred name display formats, and your report options. Last, but not least, verification of the resulting reports is entirely manual.
Bugs tagged as found-by-runtest.sh
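A runtest.sh-style sweep could also be driven from Python. In this sketch the CLI flags (-O to open a tree, -a report for the report action, -p for report options) follow the documented Gramps command-line interface, but the tree name, report names, and formats are illustrative assumptions only:

```python
# Sketch: generate every report in every format via the Gramps CLI,
# one subprocess per (report, format) pair.
# ASSUMPTION: "example" is a hypothetical family tree name; the report
# and format lists passed in are examples, not an exhaustive set.
import subprocess

def build_report_cmd(tree, report_name, off, of):
    """Build the argument list for one CLI report run."""
    options = "name={},off={},of={}".format(report_name, off, of)
    return ["gramps", "-O", tree, "-a", "report", "-p", options]

def run_all_reports(tree, reports, formats):
    """Run each report in each format; a failure shouldn't stop the sweep."""
    for name in reports:
        for fmt in formats:
            cmd = build_report_cmd(tree, name, fmt, "{}.{}".format(name, fmt))
            subprocess.run(cmd, check=False)
```

Like the shell version, this still leaves verification of the generated files entirely manual.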
other report testing
See more specialized scripts in test/, status unknown.
test/impex.sh - Import/export test for Gramps.
From the file header:
- Import example XML data and create internal Gramps DB
- Open produced Gramps DB, then
- check data for integrity
- output in all formats
- Check resulting XML for well-formedness and validate it against DTD and RelaxNG schema.
- Import every exported file produced if the format is also supported for import, and run a text summary report.
- Diff each report with the summary of the produced example DB.
Bugs tagged as found-by-impex.sh
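Two of the impex.sh verification steps, checking the exported XML and diffing summary reports, can be sketched in Python with the standard library. Note the stdlib check covers well-formedness only; full DTD/RelaxNG validation would need a third-party library such as lxml:

```python
# Sketch: impex.sh-style checks in Python.
# Well-formedness only; DTD/RelaxNG validation is out of scope for stdlib.
import difflib
import xml.etree.ElementTree as ET

def is_well_formed(xml_text):
    """Return True if the XML parses without error (well-formedness only)."""
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:
        return False

def diff_summaries(expected, actual):
    """Return unified-diff lines between two text summary reports."""
    return list(difflib.unified_diff(
        expected.splitlines(), actual.splitlines(),
        fromfile="expected", tofile="actual", lineterm=""))
```

An empty diff from `diff_summaries` corresponds to the "diff each report with the summary of the produced example DB" step passing.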
test/RunAllTests.py - Testing framework for running a variety of unit tests for Gramps. Runs out-of-tree (not in gramps/) testing code by looking for any test/*_Test.py files and executing the test suites therein. See the current code in test/*_Test.py for examples, and the Python standard unittest docs.
Starting with gramps40 branch, these tests include non-automated unit tests only. The automated unit tests are all under gramps/.
Bugs tagged as found-by-RunAllTests.py
GtkHandler testing code pops up the Gramps error dialog, but this is deliberate: it tests the error reporting itself. Don't be scared by the dialog; it's expected. You must close the dialogs manually with the "Cancel" button. The relevant tests still pass (unless there's another bug there)...
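The manual "Cancel" clicking could in principle be avoided by patching the dialog out with unittest.mock. The reporter class and method names below are hypothetical stand-ins, not Gramps' actual GtkHandler API; the sketch only shows the patching pattern:

```python
# Sketch: exercise an error-reporting path headlessly by mocking the
# dialog away. FakeReporter and show_dialog are HYPOTHETICAL stand-ins
# for the real GUI error handler.
import unittest
from unittest import mock

class FakeReporter:
    """Hypothetical error reporter that normally pops up a dialog."""
    def report(self, message):
        self.show_dialog(message)  # would block waiting for a click on a real GUI

    def show_dialog(self, message):
        raise RuntimeError("real dialog not available in tests")

class ErrorReportTest(unittest.TestCase):
    def test_report_calls_dialog(self):
        reporter = FakeReporter()
        # Patch the dialog so no window appears and no manual click is needed.
        with mock.patch.object(FakeReporter, "show_dialog") as fake_dialog:
            reporter.report("boom")
        fake_dialog.assert_called_once_with("boom")
```

The test asserts the dialog would have been shown with the right message, without ever drawing a window.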
unit tests in the main tree
python setup.py test
See Unit Test Quickstart for detailed running instructions.
|gramps/cli/test/user_test.py||8||OK||(requires mocking to run)|
|gramps/gen/test/constfunc_test.py||1||OK||(linux only, skipped elsewhere)|
|gramps/gen/test/user_test.py||1||OK||See #7013 for context|
|gramps/gen/lib/test/date_test.py||8||OK||(locale-based, OK for en, fails for some other locales)|
|gramps/gen/lib/test/merge_test.py||201||FAIL||2 failures skipped (bug #7027, shown as resolved)|
|gramps/gen/utils/test/place_test.py||28||FAIL||4 failures (bug #7044, shown as resolved)|
|gramps/gui/test/user_test.py||1||OK||(requires mocking to run)|
|gramps/test/test/gedread_util_test.py|| || ||No longer required?|
|gramps/test/test/test_util_test.py|| || ||No longer required?|
There is also semi-interactive testing via __main__ in some code:
|gramps/gen/relationship.py||To Do||Relationship calculator|
|gramps/gui/ddtargets.py||To Do||Not worth running?|
|gramps/gui/widgets/undoablebuffer.py||To Do||Not worth running?|
|gramps/plugins/rel/rel_*.py||To Do||Relationship calculator plugins|
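The semi-interactive `__main__` pattern used in those modules looks roughly like this. The `get_relationship` function here is a hypothetical toy stand-in for the real relationship calculator, included only to show the structure:

```python
# Sketch: the "__main__" smoke-test pattern. The module behaves normally
# on import and prints quick eyeball-check output when run directly.
# get_relationship is a HYPOTHETICAL stand-in, not Gramps' real calculator.
def get_relationship(orig, other):
    """Toy calculator standing in for the real one."""
    if orig == other:
        return "self"
    return "unknown"

def _smoke_test():
    """Print a few sample results for manual eyeball verification."""
    for a, b in [("I1", "I1"), ("I1", "I2")]:
        print(a, b, "->", get_relationship(a, b))

if __name__ == "__main__":
    # Only runs when invoked as "python module.py", never on import.
    _smoke_test()
```

Because the checks run only under `__main__` and their output is read by a human, they are semi-interactive rather than automated; converting them into real unit tests is part of the To Do above.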
Manual test plan
See TestPlan.txt in the Gramps top-level directory. (I believe this is only done for a major release, like 4.0.0.)