Add a testcase that parses all tests in the parser/tst/simple_tests/
directory with parse_profile_data() to ensure that everything with valid
syntax is accepted, and that all tests marked as FAIL raise an
exception.
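
A minimal sketch of the idea (hedged: the directory path, the
'#=EXRESULT PASS|FAIL' marker handling and the exception type are
assumptions about how the parser simple_tests and apparmor.aa are
usually wired up, not the literal testcase added here):

    # Minimal sketch, not the actual testcase from this commit.
    import os
    import unittest

    import apparmor.aa
    from apparmor.common import AppArmorException

    SIMPLE_TESTS_DIR = '../../parser/tst/simple_tests'  # assumed path

    class SimpleTestsParsing(unittest.TestCase):
        def _parse_file(self, path):
            with open(path) as f:
                lines = f.read().split('\n')
            # a '#=EXRESULT FAIL' line marks a profile that must be rejected
            expected_pass = not any(
                line.startswith('#=EXRESULT FAIL') for line in lines)
            if expected_pass:
                # valid syntax must be accepted without raising
                apparmor.aa.parse_profile_data(lines, path, 0)
            else:
                # tests marked as FAIL must raise an exception
                with self.assertRaises(AppArmorException):
                    apparmor.aa.parse_profile_data(lines, path, 0)

        def test_all_simple_tests(self):
            for root, _, files in os.walk(SIMPLE_TESTS_DIR):
                for name in sorted(files):
                    self._parse_file(os.path.join(root, name))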
Adding this testcase already resulted in
- several patches to fix low-hanging fruit (including some bugs in the
  parser simple_tests themselves)
- a list of tests that don't behave as expected. Those files get their
  expected result reverted to make sure we notice any change in the
  tools' behaviour, especially a change to the really expected result.
  This also makes sure that the testcase doesn't report any of the
  known failures.
- a 5% improvement in test coverage - mostly caused by the nearly
  complete coverage of parse_profile_data.
- addition of some missing testcases (as noticed by gaps in the
  coverage), for example several "rule outside of a profile" testcases
  (see the sketch below).
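
For illustration, one of those added checks could look like this,
building on the sketch above (a hedged example in the style of the
existing utils unittests, not the literal test):

    def test_rule_outside_of_profile(self):
        # a bare rule without an enclosing profile must be rejected
        with self.assertRaises(AppArmorException):
            apparmor.aa.parse_profile_data(
                ['capability setgid,'], 'dummy_file', 0)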
As indicated above, the tools don't work as expected on all test
profiles - most of the failures happen on expected-to-fail tests that
pass parse_profile_data() without raising an exception. There are also
some tests failing despite valid syntax, often with rarely used syntax
like if conditions and qualifier blocks.
Most of the failing (generated) tests are caused by features not
implemented in the tools yet:
- validating dbus rules (currently we just store them without any parsing)
- checks for conflicting x permissions
- permissions before path ("r /foo,")
- 'safe' and 'unsafe' keywords for *x rules
- 'Pux' and 'Cux' permissions (which actually mean PUx and CUx, and get
rejected by the tools - ideally the generator script should create
PUx and CUx tests instead)
skip_startswith excludes several generated tests from being run. I know
that skip_startswith also excludes tests that would not fail, but the
generated filenames (especially generated_x/exact-*) don't have a
pattern that I could easily use to exclude fewer tests - and I'm not too
keen to add a list of 1000 individual filenames ;-)
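
Roughly, the exclusion works like this (hedged sketch; only the
generated_x/exact- prefix is taken from the text above, the other
prefix, test_files and run_parse_test are made up for illustration):

    # skip_startswith holds filename prefixes of known-unsupported tests;
    # str.startswith() accepts a tuple, so one check covers all prefixes
    skip_startswith = (
        'generated_x/exact-',  # no narrower pattern available
        'generated_dbus/',     # hypothetical example prefix
    )

    for test_file in test_files:
        if test_file.startswith(skip_startswith):
            continue  # known failure, don't let it pollute the results
        run_parse_test(test_file)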
Acked-by: John Johansen <john.johansen@canonical.com>