mirror of https://gitlab.com/apparmor/apparmor synced 2025-09-01 14:55:10 +00:00

Compare commits


477 Commits

Author SHA1 Message Date
John Johansen
6e0b7ca20a parser: change priority so that it accumulates based on permissions
The current behavior of priority rules can be non-intuitive, with
higher priority rules completely overriding lower priority rules even in
permissions not held in common. This behavior does have use cases, but
it can be very confusing and does not match normal policy behavior.

Eg.
  priority=0 allow r /**,
  priority=1 deny  w /**,

will result in no allowed permissions, even though the deny rule is
only removing the w permission, because the higher priority rule
completely overrides the lower priority permission set (including
permissions that are not shared).

Instead, move to tracking the priority at a per-permission level. This
allows the w permission to still override at priority 1, while the
read permission is allowed at priority 0.

The final constructed state will still drop priority for the final
permission set on the state.
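
A minimal Python sketch of the per-permission idea described above (purely
illustrative; the parser's actual implementation is C++ and differs in detail):

```
# Model rules as (priority, is_deny, perms); track the winning entry per permission.
def accumulate(rules):
    best = {}  # perm -> (priority, is_deny)
    for priority, is_deny, perms in rules:
        for perm in perms:
            prev = best.get(perm)
            # Higher priority overrides only this permission; at equal
            # priority, deny wins over allow.
            if prev is None or priority > prev[0] or (priority == prev[0] and is_deny):
                best[perm] = (priority, is_deny)
    return {perm for perm, (_, is_deny) in best.items() if not is_deny}

# priority=0 allow r /**,   priority=1 deny w /**,
print(accumulate([(0, False, {"r"}), (1, True, {"w"})]))  # {'r'}: r survives, w is denied
```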

Note: this patch updates the equality tests for the cases where
the complete override behavior was being tested for.

The complete override behavior will be reintroduced in a future
patch with a keyword extension, enabling that behavior to be used
for ordered blocks etc.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-06 12:24:04 -08:00
John Johansen
e56dbc2084 parser: fix prefix dump to include priority
The original patch adding priority to the set of prefixes failed to
update the prefix dump to include the priority field.

Fixes: e3fca60d1 ("parser: add the ability to specify a priority prefix to rules")

Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-06 10:54:42 -08:00
John Johansen
cc31a0da22 parser: drop priority from state permissions
The priority field is only used during state construction, and can
even prevent later optimizations like minimization. The parser already
explicitly clears the state's priority field as one of the last things
done during construction, so it doesn't prevent minimization
optimizations.

This means the state priority not only wastes storage, because it is
unused post-construction, but if used it could introduce regressions
or other issues.

The change to the minimization tests just removes the check for the
priority field that is no longer reported.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-06 10:53:19 -08:00
John Johansen
9221d291ec parser: stop using dynamic_cast for prompt permission calculations
As was done for the other MatchFlags, switch to using a node type
instead of dynamic_cast, as this will result in a performance
improvement.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-06 08:34:02 -08:00
Christian Boltz
002bf1339c Merge spread: Add support for EXPECT_DENIALS in profile tests
This commit adds support for EXPECT_DENIALS in profile tests. Any test
that sets the EXPECT_DENIALS environment variable is expected to trigger
AppArmor denials and will fail if none are generated.

This makes it possible to test that problematic behaviors are correctly blocked.

Signed-off-by: Maxime Bélair <maxime.belair@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1515
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2025-02-05 16:44:42 +00:00
Maxime Bélair
fc3f27e255 spread: Add support for EXPECT_DENIALS in profile tests
Introduce the EXPECT_DENIALS environment variable for profile tests.
Each line of EXPECT_DENIALS is a regex that must match an AppArmor
denial generated by the corresponding test and, conversely, every
denial generated by the test must match one of the regexes.

This ensures that problematic behaviors are correctly blocked and logged.
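
A rough Python sketch of the check described above (hypothetical helper; the
actual spread test code is shell-based and differs):

```
import re

def check_expected_denials(expect_denials, audit_lines):
    """Each non-empty line of EXPECT_DENIALS is a regex that must match at
    least one AppArmor denial, and every denial must match some regex."""
    patterns = [re.compile(p) for p in expect_denials.splitlines() if p.strip()]
    denials = [line for line in audit_lines if 'apparmor="DENIED"' in line]
    if not denials:
        return False  # EXPECT_DENIALS set but no denial was generated
    all_patterns_seen = all(any(p.search(d) for d in denials) for p in patterns)
    all_denials_expected = all(any(p.search(d) for p in patterns) for d in denials)
    return all_patterns_seen and all_denials_expected
```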

Signed-off-by: Maxime Bélair <maxime.belair@canonical.com>
2025-02-05 10:02:21 +01:00
John Johansen
4831a854fe Merge Initial profile for tar binary
Profile for `tar` package.

In order to test this, I've diffed the output of `tar`'s testsuite with and without the profile:

```
sudo apt build-dep tar
apt source tar
cd tar-*/
./configure
cd tests/
./testsuite > without_profile.log
apparmor_parser ~/tar
./testsuite > with_profile.log
diff without_profile.log with_profile.log # should not output anything
echo $? # should be zero
```

Additionally, [the testsuite available on QRT](https://git.launchpad.net/qa-regression-testing/tree/scripts/test-tar.py) for the `tar` package should continue to pass after loading the profile.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1453
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2025-02-03 21:46:36 +00:00
Octavio Galland
c9cfbb4668 restrict networking to localhost 2025-02-03 16:33:13 -03:00
Octavio Galland
38399e7720 disallow ${HOME}/bin 2025-02-03 16:32:58 -03:00
John Johansen
4765bcd7bc Merge parser: misc fixes on apparmor.d man page
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1516
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2025-01-31 21:54:08 +00:00
Georgia Garcia
998ee0595e parser: misc fixes on apparmor.d man page
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2025-01-31 18:23:14 -03:00
Zygmunt Krynicki
54561af112 Merge tests/spread: fix debian system name
Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1511
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Zygmunt Krynicki <me@zygoon.pl>
2025-01-30 16:22:26 +00:00
Zygmunt Krynicki
39cd3f6f21 Merge tests: unify formatting of .gitlab-ci.yml
We had some mixture of indent styles.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1510
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Zygmunt Krynicki <me@zygoon.pl>
2025-01-30 16:22:12 +00:00
Zygmunt Krynicki
d4582f232f tests: unify formatting of .gitlab-ci.yml
We had some mixture of indent styles.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-30 08:02:33 +01:00
Zygmunt Krynicki
8967dee5b9 tests/spread: fix debian system name
Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-29 19:45:45 +01:00
John Johansen
d482aab419 Merge tests: mark more regression test as known-failures
A number of tests are failing, and since spread does not contain a native
XFAIL facility, we have to maintain silent-failure feature code
ourselves. A few of those have been fixed since the first iteration of
this patch. The remaining known failures are being fixed.

Later on I would like to separate XFAIL from SKIP, so that if a test is
known to exercise a kernel feature unavailable on the given system, the
test is just not executed.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1483
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2025-01-29 09:05:19 +00:00
Zygmunt Krynicki
219626c503 Merge utils: adjusts aa-notify tests to handle Python 3.13+
Python 3.13 changes the formatting of long-short option pairs that use a
meta-variable. Up until 3.13 the meta-variable was repeated. Since
Python change [1] the meta-var is only printed once.

[1] https://github.com/python/cpython/pull/103372

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1495
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Zygmunt Krynicki <me@zygoon.pl>
2025-01-28 22:24:25 +00:00
Zygmunt Krynicki
0acc138712 utils: abbreviate delta for Python 3.12 argparse
Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-28 23:01:42 +01:00
Zygmunt Krynicki
6336465edf utils: adjusts aa-notify tests to handle Python 3.13+
Python 3.13 changes the formatting of long-short option pairs that use a
meta-variable. Up until 3.13 the meta-variable was repeated. Since
Python change [1] the meta-var is only printed once.

[1] https://github.com/python/cpython/pull/103372
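
A small Python example of the formatting difference (help output in the
comments is approximate):

```
import argparse

parser = argparse.ArgumentParser(prog="aa-notify")
parser.add_argument("-f", "--file", metavar="FILE",
                    help="search FILE for AppArmor messages")
parser.print_help()
# Python <= 3.12:  -f FILE, --file FILE  search FILE for AppArmor messages
# Python >= 3.13:  -f, --file FILE       search FILE for AppArmor messages
```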

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-28 20:49:16 +01:00
Zygmunt Krynicki
32bf95bb1e tests: exclude debian systems from toybox test
This is so that we get a baseline that passes, to enable testing in CI/CD,
but also to spark a discussion around what to do with a profile that
indirectly relies on a kernel feature that is not available on a given
system.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-28 14:57:32 +01:00
Zygmunt Krynicki
b0422d5572 tests: mark more regression test as known-failures
A number of tests are failing, and since spread does not contain a native
XFAIL facility, we have to maintain silent-failure feature code
ourselves. A few of those have been fixed since the first iteration of
this patch. The remaining known failures are being fixed.

Later on I would like to separate XFAIL from SKIP, so that if a test is
known to exercise a kernel feature unavailable on the given system, the
test is just not executed.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-28 14:56:02 +01:00
Georgia Garcia
6405608442 Merge tests: add fuse-overlayfs to cloud-init
This is a dependency of the overlayfs_fuse regression test.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1509
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2025-01-28 12:10:03 +00:00
Zygmunt Krynicki
237b5c0f73 tests: add fuse-overlayfs to cloud-init
This is a dependency of the overlayfs_fuse regression test.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-28 11:28:44 +01:00
John Johansen
3b7ee81f04 Merge utils: test: account for last cmd format change in test-aa-notify
The "last" command, which was supplied by util-linux in older Ubuntu
versions, is now supplied by wtmpdb in Oracular and Plucky. Unfortunately,
this changed the output format and broke our column based parsing.

While the wtmpdb upstream has added json support at
https://github.com/thkukuk/wtmpdb/issues/20, we cannot use it because
we need to support systems that do not have this new feature added.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1508
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>
2025-01-27 19:55:48 +00:00
John Johansen
265a1656d1 Merge libapparmor: fixes to the SWIG bindings for SWIG 4.3 and later
Unfortunately we are affected by the backwards-incompatible change introduced by https://github.com/swig/swig/pull/2907

This MR contains fixes to keep the Python-side API the same on systems using SWIG 4.3 or later, e.g. Ubuntu Plucky.

Fixes https://gitlab.com/apparmor/apparmor/-/issues/475.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

Closes #475
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1504
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2025-01-27 19:51:20 +00:00
Georgia Garcia
5f06df3868 Merge utils: look for 'file' class when parsing logs
Since kernel commit 8c4b785a86be the class field is available in the
log, making it possible to check which class the log entry belongs to.
This fixes cases where the logparser is not able to distinguish
between network and file operations.

This issue does not manifest in apparmor-4.0 and earlier because we
did not process audit logs then.

Fixes: https://gitlab.com/apparmor/apparmor/-/issues/478
Reported-by: vyomydv <vyom.yadav@canonical.com>
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>

This patch should be cherry-picked to apparmor-4.1

Closes #478
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1507
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2025-01-27 19:34:25 +00:00
Georgia Garcia
af6dfe5b81 utils: look for 'file' class when parsing logs
Since kernel commit 8c4b785a86be the class field is available in the
log, making it possible to check which class the log entry belongs to.
This fixes cases where the logparser is not able to distinguish
between network and file operations.

This issue does not manifest in apparmor-4.0 and earlier because we
did not process audit logs then.
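
A hedged Python sketch of the dispatch this enables (field names follow the
audit records quoted elsewhere in this log; the real LogParser code in the
utils differs):

```
def classify(event):
    """Prefer the explicit class= field from newer kernels; otherwise fall
    back to guessing from the other fields, as older parsers had to."""
    cls = event.get("class")
    if cls is not None:
        return "file" if cls == "file" else "network" if cls == "net" else cls
    # Heuristic fallback for kernels without the class field.
    if "family" in event or "sock_type" in event:
        return "network"
    return "file"

print(classify({"class": "file", "requested_mask": "r", "name": "/etc/passwd"}))  # file
```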

Fixes: https://gitlab.com/apparmor/apparmor/-/issues/478
Reported-by: vyomydv <vyom.yadav@canonical.com>
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2025-01-27 15:43:23 -03:00
Ryan Lee
afd6aa0581 utils: test: account for last cmd format change in test-aa-notify
The "last" command, which was supplied by util-linux in older Ubuntu
versions, is now supplied by wtmpdb in Oracular and Plucky. Unfortunately,
this changed the output format and broke our column based parsing.

While the wtmpdb upstream has added json support at
https://github.com/thkukuk/wtmpdb/issues/20, we cannot use it because
we need to support systems that do not have this new feature added.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-27 10:15:45 -08:00
Octavio Galland
76647b33b1 typo in tar test 2025-01-27 10:10:29 -03:00
John Johansen
c81eacacac Merge aa-load documentation improvements
This MR includes copyediting of the `aa-load --help` text as well as a man page based on the help text.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1505
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2025-01-25 01:39:42 +00:00
Ryan Lee
ee8300545e Write a man page for aa-load based on the help text
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-24 16:03:20 -08:00
Ryan Lee
6592daff90 Copyedit the help text for aa-load
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-24 16:02:05 -08:00
Ryan Lee
3fa40935f5 Replace aa_find_mountpoint cstring_output_allocate due to $isvoid issue
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-24 15:00:15 -08:00
Ryan Lee
1620887463 Replace simple %append_output uses with ISVOID helpers for SWIG 4.3
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-24 14:59:39 -08:00
Ryan Lee
1b46ab10fd Create %append_output compatibility wrappers for SWIG 4.3
Unfortunately we are affected by the backwards-incompatible change introduced by https://github.com/swig/swig/pull/2907

These wrappers will be needed to fix tests on systems using SWIG 4.3 or later, e.g. Ubuntu Plucky.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-24 14:59:39 -08:00
Georgia Garcia
dfb7abf2a6 Merge Set up overlayfs_fuse test that uses a FUSE implementation of overlayfs
This also reorganizes the overlayfs tests slightly in order to maximize code reuse between the old test and the new one.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1503
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2025-01-24 20:28:57 +00:00
Ryan Lee
be38da7570 Move most file setup and creation to before the overlay mount call
kernel overlayfs propagates the changes, while fuse_overlayfs doesn't

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-24 09:00:28 -08:00
Ryan Lee
ed8b6cb663 Add fuse_overlayfs to apt dependency list of Gitlab CI test-build-regression
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-24 09:00:26 -08:00
Ryan Lee
9e05668d5a Set up an overlayfs_fuse regression test by using the other path of the overlayfs_common.inc helper
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-24 08:59:06 -08:00
Ryan Lee
a0f551d5b7 Wire up the kernel/fuse argument switch in overlayfs_common.inc regression tests
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-24 08:59:06 -08:00
Ryan Lee
9413658277 Move overlayfs test into include helper and wrap in overlayfs_kernel
By making the test a file to be included as a helper, we can reuse most of the code for a fuse_overlayfs test without copy-pasting

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-24 08:59:06 -08:00
Octavio Galland
8efe442717 tar test fixes 2025-01-24 09:30:49 -03:00
John Johansen
dcce4bc62f Merge Update man apparmor.d to highlight pivot_root limitation
As pointed out by https://bugs.launchpad.net/apparmor/+bug/2087875 ,
profile transitions with pivot_root are currently not supported on any
kernel.

This commit makes this limitation more obvious to users.

Signed-off-by: Maxime Bélair <maxime.belair@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1436
Approved-by: Ryan Lee <rlee287@yahoo.com>
Merged-by: John Johansen <john@jjmx.net>
2025-01-24 11:18:44 +00:00
Zygmunt Krynicki
4c8c4a1d77 Merge tests: unify CI/CD preparation phase
We now have a GitLab CI/CD pipeline co-existing with spread, coupled with
image-garden and the cloud-init profile defined for each distribution.

To avoid duplicating the list of required dependencies, re-use the
cloud-init profile as the reference list of dependencies (the superset
of build and test) to install.

In addition to the dependency list, the build_all job now re-uses the
spread prepare section in a similar fashion. If it builds in spread, it
should build in CI as well.

A small quality-of-life improvement: the collapsible section around
dependency installation should make reading job logs easier.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1494
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Zygmunt Krynicki <me@zygoon.pl>
2025-01-24 07:25:09 +00:00
Georgia Garcia
17a09d2987 Merge Allow overrides and preservation of some environment variables in utils make check
Our Ubuntu packaging builds Python-enabled libapparmor in the directories `libapparmor/libapparmor.python[version_identifier]`. In order for the utils' `make check` to pick up the correct libapparmor during the Ubuntu build process, we need the ability to override its search path. This patch introduces a `LIBAPPARMOR_BASEDIR` variable to allow for that.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1497
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2025-01-23 19:11:07 +00:00
Georgia Garcia
625a919bb8 Merge utils: test: various fixes for utils testing in Ubuntu packaging
The first patch fixes a `test-aa-notify.py` `TypeError` when `APPARMOR_NOTIFY` and `__AA_CONFDIR` are both specified, which is something that was broken all this time.

The second patch ensures that `aa-notify` in the test suite is run using the same Python interpreter that the test suite itself is run with, which is necessary for testing the utils under different Pythons.

The third patch does analogous modifications to the minitools tests that launch `aa-audit`, `aa-complain`, etc.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1498
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2025-01-23 19:07:00 +00:00
Ryan Lee
e32c267332 utils: test: use sys.executable when launching minitools in tests
This is analogous to the previous patch's change to the aa-notify tests.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-23 10:37:57 -08:00
Octavio Galland
7a7f88ddf3 tar spread smoke-test 2025-01-23 14:03:23 -03:00
Octavio Galland
e296d5b04c follow rule ordering convention 2025-01-23 13:50:27 -03:00
Octavio Galland
790c795f90 always execute binaries under current profile 2025-01-23 13:50:27 -03:00
Octavio Galland
ef0d5b4cde allow networking 2025-01-23 13:50:27 -03:00
Octavio Galland
667816fe43 explictly allow binaries from certain directories 2025-01-23 13:50:27 -03:00
Octavio Galland
e7807b3761 allow any program to be executed 2025-01-23 13:50:27 -03:00
Octavio Galland
29637f19c9 allow more binaries and capabilities 2025-01-23 13:50:27 -03:00
Octavio Galland
5271d6a74a Fix syntax error, use l to specify executable link 2025-01-23 13:50:27 -03:00
Octavio Galland
486c8c26fe Initial profile for tar binary 2025-01-23 13:50:27 -03:00
Georgia Garcia
c80ef6fb59 Merge tests: skip profile tests on Fedora
Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1501
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2025-01-23 13:54:06 +00:00
Georgia Garcia
e750c6c66c Merge tests: add tool for observing the profile of a given command
Using gdb in batch mode, put a breakpoint on _start and spawn the
process. Then, using the built-in Python interpreter, print the
confinement label of the process and terminate everything.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1500
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2025-01-23 13:52:24 +00:00
Zygmunt Krynicki
065c1d67ca tests: skip profile tests on Fedora
Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-23 14:31:42 +01:00
Zygmunt Krynicki
f98c1098b0 Merge tests: add httpd-devel and pam-devel to fedora cloud-init profile
Those are needed to build the two extension modules.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1499
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Zygmunt Krynicki <me@zygoon.pl>
2025-01-23 13:07:45 +00:00
Zygmunt Krynicki
ffd38b7ac4 tests: measure toybox with actual-profile-of
This should be a more readable example to follow in other tests.  The
toybox test was special given the fact that it is a shell itself, and is
fairly programmable.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-23 13:53:45 +01:00
Zygmunt Krynicki
23df780544 tests: add tool for observing the profile of a given command
Using gdb in batch mode, put a breakpoint on _start and spawn the
process. Then, using the built-in Python interpreter, print the
confinement label of the process and terminate everything.
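
A rough sketch of the approach with gdb's Python API (the invocation, script
name, and proc path are assumptions; the script shipped in the MR may differ):

```
# Run as: gdb --batch -x actual-profile-of.py --args /usr/bin/some-command
import gdb

gdb.execute("break _start")                  # stop as early as possible
gdb.execute("run")
pid = gdb.selected_inferior().pid
# Older kernels expose the label at /proc/<pid>/attr/current instead.
with open(f"/proc/{pid}/attr/apparmor/current") as f:
    print(f.read().strip())                  # e.g. "toybox (enforce)" or "unconfined"
gdb.execute("kill")
```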

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-23 13:53:45 +01:00
Zygmunt Krynicki
a2ace0d5d7 tests: add httpd-devel and pam-devel to fedora cloud-init profile
Those are needed to build the two extension modules.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-23 13:48:18 +01:00
Zygmunt Krynicki
29c618a11b tests: put logs from apt-get in a collapsed section
This is a small quality-of-life improvement when looking at CI/CD logs
on GitLab.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-23 12:37:10 +01:00
Zygmunt Krynicki
f01a40a77c tests: unify CI/CD preparation phase
We now have a GitLab CI/CD pipeline co-existing with spread, coupled with
image-garden and the cloud-init profile defined for each distribution.

To avoid duplicating the list of required dependencies, re-use the
cloud-init profile as the reference list of dependencies (the superset
of build and test) to install.

In addition to the dependency list, the build_all job now re-uses the
spread prepare section in a similar fashion. If it builds in spread, it
should build in CI as well.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-23 12:37:10 +01:00
Georgia Garcia
25676c4694 Merge tests: add integration test for toybox
This is something that was done interactively as a part of a training
session.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1487
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2025-01-22 20:39:15 +00:00
Ryan Lee
77cabf7dba utils: test: use sys.executable when launching aa-notify in tests
If the tests are running under a different Python, then the aa-notify bin should use the same Python.
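
A minimal sketch of the pattern (illustrative; not the exact test code):

```
import subprocess
import sys

# Launch aa-notify with the same interpreter that runs the test suite, so
# testing under a non-default Python exercises that Python's environment.
result = subprocess.run([sys.executable, "./aa-notify", "--help"],
                        capture_output=True, text=True)
print(result.returncode)
```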

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-22 12:04:35 -08:00
Ryan Lee
3365e492a7 utils: test: test-aa-notify: Ensure aanotify_bin is always a list
os.environ returns a string, but the default value is a list, and the concatenation of __AA_CONFDIR assumes a list.
Thus, if APPARMOR_NOTIFY and __AA_CONFDIR were both specified, this would error out.
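
A sketch of the normalization described above (variable and flag names follow
the commit description; treat the details as assumptions):

```
import os
import sys

# Default: run the in-tree script under the current interpreter (a list).
aanotify_bin = [sys.executable, "../aa-notify"]

# APPARMOR_NOTIFY overrides the binary, but os.environ yields a string,
# so wrap it in a list to keep the type consistent.
if "APPARMOR_NOTIFY" in os.environ:
    aanotify_bin = [os.environ["APPARMOR_NOTIFY"]]

# Appending the config-dir arguments then works in both cases.
if "__AA_CONFDIR" in os.environ:
    aanotify_bin = aanotify_bin + ["--configdir", os.environ["__AA_CONFDIR"]]
```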

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-22 11:52:38 -08:00
Ryan Lee
90143494fc Allow overrides and preservation of some environment variables in utils make check
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-22 11:10:41 -08:00
Zygmunt Krynicki
1462e1c4b0 Merge tests: enable build tests on Fedora 41
Tests that interact with the kernel are skipped (tests/regression and
tests/snapd) but everything else is green. Most of the tests are
actually passing. The only exception is the aa-notify test that was
broken by a Python 3.13 stdlib change. The fix for that has been posted
separately.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1496
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Zygmunt Krynicki <me@zygoon.pl>
2025-01-22 11:06:31 +00:00
Zygmunt Krynicki
03215f46c4 Merge tests: build PAM and apparmor modules in spread
Those fell under the radar during the initial push to expose all of
the tests to spread.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1493
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Zygmunt Krynicki <me@zygoon.pl>
2025-01-22 11:06:17 +00:00
Zygmunt Krynicki
ef880d325f Merge tests: switch tumbleweed to boot with security=apparmor
The openSUSE project has decided to switch to security=selinux by
default. For the purpose of continuing to test AppArmor on the
distribution, alter the cloud-init profile to switch to booting with
security=apparmor.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1492
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Zygmunt Krynicki <me@zygoon.pl>
2025-01-22 11:06:01 +00:00
Zygmunt Krynicki
7ce6819c53 tests: enable build tests on Fedora 41
Tests that interact with the kernel are skipped (tests/regression and
tests/snapd) but everything else is green. Most of the tests are
actually passing. The only exception is the aa-notify test that was
broken by a Python 3.13 stdlib change. The fix for that has been posted
separately.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-21 20:59:40 +01:00
Zygmunt Krynicki
be47567d27 tests: add integration test for toybox
Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-21 12:34:42 +01:00
Zygmunt Krynicki
2ab2c8f8a1 tests: add suite with profile tests
Hopefully more and more profiles will come with smoke tests. Since the
pattern of those tests is likely to be very similar (compile profile,
run some programs, remove profile) it will be good to check if the
profile had caused any denials to be logged. Having this at the suite
level should make writing actual tests easier.

The prepare-each and restore-each logic compile the profile, check for
errors and finally remove the profile. The debug-each logic shows the
program name (with full path).

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-21 12:34:42 +01:00
Zygmunt Krynicki
5c17df0219 profiles: attach toybox profile to /usr/bin/toybox
This is the actual path used on Debian and derivatives.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-21 11:16:24 +01:00
Zygmunt Krynicki
42c8745e73 tests: build PAM and apparmor modules in spread
Those fell under the radar during the initial push to expose all of
the tests to spread.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-21 01:54:24 +01:00
Zygmunt Krynicki
2b44cc09a6 tests: switch tumbleweed to boot with security=apparmor
The openSUSE project has decided to switch to security=selinux by
default. For the purpose of continuing to test AppArmor on the
distribution, alter the cloud-init profile to switch to booting with
security=apparmor.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-21 01:52:59 +01:00
Georgia Garcia
85d57b7f06 Merge tests: pair of cleanups for the coverity job
Avoid a deprecated feature and reduce YAML complexity.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1491
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2025-01-20 18:12:56 +00:00
Zygmunt Krynicki
5abbf31ce1 tests: inline .send-to-coverity command
There is no other use of this yaml fragment in the project so inline it
for simplicity.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-20 14:11:17 +01:00
Zygmunt Krynicki
61d75a11ef tests: rewrite coverity job to avoid deprecated "only" feature
The "only" feature has been deprecated for a while. The standard
replacement is the rules:if feature.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-20 14:09:45 +01:00
Christian Boltz
817d5eed1d Merge postfix-showq profile fix
Allow reading queue ID files from /var/spool/postfix/incoming/.

Similar to 3c2aae3.

Example error:

```
type=AVC msg=audit(1737094364.337:12023): apparmor="DENIED" operation="open" profile="postfix-showq" name="/var/spool/postfix/incoming/B7E4C12C784A" pid=17879 comm="showq" requested_mask="r" denied_mask="r" fsuid=91 ouid=91 FSUID="postfix" OUID="postfix"
```

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1489
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2025-01-18 13:08:58 +00:00
pyllyukko
ba765e0eab postfix-showq profile fix
Allow reading queue ID files from /var/spool/postfix/incoming/.

Similar to 3c2aae3.
2025-01-18 09:46:24 +02:00
Georgia Garcia
a12004f96c Merge regression tests: fix the overlayfs mv test failures
The file being moved from needs rw permissions and not just w permissions.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1488
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2025-01-17 13:09:38 +00:00
Ryan Lee
63c944a01a regression tests: fix the overlayfs mv test failures
The file being moved from needs rw permissions and not just w permissions

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-16 18:10:06 -08:00
John Johansen
f171f5ebc8 Merge tests: snapd/mount-control: assorted fixes
This makes the snapd/mount-control test pass on all the currently tested systems. Note that there's a somewhat complex problem with the new mount APIs (https://lwn.net/Articles/753473/) from 2018 that are now being used on, for example, Debian 13.

I will need to make similar changes to the profiles generated by snapd, so any insight on what to do there is strongly appreciated.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1479
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2025-01-16 19:35:35 +00:00
John Johansen
2e42c33f48 Merge parser: add backend pipeline ordering info to README
Add a basic overview of the ordering of the compiler backend and which
stages the specific dump info lines up with.

Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1470
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2025-01-16 19:32:26 +00:00
John Johansen
4fc3aacc8f Merge aa-notify: Use a quieter default behavior
In aa-notify, notifications are now merged by default to reduce the risk
of flooding.

Additionally, we now use an exponential backoff algorithm for the
merging time period. If there are several notifications within a time
period, it doubles, up to a maximum. The time period shrinks if there
are no notifications. The time period is reset if the user clicks on a
notification.
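
A simplified Python sketch of that backoff behavior (constants and structure
are illustrative, not the actual aa-notify code):

```
class MergeWindow:
    """Grow the merge period while notifications keep arriving, shrink it
    when things are quiet, and reset it when the user clicks a notification."""

    def __init__(self, base=5.0, maximum=300.0):
        self.base = base
        self.maximum = maximum
        self.period = base

    def end_of_period(self, merged_count):
        if merged_count > 1:                     # several notifications merged
            self.period = min(self.period * 2, self.maximum)
        elif merged_count == 0:                  # quiet period: shrink again
            self.period = max(self.period / 2, self.base)

    def on_user_click(self):
        self.period = self.base                  # user is paying attention
```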
    
Signed-off-by: Maxime Bélair <maxime.belair@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1468
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2025-01-16 19:31:19 +00:00
Maxime Bélair
7049d7b0c6 aa-notify: Use a quieter default behavior 2025-01-16 19:31:18 +00:00
Christian Boltz
692e6850ba Merge Add support for lastlog2 to get last login
lastlog2 is the 2038-safe replacement for wtmp, and has in the meantime
become part of util-linux.

Adjust get_last_login_timestamp() to use the lastlog2 database
(/var/lib/lastlog/lastlog2.db) if it exists, and adjust
get_last_login_timestamp_lastlog2() to actually do that.

(If lastlog2.db doesn't exist, aa-notify will read wtmp as usual.)

Unfortunately lastlog2 doesn't have a way to get machine-readable output
(for example json), therefore - after trying and failing to parse the
lastlog2 output - directly read from lastlog2.db. Let's hope the format
never changes ;-)

Fixes: https://bugzilla.opensuse.org/show_bug.cgi?id=1228378

Fixes: https://bugzilla.opensuse.org/show_bug.cgi?id=1216660

Fixes: https://gitlab.com/apparmor/apparmor/-/issues/372

I propose this patch for 4.0 and master.

Closes #372
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1282
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2025-01-14 18:56:21 +00:00
Christian Boltz
45e4c27cf0 Add support for lastlog2 to get last login
lastlog2 is the 2038-safe replacement for wtmp, and has in the meantime
become part of util-linux.

This commit switches from trying to parse the lastlog2 output to
directly reading lastlog2.db with sqlite3.

Adjust get_last_login_timestamp() to use the lastlog2 database
(/var/lib/lastlog/lastlog2.db) if it exists, and adjust
get_last_login_timestamp_lastlog2() to actually do that.

(If lastlog2.db doesn't exist, aa-notify will read wtmp as usual.)

Unfortunately lastlog2 doesn't have a way to get machine-readable output
(for example json), therefore - after trying and failing to parse the
lastlog2 output - directly read from lastlog2.db. Let's hope the format
never changes ;-)
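
A hedged sketch of the sqlite3 read (the table and column names are
assumptions about the lastlog2.db schema, not verified here):

```
import os
import sqlite3

LASTLOG2_DB = "/var/lib/lastlog/lastlog2.db"

def get_last_login_timestamp(username):
    if not os.path.exists(LASTLOG2_DB):
        return get_last_login_timestamp_wtmp(username)   # fall back to wtmp
    con = sqlite3.connect(LASTLOG2_DB)
    try:
        # NOTE: table/column names below are illustrative assumptions.
        row = con.execute("SELECT Time FROM Lastlog2 WHERE Name = ?",
                          (username,)).fetchone()
        return row[0] if row else 0
    finally:
        con.close()
```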

Fixes: https://bugzilla.opensuse.org/show_bug.cgi?id=1228378

Fixes: https://bugzilla.opensuse.org/show_bug.cgi?id=1216660

Fixes: https://gitlab.com/apparmor/apparmor/-/issues/372
2025-01-14 19:36:43 +01:00
Christian Boltz
371a9ff9ec Add support for lastlog2 to get last login
lastlog2 is the 2038-safe replacement for wtmp, and has in the meantime
become part of util-linux.

Adjust get_last_login_timestamp() to use lastlog2 if it exists, and add
get_last_login_timestamp_lastlog2() to actually do that.

(If lastlog2 doesn't exist, aa-notify will read wtmp as usual.)

Unfortunately lastlog2 doesn't have a way to get machine-readable output
(for example json), therefore we have to parse the output that is meant
for humans. Let's hope the format never changes ;-)

(The alternative would have been to use sqlite3 to once more read the
data behind the official program's back, but that was already a bad idea
for wtmp, therefore I decided against it.)

Fixes: https://bugzilla.opensuse.org/show_bug.cgi?id=1228378

Fixes: https://bugzilla.opensuse.org/show_bug.cgi?id=1216660

Fixes: https://gitlab.com/apparmor/apparmor/-/issues/372
2025-01-14 19:36:42 +01:00
Christian Boltz
7d537efcb0 Rename get_last_login_timestamp to get_last_login_timestamp_wtmp
... and add a wrapper function with the old name

Also rename the tests to the new name, and create a copy with the
original name. The copy will be adjusted to also check/expect lastlog2
results in a later commit.
2025-01-14 19:36:40 +01:00
Christian Boltz
9629bc8b6f Merge Support unloading profiles in kill and prompt mode
... in aa-teardown (actually everything that uses rc.apparmor.functions)
and aa-remove-unknown.

Fixes: https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/2093797

I propose this fix for 3.0..master, since the apparmor.d manpage in all these branches mentions the `kill` flag.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1484
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Approved-by: Ryan Lee <rlee287@yahoo.com>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2025-01-14 18:24:40 +00:00
Christian Boltz
1c2d79de7f Support unloading profiles in kill and prompt mode
... in aa-teardown (actually everything that uses rc.apparmor.functions)
and aa-remove-unknown.

Fixes: https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/2093797
2025-01-13 18:07:39 +01:00
Zygmunt Krynicki
43355fada5 Merge tests: add dosfstools to image-garden cloud-init
The package is required by the file_unbindable_mount regression test.
To properly re-generate affected images please update image-garden
to version containing 9714dc45d0ef06862ffe7037193dc43386db48ea
(Tie .user-data and .meta-data to MAKEFILE_LIST).

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1480
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Zygmunt Krynicki <me@zygoon.pl>
2025-01-12 21:02:39 +00:00
John Johansen
c57d727482 parser: add backend pipeline ordering info to README
Add a basic overview of the ordering of the compiler backend and which
stages the specific dump info lines up with.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-10 23:35:57 -08:00
Christian Boltz
b4cb33b488 Merge tests: regression: separate bash traces from errors
The BASH_XTRACEFD variable can be used to redirect "set -x" traces
to a dedicated file. We can use it to split the execution trace
(what has actually happened) from the failure messages.

On a failing test this does provide improved clarity when debugging
interactively with "spread -debug". On non-interactive runs the now
shorter error list is also implicitly printed.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1481
Approved-by: Christian Boltz <apparmor@cboltz.de>
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2025-01-10 20:48:16 +00:00
Christian Boltz
7fa4b82235 Merge tests: run autotools test verbosely
Instead of showing just the summary, display the actual test log as well.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1482
Approved-by: Christian Boltz <apparmor@cboltz.de>
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2025-01-10 20:47:54 +00:00
Zygmunt Krynicki
fa33d7199b tests: run autotools test verbosely
Instead of showing just the summary, display the actual test log as well.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-10 14:04:55 +01:00
Zygmunt Krynicki
2c2e0478f8 tests: regression: separate bash traces from errors
The BASH_XTRACEFD variable can be used to redirect "set -x" traces
to a dedicated file. We can use it to split the execution trace
(what has actually happened) from the failure messages.

On a failing test this does provide improved clarity when debugging
interactively with "spread -debug". On non-interactive runs the now
shorter error list is also implicitly printed.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-10 12:40:17 +01:00
Zygmunt Krynicki
699b598593 tests: sort cloud-init package lists
Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-10 12:38:30 +01:00
Zygmunt Krynicki
215fab71a5 tests: add dosfstools to image-garden cloud-init
The package is required by the file_unbindable_mount regression test.
To properly re-generate affected images please update image-garden
to version containing 9714dc45d0ef06862ffe7037193dc43386db48ea
(Tie .user-data and .meta-data to MAKEFILE_LIST).

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-10 12:37:49 +01:00
Zygmunt Krynicki
cff25b8d17 tests: snapd/mount-control: allow paths used on openSUSE
In addition allow linking to libeconf, generalize locale paths to cover
values other than C.UTF-8 and allow reading system-wide locale.alias and
gconv modules.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-10 11:09:36 +01:00
Zygmunt Krynicki
8ed810756b tests: snapd/mount-control: stop/start auditd
This is needed on openSUSE Tumbleweed.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-10 11:09:19 +01:00
Zygmunt Krynicki
5556de53c0 tests: snapd/mount-control: allow new mount APIs
This is not the best of fixes but it seems that on Debian 13, with new
libmount calling fsopen/fsconfig/move_mount, the current apparmor mount
rule is insufficient to allow the call to go through.

The key problems are:
- the fstype is not visible to LSM
- the source directory is an empty string
- the mount is moved to final position

I don't know the extent of "new" mount API coverage by LSM hooks, but
I think we should either synthesize new permissions from old rules,
e.g. match each of the system calls against the mount class
expression, or somehow allow the exceptions better.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-10 11:08:26 +01:00
Zygmunt Krynicki
32116a50b0 tests: snapd/mount-control: fix bash syntax.
This masked failures that were already occurring.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-10 11:08:07 +01:00
John Johansen
72f9952a5f Merge parser: add a hfa dump that matches the renumbered chfa
Construction of the chfa can reorder states from the numbering given
during hfa construction, because of reordering for better compression,
dead state removal to ensure better packing, etc.

This however means the dfa dump is difficult to match up to the chfa
that the kernel is using (it is possible using multiple dumps). Make
this easier by allowing the dfa dump to take the remapping as input,
and provide an option to dump the chfa-equivalent hfa.

Renumbered states will show up as {new <== {orig}} in the dump

Eg.
```
--D dfa-states
{1} <== priority (allow/deny/prompt/audit/quiet)
{5} 0 (0x 4/0//0/0/0)

{1} perms: none
    0x2 -> {5}  0 (0x 4/0//0/0/0)
    0x4 -> {5}  0 (0x 4/0//0/0/0)
    \a 0x7 -> {5}  0 (0x 4/0//0/0/0)
    \t 0x9 -> {5}  0 (0x 4/0//0/0/0)
    \n 0xa -> {5}  0 (0x 4/0//0/0/0)
    \  0x20 -> {5}  0 (0x 4/0//0/0/0)
    4 0x34 -> {3}
{3} perms: none
    0x0 -> {6}
{6} perms: none
    1 0x31 -> {5}  0 (0x 4/0//0/0/0)
```

```
-D dfa-compressed-states
{1} <== priority (allow/deny/prompt/audit/quiet)
{2 == {5}} 0 (0x 4/0//0/0/0)

{1} perms: none
    0x2 -> {2 == {5}}  0 (0x 4/0//0/0/0)
    0x4 -> {2 == {5}}  0 (0x 4/0//0/0/0)
    \a 0x7 -> {2 == {5}}  0 (0x 4/0//0/0/0)
    \t 0x9 -> {2 == {5}}  0 (0x 4/0//0/0/0)
    \n 0xa -> {2 == {5}}  0 (0x 4/0//0/0/0)
    \  0x20 -> {2 == {5}}  0 (0x 4/0//0/0/0)
    4 0x34 -> {3}
{3} perms: none
    0x0 -> {4 == {6}}
{4 == {6}} perms: none
    1 0x31 -> {2 == {5}}  0 (0x 4/0//0/0/0)
```

Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1474
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>
2025-01-09 19:04:13 +00:00
John Johansen
cd8b75abc0 Merge parser: convert uint to unsigned int
As reported in https://gitlab.com/apparmor/apparmor/-/merge_requests/1475
uint requires the inclusion of sys/types.h for use in musl libc.
Including that would be fine but since it is only used for the
cast for the owner type comparison, just convert to use a more
standard type.

Reported-by: @fossd <fossdd@pwned.life>
Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1478
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2025-01-09 10:40:27 +00:00
John Johansen
ff03702fde parser: convert uint to unsigned int
As reported in https://gitlab.com/apparmor/apparmor/-/merge_requests/1475
uint requires the inclusion of sys/types.h for use in musl libc.
Including that would be fine but since it is only used for the
cast for the owner type comparison, just convert to use a more
standard type.

Reported-by: @fossd <fossdd@pwned.life>
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 02:15:00 -08:00
John Johansen
e5a960a685 Merge cupsd: Add /etc/paperspecs and convert to @etc_ro/rw
I had this message in my log

```
Dez 30 08:14:46 kernel: audit: type=1400 audit(1735542886.787:307): apparmor="DENIED" operation="open" class="file" profile="/usr/sbin/cupsd" name="/etc/paperspecs" pid=317509 comm="cupsd" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
```

If the second commit is bad, I can drop it.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1472
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2025-01-09 09:48:48 +00:00
John Johansen
0eca26c6c2 Merge Allow write access to /run/user/*/dconf/user
Gtk applications like Firefox request write access to the file
`/run/user/1000/dconf/user`. The code in `dconf_shm_open` opens the file
with `O_RDWR | O_CREAT`.

4057f8c84f/shm/dconf-shm.c (L68)

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1471
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2025-01-09 09:46:55 +00:00
John Johansen
50452e1147 parser: add a hfa dump that matches the renumbered chfa
Construction of the chfa can reorder states from the numbering given
during hfa construction, because of reordering for better compression,
dead state removal to ensure better packing, etc.

This however means the dfa dump is difficult to match up to the chfa
that the kernel is using (it is possible using multiple dumps). Make
this easier by allowing the dfa dump to take the remapping as input,
and provide an option to dump the chfa-equivalent hfa.

Renumbered states will show up as {new <== {orig}} in the dump

Eg.
--D dfa-states
{1} <== priority (allow/deny/prompt/audit/quiet)
{5} 0 (0x 4/0//0/0/0)

{1} perms: none
    0x2 -> {5}  0 (0x 4/0//0/0/0)
    0x4 -> {5}  0 (0x 4/0//0/0/0)
    \a 0x7 -> {5}  0 (0x 4/0//0/0/0)
    \t 0x9 -> {5}  0 (0x 4/0//0/0/0)
    \n 0xa -> {5}  0 (0x 4/0//0/0/0)
    \  0x20 -> {5}  0 (0x 4/0//0/0/0)
    4 0x34 -> {3}
{3} perms: none
    0x0 -> {6}
{6} perms: none
    1 0x31 -> {5}  0 (0x 4/0//0/0/0)

-D dfa-compressed-states
{1} <== priority (allow/deny/prompt/audit/quiet)
{2 == {5}} 0 (0x 4/0//0/0/0)

{1} perms: none
    0x2 -> {2 == {5}}  0 (0x 4/0//0/0/0)
    0x4 -> {2 == {5}}  0 (0x 4/0//0/0/0)
    \a 0x7 -> {2 == {5}}  0 (0x 4/0//0/0/0)
    \t 0x9 -> {2 == {5}}  0 (0x 4/0//0/0/0)
    \n 0xa -> {2 == {5}}  0 (0x 4/0//0/0/0)
    \  0x20 -> {2 == {5}}  0 (0x 4/0//0/0/0)
    4 0x34 -> {3}
{3} perms: none
    0x0 -> {4 == {6}}
{4 == {6}} perms: none
    1 0x31 -> {2 == {5}}  0 (0x 4/0//0/0/0)

Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-03 14:18:50 -08:00
John Johansen
3034c0772e Merge parser: improve unreachable state removal
Currently states are added to the reachable set when they are popped
from the workqueue. This however can result in states being
added to the work queue multiple times and reprocessed.

```
Eg. If state 2 has the transitions, and 9 is not in the reachable set
  a -> 9
  b -> 9
  c -> 9
  d -> 9
  e -> 3
```

then 9 will get pushed onto the workqueue 4 times. Even worse, other states
on the workqueue may also add state 9 to the workqueue because it has
not been added to the reachable set.

Instead, add states to the reachable set when they are added to the
workqueue. The first encounter with a state will result in it being
reachable, and all other encounters will see that it is already in the
set and not add it to the workqueue.

Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1473
Acked-by: seth.arnold@gmail.com
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2025-01-02 09:36:48 +00:00
John Johansen
4fb3dbc7b3 parser: improve unreachable state removal
Currently states are added to the reachable set when they are popped
from the workqueue. This however can result in states being
added to the work queue multiple times and reprocessed.

Eg. If state 2 has the transitions, and 9 is not in the reachable set
  a -> 9
  b -> 9
  c -> 9
  d -> 9
  e -> 3

then 9 will get pushed onto the workqueue 4 times. Even worse, other states
on the workqueue may also add state 9 to the workqueue because it has
not been added to the reachable set.

Instead, add states to the reachable set when they are added to the
workqueue. The first encounter with a state will result in it being
reachable, and all other encounters will see that it is already in the
set and not add it to the workqueue.
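
A small Python sketch of the fixed approach (illustrative; the parser's
implementation is C++):

```
from collections import deque

def reachable_states(start, transitions):
    """transitions maps a state to its successor states. A state is marked
    reachable when it is enqueued, so it is pushed onto the workqueue at
    most once."""
    reachable = {start}
    work = deque([start])
    while work:
        state = work.popleft()
        for nxt in transitions.get(state, ()):
            if nxt not in reachable:     # checked before enqueue, not after pop
                reachable.add(nxt)
                work.append(nxt)
    return reachable
```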

Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-01 17:16:17 -08:00
Jörg Sommer
318fb30446 Allow write access to /run/user/*/dconf/user
Gtk applications like Firefox request write access to the file
`/run/user/1000/dconf/user`. The code in `dconf_shm_open` opens the file
with `O_RDWR | O_CREAT`.

4057f8c84f/shm/dconf-shm.c (L68)
2024-12-31 10:23:50 +01:00
Jörg Sommer
c3af6228fd cupsd: convert profile to @etc_ro/rw
While cups itself writes to /etc the others require only read-only access
and might therefore live in /usr/etc.
2024-12-31 10:12:16 +01:00
Jörg Sommer
97d7fa3f5f cupsd: Add /etc/paperspecs read access
Cups uses libpaper which accesses /etc/paperspecs.

ce42216e2e/lib/libpaper.c.in.in (L419)
2024-12-31 10:12:16 +01:00
John Johansen
40e9b2a961 Merge parser: fix priority for file rules.
Fix priority for file rules, add the ability to dump the dfa at different stages, and update and fix the equality tests.

This in particular adds the ability to better debug the equality tests. Instead of just piping the parser output into the hash, it creates a tmp dir and drops the binary files there so they can be manually examined. It adds new options, particularly the -r option, making the tests exit on first failure so it is easier to isolate and examine a failure.

Eg.
```
./equality.sh -r -d -v
Equality Tests:
................................................................................................................................................................................................................................
Binary inequality 'priority=-1'x'priority=-1' change_hat rules automatically inserted
FAIL: Hash values match
parser: ./../apparmor_parser -QKSq --features-file=./features_files/features.all
known-good (ee4f926922ecd341f1389a79dd155879) == profile-under-test (ee4f926922ecd341f1389a79dd155879) for the following profiles:
known-good         /t { priority=-1 owner /proc/[0-9]*/attr/{apparmor/,}current a, ^test { priority=-1 owner /proc/[0-9]*/attr/{apparmor/,}current a, /f r, }}
profile-under-test /t { priority=-1 owner /proc/[0-9]*/attr/{apparmor/,}current w, ^test { priority=-1 owner /proc/[0-9]*/attr/{apparmor/,}current w, /f r, }}

  files retained in "/tmp/eq.3240859-deHu10/"
```

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1455
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-12-30 09:26:45 +00:00
John Johansen
8c799f4eec Merge Allow python cache under the @{HOME}/.cache/ dir
Starting with Python 3.8, you can use the PYTHONPYCACHEPREFIX environment
variable to define a cache directory for Python [1]. I think most people would set
this dir to @{HOME}/.cache/python/ , so the python abstraction should allow
writing to this location.

[1]: https://docs.python.org/3/using/cmdline.html#envvar-PYTHONPYCACHEPREFIX
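
For illustration, the knob in question as seen from Python (values are an
example only):

```
import sys

# Equivalent to exporting PYTHONPYCACHEPREFIX before starting the interpreter:
#   PYTHONPYCACHEPREFIX="$HOME/.cache/python" python3 myscript.py
# Compiled bytecode then lands under that tree instead of per-package
# __pycache__ directories, which is why the abstraction needs write access.
print(sys.pycache_prefix)   # e.g. "/home/user/.cache/python" when set, else None
```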

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1467
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-12-24 22:33:28 +00:00
John Johansen
027b508da8 parser: equality tests: convert to using sha256sum for the hashes
There is a general industry-wide effort to move off of md5 and even
sha1 (see recent kernel changes). While in this particular use case it
doesn't make a difference (besides slightly lowering the chance of a
collision), switch to sha256sum to make sure our code doesn't depend on
tools that are deprecated and slated for removal.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-12-23 23:36:55 -08:00
John Johansen
bf7b80c478 parser: equality tests: fix r carve out tests
Similar to the deny x permission tests, the tests that test carving
out r permissions need to be updated to be conditional on what
priority is being used on the rule.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-12-23 23:36:55 -08:00
John Johansen
25f16b239d parser: equality tests: update deny x perm carve out test
With priority rules, deny does not carve out permissions from the
higher priority rule. Technically it doesn't from lower priority either
as it completely overrides them, but that case already results in
an inequality so does not cause the tests to fail.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-12-23 23:36:55 -08:00
John Johansen
369029dc07 parser: equality tests: fix cx specified profile transition
cx rules using a specified profile transition may be emulated by
using px and a hierarchical profile name. That is

  cx -> b

may be transformed into

  px -> profile//b

which will generate an xtable entry of

  profile//b

which means the previous patch using

  pivot_root -> b,

to reliably add b to the xtable will not cover this case.

transition to using two pivot_root rules to provide the xtable entries
  pivot_root /a -> b,
  pivot_root /c -> /t//b,

the paths /a and /c are irrelevant as long as they don't have an
overlap with the generic globbing expression in the test. Two table
entries will be generated. We guarantee no overlap by converting the

  /** to /f**

Also, the xtable-reserving rules are moved to the end of the profile so
the table order can be reliably created. A follow-on MR around xtable
improvements should add reliability to xtable order.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-12-23 23:36:55 -08:00
John Johansen
84650beb2f parser: equality tests: fix equality failure due to xtable
exec rules that specify a specific target profile generate an entry
in the xtable. The test entries containing " -> b" are an example of
this.

Currently the parser allocates the xtable entry before priorities are
applied in the backend, or minimization is done. Furthermore, the
parser does not refcount the xtable entry to know when it is no
longer referenced.

The equality tests generate rules that are designed to completely
override a lower priority rule and remove it. Eg.

  /t { priority=1 /* ux, /f px -> b, }

and then compare the generated profile to the functionally equivalent
profile, eg.

  /t { priority=1 /* ux, }

To verify the overridden rule has been completely removed.
Unfortunately the compilation is not removing the unused xtable entry
for the specified transition, causing the equality comparison to fail.

Ideally the parser should be fixed so unused xtable entries are removed,
but that should be done in a different MR, and have its own test.

To fix the current tests, add another rule that adds an xtable entry
to the same target that cannot be overridden by the x rule, using
pivot_root. The parser will dedup the xtable entry, resulting in the
known and test profiles both having the same xtable. So the test will
pass and meet the original goal of verifying that the x rule is
overridden and eliminated.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-12-23 23:36:28 -08:00
John Johansen
cca842b897 parser: equality tests: rework and add debug features
Failed equality tests can be hard to debug. The profiles aren't always
enough to figure out what is going on. Add several options that will
help in debugging, and developing new tests.

Add switches and arg parsing.

Add the ability to run tests individually

Add a -r flag to allow retaining the test and output
similar to the regression tests, so the exact output from the
tests can be examined.

Add a -d flag to dump dfa build information.

Allow overriding the parser, features, and description for a given
test run.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-12-22 15:24:46 -08:00
John Johansen
05ddc61246 parser: equality tests: wrap test run in function
In preparation for some additional abilities wrap the current tests in
a function.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-12-22 15:24:39 -08:00
John Johansen
31e60baab2 parser: equality tests: consistently dump error output to stderr
printf of failure/error info should go to stderr. Unfortunately
the test has a mix of 2>&1 and 1>&2. Having a mix is just wrong; we
could standardize on either, but since the info is error info, 1>&2
seems to be the better choice.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-12-22 15:02:04 -08:00
John Johansen
57c57f198c parser: equality tests: fix failing overlapping x rule tests
The test was passing because of the bug where file priority was always
zero, resulting in the priority rule always being correctly combined
with the specific match x rule instead of overriding it.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-12-22 15:02:04 -08:00
John Johansen
4b410b67f1 parser: equality tests: fix change_hat priority test
The test was passing because of the bug where file priority was always
zero: the supplied rule always had the same priority as the implied
rule, resulting in binary_equality always passing even though the
specified priority should have resulted in a failure.

Fix this by checking whether the priority is equal to that of the
implied rule; otherwise it should result in an inequality.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-12-22 15:02:04 -08:00
John Johansen
d275dfdd42 parser: equality tests: output parser, config and features info
When there is a failure, output the exact call info used to invoke the
parser, to facilitate manually recreating the test.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-12-22 15:02:04 -08:00
John Johansen
fcee32a37e parser: equality tests: convert xequality tests to equality
With the file priority fix the xequality (expected equal but known
failure) tests are now passing. So convert them to regular equality
tests.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-12-22 15:02:04 -08:00
John Johansen
5d2a38e816 parser: add some new dfa dump options.
The dfa goes through several stages during the build. Allow dumping it
at the various stages instead of only at the end.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-12-22 15:02:04 -08:00
John Johansen
9d5b86bc9d parser: fix priority for file rules.
File rules could drop priority info when a rule matched another rule
that was the same except for having a different priority. For now,
fix this by treating them as different rules.

The priority was also dropped when add_prefix was used to
add the priority during parsing, resulting in file rules always
getting a default priority of 0.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-12-22 15:02:04 -08:00
John Johansen
8e431ebcd9 Merge regression tests: make loop device size more generous
Depending on the system, copying echo to the loop device fails because the echo binary is too large.
Especially on systems where echo is just a symlink into a larger multi-call binary (e.g. coreutils or busybox), as opposed to echo being its own binary, 16k is just not enough.
2M seems fine on my system, but this might need an even higher value depending on which coreutils other people actually run.

The crash in question:
```
cp: error writing '/tmp/sdtest.3937422-31490-Bxvi6g/mount_target/echo': No space left on device
Fatal Error (file_unbindable_mount): Unexpected shell error. Run with -x to debug
rm: cannot remove '/tmp/sdtest.3937422-31490-Bxvi6g/mount_target': Device or resource busy
```

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1469
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-12-20 08:53:41 +00:00
John Johansen
cd4bb05f20 Merge Add overlayfs regression tests
These tests exercise various common file operations on files in an overlayfs.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1461
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: John Johansen <john@jjmx.net>
2024-12-20 08:25:58 +00:00
Grimmauld
1cc2a3bd86 regression tests: make loop device size more generous
Depending on the system, copying echo to the loop device fails because the echo binary is too large.
Especially on systems where echo is just a symlink into a larger multi-call binary (e.g. coreutils or busybox), 16k is just not enough.
2M seems fine on my system, but this might need an even higher value depending on which coreutils other people actually run.
The actual loop device needs to be larger to properly fit the allocated file size. Testing shows 4M is sufficient, but this is basically arbitrary.
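
As an illustrative sketch (not the test's actual code; paths, sizes and commands are assumptions matching the numbers above), the setup roughly amounts to:
```
# Back the loop device with a 4M image so a ~2M copy of echo still fits
dd if=/dev/zero of="$tmpdir/image.ext2" bs=1M count=4
loopdev=$(losetup --find --show "$tmpdir/image.ext2")
mkfs.ext2 -F "$loopdev"
mount "$loopdev" "$tmpdir/mount_target"
cp /bin/echo "$tmpdir/mount_target/echo"   # previously hit "No space left on device" at 16k
```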
2024-12-19 23:48:43 +01:00
John Johansen
ba60bfff85 Merge Write a regression test for mediating file access in private mounts
This test, as is, emits an execname warning which is due to a bug in the `prologue.inc` infrastructure (see !1450 for a fix to this issue).

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1448
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-12-19 19:44:51 +00:00
John Johansen
c489631770 Merge aa-status: fix json generation
- previously, aa-status --json --show profiles would return non-standard json
- adding the --pretty flag would crash completely
- closes #470

Things done:
- removed trailing ", " in json generation
- generate json separator (", ") for each new json field
  (profiles/processes) after the header if json is enabled

Tested on NixOS and apparmor 4.0.3 base, but should work on any version the patch applies on.

Closes #470
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1451
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-12-19 19:44:13 +00:00
John Johansen
59957aa1d8 Merge fixes on the testing infrastructure
This MR is meant to resolve warnings such as "Warning: execname '/home/username/Documents/apparmor/tests/regression/apparmor/file_unbindable_mount': no such file or directory" when running tests like the one in the current version of !1448.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1450
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-12-19 19:38:00 +00:00
John Johansen
50f260df51 Merge profiles: transmission-gtk needs attach_disconnected
From LP: #2085377, when using ip netns to torrent traffic through a
VPN, attach_disconnected is needed by the policy because ip netns sets
up a mount namespace.

Fixes: https://bugs.launchpad.net/bugs/2085377
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1395
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-12-19 19:15:41 +00:00
John Johansen
3ed5adb665 Merge Allow make-* flags with remount operations
While the mount syscall documentation disallows this, the kernel silently
ignores make-* flags when doing a remount, and real applications were
passing this conflicting set of flags. Because changing the kernel to
reject this combination would break userspace, we should allow them
instead.

For an example: see https://bugs.launchpad.net/apparmor/+bug/2091424.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1466
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-12-19 17:27:53 +00:00
Mikhail Morfikov
03b5a29b05 Allow python cache under the @{HOME}/.cache/ dir
Starting with Python 3.8, you can use the PYTHONPYCACHEPREFIX environment
variable to define a cache directory for Python [1]. I think most people would set
this dir to @{HOME}/.cache/python/, so the python abstraction should allow
writing to this location.

[1]: https://docs.python.org/3/using/cmdline.html#envvar-PYTHONPYCACHEPREFIX
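
For reference, a minimal shell sketch of the setup this change accommodates (the cache path mirrors the example above; the module name is a placeholder):
```
# Redirect Python's bytecode cache away from per-directory __pycache__ dirs (Python 3.8+)
export PYTHONPYCACHEPREFIX="$HOME/.cache/python"
python3 -m py_compile some_module.py   # the .pyc now lands under ~/.cache/python/
```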
2024-12-19 09:33:13 +01:00
Ryan Lee
83270fcf68 Add a regression test for allowing rprivate with conflicting options
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-18 10:28:49 -08:00
Georgia Garcia
67ee5f8b39 Merge Add separator between mount flags in dump_flags
The previous code would concatenate all of them together without spacing.
While dump_flags and the corresponding operator<< function aren't currently used,
this will help for when dump_flags is used to debug parser problems.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1465
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-12-18 15:26:29 +00:00
Georgia Garcia
a3299ba133 Merge Use "profile//hat" in storage
TL;DR: Replace `aa[profile][hat]` with `active_profiles['profile//hat']` as a preparation to get rid of `aa`'s limits, especially to enable handling nested children.

Since this is an extremely shortened summary, I recommend checking the individual commits for a readable and understandable diff and more details.

Note that this MR is "just" a preparation - nested children are not supported yet. Also, `include` still uses the old structure. Both will be separate MRs - this one is already big enough ;-)

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1360
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-12-18 12:07:01 +00:00
Christian Boltz
58664c106c read_profile: rename active_profile parameter
... to is_active_profile to prevent confusion with active_profiles
2024-12-17 22:51:12 +01:00
Christian Boltz
0551be806e ask_the_questions: use full_profile
... instead of combine_profname([profile, hat])
2024-12-17 22:51:12 +01:00
Christian Boltz
728a8717e9 aa.py: get rid of 'aa'
... which is no longer used - everything is in active_profiles now :-)
2024-12-17 22:51:12 +01:00
Christian Boltz
f66ada256a Drop now-unused split_to_merged() and its tests 2024-12-17 22:51:12 +01:00
Christian Boltz
695e472b2c Switch aa-mergeprof from aa to active_profiles 2024-12-17 22:51:12 +01:00
Christian Boltz
531f47676d ProfileList: add get_all_profiles()
... and a test for it
2024-12-17 22:51:12 +01:00
Christian Boltz
c93d560f89 Switch test-libapparmor-test_multi.py from aa to active_profiles
Note that the old code assigned dummy_prof to aa[profile][hat] and
active_profiles[profile] (= the main/parent profile) - which is
different when testing a log for a child profile.

aa[profile][hat] was the wrong place - but since we used exactly that
again when checking for added exec rules, this error was hidden.

Now that the test is switched to using active_profiles, only check the
main profile for exec rules added by ask_exec(). (This will need to be
adjusted when we add a test for exec rules/events in nested children, but
not earlier ;-)
2024-12-17 22:51:12 +01:00
Christian Boltz
61a7ba2822 Switch usage of 'aa' to active_profiles
This is mostly a search-and-replace patch.

In most cases, that means replacing `aa[profile][hat]` with
`active_profiles[full_profile]`.

In cases where the main/parent profile is meant, switch from
`aa[profile][profile]` to `active_profiles[profile]`.

Checks like `p in apparmor.aa` that check if a (main) profile exists
become `active_profiles.profile_exists(p)`.

write_profile() gets changed to loop over
`active_profiles.get_profile_and_childs()` which makes the code simpler.

`split_to_merged(aa)` becomes just `active_profiles`.

The only change that is not search-and-replace style is in
write_piece(). It expects a dict (not a ProfileList), therefore adjust
serialize_profile() so that it always hands over a dict.
2024-12-17 22:51:12 +01:00
Christian Boltz
c5bbe79338 replace original_aa with original_profiles
This also changes the internal structure - instead of the nested dict
original_aa[profile][hat], we now have a ProfileList original_profiles[profile//hat].
2024-12-17 22:51:12 +01:00
Christian Boltz
c5e495c56d ProfileList: add replace_profile()
... and some tests for it.
2024-12-17 22:51:12 +01:00
Christian Boltz
a37c65957f ProfileList: add __getitem__()
... and add some tests for it.
2024-12-17 22:51:12 +01:00
Christian Boltz
b66dfd8bfb Use active_profiles.profile_exists()
... to test if a given profile or hat exists
2024-12-17 22:51:12 +01:00
Christian Boltz
0da12fe7cb Use extra_profiles.profile_exists()
... instead of accessing the internal storage directly.
2024-12-17 22:51:12 +01:00
Christian Boltz
3a02d6d14c Use temporary object instead of working in aa[profile][hat]
... in ask_addhat() and ask_the_questions().

Also deduplicate some code in ask_the_questions().
2024-12-17 22:51:12 +01:00
Christian Boltz
f5ed9cffe3 serialize_profile(): simplify and cleanup
Drop `comment.replace('\\n', '\n')` because that doesn't make sense and
doesn't change anything - not even a comment that contains the literal
string '\n' (backslash + letter n).

Besides that, get rid of the 'string' variable and store everything in
'data'.
2024-12-17 22:51:11 +01:00
Christian Boltz
2e8a75195c De-duplicate code in read_profile() 2024-12-17 22:51:11 +01:00
Christian Boltz
578ab8da9d Store child profiles and hats in active_profiles
... including just-created child profiles and hats.

Also ensure that serialize_profile() doesn't print them out as child
profiles AND external hats.

This commit includes a bugfix for a rare corner case:
Since create_new_profile() can return more than one profile if the
program has required_hats, add all of them to active_profiles.
(aa only got the expected profile added, but not the required_hats.)
2024-12-17 22:51:11 +01:00
Christian Boltz
fe9b2542ca ProfileList: add profile_exists()
... and extend the existing tests for add_profile to also check
profile_exists().
2024-12-17 22:51:11 +01:00
Christian Boltz
a0e6fbe32a ProfileStorage: store parent profile
... and extend the tests to get some coverage.
2024-12-17 22:51:11 +01:00
Christian Boltz
792d1a5568 ProfileList addProfile(): always hand over ProfileStorage
... and make it non-optional

Note that read_profile() in aa.py skips child profiles and hats,
therefore active_profiles for now only contains the main profiles.
2024-12-17 22:51:11 +01:00
Ryan Lee
52babe8054 Allow make-* flags with remount operations
While the mount syscall documentation disallows this, the kernel silently
ignores make-* flags when doing a remount, and real applications were
passing this conflicting set of flags. Because changing the kernel to
reject this combination would break userspace, we should allow them
instead.

For an example: see https://bugs.launchpad.net/apparmor/+bug/2091424.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-17 11:59:54 -08:00
Ryan Lee
96718ea4d1 Add separator between mount flags in dump_flags
The previous code would concatenate all of them together without spacing.
While dump_flags and the corresponding operator<< function aren't currently used,
this will help for when dump_flags is used to debug parser problems.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-17 11:50:35 -08:00
Georgia Garcia
f9edc7d4c1 profiles: transmission-gtk needs attach_disconnected
From LP: #2085377, when using ip netns to torrent traffic through a
VPN, attach_disconnected is needed by the policy because ip netns sets
up a mount namespace.

Fixes: https://bugs.launchpad.net/bugs/2085377
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-12-17 09:32:18 -03:00
Georgia Garcia
b2f713dd83 Merge Python SWIG binding fixes (API breaking)
Changes to Python SWIG bindings that are breaking changes but that fix bindings that were previously unusable.

This MR also depends on !1334 and !1337 being merged first, though ~~I can rebase this one if necessary~~ this MR has now been rebased after those two were merged.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1338
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-12-17 12:24:15 +00:00
Christian Boltz
f2c398405b Merge Update fs type comment in swap regression test
As per https://gitlab.com/apparmor/apparmor/-/merge_requests/1463#note_2259888640: this really should have been a part of !1463, except that cboltz only pointed this out after the MR was already merged. Better late than never, nevertheless.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1464
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2024-12-16 21:14:00 +00:00
Ryan Lee
5cd3362a81 Update fs type comment in swap regression test
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-16 12:51:30 -08:00
Ryan Lee
1d3d48cc2a Shellcheck pass over overlayfs.sh
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-16 09:52:38 -08:00
Ryan Lee
b24a820e7a Extend overlayfs test with more file ops
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-16 09:52:38 -08:00
Ryan Lee
8212fa8be4 Add more operations to the regression test complain binary
This extra functionality is to be used in a different regression test that reuses the binary

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-16 09:52:38 -08:00
Ryan Lee
e0127767fd Add the overlayfs regression test to task.yaml
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-16 09:52:38 -08:00
Ryan Lee
1cb11f5a89 Add the overlayfs regression test to the Makefile
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-16 09:52:38 -08:00
Ryan Lee
2fdb5c799c Add a basic overlayfs regression test
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-16 09:52:38 -08:00
Ryan Lee
fa58d3611a Shellcheck fix pass over file_unbindable_mount test
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-13 12:37:50 -08:00
Georgia Garcia
6d7b5df947 Merge Fix swap regression test on btrfs
As per !1462 it turns out that the swap regression test on btrfs also needs special casing in order to work properly. This is an analogous patch to check for btrfs.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1463
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-12-13 20:32:01 +00:00
Ryan Lee
c768a7dc79 Add file_unbindable_mount to regression task.yaml
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-13 12:27:47 -08:00
Ryan Lee
049b35dff0 Add file_unbindable_mount to regression test Makefile
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-13 12:27:47 -08:00
Ryan Lee
f249c6d58f Write a regression test for mediating file access in unbindable mounts
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-13 12:27:47 -08:00
Ryan Lee
90c7af69c5 Fix swap regression test on btrfs
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-13 12:13:55 -08:00
Georgia Garcia
e8f1ac4791 Merge fix swap test on zfs file system
Swap on ZFS is *weird*. Getting it working needs some special casing, see e.g. https://askubuntu.com/questions/1198903/can-not-use-swap-file-on-zfs-files-with-holes

Currently, the swap regression test fails on my system (with /tmp in zfs):
```bash
tests/regression/apparmor ❯ ./swap.sh
Error: swap failed. Test 'SWAPON (unconfined)' was expected to 'pass'. Reason for failure 'FAIL: swapon /tmp/sdtest.872368-19048-kN4FN2/swapfile failed - Invalid argument'
Error: swap failed. Test 'SWAPOFF (unconfined)' was expected to 'pass'. Reason for failure 'FAIL: swapoff /tmp/sdtest.872368-19048-kN4FN2/swapfile failed - Invalid argument'
swapon: /tmp/sdtest.872368-19048-kN4FN2/swapfile: skipping - it appears to have holes.
Fatal Error (swap): Unexpected shell error. Run with -x to debug
```

However, just doing a file mount does make the test work on zfs, similar to how it is done with tmpfs. This means we don't need any special-casing for zfs beyond what is already there for working around (similar) tmpfs limitations.

Also, while researching this, it is possible a similar patch is needed for btrfs, but I currently don't have an easy way to test that.
This is non-breaking for anyone *not* using zfs, and it is currently broken with zfs anyway.
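
A rough sketch of the file-mount workaround described above (commands, paths and sizes are illustrative assumptions, not the test's actual code):
```
# Keep the swapfile on a loop-mounted ext2 image so the underlying
# filesystem (zfs/tmpfs) cannot create it with holes
dd if=/dev/zero of="$tmp/swap.img" bs=1M count=64
mkfs.ext2 -F "$tmp/swap.img"
mount -o loop "$tmp/swap.img" "$tmp/swapmnt"
dd if=/dev/zero of="$tmp/swapmnt/swapfile" bs=1M count=32
chmod 600 "$tmp/swapmnt/swapfile"
mkswap "$tmp/swapmnt/swapfile"
swapon "$tmp/swapmnt/swapfile"
```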

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1462
Approved-by: Ryan Lee <rlee287@yahoo.com>
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-12-13 20:06:56 +00:00
Grimmauld
9a1b538298 fix swap test on zfs file system 2024-12-13 15:35:47 +01:00
Christian Boltz
b3de4ef022 Merge limit buildpath.py setuptools version check to the relevant bits
Previously, this check would fail if the setuptools version contained non-integers.
On my system, that is the case: `setuptools.__version__` is `'75.1.0.post0'`.
I believe it is entirely fair to just check the relevant bits and refuse to continue if those cannot be checked properly.
Having some extra slug on the version should not immediately cause issues (e.g. the `post0` here, or slugs like `beta`, `alpha` and the like).
Probably only very few systems are running setuptools with weird version info, but supporting this is a simple one-line change, so I figured I might as well submit an MR.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1460
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2024-12-11 21:25:00 +00:00
Grimmauld
3302ae98e4 limit buildpath.py setuptools version check to the relevant bits
Previously, this check would fail if the setuptools version contained non-integers.
On my system, that is the case: `setuptools.__version__` is `'75.1.0.post0'`.
I believe it is entirely fair to just check the relevant bits and refuse to continue if those cannot be checked properly.
But having something extra on the version should not immediately cause issues (e.g. the `post0` here, or slugs like `beta`, `alpha` and the like).
Probably only very few systems are running setuptools with weird version info, but supporting this doesn't cost much, I believe.
2024-12-11 16:30:19 +01:00
Georgia Garcia
8a6eb170e1 Merge postfix-smtp profile fix
Allow locking for /var/spool/postfix/pid/unix.relay.

Example log entry: `type=AVC msg=audit(1733851239.685:8882): apparmor="DENIED" operation="file_lock" profile="postfix-smtp" name="/var/spool/postfix/pid/unix.relay" pid=14222 comm="smtp" requested_mask="k" denied_mask="k" fsuid=91 ouid=0FSUID="postfix" OUID="root"`

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1459
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-12-10 20:55:43 +00:00
pyllyukko
76dcf46d4f postfix-smtp profile fix
Allow locking for /var/spool/postfix/pid/unix.relay.
2024-12-10 19:32:49 +02:00
John Johansen
a315d89a2b Merge smbd: allow capability chown
This is needed for "inherit owner = yes" in smb.conf.

From man smb.conf:

    inherit owner (S)

    The ownership of new files and directories is normally governed by
    effective uid of the connected user. This option allows the Samba
    administrator to specify that the ownership for new files and
    directories should be controlled by the ownership of the parent
    directory.

Fixes: https://bugzilla.suse.com/show_bug.cgi?id=1234327

I propose this fix for 3.x, 4.x and master.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1456
Approved-by: Ryan Lee <rlee287@yahoo.com>
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-12-10 09:34:03 +00:00
John Johansen
60f1b55ab5 Merge Use MS_SYNCHRONOUS instead of MS_SYNC
MS_SYNC is a flag for msync(2) while MS_SYNCHRONOUS is a flag for mount(2).
The header used to define MS_SYNC but IMO this is confusing since that's an
unrelated flag.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1458
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-12-10 08:39:18 +00:00
Zygmunt Krynicki
d164e877f5 Use MS_SYNCHRONOUS instead of MS_SYNC
MS_SYNC is a flag for msync(2) while MS_SYNCHRONOUS is a flag for mount(2).
The header used to define MS_SYNC but IMO this is confusing since that's an
unrelated flag.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2024-12-10 09:09:45 +01:00
John Johansen
239ae21b69 Merge Allow spread to use locally-provided kernel
By placing a bzImage into the top level of the AppArmor git repository one can
instruct spread and image-garden to use that image instead of booting
traditionally with an EFI / full disk image pair.

In addition, make error handling in qemu more robust, so failures are both
surfaced and do not cause endless attempts to allocate.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1452
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-12-10 00:47:48 +00:00
Zygmunt Krynicki
7031b5aeee Allow spread to use locally-provided kernel
By placing a bzImage into the top level of the AppArmor git repository one can
instruct spread and image-garden to use that image instead of booting
traditionally with an EFI / full disk image pair.

In addition, make error handling in qemu more robust, so failures are both
surfaced and do not cause endless attempts to allocate.

Please update image-garden to at least 5a00ead9964df6463e19432ae50e7760fc6da755

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2024-12-10 00:50:02 +01:00
Grimmauld
9967ba9873 aa-status: fix json output with --count flag 2024-12-09 23:58:04 +01:00
Christian Boltz
d305028502 smbd: allow capability chown
This is needed for "inherit owner = yes" in smb.conf.

From man smb.conf:

    inherit owner (S)

    The ownership of new files and directories is normally governed by
    effective uid of the connected user. This option allows the Samba
    administrator to specify that the ownership for new files and
    directories should be controlled by the ownership of the parent
    directory.

Fixes: https://bugzilla.suse.com/show_bug.cgi?id=1234327
2024-12-09 20:45:42 +01:00
Georgia Garcia
11d121409d Merge tests: add regression tests for snapd mount-control
The test adds a very small and simple smoke test that shows that a mount rule
with both fstype and options allows mounts to be performed on a real running
kernel.

The test is structured in a way that should make it easy to extend with new
variants (flags, fstype) in the future.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1445
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-12-09 19:25:14 +00:00
Christian Boltz
dfe771602d Merge postfix-showq profile fix
Allow reading queue ID files from /var/spool/postfix/hold/.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1454
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2024-12-09 18:56:30 +00:00
pyllyukko
3c2aae3a22 postfix-showq profile fix
Allow reading queue ID files from /var/spool/postfix/hold/.
2024-12-09 19:23:34 +02:00
Georgia Garcia
b4adff2ce0 tests: fix profile name when wrapper is specified
When settest was called with two parameters, one for the test name and
the other for the test wrapper/binary, the profile created with
genprofile would show the test name, causing an error if the file
didn't exist.

Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-12-09 12:16:13 -03:00
Georgia Garcia
0307619ed9 tests: add option to append a profile to a profile already generated
Some of the tests using the --stdin option of mkprofile.pl are adding
more than one profile at a time. Whenever a profile is created in the
test, its name is added to the file profile.names so the test
infrastructure can tell if the profile is loaded or removed when
appropriately. The issue is that the name of the second profile
created by --stdin is not added, so these checks are not applied.

This patch adds the option of appending a second profile (not rules).
The option --append was used instead of a short -A because the short
options are arguments of mkprofile.pl, which --append is not.

Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-12-09 12:16:13 -03:00
Zygmunt Krynicki
1f60021979 tests: add regression tests for snapd mount-control
The test adds a very small and simple smoke test that shows that a mount rule
with both fstype and options allows mounts to be performed on a real running
kernel.

The test is structured in a way that should make it easy to extend with new
variants (flags, fstype) in the future.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2024-12-09 14:55:03 +01:00
Grimmauld
4f006a660c aa-status: fix json generation
- previously, aa-status --json --show profiles would return non-standard json
- adding the --pretty flag would crash completely
- closes #470

Things done:
- removed trailing ", " in json generation
- generate json separator (", ") for each new json field
  (profiles/processes) after the header if json is enabled

Tested on NixOS and apparmor 4.0.3 base, but should work on any version the patch applies on.
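
One way to sanity-check the fix (the json.tool pipe is just a convenient strict parser, not part of the change):
```
# Both invocations should now emit JSON that a strict parser accepts
aa-status --json --show profiles | python3 -m json.tool > /dev/null
aa-status --json --pretty --show profiles > /dev/null
```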
2024-12-09 10:57:58 +01:00
Georgia Garcia
9cc40e2dca tests: remove outdated restriction on image name specification
Due to how the tests were implemented in the past, permissions could
be passed along with the image name, and the permission part would be
discarded. The issue is that permissions are usually separated by ':',
but namespaces also contain ':', which would cause a conflict.

Since permissions are no longer passed as part of the image name,
remove that description so profile names in namespaces can be
supported.

Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-12-06 14:25:34 -03:00
Christian Boltz
5fb91616e3 Merge python 3.13 fixes/workarounds
Fixes/workarounds for python 3.13 support.

fail.py: handle missing cgitb - workaround for https://gitlab.com/apparmor/apparmor/-/issues/447

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1439
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2024-12-05 17:35:13 +00:00
Georgia Garcia
d9304c7653 Merge Allow running tests with spread
Spread is a full-system, or integration test suite runner initially developed
to test snapd. Over time it has spread to other projects where it provides a
structured way to organize, run and debug complex full-system interactions.
Spread is documented on https://github.com/canonical/spread and is used in
production since late 2016.

Spread has a notion of backends which are responsible for allocating and
discarding test machines. For the purpose of running AppArmor regression tests,
I've combined spread with my own tool, image garden. The tool provides
off-the-shelf images, constructed on-the-fly from freely available images, and
makes them easily available to spread.

The reason for doing it this way is so that using non-free cloud systems is not
required and anyone can repeat the test process locally, on their own computer.
Vanilla spread is somewhat limited to x86-64 systems but the way I've used it
here makes it equally possible to test x86_64 *and* aarch64 systems. I've done
most of the development on an ARM single-board-computer running on my desk.

Spread requires a top-level spread.yaml file and a collection of task.yaml
files that describe individual tasks (for us, those are just tests). Tasks have
no implied dependency except that to reach a given task, spread will run all
the _prepare_ statements leading to that task, starting from the project, test
suite and then task. With proper care one can then run a specific individual
test with a one-line command, for example:

```
spread -v garden:ubuntu-cloud-24.04:tests/regression/apparmor:at_secure
```

This will prepare a fresh ubuntu-cloud-24.04 system (matching the CPU
architecture of the host), copy the project tree into the test machine, install
all the build dependencies, build all the parts of apparmor and then run one
specific variant of the regression test, namely the at_secure program.
Importantly the same test can also run on, say debian-cloud-13 (Debian Trixie),
but also, if you have a Google cloud account, on Google Compute Engine or in
one of the other backends either built into spread or available as a fork of
spread or as a helper for ad-hoc backend. Spread can also create more than one
worker per system and distribute the tests to all of the available instances.
In no way are we locking ourselves out of the ability to run our test suite on
our target of choice.

Spread has other useful switches, such as:
- `-reuse` for keeping machines around until discarded with -discard
- `-resend` for re-sending updated copy of the project (useful for -reuse)
- `-debug` for starting an interactive shell on any failure
- `-shell` for starting an interactive shell instead of the `execute` phase
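
For example, the switches can be combined while iterating on a single test (reusing the target from the command above):
```
# Keep the machine around, re-send the updated tree, and drop into a shell on failure
spread -v -reuse -resend -debug garden:ubuntu-cloud-24.04:tests/regression/apparmor:at_secure
```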

This first patch contains just the spread elements, assuming that both spread
and image-garden are externally installed. A GitLab continuous integration
installing everything required and running a subset of tests will follow
shortly.

I've expanded the initial selection of systems to allow running all the tests
on several versions of Ubuntu, Debian and openSUSE, mainly as a sanity check
but also to showcase how practical spread is at covering real-world systems.

A number of tests are currently failing:

    - garden:debian-cloud-12:tests/regression/apparmor:attach_disconnected
    - garden:debian-cloud-12:tests/regression/apparmor:deleted
    - garden:debian-cloud-12:tests/regression/apparmor:unix_fd_server
    - garden:debian-cloud-12:tests/regression/apparmor:unix_socket_pathname
    - garden:debian-cloud-13:tests/regression/apparmor:attach_disconnected
    - garden:debian-cloud-13:tests/regression/apparmor:deleted
    - garden:debian-cloud-13:tests/regression/apparmor:unix_fd_server
    - garden:debian-cloud-13:tests/regression/apparmor:unix_socket_pathname
    - garden:opensuse-cloud-15.6:tests/regression/apparmor:attach_disconnected
    - garden:opensuse-cloud-15.6:tests/regression/apparmor:deleted
    - garden:opensuse-cloud-15.6:tests/regression/apparmor:e2e
    - garden:opensuse-cloud-15.6:tests/regression/apparmor:unix_fd_server
    - garden:opensuse-cloud-15.6:tests/regression/apparmor:unix_socket_pathname
    - garden:opensuse-cloud-15.6:tests/regression/apparmor:xattrs_profile

In addition, only on openSUSE, I've skipped the entire test suite of the utils
directory, as it requires python3 ttk themes, which I cannot find in packaged
form.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1432
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-12-05 14:02:31 +00:00
Zygmunt Krynicki
d27377a62f Document spread tests in README.md
Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2024-12-05 14:48:58 +01:00
Mikko Rapeli
434e34bb51 fail.py: handle missing cgitb
It's no longer in python standard library starting
at version 3.13. Fixes:

root@qemuarm64:~# aa-complain /etc/apparmor.d/*
Traceback (most recent call last):
  File "/usr/sbin/aa-complain", line 18, in <module>
    from apparmor.fail import enable_aa_exception_handler
  File "/usr/lib/python3.13/site-packages/apparmor/fail.py", line 12, in <module>
    import cgitb
ModuleNotFoundError: No module named 'cgitb'

Signed-off-by: Mikko Rapeli <mikko.rapeli@linaro.org>
2024-12-05 06:58:01 +00:00
John Johansen
e1d8bf1888 Merge Add explicit test for parser priority-based carveouts
Tests #466 but is marked as expected fail due to that bug not being resolved.

Depends on !1441 which adds the xfail infrastructure to the parser equality testing framework, and should be rebased on top of master once that MR is merged.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1443
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-12-05 06:21:59 +00:00
Zygmunt Krynicki
1df91e2c8c Third iteration of spread support
- Tests defined in utils/test are now described by a task.yaml in the same
  directory and can run concurrently across many machines.
- Tests for utils/ are now executed on openSUSE Tumbleweed since ttk themes is
  no longer a hard dependency in master.
- Tests no longer run on openSUSE Leap 15.6 due to the age of the default
  Python (3.6) and gcc/g++, and the tight integration with SWIG, which does
  not seem to support other Python versions very well. Perl hard-codes the
  old GCC for extension modules. The upcoming openSUSE Leap 16 should be
  a viable target. In the meantime we can still test everything through
  rolling-release Tumbleweed.
- Formatting of YAML files is now more uniform, at four spaces per tab.
- The run-spread.sh script is now in the root of the tree. The script allows
  running all spread tests sequentially on one system, while collecting logs
  and artifacts for convenient analysis after the fact.
- All systems are adjusted to run _four_ workers in parallel with _two_ virtual
  cores each and equipped with 1.5GB of virtual memory. This aims to best
  utilize the capacity of a typical CI worker with two to four cores and about
  8GB of available memory.
- Failing tests are marked as such, so that as a whole the entire spread suite
  can pass and be useful at catching regressions.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2024-12-05 02:17:07 +01:00
Zygmunt Krynicki
c95ac4d350 Second iteration of spread support
Compared to v1 the following improvements have been made:

- The cost of installing packages has been shifted from each startup to the image
  preparation phase, thanks to the integration of custom cloud-init profiles
  into image-garden. This has a dramatic impact on iteration time while also
  entirely removing the requirement to be online once a prepared image is
  available.

- Support for running on Google Compute Engine has been removed since it would
  not be able to use cloud-init the same way and would currently only complicate
  setup.

- The number of workers have been tuned for local iteration, aiming for
  comfortable work with 16GB of memory on the host. Once CI/CD pipeline
  support is introduced I will add a dedicated entry so that resources are
  utilized well both locally and when running in CI.

- The set of regression tests listed in tests/regression/apparmor/task.yaml is
  now cross-checked so introduction of a new test to the makefile there is
  automatically flagged and causes spread to fail with a clear message.

- The task tests/unit/utils has been improved to generate profiles. Thanks to
  Christian Boltz for explaining this relationship between tests.

- A number of comments have been improved and cleaned up for readability,
  accuracy and sometimes better grammar.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2024-12-05 02:17:07 +01:00
Zygmunt Krynicki
cc04181578 Allow running tests with spread
Spread is a full-system, or integration test suite runner initially developed
to test snapd. Over time it has spread to other projects where it provides a
structured way to organize, run and debug complex full-system interactions.
Spread is documented on https://github.com/canonical/spread and is used in
production since late 2016.

Spread has a notion of backends which are responsible for allocating and
discarding test machines. For the purpose of running AppArmor regression tests,
I've combined spread with my own tool, image garden. The tool provides
off-the-shelf images, constructed on-the-fly from freely available images, and
makes them easily available to spread.

The reason for doing it this way is so that using non-free cloud systems is not
required and anyone can repeat the test process locally, on their own computer.
Vanilla spread is somewhat limited to x86-64 systems but the way I've used it
here makes it equally possible to test x86_64 *and* aarch64 systems. I've done
most of the development on an ARM single-board-computer running on my desk.

Spread requires a top-level spread.yaml file and a collection of task.yaml
files that describe individual tasks (for us, those are just tests). Tasks have
no implied dependency except that to reach a given task, spread will run all
the _prepare_ statements leading to that task, starting from the project, test
suite and then task. With proper care one can then run a specific individual
test with a one-line command, for example:

```
spread -v garden:ubuntu-cloud-24.04:tests/regression/apparmor:at_secure
```

This will prepare a fresh ubuntu-cloud-24.04 system (matching the CPU
architecture of the host), copy the project tree into the test machine, install
all the build dependencies, build all the parts of apparmor and then run one
specific variant of the regression test, namely the at_secure program.
Importantly the same test can also run on, say debian-cloud-13 (Debian Trixie),
but also, if you have a Google cloud account, on Google Compute Engine or in
one of the other backends either built into spread or available as a fork of
spread or as a helper for ad-hoc backend. Spread can also create more than one
worker per system and distribute the tests to all of the available instances.
In no way are we locking ourselves out of the ability to run our test suite on
our target of choice.

Spread has other useful switches, such as:
- `-reuse` for keeping machines around until discarded with -discard
- `-resend` for re-sending updated copy of the project (useful for -reuse)
- `-debug` for starting an interactive shell on any failure
- `-shell` for starting an interactive shell instead of the `execute` phase

This first patch contains just the spread elements, assuming that both spread
and image-garden are externally installed. A GitLab continuous integration
installing everything required and running a subset of tests will follow
shortly.

I've expanded the initial selection of systems to allow running all the tests
on several versions of Ubuntu, Debian and openSUSE, mainly as a sanity check
but also to showcase how practical spread is at covering real-world systems.

A number of systems and tests are currently failing:

- garden:debian-cloud-12:tests/regression/apparmor:attach_disconnected
- garden:debian-cloud-12:tests/regression/apparmor:deleted
- garden:debian-cloud-12:tests/regression/apparmor:unix_fd_server
- garden:debian-cloud-12:tests/regression/apparmor:unix_socket_pathname
- garden:debian-cloud-13:tests/regression/apparmor:attach_disconnected
- garden:debian-cloud-13:tests/regression/apparmor:deleted
- garden:debian-cloud-13:tests/regression/apparmor:unix_fd_server
- garden:debian-cloud-13:tests/regression/apparmor:unix_socket_pathname
- garden:opensuse-cloud-15.6:tests/regression/apparmor:attach_disconnected
- garden:opensuse-cloud-15.6:tests/regression/apparmor:deleted
- garden:opensuse-cloud-15.6:tests/regression/apparmor:e2e
- garden:opensuse-cloud-15.6:tests/regression/apparmor:unix_fd_server
- garden:opensuse-cloud-15.6:tests/regression/apparmor:unix_socket_pathname
- garden:opensuse-cloud-15.6:tests/regression/apparmor:xattrs_profile
- garden:opensuse-cloud-tumbleweed:tests/regression/apparmor:attach_disconnected
- garden:opensuse-cloud-tumbleweed:tests/regression/apparmor:deleted
- garden:opensuse-cloud-tumbleweed:tests/regression/apparmor:unix_fd_server
- garden:opensuse-cloud-tumbleweed:tests/regression/apparmor:unix_socket_pathname
- garden:ubuntu-cloud-22.04:tests/regression/apparmor:attach_disconnected

In addition, only on openSUSE, I've skipped the entire test suite of the utils
directory, as it requires python3 ttk themes, which I cannot find in packaged
form.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2024-12-05 02:17:07 +01:00
Zygmunt Krynicki
9588b06e0f Allow running exactly one test in utils/test
The new check-one-test-% pattern rule allows running individual test scripts.
This allows them to be tested in parallel across many Make worker threads or
across many distinct machines with spread.
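
Usage presumably looks something like this (the exact script name and target stem are an illustrative guess):
```
# Run a single utils test script via the new pattern rule
make -C utils/test check-one-test-test-capability.py
```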

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2024-12-05 02:17:07 +01:00
Georgia Garcia
57bcb02629 Merge Revert "Merge Fix regression test bug involving binaries different from name"
This reverts merge request !1446 due to breakage in the aa-exec and userns regression tests.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1447
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-12-04 21:33:26 +00:00
Ryan Lee
5cfdf9867f Revert "Merge Fix regression test bug involving binaries different from name"
This reverts merge request !1446
2024-12-04 21:05:55 +00:00
Georgia Garcia
1c4322a095 Merge Fix regression test bug involving binaries different from name
When the test name and test binary differed and genprofile was used, there would be an execname warning about the original expected binary not existing. This fixes that warning.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1446
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-12-04 20:36:26 +00:00
Ryan Lee
b7669222dc Fix regression test bug involving binaries different from name
When the test name and test binary differed and genprofile was used, there would be an execname warning about the original expected binary not existing. This fixes that warning.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-04 12:19:21 -08:00
Ryan Lee
b925d8acff parser equality tests: print both profiles upon test failure
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-04 11:37:36 -08:00
Ryan Lee
7b5f4c0d6f Add explicit test for parser priority-based carveouts
These are marked as expected fail due to a bug in the parser's priority
handling.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-04 11:29:40 -08:00
John Johansen
53e322b755 Merge parser: equality tests: add the ability to have tests that are a known problem
Currently the equality tests require the tests to PASS as known equality
or inequality. Add the ability to add tests that are a known problem
and are expected to fail the equality or inequality test.

This is done by using

   verify_binary_xequality
   verify_binary_xinequality

This allows new tests to be added to document a known issue, without
having to develop the fix for the issue. The use of this facility
is expected to be temporary, so any test marked as xequality or
xinequality will be noisy but not fail the other tests until they
are fixed, at which point they will cause the tests to fail to
force them to be updated to the correct equality or inequality
test.

Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1441
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-12-04 02:55:57 +00:00
John Johansen
662a26d133 Merge profiles: update bwrap profile
Update the bwrap profile so that it will attach to application profiles
if present.

Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1435
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>
2024-12-03 21:45:49 +00:00
John Johansen
b81ea65c1c parser: equality tests: add the ability to have tests that are a known problem
Currently the equality tests require the tests to PASS as known equality
or inequality. Add the ability to add tests that are a known problem
and are expected to fail the equality or inequality test.

This is done by using

   verify_binary_xequality
   verify_binary_xinequality

This allows new tests to be added to document a known issue, without
having to develop the fix for the issue. The use of this facility
is expected to be temporary, so any test marked as xequality or
xinequality will be noisy but not fail the other tests until they
are fixed, at which point they will cause the tests to fail to
force them to be updated to the correct equality or inequality
test.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-12-03 13:44:53 -08:00
Georgia Garcia
ea7e75cde7 Merge Drop useless code in test-regex_matches.py
No need to assign a variable to itself, not even conditionally.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1442
Approved-by: Ryan Lee <rlee287@yahoo.com>
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-12-03 16:35:17 +00:00
Georgia Garcia
36ae21e3fa Merge Remove match statements in utils for older Python compatibility
Somehow the use of new match statements slipped by review despite our commitment to supporting older Python versions. Replace them with an unfortunately-needed if-elif chain.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1440
Approved-by: Christian Boltz <apparmor@cboltz.de>
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-12-03 12:20:30 +00:00
Christian Boltz
ce0cfccfd7 Drop useless code in test-regex_matches.py
No need to assign a variable to itself, not even conditionally.
2024-12-02 21:33:34 +01:00
Ryan Lee
2068ea8720 Remove match statements in utils for older Python compatibility
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-02 10:47:16 -08:00
Christian Boltz
93c7035148 Merge aa-remove-unknown: fix readability check [upstreaming]
I am upstreaming this patch, which has been part of the nix package of apparmor for close to a year now.
This fixes the issue at https://github.com/NixOS/nixpkgs/issues/273164 for more distros than just NixOS.
The original merge Request on the nix side patching this was https://github.com/NixOS/nixpkgs/pull/285915.
However, people had issues with gitlab, so this never hit apparmor upstream until now. It does, however, also mean that this patch has seen production use and seems to work quite well.

## Original reasoning/message of the patch author:

This check is intended for ensuring that the profiles file can actually
be opened.  The *actual* check is performed by the shell, not the read
utility, which won't even be executed if the input redirection (and
hence the test) fails.

If the test succeeds, though, using `read` here might actually
jeopardize the test result if there are no profiles loaded and the file
is empty.

This commit fixes that case by simply using `true` instead of `read`.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1438
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2024-12-01 16:11:13 +00:00
Andreas Wiese
b4aa00de51 aa-remove-unknown: fix readability check
This check is intended for ensuring that the profiles file can actually
be opened.  The *actual* check is performed by the shell, not the read
utility, which won't even be executed if the input redirection (and
hence the test) fails.

If the test succeeds, though, using `read` here might actually
jeopardize the test result if there are no profiles loaded and the file
is empty.

This commit fixes that case by simply using `true` instead of `read`.
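
A minimal sketch of the resulting pattern (variable name and error handling are illustrative, not the script's exact code):
```
# The shell itself performs the check when it opens the file for the
# redirection; `true` never consumes input, so an empty but readable
# profiles file no longer looks like a failure.
if ! true < "$profiles_file"; then
    echo "cannot read $profiles_file" >&2
    exit 1
fi
```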
2024-11-29 12:20:48 +01:00
Maxime Bélair
cf51f7aadd Update man apparmor.d to highlight pivot_root limitation
As pointed out by https://bugs.launchpad.net/apparmor/+bug/2087875 ,
profile transitions with pivot_root are currently not supported on any
kernel.

This commit makes this limitation more obvious to users.

Signed-off-by: Maxime Bélair <maxime.belair@canonical.com>
2024-11-27 17:29:42 +01:00
John Johansen
420945139c Merge Note in README which build/test steps can be meaningfully parallelized
This MR documents the lessons learned from the experiments that ultimately resulted in !1416.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1434
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-27 08:23:21 +00:00
John Johansen
74a67394ac Merge Make signal.cc:signal_map an unordered map
This also includes renaming SIGTSTP "stp" to "tstp" while preserving backwards compatibility.

Analogous to !1420.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1425
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>
2024-11-27 08:20:13 +00:00
John Johansen
c5b17d85ea Merge regression tests: Add test to check for DAC permissions to the testsuite
The regression test suite uses root with capabilities restricted in
several tests. This can cause the test suite to fail in weird and
confusing ways.

Add a test to check for DAC permissions from / to the testsuite
and abort running the tests with an error message if DAC permissions
are going to cause the test suite to fail.

Currently the test is pretty basic, but is better than nothing.

Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1411
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>
2024-11-27 08:14:11 +00:00
Georgia Garcia
9d2cde168e Merge aa-notify: Adding support for merging notifications.
The new flag --merge-notifications enables the merging of all
notifications from a fixed time period into a single one, thus
preventing notification flooding.
A new GUI allows users to choose either a synthetic or a comprehensive
view of the notifications.

Signed-off-by: Maxime Bélair <maxime.belair@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1324
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-11-26 18:35:37 +00:00
Maxime Bélair
f63fcdc8d2 aa-notify: Adding support for merging notifications. 2024-11-26 18:35:37 +00:00
John Johansen
1979af7710 profiles: update bwrap profile
Update the bwrap profile so that it will attach to application profiles
if present.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-11-26 09:52:17 -08:00
Ryan Lee
4286d3a79f Apply 1 suggestion(s) to 1 file(s)
Co-authored-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-11-26 16:49:57 +00:00
Ryan Lee
967685352c Note in README which build/test steps can be meaningfully parallelized
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-25 17:42:41 -08:00
Christian Boltz
6f5cdb7b44 Merge Assorted fixes for test suite portability
I've been working on improved end-to-end testing of AppArmor on a number
of popular Linux distributions. My first run contains Debian, Ubuntu and openSUSE.

This branch contains three small fixes that, mainly, allow running more tests on
openSUSE Tumbleweed.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1431
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2024-11-25 21:58:08 +00:00
Zygmunt Krynicki
4caf0aff81 On openSUSE 15.6 make fails to find awk
Using this version of make:
```
GNU Make 4.2.1
Built for x86_64-suse-linux-gnu
```
I'm not entirely sure why but the alternative syntax I've used works correctly.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2024-11-25 15:05:23 +01:00
Zygmunt Krynicki
32ee85cef8 Use larger loop device in mult_mount.sh test
This fixes the test to pass on openSUSE Tumbleweed, where the small size
prevented allocation of an inode for the `lost+found` directory:

```
garden:opensuse-cloud-tumbleweed .../tests/regression/apparmor# mkfs.ext2 -F -m 0 -N 10 /tmp/sdtest.32929-21402-6x826m/image.ext3
mke2fs 1.47.0 (5-Feb-2023)
Discarding device blocks: done
Creating filesystem with 512 1k blocks and 8 inodes

Allocating group tables: done
Writing inode tables: done
ext2fs_mkdir: Could not allocate inode in ext2 filesystem while creating /lost+found
```

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2024-11-25 13:55:19 +01:00
Zygmunt Krynicki
92fcdcab9e Quote trailing backslash in test case
This fixes an error with Python 3.11:

```
test/test-parser-simple-tests.py:420:21: E502 the backslash is redundant between brackets
```

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2024-11-25 13:55:05 +01:00
Zygmunt Krynicki
4b0adc63f5 Use command -v rather than which
`which` is technically not POSIX, while `command -v` works everywhere. This fixes
building and running the test suite on openSUSE Tumbleweed.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2024-11-25 13:54:41 +01:00
Zygmunt Krynicki
f58fe9cd52 parser: quote BISON_MAJOR in case it is empty
On a test system without bison installed, make setup fails with:

  /bin/sh: 1: bison: not found
  /bin/sh: 1: test: -ge: unexpected operator

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2024-11-25 13:54:25 +01:00
Christian Boltz
2b45586fa9 Merge Dovecot profile: Allow reading of /proc/sys/kernel/core_pattern
See <https://dovecot.org/bugreport.html>

(the link describes how Dovecot requires access to `core_pattern`)

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1331
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2024-11-21 20:51:13 +00:00
Georgia Garcia
a2d52fedb2 Merge tests: fix incorrect setfattr call in xattrs_profile
The filename was quoted together with the following space, breaking the test.

Signed-off-by: Zygmunt Krynicki <me@zygoon.pl>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1429
Approved-by: Ryan Lee <rlee287@yahoo.com>
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-11-21 18:05:24 +00:00
Zygmunt Krynicki
8c16fb2700 tests: fix incorrect setfattr call in xattrs_profile
The filename was quoted together with the following space, breaking the test.

Signed-off-by: Zygmunt Krynicki <me@zygoon.pl>
2024-11-21 18:10:14 +01:00
pyllyukko
0a5a9c465f Dovecot profile: Allow reading of /proc/sys/kernel/core_pattern
See <https://dovecot.org/bugreport.html>
2024-11-21 16:21:17 +02:00
Ryan Lee
f21a243ce4 Make signal.cc:signal_map an unordered_map
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-18 15:25:56 -08:00
Ryan Lee
5bcf05dd3d Similar name adjustments for signal.cc:signal_map
Because this is used in parsing profiles, we keep backwards compatibility by including
both names and mapping them to the same underlying signal number.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-18 15:25:27 -08:00
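A minimal sketch of the alias approach described above (hypothetical names and values only, not the parser's actual signal_map): both the legacy and the canonical spelling parse to the same underlying signal number, so existing profiles keep working.

```
#include <csignal>
#include <iostream>
#include <string>
#include <unordered_map>

// Hypothetical alias table: two spellings map to one signal number,
// so profiles written with the old name keep parsing.
static const std::unordered_map<std::string, int> signal_map = {
	{"stp",  SIGTSTP},  // legacy spelling, kept for compatibility
	{"tstp", SIGTSTP},  // canonical spelling
	{"int",  SIGINT},
};

int main()
{
	for (const char *name : {"stp", "tstp", "int"}) {
		auto it = signal_map.find(name);
		if (it != signal_map.end())
			std::cout << name << " -> " << it->second << "\n";
	}
	return 0;
}
```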
Ryan Lee
7a85f6282f Name adjustments to signal.cc::sig_names
Because sig_names is only used to dump parsed signals for debugging purposes,
renaming SIGTSTP "stp" to "tstp" is not a breaking change.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-18 15:21:27 -08:00
Christian Boltz
e27b0ad2b6 Merge Quote some variables in regression test suite to allow for spaces
This is not a complete fix for the spaces issue, but it is the next simple step that can be taken before the more difficult work of finding the remaining bugs in each shell script.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1424
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2024-11-18 21:30:46 +00:00
Ryan Lee
e7ec01f075 Quote some variables in regression test suite to allow for spaces
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-18 10:16:28 -08:00
John Johansen
2e77129e15 Merge parser: fix expr MatchFlag dump
Match Flags convert the output stream to hex but don't restore it after
outputting the flag, resulting in subsequent numbers being hex encoded.
This results in dumps that can be confusing, e.g.

rule: \d2  ->  \x2 priority=1001 (0x4/0)< 0x4>

rule: \d7  ->  \a priority=3e9 (0x4/0)< 0x4>

rule: \d10  ->  \n priority=3e9 (0x4/0)< 0x4>

rule: \d9  ->  \t priority=3e9 (0x4/0)< 0x4>

rule: \d14  ->  \xe priority=1001 (0x4/0)< 0x4>

where priority=3e9 is the hex encoded priority 1001.

Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1419
Approved-by: Maxime Bélair <maxime.belair@canonical.com>
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-15 17:41:13 +00:00
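A generic illustration of the iostream pitfall behind this fix, assuming `std::ostream`; this is not the parser's dump code, just the usual save-and-restore pattern for stream format flags.

```
#include <iostream>

// Printing in hex without restoring the stream leaves every later
// integer hex-formatted, e.g. a priority of 1001 shows up as 3e9.
static void dump_flag_buggy(std::ostream &os, int flag, int priority)
{
	os << std::hex << flag;                  // stream stays in hex mode...
	os << " priority=" << priority << "\n";  // ...so 1001 prints as 3e9
}

static void dump_flag_fixed(std::ostream &os, int flag, int priority)
{
	std::ios_base::fmtflags saved = os.flags();
	os << std::hex << flag;
	os.flags(saved);                         // restore the caller's format
	os << " priority=" << priority << "\n";
}

int main()
{
	dump_flag_fixed(std::cout, 0x2, 1001);   // "2 priority=1001"
	dump_flag_buggy(std::cout, 0x2, 1001);   // "2 priority=3e9"
	std::cout << 255 << "\n";                // stream left in hex: "ff"
	return 0;
}
```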
John Johansen
4af8acee6f Merge Initialize the backmap field in new_entry for capability addition
The old code implicitly initialized it to 0 by overwriting a
zero-initialized array terminator. Now that we construct the new entry
from scratch, we need to do this manually.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1423
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-15 02:07:44 +00:00
Ryan Lee
a513d02297 Initialize the backmap field in new_entry for capability addition
The old code implicitly initialized it to 0 by overwriting a
zero-initialized array terminator. Now that we construct the new entry
from scratch, we need to do this manually.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-14 17:14:19 -08:00
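A hedged sketch of the initialization concern, using a hypothetical entry struct rather than the parser's real capability table: value-initializing the new entry (or assigning the field explicitly) guarantees backmap starts at zero now that no pre-zeroed terminator slot is being reused.

```
#include <iostream>

// Hypothetical stand-in for the capability table entry; the field names
// are illustrative, not the parser's real layout.
struct cap_entry {
	const char *name;
	unsigned int cap;
	unsigned int backmap;
};

static cap_entry make_entry(const char *name, unsigned int cap)
{
	cap_entry e{};    // value-initialization zeroes every member, so
	e.name = name;    // backmap is 0 even before the explicit assignment
	e.cap = cap;
	e.backmap = 0;    // being explicit documents the intent
	return e;
}

int main()
{
	cap_entry e = make_entry("sys_admin", 21);  // values illustrative only
	std::cout << e.name << " backmap=" << e.backmap << "\n";
	return 0;
}
```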
John Johansen
a422d2ea17 Merge Partial fix for regression tests if parent directory contains spaces
Most `tests/regression/apparmor/*.sh` scripts contain

    . $bin/prologue.inc

This will explode if one of the parent directories contains a space.

Minimized reproducer:

```
# cat test.sh
pwd=`dirname $0`
pwd=`cd $pwd ; /bin/pwd`
bin=$pwd
echo "pwd: $bin"
. $bin/prologue.inc
# ./test.sh
pwd: /tmp/foo bar
./test.sh: line 9: /tmp/foo: No such file or directory
```

Notice that test.sh tries to source `/tmp/foo` instead of `/tmp/foo bar/prologue.inc`.

The fix is to quote the prologue.inc path:

    . "$bin/prologue.inc"

While on it, also fix other uses of $bin - directly and indirectly - by quoting them.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1418
Approved-by: Ryan Lee <rlee287@yahoo.com>
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-15 00:46:04 +00:00
John Johansen
015b41aeb4 Merge parser: improve libapparmor_re build and dump info
Fix libapparmor_re/Makefile so it works correctly with rebuilds and improve state machine dump information, to aid with debugging of permission handling during the compile.

Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1410
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-15 00:32:30 +00:00
John Johansen
48bf4d1df9 Merge Small fixset 2 for parser code nits
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1422
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-15 00:31:57 +00:00
John Johansen
95c648c0d2 Merge Pass parallelism arg to parser_sanity test prove invocation
Unfortunately, meaningfully parallelizing parser testing is a giant task:

- Parser equality testing is a shell script based framework where adding parallelism would be a major rework.
- Parser testing using Python’s unittest framework also needs a different test runner to enable parallelism.
- Parser testing using Perl’s prove framework already supports parallelism, but adding -j to Prove does not result in speedups. Thus, I suspect most of the overhead is in spawning the processes, and that speeding this part up will require making the parser a library and testing it that way.

The commit in this MR passes a `-j` parallelism flag to Perl's prove framework, but local testing has shown that this does not create speedups, and Gitlab CI has a very modest improvement of 11 minutes 16 seconds for the parser testing stage without `-j $(nproc)` vs 10 minutes 51 seconds with `-j $(nproc)`. Instead of passing `-j $(nproc)`, pass a fixed `-j 2` to gain some speedups, as the overhead of `-j $(nproc)` on a system with more than 2 cores eats up any time gains that parallelism would have brought.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1416
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-15 00:27:31 +00:00
John Johansen
926929da16 Merge Write basic file complain-mode regression tests
The test "Complain mode profile (file exec cx permission entry)" currently will only pass on a Ubuntu Oracular system due to a kernel bugfix patch that has not yet been upstreamed or backported.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1415
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-15 00:26:40 +00:00
John Johansen
d0d6975a1f Merge Remove unused and broken dup_value_list function
This function was broken all this time: instead of duplicating each entry in the list, it would duplicate the first entry n times. Since this function is currently not used anywhere, delete it instead of fixing it.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1421
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>
2024-11-15 00:24:52 +00:00
John Johansen
b6adc6b9e5 Merge Make parser_misc keyword_table and rlimit_table unordered_maps
Besides transitioning towards C++, this also eliminates the linear scan that the functions using these arrays performed.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1420
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-15 00:24:09 +00:00
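An illustrative sketch (hypothetical keywords and token values, not the real keyword_table) of the lookup style this moves to: an `std::unordered_map` gives average constant-time keyword lookup instead of walking an array entry by entry.

```
#include <iostream>
#include <string>
#include <unordered_map>

// Hypothetical keywords and token values, for illustration only.
static const std::unordered_map<std::string, int> keyword_table = {
	{"capability", 1},
	{"network",    2},
	{"mount",      3},
};

// Average O(1) lookup replaces a linear scan over an array.
static int get_keyword_token(const std::string &keyword)
{
	auto it = keyword_table.find(keyword);
	return it == keyword_table.end() ? -1 : it->second;
}

int main()
{
	std::cout << get_keyword_token("network") << "\n";   // 2
	std::cout << get_keyword_token("unknown") << "\n";   // -1
	return 0;
}
```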
Ryan Lee
ae5a16bfd1 Use list_remove_at macro in mount.cc:extract_flags
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-14 10:36:30 -08:00
John Johansen
82e4b4ba00 regression tests: Add test to check for DAC permissions to the testsuite
The regression test suite uses root with capabilities restricted in
several tests. This can cause the test suite to fail in weird and
confusing ways.

Add a test to check for DAC permissions from / to the testsuite
and abort running the tests with an error message if DAC permissions
are going to cause the test suite to fail.

Currently the test is pretty basic, but it is better than nothing.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-11-14 10:20:37 -08:00
Ryan Lee
af5cd5002b Remove unused and broken dup_value_list function
This function was broken all this time: instead of duplicating each entry
in the list, it would duplicate the first entry n times. Since this
function is currently not used anywhere, delete it instead of fixing it.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-13 15:53:08 -08:00
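A toy reconstruction of the bug class described above, with illustrative types rather than the parser's real value list: the loop advances through the source list but always copies the head's value, so the duplicate ends up as the first entry repeated n times.

```
#include <iostream>
#include <string>

// Toy singly linked list standing in for the parser's value list.
struct value_list {
	std::string value;
	value_list *next;
};

// Shape of the removed helper's bug: the loop walks the source list,
// but always copies the *head's* value (src->value instead of p->value),
// so the result is the first entry repeated n times.
static value_list *dup_value_list_buggy(const value_list *src)
{
	value_list *head = nullptr, **tail = &head;
	for (const value_list *p = src; p; p = p->next) {
		*tail = new value_list{src->value, nullptr}; // should be p->value
		tail = &(*tail)->next;
	}
	return head;
}

int main()
{
	value_list c{"c", nullptr}, b{"b", &c}, a{"a", &b};
	for (value_list *p = dup_value_list_buggy(&a); p; p = p->next)
		std::cout << p->value << " ";    // prints "a a a", not "a b c"
	std::cout << "\n";
	return 0;                                // leaks are fine in a toy demo
}
```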
Ryan Lee
88719dbb7b Fix infinite loop in chfa.cc:weld_file_to_policy
This is simple enough to fix, even though weld_file_to_policy isn't used in practice
and the compat layer that uses it is a target for deletion.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-13 15:46:38 -08:00
Ryan Lee
5b391a5a4f Remove unused list_remove and list_find_prev helper macros in parser.h
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-13 14:51:08 -08:00
Ryan Lee
d9ee417dc3 Add missing null check in mount.cc:extract_flags
There is a null check before storing invflags into inv, but not before initializing the value at inv to 0.
Assuming the null check is needed, it should be there in both places.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-13 14:45:55 -08:00
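A small sketch of the pattern being fixed, with a hypothetical signature rather than the real extract_flags: an optional out-parameter that callers may pass as NULL has to be guarded everywhere it is written, both when clearing it and when storing the result.

```
#include <cstdio>

// Hypothetical signature; the real extract_flags differs. An optional
// out-parameter must be null-checked everywhere it is written.
static int extract_flags(unsigned int *flags, unsigned int *inv)
{
	if (inv)
		*inv = 0;          // clear only if the caller asked for it

	unsigned int invflags = 0;
	// ... walk *flags here, accumulating invflags ...
	invflags |= *flags & 0x1;  // placeholder for the real extraction

	if (inv)
		*inv = invflags;   // same guard when storing the result
	return 0;
}

int main()
{
	unsigned int flags = 0x5;
	printf("%d\n", extract_flags(&flags, nullptr));  // NULL inv is safe
	return 0;
}
```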
Ryan Lee
9689e3a39f Clarify duplicate insertion logic in parser_alias.c:process_entries
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-13 14:34:55 -08:00
Ryan Lee
cb110eaf98 Write basic file complain-mode regression tests
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-12 12:08:22 -08:00
John Johansen
a31343c5f7 parser: fix expr MatchFlag dump
Match Flags convert the output stream to hex but don't restore it after
outputting the flag, resulting in subsequent numbers being hex encoded.
This results in dumps that can be confusing, e.g.

rule: \d2  ->  \x2 priority=1001 (0x4/0)< 0x4>

rule: \d7  ->  \a priority=3e9 (0x4/0)< 0x4>

rule: \d10  ->  \n priority=3e9 (0x4/0)< 0x4>

rule: \d9  ->  \t priority=3e9 (0x4/0)< 0x4>

rule: \d14  ->  \xe priority=1001 (0x4/0)< 0x4>

where priority=3e9 is the hex encoded priority 1001.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-11-11 14:39:40 -08:00
John Johansen
f24fc4841f Merge parser: fix mapping of AA_CONT_MATCH for policydb compat entries
The mapping of AA_CONT_MATCH was being dropped resulting in the
tcp tests failing because they would only match up to the first conditional
match check in the layout.

Bug: https://gitlab.com/apparmor/apparmor/-/issues/462
Fixes: e29f5ce5f ("parser: if extended perms are supported by the kernel build a permstable")
Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1409
Approved-by: Ryan Lee <rlee287@yahoo.com>
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-11 20:53:08 +00:00
John Johansen
eb365b374d Merge parser: bug fix do not change auditing information when applying deny
The parser recently changed how/where deny information is applied.
commit 1fa45b7c1 ("parser: dfa minimization prepare for extended
permissions") removed the implicit filtering of explicit denies during
the minimization pass. The implicit clear allowed the explicit
information to be carried into the minimization pass and merged with
implicit denies. The end result being a minimized dfa with the explicit
deny information available to be applied post minimization, and
then dropped later at permission encoding in the accept entries.

Extended permissions, however, enable carrying explicit deny information
into the kernel to fix certain bugs like complain mode not being
able to distinguish between implicit and explicit deny rules (ie.
deny rules get ignored in complain mode). However keeping explicit
deny information when unnecessary results in a larger state machine
than necessary and slower compiles.

commit 179c1c1ba ("parser: fix minimization check for filtering_deny")
Moved the explicit apply_and_clear_deny() pass to before minimization
to restore mnimization's ability to create a minimized dfa with
explicit and implicit deny information merged but this also cleared
the explicit deny information that used to be carried through
minimization. This meant that when the deny information was applied
post minimization it resulted in the audit and quiet information
being cleared.

This resulted in the query_label tests failing as they are checking
for the expected audit information in the permissions.

Fixes: 179c1c1ba ("parser: fix minimization check for filtering_deny")
Bug: https://gitlab.com/apparmor/apparmor/-/issues/461
Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1408
Approved-by: Ryan Lee <rlee287@yahoo.com>
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-11 20:52:19 +00:00
Christian Boltz
55db4af979 Quote indirect uses of $bin and ${bin}
... to avoid issues with spaces in a parent directory's name.

"Indirect uses" means usage of $bin via another variable, for example
`foo=$bin/whatever`
2024-11-10 22:10:42 +01:00
Christian Boltz
22cf88b7c7 Quote all uses of $bin and ${bin}
... to avoid issues with spaces in a parent directory's name.
2024-11-10 22:10:42 +01:00
Christian Boltz
e1972eb22f Fix sourcing prologue.inc if parent directory contains spaces
Most `tests/regression/apparmor/*.sh` scripts contain

    . $bin/prologue.inc

This will explode if one of the parent directories contains a space.

Minimized reproducer:

```
pwd=`dirname $0`
pwd=`cd $pwd ; /bin/pwd`
bin=$pwd
echo "pwd: $bin"
. $bin/prologue.inc
pwd: /tmp/foo bar
./test.sh: line 9: /tmp/foo: No such file or directory
```

Notice that test.sh tries to source `/tmp/foo` instead of `/tmp/foo bar/prologue.inc`.

The fix - as done in this commit - is to quote the prologue.inc path:

    . "$bin/prologue.inc"
2024-11-10 22:10:32 +01:00
John Johansen
4c32ad8fb7 Merge test-logprof: Increase timeout once more
Builds for risc64 are much slower than on other architectures (4-5
seconds with qemu-user or on Litchi Pi 4A).

Since the timeout is only meant as a safety net, increase it generously,
and hopefully for the last time.

Fixes: https://gitlab.com/apparmor/apparmor/-/issues/463

I propose this patch for 4.0 and master.

Closes #463
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1417
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-09 22:37:24 +00:00
Christian Boltz
508ace452d test-logprof: Increase timeout once more
Builds for risc64 are much slower than on other architectures (4-5
seconds with qemu-user or on Litchi Pi 4A).

Since the timeout is only meant as a safety net, increase it generously,
and hopefully for the last time.

Fixes: https://gitlab.com/apparmor/apparmor/-/issues/463
2024-11-09 19:57:00 +01:00
Ryan Lee
d7cf845437 Make capabilities tracker into a class
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-08 14:55:43 -08:00
Ryan Lee
f6f3279c10 Make parser_misc keyword_table and rlimit_table unordered_maps
Besides transitioning towards C++, this also eliminates the linear scan that the functions using these arrays performed.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-08 14:52:16 -08:00
Ryan Lee
f9a2c3c0eb Pass parallelism arg to parser_sanity test prove invocation
Because the main overhead of the parser_sanity test suite is process
spawning, parallelizing too much could end up hurting performance instead
of helping it. Thus, use a fixed value of 2 instead of $(nproc).

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-08 09:34:03 -08:00
Steve Beattie
5b98577a4d gitlab-ci: Build regression test suite in CI
Even if we can't run the regression tests in our GitLab CI environment, we can at least ensure the binaries in the regression test suite compile successfully.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1414
Approved-by: Steve Beattie <steve+gitlab@nxnw.org>
Merged-by: Steve Beattie <steve+gitlab@nxnw.org>
2024-11-07 20:06:30 +00:00
Ryan Lee
630b38238d Build regression tests in GitLab CI
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-07 11:47:55 -08:00
John Johansen
0828ab67b2 Merge regression tests: check for setfattr binary used by xattrs_profile
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1412
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-07 03:27:28 +00:00
John Johansen
aa24194981 Merge Don't use hardcoded cerr in hfa.h State::dump
After https://gitlab.com/apparmor/apparmor/-/merge_requests/1410#note_2197215172 I did a quick grep and uncovered one other potential instance of `cerr` being written to in a dump function that takes an `ostream`.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1413
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-07 03:25:42 +00:00
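A minimal illustration of the pitfall (not the real hfa.h State): if a dump method accepts an `ostream` but still writes part of its output to `std::cerr`, callers who redirect the dump silently lose that part.

```
#include <iostream>

// Illustrative State, not the real hfa.h one.
struct State {
	int label;

	void dump_buggy(std::ostream &os) const
	{
		os << "state " << label;
		std::cerr << " perms=...\n";   // ignores the caller's stream
	}

	void dump(std::ostream &os) const
	{
		os << "state " << label << " perms=...\n";  // honours the argument
	}
};

int main()
{
	State s{42};
	s.dump(std::cout);   // everything lands on the stream the caller chose
	return 0;
}
```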
Ryan Lee
d1cecd5f46 Don't use hardcoded cerr in hfa.h State::dump
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-06 18:00:16 -08:00
Ryan Lee
b39a535cb9 regression tests: check for setfattr binary used by xattrs_profile
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-06 17:36:37 -08:00
John Johansen
15a02e0948 parser: fix mapping of AA_CONT_MATCH for policydb compat entries
The mapping of AA_CONT_MATCH was being dropped resulting in the
tcp tests failing because they would only match up to the first conditional
match check in the layout.

Bug: https://gitlab.com/apparmor/apparmor/-/issues/462
Fixes: e29f5ce5f ("parser: if extended perms are supported by the kernel build a permstable")
Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-11-06 12:33:36 -08:00
John Johansen
45964d34e7 parser: add the ability to dump the permissions table
Instead of encoding permissions in the accept and accept2 tables,
extended perms uses a permissions table and accept becomes an index
into the table.

Add the ability to dump the permissions table so that it can be
compared and debugged.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-11-06 11:44:30 -08:00
John Johansen
00dedf10ad parser: add the accept2 table entry to the chfa dump
The chfa dump is missing information about the accept2 entry. The
accept2 information is necessary to help with debugging state machine
builds as accept2 is used to store quiet and audit information in the
old format or conditional information in the extended perms format.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-11-06 09:21:03 -08:00
John Johansen
7cc7f47424 parser: fix and cleanup libapparmor_re/Makefile
The Makefile is missing some of its .h dependencies, causing compiles
to either fail or, worse, result in bad builds when rebuilding in an
already built tree.

Move the header dependencies into a variable and use it for each
target. While some targets don't need every include as a dependency
and this will result in unnecessary rebuilds in some cases, it makes
the Makefile cleaner, easier to maintain and makes sure a dependency
isn't accidentally missed.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-11-06 09:20:51 -08:00
John Johansen
9ea0d4f89b parser: bug fix do not change auditing information when applying deny
The parser recently changed how/where deny information is applied.
commit 1fa45b7c1 ("parser: dfa minimization prepare for extended
permissions") removed the implicit filtering of explicit denies during
the minimization pass. The implicit clear allowed the explicit
information to be carried into the minimization pass and merged with
implicit denies. The end result being a minimized dfa with the explicit
deny information available to be applied post minimization, and
then dropped later at permission encoding in the accept entries.

Extended permissions, however, enable carrying explicit deny information
into the kernel to fix certain bugs like complain mode not being
able to distinguish between implicit and explicit deny rules (ie.
deny rules get ignored in complain mode). However keeping explicit
deny information when unnecessary results in a larger state machine
than necessary and slower compiles.

commit 179c1c1ba ("parser: fix minimization check for filtering_deny")
moved the explicit apply_and_clear_deny() pass to before minimization
to restore minimization's ability to create a minimized dfa with
explicit and implicit deny information merged but this also cleared
the explicit deny information that used to be carried through
minimization. This meant that when the deny information was applied
post minimization it resulted in the audit and quiet information
being cleared.

This resulted in the query_label tests failing as they are checking
for the expected audit information in the permissions.

Fixes: 179c1c1ba ("parser: fix minimization check for filtering_deny")
Bug: https://gitlab.com/apparmor/apparmor/-/issues/461
Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-11-06 09:11:43 -08:00
Ryan Lee
a2df3143d1 Remove aa_query_file_{path,link}_len wrappers
The prefix can be done in higher-level languages via slicing, and having an explicit length exposes an out-of-bounds memory read footgun to those higher-level languages.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-06 09:10:56 -08:00
Ryan Lee
53e3116350 Write test for aa_gettaskcon SWIG wrapper
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-06 09:10:56 -08:00
Ryan Lee
d199c2ae33 Write custom SWIG typemap for pid_t
Surprisingly, SWIG did not pick up the "typedef int pid_t" from the C headers.
As such, we need to provide our own wrapper. We don't just replicate the typedef
because we still support systems that have 16-bit PIDs.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-06 09:10:45 -08:00
John Johansen
8df57ba89c Merge aa.py is_known_rule(): remove obsolete sanity check
We use ProfileStorage everywhere, which makes checking if a specific
rule_type exists obsolete.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1405
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-06 03:13:36 +00:00
John Johansen
e36ef0ed43 Merge aa.py: drop unused function profile_exists()
I don't know when (or even: if) this function was in use. A quick look
at the git history of aa.py shows that the function was (blindly?)
updated a few times. However, I didn't find a commit that uses or stops
using profile_exists(), so maybe it was never used at all.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1404
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-06 03:12:38 +00:00
John Johansen
5ebbe788ea Merge aa-mergeprof: prevent backtrace if file not found
If a user specifies a non-existing file to merge into the profiles
(`aa-mergeprof /file/not/found`), this results in a backtrace showing an
AppArmorBug because that file unsurprisingly doesn't end up in the
active_profiles filelist.

Handle this more gracefully by adding a read_error_fatal parameter to
read_profile() that, if set, forwards the exception. With that,
aa-mergeprof doesn't try to list the profiles in this non-existing file.

Note that all other callers of read_profile() continue to ignore read
errors, because aborting just because a single file in /etc/apparmor.d/
(for example a broken symlink) isn't readable would be a bad idea.

This bug was introduced in 4e09f315c3, therefore I propose this patch for 3.0..master

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1403
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-06 03:11:27 +00:00
John Johansen
3c40aab1a0 Merge profiles: update dconf abstraction to use @{etc_ro}
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1402
Approved-by: Ryan Lee <rlee287@yahoo.com>
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-06 03:06:57 +00:00
John Johansen
bfa9147182 Merge php-fpm: widen allowed socket paths
It is common for packaged PHP applications to ship a PHP-FPM
configuration using a scheme of "$app.sock" or or "$app.socket" instead
of using a generic FPM socket.

Signed-off-by: Georg Pfuetzenreuter <mail@georg-pfuetzenreuter.net>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1406
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-06 03:05:29 +00:00
John Johansen
14f54f3df2 Merge Misc small fixes to resolve some compiler warnings in regression test suite
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1407
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-06 03:04:18 +00:00
Ryan Lee
2ce217b873 Test SWIG Python bindings for aa_query_file_path
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-05 18:38:53 -08:00
Ryan Lee
edb4a72c8c SWIG aa_query helper bitmask constants and stdint header
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-05 18:38:53 -08:00
Ryan Lee
5db4908fd7 SWIG Python test for change_hat type signatures
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-05 18:38:53 -08:00
Ryan Lee
930fca1e39 SWIG Python test refactoring of AppArmor enabled checks
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-05 18:38:53 -08:00
Ryan Lee
369c9e73de Test aa_getcon SWIG bindings and leave some comments for untested ones
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-05 18:38:53 -08:00
Ryan Lee
48901f2118 Write a test for aa_splitcon's SWIG bindings
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-05 18:38:53 -08:00
Ryan Lee
c471acbe44 Typemaps for allowed, audited outputs of query functions
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-05 18:38:53 -08:00
Ryan Lee
cdb3e4a14e Add typemap for Python SWIG aa_change_hatv so it can take a string list
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-05 18:38:35 -08:00
Ryan Lee
ea2c957f14 Write basic test for Python aa_find_mountpoint
Also exercises aa_is_enabled

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-05 18:37:51 -08:00
Ryan Lee
04da4c86b0 Write custom typemap for aa_splitcon
Can't use %cstring_mutable because aa_splitcon also returns a ptr

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-05 18:37:51 -08:00
Ryan Lee
f05112b5e9 aa_is_enabled now returns a boolean in Python
Because booleans are a subclass of ints in Python, this isn't a breaking change.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-05 18:37:51 -08:00
Ryan Lee
a15768b0bf Write an output typemap for errno-based functions
In Python, return status is signalled by exceptions (or lack thereof)
instead of int. Keep the typemap portable for any other languages we may
add in the future.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-05 18:37:23 -08:00
Ryan Lee
50d26beb00 Include cstring.i and some cstring output typemaps for libapparmor SWIG
This includes a custom typemap to handle (char **label, char **mode)
pairs and a cstring_output_allocate declaration for char **mnt.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-05 16:40:48 -08:00
Ryan Lee
d273055ebf Use fn arg in pivot_root _clone instead of hardcoding everywhere
The only use of this _clone function passes in the same function that was
hardcoded, so this doesn't change any functionality.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-05 12:34:44 -08:00
Ryan Lee
823d14df80 Reserve enough space for full possible fd length
Even if file descriptor values would not exercise the full range provided
by int, it doesn't hurt to allocate enough space for all ints.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-05 12:34:12 -08:00
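A small sketch of the sizing rule, independent of the actual code: a buffer with one character per bit of `int`, plus room for a possible sign and the terminating NUL, is always enough to format any `int` in decimal.

```
#include <climits>
#include <cstdio>

int main()
{
	// One character per bit comfortably covers the decimal digits of any
	// int; add two more for a possible '-' sign and the trailing NUL.
	char buf[sizeof(int) * CHAR_BIT + 2];

	int fd = INT_MAX;    // worst case, even if real fds never get this big
	snprintf(buf, sizeof(buf), "%d", fd);
	printf("%s\n", buf);
	return 0;
}
```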
Georg Pfuetzenreuter
f575817b68 php-fpm: widen allowed socket paths
It is common for packaged PHP applications to ship a PHP-FPM
configuration using a scheme of "$app.sock" or "$app.socket" instead
of using a generic FPM socket.

Signed-off-by: Georg Pfuetzenreuter <mail@georg-pfuetzenreuter.net>
2024-11-05 20:03:11 +01:00
Christian Boltz
db1ca4f5e8 aa.py is_known_rule(): remove obsolete sanity check
We use ProfileStorage everywhere, which makes checking if a specific
rule_type exists obsolete.
2024-11-03 21:21:39 +01:00
Christian Boltz
c10b39f3fe aa.py: drop unused function profile_exists()
I don't know when (or even: if) this function was in use. A quick look
at the git history of aa.py shows that the function was (blindly?)
updated a few times. However, I didn't find a commit that uses or stops
using profile_exists(), so maybe it was never used at all.
2024-11-03 21:06:27 +01:00
Christian Boltz
e5479bd7ef aa-mergeprof: prevent backtrace if file not found
If a user specifies a non-existing file to merge into the profiles
(`aa-mergeprof /file/not/found`), this results in a backtrace showing an
AppArmorBug because that file unsurprisingly doesn't end up in the
active_profiles filelist.

Handle this more gracefully by adding a read_error_fatal parameter to
read_profile() that, if set, forwards the exception. With that,
aa-mergeprof doesn't try to list the profiles in this non-existing file.

Note that all other callers of read_profile() continue to ignore read
errors, because aborting just because a single file in /etc/apparmor.d/
(for example a broken symlink) isn't readable would be a bad idea.
2024-11-01 22:39:32 +01:00
Georgia Garcia
cbe8d295a5 profiles: update dconf abstraction to use @{etc_ro}
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-10-31 09:52:07 +01:00
Georgia Garcia
f7b5d0e783 Merge Improvements to Postfix profiles
* Support /usr/libexec/postfix/ path
* Added abstractions/{nameservice,postfix-common} to postfix-postscreen
* Added postfix-tlsproxy, postscreen & spawn to postfix-master
    * Added missing postfix-tlsproxy profile
* Added postscreen cache map (see <https://www.postfix.org/postconf.5.html#postscreen_cache_map>)
* Added /{var/spool/postfix/,}pid/pass.smtpd to postfix-smtpd

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1330
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-10-30 10:43:47 +00:00
John Johansen
3d1a3493af Merge profiles: add support for ArchLinux php-legacy package to php-fpm
ArchLinux ships a secondary PHP package called php-legacy with different
paths. As of now, the php-fpm profile will cover this binary but
inadequately restrict it.

Fixes: #454

Closes #454
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1401
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>
2024-10-30 09:37:08 +00:00
Christian Pfeiffer
6a5432b2b0 profiles: add support for ArchLinux php-legacy package to php-fpm
ArchLinux ships a secondary PHP package called php-legacy with different
paths. As of now, the php-fpm profile will cover this binary but
inadequately restrict it.

Fixes: #454
2024-10-30 09:39:37 +01:00
pyllyukko
4ccf567d31 Improvements to Postfix profiles
* Support /usr/libexec/postfix/ path
* Added abstractions/{nameservice,postfix-common} to postfix-postscreen
* Added postfix-tlsproxy, postscreen & spawn to postfix-master
    * Added missing postfix-tlsproxy profile
* Added postscreen cache map (see <https://www.postfix.org/postconf.5.html#postscreen_cache_map>)
* Added /{var/spool/postfix/,}pid/pass.smtpd to postfix-smtpd
2024-10-29 20:35:28 +02:00
John Johansen
4fe3e30abc Merge abstractions/nameservice: include nameservice-strict
... and drop all rules it contains from abstractions/nameservice.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1373
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-10-29 15:26:43 +00:00
John Johansen
82a4e70248 Merge zgrep: deny passwd access
Bash will try to read the passwd database to find the shell of a user if
$SHELL is not set. This causes zgrep to trigger

```
apparmor="DENIED" operation="open" class="file" profile="zgrep" name="/etc/nsswitch.conf" comm="zgrep" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
apparmor="DENIED" operation="open" class="file" profile="zgrep" name="/etc/passwd" comm="zgrep" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
```

if called in a sanitized environment. As the functionality of zgrep is
not impacted by a limited Bash environment, add deny rules to avoid the
potentially misleading AVC messages.

Signed-off-by: Georg Pfuetzenreuter <mail@georg-pfuetzenreuter.net>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1361
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-10-29 13:50:06 +00:00
John Johansen
d5777c0403 Merge ProfileStorage: store correct name
Instead of always storing the name of the main profile, store the child
profile/hat name if we are in a child profile or hat.

As a result, we always get the correct "profile xy" header even for
child profiles when dumping the ProfileStorage object.

Also extend the tests to check that the name gets stored correctly.

.

Add aa-complain tests for profile with hats and subprofiles

So far, change_profile_flags() in aa.py is the only user of
ProfileStorage's 'name'.

Rewrite minitools test_cleanprof() so that most of its code can be
reused, and add a test that runs 'aa-complain
/usr/bin/a/simple/cleanprof/test/profile' on cleanprof.in to ensure
aa-complain still works as expected on subprofiles and hats.

Note: aa-complain $profilename will change the flags of hats, but not
child profiles. This is a known issue, and doesn't change with this MR.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1359
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-10-29 13:48:30 +00:00
John Johansen
e48ab421b5 Merge Check if all profiles and abstractions contain abi/4.0
... and add abi/4.0 where it was missing

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1358
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-10-29 12:48:41 +00:00
John Johansen
ab16377838 Merge zgrep: allow reading /etc/nsswitch.conf and /etc/passwd
Seen on various VMs, my guess is that bash wants to translate a uid to a
username.

Log events (slightly shortened)

apparmor="DENIED" operation="open" class="file" profile="zgrep" name="/etc/nsswitch.conf" comm="zgrep" requested_mask="r" denied_mask="r" fsuid=0 ouid=0

apparmor="DENIED" operation="open" class="file" profile="zgrep" name="/etc/passwd" comm="zgrep" requested_mask="r" denied_mask="r" fsuid=0 ouid=0

I propose this patch for 3.0..master

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1357
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-10-29 12:45:48 +00:00
John Johansen
ac704a5ba6 Merge Small fixset 1 for parser code nits
Numbered as 1 because I expect to find and fix more things like this as I continue to dig into the parser code.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1400
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-10-29 12:33:02 +00:00
Georgia Garcia
a8b6c90d29 Merge common_test setup_aa(): drop try/except
... which only existed for historical reasons

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1389
Approved-by: Ryan Lee <rlee287@yahoo.com>
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-10-29 11:16:30 +00:00
Georgia Garcia
45a3bbb2c9 Merge Several test-libapparmor-test_multi.py fixes
Several fixes for test-libapparmor-test_multi.py and the expected profiles. The most important fix is that testing exec events/rules now works.

Please check the individual commits for details and readable diffs.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1390
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-10-29 11:15:09 +00:00
John Johansen
99261bad11 Merge Fix memory leak in aare_rules UniquePermsCache
When the find fails but the insertion also fails, we leak the new node
that we generated. Delete the new node in this case to avoid leaking
memory.

The question remains, however, as to whether we should implement `operator==` in addition to `operator<` so that they are consistent with each other and `find` works correctly.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1399
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>
2024-10-29 11:12:05 +00:00
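A simplified sketch of the leak pattern and its fix, using a toy node type rather than the real UniquePermsCache: when `std::set::insert` reports that an equivalent element is already present, the freshly allocated node must be deleted, otherwise it leaks.

```
#include <iostream>
#include <set>

// Toy node and comparator, standing in for the real cache contents.
struct Node {
	int key;
};

struct NodeCmp {
	bool operator()(const Node *a, const Node *b) const
	{
		return a->key < b->key;
	}
};

// If an equivalent node is already cached, set::insert refuses the new
// pointer; it must then be deleted or it leaks.
static Node *cache_insert(std::set<Node *, NodeCmp> &cache, int key)
{
	Node *n = new Node{key};
	auto res = cache.insert(n);
	if (!res.second) {         // equivalent entry already present
		delete n;          // the fix: free the duplicate
		return *res.first; // hand back the cached node instead
	}
	return n;
}

int main()
{
	std::set<Node *, NodeCmp> cache;
	Node *a = cache_insert(cache, 7);
	Node *b = cache_insert(cache, 7);   // duplicate key: no leak
	std::cout << (a == b) << "\n";      // prints 1
	for (Node *n : cache)
		delete n;
	return 0;
}
```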
Georgia Garcia
c6edb65fc1 Merge Add a test for aa-autodep
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1398
Approved-by: Maxime Bélair <maxime.belair@canonical.com>
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-10-29 11:07:14 +00:00
Ryan Lee
6a1e9f916b Replace BOOL,TRUE,FALSE macros with actual C++ boolean type
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-28 12:35:57 +01:00
Ryan Lee
b43f1c4073 Make parser_include push_include_stack take const char because it doesn't actually modify it
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-28 12:35:26 +01:00
Ryan Lee
81950dae4e Fix memory leak in aare_rules UniquePermsCache
When the find fails but the insertion also fails, we leak the new node
that we generated. Delete the new node in this case to avoid leaking
memory.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-28 12:23:17 +01:00
John Johansen
e9d6e0ba14 Merge parser: fix minimization check for filtering deny
commit 1fa45b7c1 ("parser: dfa minimization prepare for extended
permissions") removed implicit filtering of explicit denies in the
minimization pass (the information was ignored in building the set of
final accept states).

The filtering of explicit denies reduces the size of the produced
dfa. Since we need to be smarter about when explicit denies are
kept (eg. during complain mode), and most dfas are limited to 65k
states we currently need to filter explicit deny perms by default.

To compensate commit 2737cb2c2 ("parser: minimization - remove
unnecessary second minimization pass") moved the
apply_and_clear_deny() to before minimization. However, its check for
applying deny removal before minimization is broken. Remove the
minimization-triggered apply_and_clear_deny() and just set the FILTER_DENY flag
by default, until we have better selection of rules/conditions where
explicit deny information should be carried through to the backend.

Fixes: 2737cb2c2 ("parser: minimization - remove unnecessary second minimization pass")
Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1397
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>
2024-10-28 11:22:57 +00:00
John Johansen
a5da9d5b5d Merge parser: fix integer overflow bug in rule priority comparisons
There is an integer overflow when comparing priorities when cmp is
used because it uses subtraction to find less than, equal, and greater
than in one operation.

But INT_MAX and INT_MIN are being used by priorities and this results
in INT_MAX - INT_MIN and INT_MIN - INT_MAX which are both overflows
causing an incorrect comparison result and selection of the wrong
rule permission.

Closes: https://gitlab.com/apparmor/apparmor/-/issues/452
Fixes: e3fca60d1 ("parser: add the ability to specify a priority prefix to rules")
Signed-off-by: John Johansen <john.johansen@canonical.com>

Closes #452
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1396
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>
2024-10-28 11:21:31 +00:00
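A standalone illustration of the arithmetic pitfall and a common overflow-free replacement (not the parser's actual comparator): subtraction-based three-way comparison is undefined for extreme values, while `(a > b) - (a < b)` never overflows.

```
#include <climits>
#include <iostream>

// Subtraction works for small values but is undefined behaviour for
// extremes: INT_MAX - INT_MIN and INT_MIN - INT_MAX both overflow.
static int cmp_by_subtraction(int a, int b)
{
	return a - b;
}

// Overflow-free three-way comparison: always -1, 0 or 1.
static int cmp_safe(int a, int b)
{
	return (a > b) - (a < b);
}

int main()
{
	std::cout << cmp_by_subtraction(3, 5) << "\n";    // -2, fine here
	std::cout << cmp_safe(INT_MAX, INT_MIN) << "\n";  // 1
	std::cout << cmp_safe(INT_MIN, INT_MAX) << "\n";  // -1
	std::cout << cmp_safe(42, 42) << "\n";            // 0
	return 0;
}
```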
John Johansen
e2d55844a2 parser: fix integer overflow bug in rule priority comparisons
There is an integer overflow when comparing priorities when cmp is
used because it uses subtraction to find less than, equal, and greater
than in one operation.

But INT_MAX and INT_MIN are being used by priorities and this results
in INT_MAX - INT_MIN and INT_MIN - INT_MAX which are both overflows
causing an incorrect comparison result and selection of the wrong
rule permission.

Closes: https://gitlab.com/apparmor/apparmor/-/issues/452
Fixes: e3fca60d1 ("parser: add the ability to specify a priority prefix to rules")
Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-10-28 04:03:53 -07:00
Christian Boltz
1f8227e671 Add a test for aa-autodep 2024-10-27 21:55:35 +01:00
John Johansen
179c1c1ba7 parser: fix minimization check for filtering_deny
commit 1fa45b7c1 ("parser: dfa minimization prepare for extended
permissions") removed implicit filtering of explicit denies in the
minimization pass (the information was ignored in building the set of
final accept states).

The filtering of explicit denies reduces the size of the produced
dfa. Since we need to be smarter about when explicit denies are
kept (eg. during complain mode), and most dfas are limited to 65k
states we currently need to filter explicit deny perms by default.

To compensate commit 2737cb2c2 ("parser: minimization - remove
unnecessary second minimization pass") moved the
apply_and_clear_deny() to before minimization. However, its check for
applying deny removal before minimization is broken. Remove the
minimization-triggered apply_and_clear_deny() and just set the FILTER_DENY flag
by default, until we have better selection of rules/conditions where
explicit deny information should be carried through to the backend.

Fixes: 2737cb2c2 ("parser: minimization - remove unnecessary second minimization pass")
Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-10-25 01:14:18 -07:00
John Johansen
8d6270e1fe Merge Use parallelism and make --touch when building in GitLab CI for faster CI times
As per https://docs.gitlab.com/ee/ci/pipelines/compute_minutes.html#gitlab-hosted-runner-cost-factors, GitLab CI computes minutes as wall clock time per stage * a constant cost factor derived from the runner type, so using parallelism in `make -j $(nproc)` will reduce the time it takes for GitLab CI to complete without increasing usage of GitLab CI minutes.

When investigating this, I also found out that the test stages needlessly rebuilt large parts of the C code base due to mtimes not being preserved when artifacts are restored from the build stage. Adding `make --touch` updates the mtimes so that the subsequent tests do not need to rebuild binaries needlessly.

The combined changes in this MR reduce the CI time from 13 minutes and 57 seconds (cb0f84e101 of `master`, https://gitlab.com/rlee287/apparmor/-/pipelines/1501017669 on my own fork without Coverity) to 12 minutes and 49 seconds (https://gitlab.com/rlee287/apparmor/-/pipelines/1502723883). This comparison omits the `make -j $(nproc)` addition to cov-build since I do not have a way of testing its effectiveness.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1387
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-10-24 12:30:12 +00:00
Christian Boltz
8eff7cd98f test-multi: no longer expect testcase01, 12 and 13 to fail
'testcase01', 'testcase12' and 'testcase13' contain a strange mix of
exec and network events.

Nevertheless, there's enough information to parse them as good-enough
exec events. While this is not perfectly correct, it's better than
skipping these logs in this test.

Stop expecting that these profiles have wrong content, and adjust them
so that they contain the (somewhat) expected exec rule.
2024-10-23 19:25:35 +02:00
Christian Boltz
02e2ce0ad9 test exec events/rules in test-libapparmor-test_multi.py
So far, exec events were accidentally skipped in
test-libapparmor-test_multi.py because aa[profile][hat] was not
initialized, and ask_exec() exited early because of this.

Initialize aa[profile][hat] in the test to fix this.

To avoid that someone needs to select "inherit" each time the tests run,
add an optional default_ans parameter to ask_exec(), and let the test
call it with 'CMD_ix'.

(In case you wonder - defaulting to CMD_cx would ask to sanitize the
environment. CMD_ix avoids this.)

Also, we have to copy over aa[profile][hat] to log_dict in the test
because ask_exec() modifies aa[...], but the test only checks its local
log_dict.

Finally, add the expected exec rules to the *.profile files
2024-10-23 19:25:35 +02:00
Christian Boltz
5d0fd65a69 test-libapparmor-test_multi.py: use reset_aa()
... instead of resetting various apparmor.aa variables manually.
2024-10-23 19:25:35 +02:00
Christian Boltz
4276e80ed5 aa.py: Add load_sev_db()
... to de-duplicate code loading the severity db.
2024-10-23 19:25:35 +02:00
Christian Boltz
183d00e087 test_multi: fix testcase_dbus_09.profile
peer name=... is invalid in dbus message rules.

Note that this testcase is currently disabled in the utils tests because
it's based on a multiline log.
2024-10-23 19:25:34 +02:00
Christian Boltz
209dd851b3 test_multi: no longer skip testcase31
It is handled correctly in the current codebase.

It would be even better if it would generate a link rule that includes
the source, but let's leave that for a later fix.
2024-10-23 19:25:32 +02:00
Christian Boltz
19b4aeb338 Merge aa.py: drop unused confirm_and_abort() and delete_profile()
confirm_and_abort() is unused (note that a function with the same name
exists in ui.py and is used there)

Also delete the now-unused delete_profile() - luckily it was never used,
because it would also have deleted profiles that were "just" modified.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1388
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2024-10-21 14:20:48 +00:00
Christian Boltz
8791c7c48d common_test setup_aa(): drop try/except
... which only existed for historical reasons
2024-10-20 20:49:03 +02:00
Christian Boltz
4dd3ce0097 aa.py: drop unused confirm_and_abort() and delete_profile()
confirm_and_abort() is unused (note that a function with the same name
exists in ui.py and is used there)

Also delete the now-unused delete_profile() - luckily it was never used,
because it would also have deleted profiles that were "just" modified.
2024-10-20 16:16:30 +02:00
Steve Beattie
4d3b094d9e profiles: transmission-daemon needs attach_disconnected
Systemd's PrivateTmp= in transmission service is causing mount namespaces to be used leading to disconnected paths

[395201.414562] audit: type=1400 audit(1727277774.392:573): apparmor="ALLOWED" operation="sendmsg" class="file" info="Failed name lookup - disconnected path" error=-13 profile="transmission-daemon" name="run/systemd/notify" pid=193060 comm="transmission-da" requested_mask="w" denied_mask="w" fsuid=114 ouid=0

Fixes: https://bugs.launchpad.net/bugs/2083548
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1355
Approved-by: Ryan Lee <rlee287@yahoo.com>
Merged-by: Steve Beattie <steve+gitlab@nxnw.org>
2024-10-18 21:47:09 +00:00
Steve Beattie
3478558904 .gitignore: add mod_apparmor and pam_apparmor files
... that are generated during `make`

I propose this patch for 3.x..master.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1374
Approved-by: Ryan Lee <rlee287@yahoo.com>
Approved-by: Steve Beattie <steve+gitlab@nxnw.org>
Merged-by: Steve Beattie <steve+gitlab@nxnw.org>
2024-10-18 21:45:52 +00:00
Ryan Lee
01435aaaa3 Pass -j flag for cov-build as well
This is separated out because I have no way of testing this

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-18 13:15:18 -07:00
Ryan Lee
030f991320 GitLab CI: touch built files in test stages before running tests
The artifact restoration step does not preserve mtime, so source files end up newer than built files, causing a needless rebuild of everything before actually running the tests.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-18 11:46:46 -07:00
Ryan Lee
c47943f1af Invoke tst_binaries target with parallelism in GitLab CI
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-18 11:46:40 -07:00
Ryan Lee
2e841655cf Add a tst_binaries target to the parser to build tst binaries
This allows building the tst_* binaries in parallel independently of running the parser test suite

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-18 11:46:40 -07:00
Ryan Lee
88287d4eec Update .gitlab-ci.yml file with -j $(nproc) lines for faster building
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-18 11:45:38 -07:00
John Johansen
cb0f84e101 Merge utils: catch TypeError exception for binary logs
When a log like system.journal is passed on to aa-genprof, for
example, the user receives a TypeError exception: in method
'parse_record', argument 1 of type 'char *'

This patch catches that exception and displays a more meaningful
message.

Fixes: https://gitlab.com/apparmor/apparmor/-/issues/436
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>

Closes #436
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1354
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: John Johansen <john@jjmx.net>
2024-10-17 18:56:25 +00:00
Ryan Lee
099d7aca5d Merge Fix parser compiler warnings about format specifiers with DEBUG set
This fixes format string specification warnings that are emitted when DEBUG=1 is set. As for %s when the pointer is null: even if gcc prints (null) this is still undefined behavior, so we should do this explicitly.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1382
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Ryan Lee <rlee287@yahoo.com>
2024-10-17 18:55:28 +00:00
Georgia Garcia
bb5e69f8db utils: catch TypeError exception for binary logs
When a log like system.journal is passed on to aa-genprof, for
example, the user receives a TypeError exception: in method
'parse_record', argument 1 of type 'char *'

This patch catches that exception and displays a more meaningful
message.

Fixes: https://gitlab.com/apparmor/apparmor/-/issues/436
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-10-16 20:35:40 -03:00
Ryan Lee
c1480d761f Merge Future-proof the Python abstraction for beyond Python 3.19
See https://gitlab.com/apparmor/apparmor/-/merge_requests/1376#note_2161284748

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1381
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Ryan Lee <rlee287@yahoo.com>
2024-10-16 22:27:21 +00:00
Ryan Lee
776c56bd7e Fix compiler warnings about format specifiers with DEBUG set
As for %s when the pointer is null: even if gcc prints (null) this is still undefined behavior, so we should do this explicitly

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-16 12:26:54 -07:00
John Johansen
e23633ff0e Merge abstractions/nameservice: tighten libnss_libvirt file access
Limit access to *.status files located in /var/lib/libvirt/dnsmasq/ as opposed to every file in the same directory.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1379
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: John Johansen <john@jjmx.net>
2024-10-16 18:24:04 +00:00
Ryan Lee
8eb7e7f63b Future-proof the Python abstraction for beyond Python 3.19
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-16 09:36:32 -07:00
Ryan Lee
6d20a10606 Merge Small fixes to libapparmor man pages, including type signature fix
These are small changes to the man pages, with the most important one being updating some function signatures to be consistent with apparmor.h.

We should put together a man page for aalogparse functions too, but I'm submitting this MR first to get the smaller changes in faster.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1378
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Ryan Lee <rlee287@yahoo.com>
2024-10-16 16:25:12 +00:00
Christian Boltz
04c9ffbb19 Merge Replace sigalarm-based subprocess timeout with the built-in one
The timeout parameter for subprocess.Popen.communicate has been available since Python 3.3. Given the fragility of SIGALRM based mechanisms, there's no reason to reimplement our own timeout instead of using the built-in one.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1377
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2024-10-16 12:25:33 +00:00
Giampaolo Fresi Roglia
5be4295b5a abstractions/nameservice: tighten libnss_libvirt file access 2024-10-16 10:48:03 +02:00
Ryan Lee
ffc46247ad Proofreading of libapparmor manpages to fix a few nits
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-15 17:07:02 -07:00
Ryan Lee
6c8a5bedff Add comment to explain podchecker warning for aa_find_mountpoint.pod
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-15 17:05:42 -07:00
Ryan Lee
38e06cf09a Make libapparmor man page type signatures consistent with apparmor.h
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-15 17:05:42 -07:00
Ryan Lee
ed0b539c94 Replace sigalarm-based subprocess timeout with the built-in one
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-15 16:51:17 -07:00
Christian Boltz
2ea82b866d .gitignore: add mod_apparmor and pam_apparmor files
... that are generated during `make`
2024-10-11 22:07:54 +02:00
Christian Boltz
372ad8f6b9 abstractions/nameservice: include nameservice-strict
... and drop all rules it contains from abstractions/nameservice.
2024-10-11 20:53:52 +02:00
Georgia Garcia
68376e7fee Merge abstraction: add nameservice-strict.
I have read multiple MRs mentioning `nameservice-strict`, so I thought it would make sense to directly import it here.

To give some context, this abstraction is probably the most commonly included abstraction (after `base`). In `apparmor.d`, it is used by over 700 profiles (only counting direct import). Therefore, adding new rules can have an important impact over a lot of profiles.

Note: the abstraction is a direct import from https://gitlab.com/roddhjav/apparmor.d. The license is the same, and I obviously kept Morfikov's copyright line. However, I am not sure whether or not the SPDX identifier can be used here.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1368
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Approved-by: Christian Boltz <apparmor@cboltz.de>
Approved-by: Ryan Lee <rlee287@yahoo.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-10-11 12:36:42 +00:00
Alexandre Pujol
1966c36c1d abstraction: add missing abi in nameservice-strict. 2024-10-09 21:42:48 +01:00
Alexandre Pujol
62e7220271 abstraction: include nss-systemd in nameservice-strict. 2024-10-09 21:25:57 +01:00
Alexandre Pujol
1a8d8f3695 abstraction: add nameservice-strict.
Imported from gitlab.com/roddhjav/apparmor.d
2024-10-09 19:57:16 +01:00
Georgia Garcia
a522e11129 Merge Support name resolution via libnss-libvirt
Add support for hostname resolution via libnss-libvirt. This change has been tested against the latest oracular version 10.6.0-1ubuntu3.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1362
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-10-09 15:12:05 +00:00
Giampaolo Fresi Roglia
e53f300821 nameservice: add support for libnss-libvirt 2024-10-09 10:12:26 +02:00
Georg Pfuetzenreuter
48483f2ff8 zgrep: deny passwd access
Bash will try to read the passwd database to find the shell of a user if
$SHELL is not set. This causes zgrep to trigger

```
apparmor="DENIED" operation="open" class="file" profile="zgrep" name="/etc/nsswitch.conf" comm="zgrep" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
apparmor="DENIED" operation="open" class="file" profile="zgrep" name="/etc/passwd" comm="zgrep" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
```

if called in a sanitized environment. As the functionality of zgrep is
not impacted by a limited Bash environment, add deny rules to avoid the
potentially misleading AVC messages.

Signed-off-by: Georg Pfuetzenreuter <mail@georg-pfuetzenreuter.net>
2024-10-06 23:18:52 +02:00
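The deny rules described above would look roughly like this in the zgrep profile (a sketch, not a verbatim excerpt):
```
  # silence the harmless passwd/nsswitch lookups bash makes when $SHELL
  # is unset, without granting the access
  deny /etc/nsswitch.conf r,
  deny /etc/passwd r,
```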
Christian Boltz
d8e86fee75 Add aa-complain tests for profile with hats and subprofiles
So far, change_profile_flags() in aa.py is the only user of
ProfileStorage's 'name'.

Rewrite minitools test_cleanprof() so that most of its code can be
reused, and add a test that runs 'aa-complain
/usr/bin/a/simple/cleanprof/test/profile' on cleanprof.in to ensure
aa-complain still works as expected on subprofiles and hats.

Note: aa-complain $profilename will change the flags of hats, but not
child profiles. This is a known issue, and doesn't change with this MR.
2024-10-06 16:51:12 +02:00
Christian Boltz
cb943e4efc ProfileStorage: store correct name
Instead of always storing the name of the main profile, store the child
profile/hat name if we are in a child profile or hat.

As a result, we always get the correct "profile xy" header even for
child profiles when dumping the ProfileStorage object.

Also extend the tests to check that the name gets stored correctly.
2024-10-06 14:44:12 +02:00
Christian Boltz
8c80c56252 Check if all profiles and abstractions contain abi/4.0
... and add abi/4.0 where it was missing
2024-10-06 12:07:58 +02:00
Christian Boltz
68d42c3e37 zgrep: allow reading /etc/nsswitch.conf and /etc/passwd
Seen on various VMs; my guess is that bash wants to translate a uid to a
username.

Log events (slightly shortened)

apparmor="DENIED" operation="open" class="file" profile="zgrep" name="/etc/nsswitch.conf" comm="zgrep" requested_mask="r" denied_mask="r" fsuid=0 ouid=0

apparmor="DENIED" operation="open" class="file" profile="zgrep" name="/etc/passwd" comm="zgrep" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
2024-10-06 11:05:52 +02:00
John Johansen
bb460ba467 Merge .gitlab-ci.yml: run pipeline in merge requests too
Hopefully this will allow us to run pipelines in regular branches but
also run it on merge requests on the parent project. This is needed
for users that are not verified by Gitlab.
https://docs.gitlab.com/ee/ci/pipelines/merge_request_pipelines.html#run-pipelines-in-the-parent-project

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1346
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-10-04 21:31:51 +00:00
John Johansen
254b324a83 Merge Rename aa_log_record struct fields (C only) to allow inclusion in C++
Do an identifier rename combined with preprocessor directives and SWIG directives to allow the header to be included in C++ while keeping backwards compatibility to the extent possible.

Closes: #439 

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

Closes #439
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1342
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-10-04 21:28:14 +00:00
Georgia Garcia
645320ae84 profiles: transmission-daemon needs attach_disconnected
Systemd's PrivateTmp= in the transmission service causes mount namespaces to be used, leading to disconnected paths.

[395201.414562] audit: type=1400 audit(1727277774.392:573): apparmor="ALLOWED" operation="sendmsg" class="file" info="Failed name lookup - disconnected path" error=-13 profile="transmission-daemon" name="run/systemd/notify" pid=193060 comm="transmission-da" requested_mask="w" denied_mask="w" fsuid=114 ouid=0

Fixes: https://bugs.launchpad.net/bugs/2083548
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-10-04 10:54:48 -03:00
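For reference, attach_disconnected is a profile flag; a minimal, hypothetical profile header using it could look like this (the attachment path is illustrative):
```
  profile transmission-daemon /usr/bin/transmission-daemon flags=(attach_disconnected) {
    # disconnected objects (e.g. 'run/systemd/notify') are re-attached at
    # the namespace root and mediated instead of being denied outright
    ...
  }
```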
Ryan Lee
2d7440350f Basic test that uses aa_log_record struct fields via old, C++-incompatible names
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-03 12:46:00 -07:00
Ryan Lee
645b1406d1 Basic test that invokes aalogparse functions from C++ code
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-03 12:45:55 -07:00
Ryan Lee
3cb61b6b41 Add extern "C" decls to aalogparse.h for C++ usage of aalogparse
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-03 12:44:25 -07:00
Ryan Lee
e2c407c614 Add SWIG renames for fields to preserve backcompat
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-03 12:44:18 -07:00
Ryan Lee
3f5180527d Rename aa_log_record struct fields (C only) to allow inclusion in C++
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-03 12:43:33 -07:00
John Johansen
b52c9bb860 Merge Convert original_aa from hasher to dict
This requires adding some `.get()` guards at one place, but should
otherwise be a boring change.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1347
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-10-03 18:36:29 +00:00
Ryan Lee
5b141dd580 Merge Remove aa_query_label from SWIG bindings
This is one of those functions that never worked anyways, because it
modified the passed-in label in place. Moreover, it is a low-level
interface that requires its callers to manually construct a binary query.
As such, it would be better not to expose it and to add wrappers like
aa_query_file_path for the other query classes if that functionality is
needed later.

The removal of this function from the bindings was dropped from !1337 because it exposed functionality that was not present in wrappers around aa_query_label. However, upon further discussion, we decided that it'd be better to remove it now and add other wrappers to libapparmor itself if the functionality provided by the existing wrappers became insufficient.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1352
Approved-by: John Johansen <john@jjmx.net>
Merged-by: Ryan Lee <rlee287@yahoo.com>
2024-10-03 18:25:52 +00:00
Ryan Lee
d3603a1f20 Remove aa_query_label from SWIG bindings
This is one of those functions that never worked anyways, because it
modified the passed-in label in place. Moreover, it is a low-level
interface that requires its callers to manually construct a binary query.
As such, it would be better not to expose it and to add wrappers like
aa_query_file_path for the other query classes if that functionality is
needed later.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-03 10:59:25 -07:00
Georgia Garcia
7867a46e2e Merge gitlab-ci.yml: only run coverity in the upstream project
This pipeline only makes sense to run in the upstream project where
the coverity variables are defined, so they currently fail in forks.

Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1351
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-10-03 17:46:54 +00:00
Georgia Garcia
c382efe119 gitlab-ci.yml: only run coverity in the upstream project
This pipeline only makes sense to run in the upstream project where
the coverity variables are defined, so they currently fail in forks.

Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-10-03 11:12:36 -03:00
Christian Boltz
a56516851e Convert original_aa from hasher to dict
This requires adding some `.get()` guards at one place, but should
otherwise be a boring change.
2024-10-02 22:44:04 +02:00
Georgia Garcia
248e5673ef .gitlab-ci.yml: run pipeline in merge requests too
Hopefully this will allow us to run pipelines in regular branches but
also run it on merge requests on the parent project. This is needed
for users that are not verified by Gitlab.
https://docs.gitlab.com/ee/ci/pipelines/merge_request_pipelines.html#run-pipelines-in-the-parent-project
2024-10-02 17:37:27 -03:00
John Johansen
07fe0e9a1b Merge Fix ABI break for aa_log_record
Commit 3c825eb001 adds a field called `execpath` to the `aa_log_record` struct. This field was added in the middle of the struct instead of at the end, causing an ABI break in libapparmor without a corresponding major version number bump.
Bug report: https://bugs.launchpad.net/apparmor/+bug/2083435
This is fixed by simply moving execpath to the end of the struct.

Signed-off-by: Maxime Bélair <maxime.belair@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1345
Approved-by: Ryan Lee <rlee287@yahoo.com>
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-10-01 22:06:45 +00:00
Maxime Bélair
c86c87e886 Fix ABI break for aa_log_record 2024-10-01 22:06:45 +00:00
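A self-contained illustration of the layout rule behind this fix; the structs below are hypothetical and not the real aa_log_record definition:
```
#include <stddef.h>
#include <stdio.h>

/* Hypothetical "v1" layout that older callers were compiled against. */
struct record_v1 {
	unsigned int version;
	char *name;
};

/* Appending the new field keeps every existing offset unchanged, so the
 * library stays ABI-compatible; inserting it between version and name
 * would have shifted name and broken callers built against v1. */
struct record_v2 {
	unsigned int version;
	char *name;
	char *execpath;	/* new field added at the end */
};

int main(void)
{
	printf("v1 name offset: %zu\n", offsetof(struct record_v1, name));
	printf("v2 name offset: %zu\n", offsetof(struct record_v2, name));
	return 0;
}
```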
Ryan Lee
d35a6939be Merge Remove broken SWIG functions that we don't actually want to expose
It doesn't make sense to expose the *_raw functions or the varg version
of aa_change_hatv to higher-level languages. While technically a breaking
change, the generated bindings for these functions never actually worked
anyways:

 - aa_change_hat_vargs uses C varargs, which SWIG passes in NULL for by
   default. It does not attempt to process the passed-in arguments at all
   (and in fact caused an unused-argument compiler warning when compiling
   the generated bindings).
 - aa_getprocattr_raw and aa_getpeercon_raw both place output into a ``char
   **mode`` pointer. SWIG by default generates these as opaque pointer
   object arguments, rendering them unusable for getting output. Future
   patches would be needed to fix ``char**`` arguments for the other functions
   that use them. Moreover, these functions expect their caller to handle
   memory allocation, which is also not possible from a higher-level
   language point of view.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1337
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Ryan Lee <rlee287@yahoo.com>
2024-10-01 19:16:38 +00:00
Ryan Lee
bdc8889cc0 Remove private _aa_is_blacklisted from SWIG bindings
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-01 11:58:07 -07:00
Ryan Lee
2bd1884654 Remove SWIG aa_change_hat_vargs, aa_get_procattr_raw, aa_get_peercon_raw
It doesn't make sense to expose the *_raw functions or the varg version
of aa_change_hatv to higher-level languages. While technically a breaking
change, the generated bindings for these functions never actually worked
anyways:

 - aa_change_hat_vargs uses C varargs, which SWIG passes in NULL for by
   default. It does not attempt to process the passed-in arguments at all
   (and in fact caused an unused-argument compiler warning when compiling
   the generated bindings).
 - aa_getprocattr_raw and aa_getpeercon_raw both place output into a char
   **mode pointer. SWIG by default generates these as opaque pointer
   object arguments, rendering them unusable for getting output. Future
   patches would be needed to fix char** arguments for the other functions
   that use them. Moreover, these functions expect their caller to handle
   memory allocation, which is also not possible from a higher-level
   language point of view.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-01 11:58:07 -07:00
Ryan Lee
bcab725670 Merge Improvements to the SWIG binding handling of aa_log_record and %exception memory management
This patchset adds annotations so that SWIG can automatically manage the memory lifetimes of aa_log_record objects, and ensures proper cleanup is done in the %exception handler.

This is the first of a sequence of MRs to overhaul the SWIG bindings and fix pieces that never actually worked in the first place. As fixing those other pieces will require breaking changes, I am separating out the non-breaking changes into separate MRs.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1334
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Ryan Lee <rlee287@yahoo.com>
2024-10-01 18:55:24 +00:00
Ryan Lee
61b1501f48 Apply 1 suggestion(s) to 1 file(s)
Co-authored-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-10-01 18:38:04 +00:00
Christian Boltz
4b6df10fe3 Merge ping: allow reading /proc/sys/net/ipv6/conf/all/disable_ipv6
Fixes: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1082190

I propose this patch for 3.0..master.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1340
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2024-09-30 21:42:56 +00:00
Christian Boltz
62ff290c02 Merge abstractions/mesa: allow ~/.cache/mesa_shader_cache_db/
... which is used by Mesa 24.2.2

Reported by darix.

Fixes: https://bugs.launchpad.net/bugs/2081692

I propose this addition for 3.x, 4.0 and master

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1333
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2024-09-30 21:40:03 +00:00
Christian Boltz
247bdd5deb Merge apparmor.vim: Add missing units for rlimit cpu and rttime
... and allow whitespace between the number and the unit.

I propose this patch for 3.x, 4.0 and master.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1336
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2024-09-30 21:35:40 +00:00
Christian Boltz
df4d7cb8da ping: allow reading /proc/sys/net/ipv6/conf/all/disable_ipv6
Fixes: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1082190
2024-09-27 12:05:29 +02:00
Ryan Lee
398f0790de Add DeprecationWarning emission to Python free_record wrapper
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-09-26 11:57:04 -07:00
Ryan Lee
4a7a8fa213 Make Python-side free_record a no-op to prevent double-free
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-09-26 11:57:04 -07:00
Ryan Lee
e5fd0fc636 Annotate SWIG aa_log_record alloc+dealloc
Swig generates a "thisown" attribute, which is an escape hatch in case
higher-level code does something weird and needs to tell SWIG whether to
free the C object when Python garbage collects it. Adding this attribute
is not a breaking change w.r.t access to the other attributes of the parsed
record.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-09-26 11:57:04 -07:00
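The SWIG directives that express this kind of ownership annotation look roughly like the fragment below; it is a sketch, not the actual libapparmor interface file:
```
/* Sketch of a SWIG interface fragment: %newobject marks the factory as
 * returning memory the target language owns (so "thisown" defaults to
 * true), and %delobject marks the function that releases it. */
%newobject parse_record;
%delobject free_record;
```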
Ryan Lee
436ebda9b5 Use SWIG_fail in %exception upon throwing OSError for errno
Unfortunately SWIG_exception does not support throwing OSError, so this
still requires Python-specific code.

Unlike just returning NULL, this will clean up intermediate allocations.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-09-26 11:57:04 -07:00
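For context, a Python-specific %exception handler using SWIG_fail typically follows this pattern; this is a sketch under the assumption that the wrapped calls report failure via errno:
```
%exception {
    errno = 0;
    $action
    if (errno != 0) {
        PyErr_SetFromErrno(PyExc_OSError);
        SWIG_fail;  /* jump to the generated cleanup label, freeing
                       intermediate allocations before returning NULL */
    }
}
```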
Steve Beattie
65e6620014 libapparmor: merge Rename aa_query_label allow and audit params in headers
This change matches the names in the .c source and the man page for aa_query_label,
and also simplifies the typemap annotations needed to make the SWIG versions usable.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1339
Merged-by: Steve Beattie <steve+gitlab@nxnw.org>
2024-09-26 18:52:19 +00:00
Ryan Lee
0c4cda2f1c Rename aa_query_label allow and audit params in headers
This change matches the names in the .c source and the man page for aa_query_label,
and also simplifies the typemap annotations needed to make the SWIG versions usable.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-09-26 10:58:00 -07:00
Christian Boltz
49b9a20997 abstractions/mesa: allow ~/.cache/mesa_shader_cache_db/
... which is used by Mesa 24.2.2

Reported by darix.

Fixes: https://bugs.launchpad.net/bugs/2081692
2024-09-24 16:39:52 +02:00
Steve Beattie
c4bb6969f5 libapparmor: remove remnants of SWIG java files
Merge branch https://gitlab.com/rlee287/apparmor/-/tree/remove_swig_java

The autoconf infrastructure for building this doesn't even show up in the Git history, so there should be no issue with removing the ghosts of Java from the codebase.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1335
Approved-by: Steve Beattie <steve+gitlab@nxnw.org>
Merged-by: Steve Beattie <steve+gitlab@nxnw.org>
2024-09-24 05:37:16 +00:00
Christian Boltz
9cb591c5a7 apparmor.vim: Add missing units for rlimit cpu and rttime
... and allow whitespace between the number and the unit.
2024-09-22 16:11:24 +02:00
Ryan Lee
225ea202cf Remove remnants of SWIG java files
The autoconf infrastructure for building this doesn't even show up in the Git history, so there should be no issue with removing the ghosts of Java from the codebase

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-09-20 16:33:58 -07:00
Georgia Garcia
7dc167ea48 Merge Change swig prototype of aa_getprocattr to match argname
This will matter later on for adding SWIG annotations

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1329
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-09-18 18:05:31 +00:00
Ryan Lee
80bdd22ed7 Change swig prototype of aa_getprocattr to match argname
This will matter later on for adding SWIG annotations

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-09-17 10:49:41 -07:00
Christian Boltz
7a081cfc67 Merge test-file.py: don't expect translated file permissions
Translating things like 'rw' is pointless and will/should never happen.
Therefore the tests should also expect non-translated file permissions.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1328
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2024-09-17 15:12:32 +00:00
Christian Boltz
b8de035542 test-file.py: don't expect translated file permissions
Translating things like 'rw' is pointless and will/should never happen.
Therefore the tests should also expect non-translated file permissions.
2024-09-17 14:44:20 +02:00
John Johansen
1940b1b7cd Merge utils: fixes when handling owner file rules
Fixes: https://gitlab.com/apparmor/apparmor/-/issues/429
Fixes: https://gitlab.com/apparmor/apparmor/-/issues/430

Closes #429 and #430
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1320
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-09-17 09:23:30 +00:00
John Johansen
9fe5a6d853 Merge update translations pot file for C code
The parser and binutils pot files have not been refreshed recently. Update them to the current code and add the missing pot files for aa_load and aa_status. Also give aa_status base support for translations to populate its pot file.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1318
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-09-17 09:18:10 +00:00
John Johansen
b5af7d5492 Merge aa-notify: Simplify user interfaces and update man page
aa-notify: Simplify user interfaces and update man page

In notifications, clicking on "allow" now directly adds the rule without
an intermediate window, leading to a smoother UX.
The man page is also aligned with notify.conf.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1313
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>
2024-09-17 09:17:24 +00:00
Maxime Bélair
2b32130280 aa-notify: Simplify user interfaces and update man page 2024-09-17 09:17:23 +00:00
John Johansen
37cac653d1 Merge Make libaalogparse fully reentrant by removing its globals
Tested by using Valgrind's Helgrind and DRD against the reentrancy test that I wrote: they both report no errors with the changes while reporting many errors with the old versions.

Commits "Inline _parse_yacc in libaalogparse" and "Make parse_record take a const char pointer since it never modified str anyways" have a tiny potential to be backwards-incompatible changes: I have justified why they shouldn't be in the commit messages, but it's worth looking over in case I was mistaken and we need to back those out.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1322
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-09-11 07:54:39 +00:00
John Johansen
4628f8e880 Merge Add parser_sanity-no-gen target to run simple_tests
... without all the profiles generated by the gen-*.py scripts.

This target is meant for local manual testing, especially when working
on additional simple_tests profiles.

It makes local testing much faster (15 seconds for ~2k profiles vs.
several minutes for the additional ~70k profiles generated by gen-*.py)

Needless to say that the CI should continue to use the parser_sanity
target that includes all the generated profiles.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1325
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-09-11 07:50:01 +00:00
John Johansen
43b30b694c Merge Add more tests for network port range
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1326
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-09-11 07:48:01 +00:00
Christian Boltz
762df7e753 Add more tests for network port range 2024-09-10 23:10:32 +02:00
Christian Boltz
0d5ae170d4 Add parser_sanity-no-gen target to run simple_tests
... without all the profiles generated by the gen-*.py scripts.

This target is meant for local manual testing, especially when working
on additional simple_tests profiles.

It makes local testing much faster (15 seconds for ~2k profiles vs.
several minutes for the additional ~70k profiles generated by gen-*.py)

Needless to say that the CI should continue to use the parser_sanity
target that includes all the generated profiles.
2024-09-10 22:31:11 +02:00
Ryan Lee
79670745d6 Remove remnants of comments regarding old apparmor log format
The entry AA_RECORD_SYNTAX_V1 is only there for API compatibility reasons.
If we wanted to remove it, we could just renumber the other two entries
to preserve ABI compatibility. However, it seems easier to just delete the
entry if we ever break backcompat with a libapparmor2.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-09-10 11:33:24 -07:00
Ryan Lee
78f138c37f Make parse_record take a const char pointer since it never modified str anyways
This shouldn't be a breaking change because it's fine to pass a
non-const pointer to a function taking a const pointer, but not the other way round

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-09-10 11:33:24 -07:00
Ryan Lee
66e1439293 Add an aalogparse reentrancy test for simultaneous log parsing from different threads
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-09-10 11:33:24 -07:00
Ryan Lee
6a55fb5613 Inline _parse_yacc in libaalogparse
This function was only ever called once inside libaalogparse.c, and it looks
simple enough to not need to be split out into its own helper function.

As this function was never exposed publicly in installed header files, removing it
is not a breaking API change.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-09-10 11:33:24 -07:00
Ryan Lee
7ff045583d Remove manual YYDEBUG define in grammar.y
The generated grammar.h already sets the correct YYDEBUG value regardless
of whether parse.trace is defined

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-09-10 11:33:24 -07:00
Ryan Lee
dba7669443 Also make the bison parser of libaalogparse fully reentrant
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-09-10 11:33:20 -07:00
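The usual flex/bison knobs for this kind of conversion look like the sketch below; the option names are standard flex/bison directives, while the parameter names are illustrative rather than the actual libaalogparse grammar:
```
/* scanner.l: generate a reentrant scanner that exchanges token values
 * with a pure bison parser instead of relying on global state. */
%option reentrant bison-bridge noyywrap

/* grammar.y: a pure (reentrant) parser; all state is passed explicitly. */
%define api.pure full
%lex-param   { void *scanner }
%parse-param { void *scanner }
%parse-param { struct parsed_record *result }
```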
John Johansen
ebeb89cbce Merge parser: add port range support on network policy
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1321
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-09-09 21:01:21 +00:00
Georgia Garcia
93f3a0fa99 parser: add equality tests for network port range
To run the network port range equality tests, we need to check if the
kernel supports the network_v8/af_inet feature. Also, a new file
features.af_inet is needed containing the af_inet feature.

Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-09-06 09:49:59 -03:00
Georgia Garcia
f9621054d7 parser: add port range support on network policy
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-09-05 17:01:46 -03:00
Georgia Garcia
2097e82d4a utils: 'owner' should come before 'file'
When both the owner and file keywords were used, the generated clean
rule would have owner after file, which is not accepted by the parser.

Fixes: https://gitlab.com/apparmor/apparmor/-/issues/430
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-09-05 14:55:41 -03:00
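An illustrative pair of rules showing the ordering constraint (the path is hypothetical):
```
  # accepted by the parser: the owner prefix comes before the file keyword
  owner file /srv/app/data/** rw,

  # rejected: 'file owner /srv/app/data/** rw,' puts owner after file
```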
Georgia Garcia
39f84c3767 utils: fix file handling of old perms with owner
When the profile already contains a "file" rule containing the owner
prefix and the tool is trying to handle a new file entry, it tries to
show it in the logprof header as "old mode".

The issue is that when the owner rule is an implicit all files
permission, then the object "FileRule" is used instead of the set of
permissions. When subtracting FileRule from set() a TypeError
exception is thrown.

Fix this by "translating" FileRule.ALL perms to "mrwlkix".

Fixes: https://gitlab.com/apparmor/apparmor/-/issues/429
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-09-05 14:54:59 -03:00
Ryan Lee
c5c7565357 Silence -Wyacc because we rely on GNU bison extensions to yacc
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-09-04 14:54:02 -07:00
Ryan Lee
e0504e697a Make libaalogparse lexer fully reentrant by removing its globals
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-09-04 12:00:13 -07:00
John Johansen
729e28e8b2 Merge Update apparmor-utils.pot
Followup to !1315 that updates apparmor-utils.pot. The other ones should also be updated at some point, so I'm marking this as a draft until we have a better idea of when/how we want to do that.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1316
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-09-03 17:07:04 +00:00
Ryan Lee
c6181c2dbe Update po README with correct directories and pygettext3 binary
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-09-03 09:35:05 -07:00
Ryan Lee
0f42187672 Run pygettext3 to update apparmor-utils.pot
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-09-03 09:33:33 -07:00
John Johansen
bdedaf61c8 binutils: add translation support to aa_status and initial pot file
Unfortunately aa_status did not support translations. Add base support
and the initial pot file. No translations have been done at this time.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-09-03 03:39:16 -07:00
John Johansen
3ac53e75d0 binutils: add pot file for aa_load
aa_load was missing a pot file for translations. Add a pot file for
aa_load and sync it to the code.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-09-03 03:39:16 -07:00
John Johansen
0f09501a84 binutils: update pot files for aa_enabled, aa_exec, aa_features_abi
Update the pot files for message changes in aa_enabled.c, aa_exec.c
and aa_features_abi.c

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-09-03 03:39:16 -07:00
John Johansen
ef95c96f45 parser: update translations pot file to current code
The parser pot file should have been updated before beta. Make
sure it is up to date with the current code.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-09-03 03:39:16 -07:00
John Johansen
ab5f180b08 Merge utils: ignore peer when parsing logs for non-peer access modes
utils: ignore peer when parsing logs for non-peer access modes

Some access modes (create, setopt, getopt, bind, shutdown, listen,
getattr, setattr) cannot be used with a peer in network rules.

Due to how auditing is implemented in the kernel, the peer information
might be available in the log (as faddr= but not daddr=), which causes
a failure in log parsing.

When parsing the log, check if that's the case and ignore the peer,
avoiding the exception on the NetworkRule constructor.

Fixes: https://gitlab.com/apparmor/apparmor/-/issues/427

Reported-by: Evan Caville <evan.caville@canonical.com>

Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>

Closes #427
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1314
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: John Johansen <john@jjmx.net>
2024-08-30 23:17:08 +00:00
Ryan Lee
8cb2e4ca9f Merge Replace 'scrub the environment' and similar wordings
The wording of "scrub the environment" with respect to execution modes is misleading, because a quick read of it could imply that it removes all environment variables. However, it actually enables ld.so's secure-execution mode, which removes a very limited subset of them. This MR rewords the relevant documentation and prompts. If proper environment variable filtering is added later, the documentation can be updated again then.

Synchronizes-with:
- Wiki page update, which I can do after this MR is approved
- Kernel patch to update wording of debug logs (patch submitted to the Apparmor mailing list [here](https://lists.ubuntu.com/archives/apparmor/2024-August/013339.html))

Things that may need updating first:

- Translations: attempting to update `utils/po/apparmor-utils.pot` resulted in a bunch of unrelated changes, so I'd like to ask about translation statuses before making a commit that updates that file properly.
- Adding info on which libc's actually behave differently based on AT_SECURE: glibc and musl libc both do, but they may do subtly different things. I don't know about other libc's.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1315
Approved-by: John Johansen <john@jjmx.net>
Merged-by: Ryan Lee <rlee287@yahoo.com>
2024-08-30 16:14:13 +00:00
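For background, ld.so's secure-execution mode is driven by the AT_SECURE auxiliary-vector entry; a small standalone program can show whether it is set (illustrative only, not part of the MR):
```
#include <stdio.h>
#include <sys/auxv.h>

/* When AT_SECURE is non-zero the dynamic loader ignores a small set of
 * environment variables such as LD_PRELOAD and LD_LIBRARY_PATH; it does
 * not clear the rest of the environment. */
int main(void)
{
	printf("AT_SECURE=%lu\n", getauxval(AT_SECURE));
	return 0;
}
```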
Georgia Garcia
1f7d7cd0e0 test_multi: add example of getattr perm with peer in the logs
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-08-29 17:12:54 -03:00
Georgia Garcia
7562c48c74 utils: ignore peer when parsing logs for non-peer access modes
Some access modes (create, setopt, getopt, bind, shutdown, listen,
getattr, setattr) cannot be used with a peer in network rules.

Due to how auditing is implemented in the kernel, the peer information
might be available in the log (as faddr= but not daddr=), which causes
a failure in log parsing.

When parsing the log, check if that's the case and ignore the peer,
avoiding the exception on the NetworkRule constructor.

Fixes: https://gitlab.com/apparmor/apparmor/-/issues/427
Reported-by: Evan Caville <evan.caville@canonical.com>
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-08-29 17:11:41 -03:00
John Johansen
74f254212a Merge profiles: enable php-fpm in /usr/bin and /usr/sbin
To enable the profile in distros that merge sbin into bin.

Fixes: https://gitlab.com/apparmor/apparmor/-/issues/421
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>

Closes #421
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1301
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-08-29 18:46:47 +00:00
John Johansen
cf3428f774 Merge profiles: slirp4netns: allow pivot_root
`pivot_root` is required for running `slirp4netns --enable-sandbox` inside LXD.
- https://github.com/rootless-containers/slirp4netns/issues/348
- https://github.com/rootless-containers/slirp4netns/blob/v1.3.1/sandbox.c#L101-L234

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1298
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-08-29 18:44:15 +00:00
John Johansen
b6e9df3495 Merge parser: fix rule priority destroying rule permissions for some classes
io_uring and userns mediation are encoding permissions on the class
byte. This is a mistake that should never have been allowed.

With the addition of rule priorities, the class-byte mediates rule,
which ensures the kernel can determine that a class is being mediated,
is given the highest priority possible so that class mediation cannot
be removed by a deny rule. See
  61b7568e1 ("parser: bug fix mediates_X stub rules.")
for details.

Unfortunately this breaks rule classes that encode permissions on the
class byte, because those rules will always have a lower priority and
the class mediates rule will always be selected over them resulting in
only the class mediates permission being on the rule class state.

Fix this by adding the mediates class rules for these rule classes
with the lowest priority possible. This means that any rule mediating
the class will wipe out the mediates class rule, so add a new mediates
class rule at the same priority as the rule being added.

This is a naive implementation and does result in more mediates rules
being added than necessary. The rule class could keep track of the
highest priority rule that had been added, and use that to reduce the
number of mediates rules it adds for the class.

Technically we could also get away with not adding the rules for allow
rules, as the kernel doesn't actually check the encoded permission but
whether the class state is not the trap state. But it is required with
deny rules to ensure the deny rule doesn't result in permissions being
removed from the class, resulting in the kernel thinking it is
unmediated. We also want to ensure that mediation is encoded for other
rule types like prompt, and in the future the kernel could check the
permission so we do want to guarantee that the class state has the
MAY_READ permission on it.

Note: there is another set of classes (file, mqueue, dbus, ...) which
encodes a default rule permission as

  class .* <perm>

This encoding is unfortunate in that it will also add the permission
to the class byte, but it also sets up the following states with the
permission. Thankfully, while this pattern accepts anything, including
nothing, the nothing case generally isn't valid (e.g. a file without
any absolute name). For this set of classes, the high-priority mediates
rule just ensures that the null match case does not have permission.

Fixes: 61b7568e1 parser: bug fix mediates_X stub rules.
Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1307
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>
2024-08-29 18:41:51 +00:00
John Johansen
159c179193 Merge apparmor.d.pod: Fix writing of aa_change_profile
Signed-off-by: Jörg Sommer <joerg@jo-so.de>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1308
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-08-29 18:24:44 +00:00
Ryan Lee
8865ff69d8 Replace 'sanitise the environment' wording in aa.py ask_rule_questions
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-08-28 13:39:52 -07:00
Ryan Lee
65c84071bb Replace 'scrub the environment' wording in man pages with something more accurate
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-08-28 11:22:08 -07:00
Georgia Garcia
0ec0e2b035 Merge libapparmor: make af_protos.h consistent in different archs
af_protos.h is a generated table of the protocols created by looking
for definitions of IPPROTO_* in netinet/in.h. Depending on the
architecture, the order of the table may change when using -dM in the
compiler during the extraction of the defines.

This causes an issue because there is more than one IPPROTO defined
by the value 0: IPPROTO_IP and IPPROTO_HOPOPTS which is a header
extension used by IPv6. So if IPPROTO_HOPOPTS was first in the table,
then protocol=0 in the audit logs would be translated to hopopts.

This caused a failure in arm 32bit:

Output doesn't match expected data:
--- ./test_multi/testcase_unix_01.out	2024-08-15 01:47:53.000000000 +0000
+++ ./test_multi/out/testcase_unix_01.out	2024-08-15 23:42:10.187416392 +0000
@@ -12,7 +12,7 @@
 Peer Addr: @test_abstract_socket
 Network family: unix
 Socket type: stream
-Protocol: ip
+Protocol: hopopts
 Class: net
 Epoch: 1711454639
 Audit subid: 322

By the time protocol is resolved in grammar.y, we don't have
access to the net family to check if it's inet6. Instead of making
protocol dependent on the net family, make the order of the
af_protos.h table consistent between architectures using -dD.

Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1309
Approved-by: John Johansen <john@jjmx.net>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-08-26 12:39:26 +00:00
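The difference between the two preprocessor dump flags can be seen directly; these commands are illustrative, not the actual af_protos.h generation recipe:
```
# -dM prints the set of macros defined at the end of preprocessing, with
# no guarantee about ordering; -dD emits the #define directives inline
# with the preprocessed output, preserving their order in <netinet/in.h>.
cc -E -dM -xc /usr/include/netinet/in.h | grep IPPROTO_
cc -E -dD -xc /usr/include/netinet/in.h | grep IPPROTO_
```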
John Johansen
90c1358e49 Merge Revert "utils/emacs: add apparmor-mode.el"
This reverts commit 65b0f83aea.

This has since been moved into its own repo under the apparmor gitlab project at https://gitlab.com/apparmor/apparmor-mode

Signed-off-by: Alex Murray <alex.murray@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1312
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-08-26 05:21:59 +00:00
Alex Murray
2b1ddef16e Revert "utils/emacs: add apparmor-mode.el"
This reverts commit 65b0f83aea.

Signed-off-by: Alex Murray <alex.murray@canonical.com>
2024-08-26 13:54:28 +09:30
Georgia Garcia
95c419dc45 libapparmor: make af_protos.h consistent in different archs
af_protos.h is a generated table of the protocols created by looking
for definitions of IPPROTO_* in netinet/in.h. Depending on the
architecture, the order of the table may change when using -dM in the
compiler during the extraction of the defines.

This causes an issue because there is more than one IPPROTO defined
by the value 0: IPPROTO_IP and IPPROTO_HOPOPTS which is a header
extension used by IPv6. So if IPPROTO_HOPOPTS was first in the table,
then protocol=0 in the audit logs would be translated to hopopts.

This caused a failure in arm 32bit:

Output doesn't match expected data:
--- ./test_multi/testcase_unix_01.out	2024-08-15 01:47:53.000000000 +0000
+++ ./test_multi/out/testcase_unix_01.out	2024-08-15 23:42:10.187416392 +0000
@@ -12,7 +12,7 @@
 Peer Addr: @test_abstract_socket
 Network family: unix
 Socket type: stream
-Protocol: ip
+Protocol: hopopts
 Class: net
 Epoch: 1711454639
 Audit subid: 322

By the time protocol is resolved in grammar.y, we don't have
access to the net family to check if it's inet6. Instead of making
protocol dependent on the net family, make the order of the
af_protos.h table consistent between architectures using -dD.

Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-08-19 18:29:56 -03:00
Jörg Sommer
8195500a1e apparmor.d.pod: Fix writing of aa_change_profile
Signed-off-by: Jörg Sommer <joerg@jo-so.de>
2024-08-17 14:13:08 +02:00
John Johansen
204c0c5a3a parser: fix rule priority destroying rule permissions for some classes
io_uring and userns mediation are encoding permissions on the class
byte. This is a mistake that should never have been allowed.

With the addition of rule priorities, the class-byte mediates rule,
which ensures the kernel can determine that a class is being mediated,
is given the highest priority possible so that class mediation cannot
be removed by a deny rule. See
  61b7568e1 ("parser: bug fix mediates_X stub rules.")
for details.

Unfortunately this breaks rule classes that encode permissions on the
class byte, because those rules will always have a lower priority and
the class mediates rule will always be selected over them resulting in
only the class mediates permission being on the rule class state.

Fix this by adding the mediates class rules for these rule classes
with the lowest priority possible. This means that any rule mediating
the class will wipe out the mediates class rule, so add a new mediates
class rule at the same priority as the rule being added.

This is a naive implementation and does result in more mediates rules
being added than necessary. The rule class could keep track of the
highest priority rule that had been added, and use that to reduce the
number of mediates rules it adds for the class.

Technically we could also get away with not adding the rules for allow
rules, as the kernel doesn't actually check the encoded permission but
whether the class state is not the trap state. But it is required with
deny rules to ensure the deny rule doesn't result in permissions being
removed from the class, resulting in the kernel thinking it is
unmediated. We also want to ensure that mediation is encoded for other
rule types like prompt, and in the future the kernel could check the
permission so we do want to guarantee that the class state has the
MAY_READ permission on it.

Note: there is another set of classes (file, mqueue, dbus, ...) which
encodes a default rule permission as

  class .* <perm>

This encoding is unfortunate in that it will also add the permission
to the class byte, but it also sets up the following states with the
permission. Thankfully, while this pattern accepts anything, including
nothing, the nothing case generally isn't valid (e.g. a file without
any absolute name). For this set of classes, the high-priority mediates
rule just ensures that the null match case does not have permission.

Fixes: 61b7568e1 parser: bug fix mediates_X stub rules.
Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-08-15 03:51:20 -07:00
John Johansen
4c8a27457e Merge utils: change os.mkdir to self.mkpath to create intermediary dirs
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1306
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-08-15 04:45:12 +00:00
Georgia Garcia
a3eca67f38 utils: change os.mkdir to self.mkpath to create intermediary dirs
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-08-15 00:44:55 -03:00
Georgia Garcia
2083994513 profiles: enable php-fpm in /usr/bin and /usr/sbin
To enable the profile in distros that merge sbin into bin.

Fixes: https://gitlab.com/apparmor/apparmor/-/issues/421
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-08-14 10:52:53 -03:00
Akihiro Suda
bf5db67284 profiles: slirp4netns: allow pivot_root
`pivot_root` is required for running `slirp4netns --enable-sandbox` inside LXD.
- https://github.com/rootless-containers/slirp4netns/issues/348
- https://github.com/rootless-containers/slirp4netns/blob/v1.3.1/sandbox.c#L101-L234

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2024-08-14 17:29:13 +09:00
319 changed files with 7355 additions and 3296 deletions

.gitignore vendored

@@ -1,4 +1,4 @@
apparmor-
apparmor-*
cscope.*
binutils/aa-enabled
binutils/aa-enabled.1
@@ -7,10 +7,22 @@ binutils/aa-exec.1
binutils/aa-features-abi
binutils/aa-features-abi.1
binutils/aa-load
binutils/aa-load.8
binutils/aa-status
binutils/aa-status.8
binutils/cJSON.o
binutils/po/*.mo
changehat/mod_apparmor/.libs
changehat/mod_apparmor/mod_apparmor.8
changehat/mod_apparmor/mod_apparmor.8.html
changehat/mod_apparmor/mod_apparmor.la
changehat/mod_apparmor/mod_apparmor.lo
changehat/mod_apparmor/mod_apparmor.slo
changehat/mod_apparmor/mod_apparmor.so
changehat/mod_apparmor/pod2htmd.tmp
changehat/pam_apparmor/get_options.o
changehat/pam_apparmor/pam_apparmor.o
changehat/pam_apparmor/pam_apparmor.so
parser/po/*.mo
parser/af_names.h
parser/cap_names.h
@@ -109,6 +121,18 @@ libraries/libapparmor/src/tst_aalogmisc
libraries/libapparmor/src/tst_aalogmisc.log
libraries/libapparmor/src/tst_aalogmisc.o
libraries/libapparmor/src/tst_aalogmisc.trs
libraries/libapparmor/src/tst_aalogparse_cpp
libraries/libapparmor/src/tst_aalogparse_cpp.log
libraries/libapparmor/src/tst_aalogparse_cpp.o
libraries/libapparmor/src/tst_aalogparse_cpp.trs
libraries/libapparmor/src/tst_aalogparse_reentrancy
libraries/libapparmor/src/tst_aalogparse_reentrancy.log
libraries/libapparmor/src/tst_aalogparse_reentrancy.o
libraries/libapparmor/src/tst_aalogparse_reentrancy.trs
libraries/libapparmor/src/tst_aalogparse_oldname
libraries/libapparmor/src/tst_aalogparse_oldname.log
libraries/libapparmor/src/tst_aalogparse_oldname.o
libraries/libapparmor/src/tst_aalogparse_oldname.trs
libraries/libapparmor/src/tst_features
libraries/libapparmor/src/tst_features.log
libraries/libapparmor/src/tst_features.o
@@ -169,7 +193,6 @@ libraries/libapparmor/testsuite/libaalogparse.test/Makefile
libraries/libapparmor/testsuite/libaalogparse.test/Makefile.in
libraries/libapparmor/testsuite/test_multi/out
libraries/libapparmor/testsuite/test_multi_multi-test_multi.o
changehat/mod_apparmor/.libs
utils/*.8
utils/*.8.html
utils/*.5
@@ -180,7 +203,6 @@ utils/apparmor/*.pyc
utils/apparmor/rule/*.pyc
utils/apparmor.egg-info/
utils/build/
!utils/emacs/apparmor-mode.el
utils/htmlcov/
utils/test/common_test.pyc
utils/test/.coverage
@@ -209,6 +231,7 @@ tests/regression/apparmor/chgrp
tests/regression/apparmor/chmod
tests/regression/apparmor/chown
tests/regression/apparmor/clone
tests/regression/apparmor/complain
tests/regression/apparmor/dbus_eavesdrop
tests/regression/apparmor/dbus_message
tests/regression/apparmor/dbus_service
@@ -284,3 +307,15 @@ tests/regression/apparmor/xattrs_profile
tests/regression/apparmor/coredump
**/__pycache__/
*.orig
# Patterns related to spread integration tests
*.img
*.iso
*.lock
*.log
*.qcow2
*.run
.spread-reuse.yaml
.spread-reuse.*.yaml
spread-artifacts/
spread-logs/


@@ -4,25 +4,40 @@ image: ubuntu:latest
# XXX - add a deploy stage to publish man pages, docs, and coverage
# reports
workflow:
rules:
- if: $CI_PIPELINE_SOURCE == 'merge_request_event'
- if: $CI_COMMIT_TAG
- if: $CI_COMMIT_BRANCH
stages:
- build
- test
.ubuntu-before_script:
.ubuntu-common:
before_script:
- export DEBIAN_FRONTEND=noninteractive
# Install build-dependencies by loading the package list from the ubuntu/debian cloud-init profile.
- printf '\e[0K%s:%s:%s[collapsed=true]\r\e[0K%s\n' section_start "$(date +%s)" install_deps "Installing dependencies..."
- apt-get update -qq
- apt-get install --no-install-recommends -y gcc perl liblocale-gettext-perl linux-libc-dev lsb-release make
- apt-get install --yes yq make lsb-release
- |
printf 'include .image-garden.mk\n$(info $(UBUNTU_CLOUD_INIT_USER_DATA_TEMPLATE))\n.PHONY: nothing\nnothing:\n' \
| make -f - nothing \
| yq '.packages | .[]' \
| xargs apt-get install --yes --no-install-recommends
- printf '\e[0K%s:%s:%s\r\e[0K\n' section_end "$(date +%s)" install_deps
after_script:
# Inspect the kernel and lsb-release.
- lsb_release -a
- uname -a
.install-c-build-deps: &install-c-build-deps
- apt-get install --no-install-recommends -y build-essential apache2-dev autoconf autoconf-archive automake bison dejagnu flex libpam-dev libtool pkg-config python3-all-dev python3-setuptools ruby-dev swig zlib1g-dev
build-all:
stage: build
extends:
- .ubuntu-before_script
- .ubuntu-common
script:
# Run the spread prepare section to build everything.
- yq -r '.prepare' <spread.yaml | SPREAD_PATH=. bash -xeu
artifacts:
name: ${CI_COMMIT_REF_NAME}-${CI_COMMIT_SHA}
expire_in: 30 days
@@ -35,39 +50,33 @@ build-all:
- changehat/mod_apparmor/
- changehat/pam_apparmor/
- profiles/
script:
- *install-c-build-deps
- cd libraries/libapparmor && ./autogen.sh && ./configure --with-perl --with-python --prefix=/usr && make && cd ../.. || { cat config.log ; exit 1 ; }
- make -C parser
- make -C binutils
- make -C utils
- make -C changehat/mod_apparmor
- make -C changehat/pam_apparmor
- make -C profiles
test-libapparmor:
stage: test
needs: ["build-all"]
extends:
- .ubuntu-before_script
- .ubuntu-common
script:
- *install-c-build-deps
# This is to touch the built files in the test stage to avoid needless rebuilding
- make -C libraries/libapparmor --touch
- make -C libraries/libapparmor check
test-parser:
stage: test
needs: ["build-all"]
extends:
- .ubuntu-before_script
- .ubuntu-common
script:
- *install-c-build-deps
# This is to touch the built files in the test stage to avoid needless rebuilding
- make -C parser --touch
- make -C parser -j $(nproc) tst_binaries
- make -C parser check
test-binutils:
stage: test
needs: ["build-all"]
extends:
- .ubuntu-before_script
- .ubuntu-common
script:
- make -C binutils check
@@ -75,9 +84,15 @@ test-utils:
stage: test
needs: ["build-all"]
extends:
- .ubuntu-before_script
- .ubuntu-common
script:
# This is to touch the built files in the test stage to avoid needless rebuilding
- make -C utils --touch
# TODO: move those to cloud-init list?
- printf '\e[0K%s:%s:%s[collapsed=true]\r\e[0K%s\n' section_start "$(date +%s)" install_extra_deps "Installing additional dependencies..."
- apt-get install --no-install-recommends -y libc6-dev libjs-jquery libjs-jquery-throttle-debounce libjs-jquery-isonscreen libjs-jquery-tablesorter flake8 python3-coverage python3-notify2 python3-psutil python3-setuptools python3-tk python3-ttkthemes python3-gi
- printf '\e[0K%s:%s:%s\r\e[0K\n' section_end "$(date +%s)" install_extra_deps
# See apparmor/apparmor#221
- make -C parser/tst gen_dbus
@@ -93,31 +108,50 @@ test-mod-apparmor:
stage: test
needs: ["build-all"]
extends:
- .ubuntu-before_script
- .ubuntu-common
script:
# This is to touch the built files in the test stage to avoid needless rebuilding
- make -C changehat/mod_apparmor --touch
- make -C changehat/mod_apparmor check
test-profiles:
stage: test
needs: ["build-all"]
extends:
- .ubuntu-before_script
- .ubuntu-common
script:
# This is to touch the built files in the test stage to avoid needless rebuilding
- make -C profiles --touch
- make -C profiles check-parser
- make -C profiles check-abstractions.d
- make -C profiles check-local
# Build the regression tests (don't run them because that needs kernel access)
test-build-regression:
stage: test
needs: ["build-all"]
extends:
- .ubuntu-common
script:
# Additional dependencies required by regression tests
- printf '\e[0K%s:%s:%s[collapsed=true]\r\e[0K%s\n' section_start "$(date +%s)" install_extra_deps "Installing additional dependencies..."
- apt-get install --no-install-recommends -y attr fuse-overlayfs libdbus-1-dev liburing-dev
- printf '\e[0K%s:%s:%s\r\e[0K\n' section_end "$(date +%s)" install_extra_deps
- make -C tests/regression/apparmor -j $(nproc)
shellcheck:
stage: test
needs: []
extends:
- .ubuntu-before_script
- .ubuntu-common
script:
- apt-get install --no-install-recommends -y python3-minimal file shellcheck xmlstarlet
- shellcheck --version
- './tests/bin/shellcheck-tree --format=checkstyle
| xmlstarlet tr tests/checkstyle2junit.xslt
> shellcheck.xml'
- printf '\e[0K%s:%s:%s[collapsed=true]\r\e[0K%s\n' section_start "$(date +%s)" install_extra_deps "Installing additional dependencies..."
- apt-get install --no-install-recommends -y python3-minimal file shellcheck xmlstarlet
- printf '\e[0K%s:%s:%s\r\e[0K\n' section_end "$(date +%s)" install_extra_deps
- shellcheck --version
- "./tests/bin/shellcheck-tree --format=checkstyle
| xmlstarlet tr tests/checkstyle2junit.xslt
> shellcheck.xml"
artifacts:
when: always
reports:
@@ -139,29 +173,26 @@ variables:
SAST_EXCLUDED_ANALYZERS: "eslint,flawfinder,semgrep,spotbugs"
SAST_BANDIT_EXCLUDED_PATHS: "*/tst/*, */test/*"
.send-to-coverity: &send-to-coverity
- curl https://scan.coverity.com/builds?project=$COVERITY_SCAN_PROJECT_NAME
--form token=$COVERITY_SCAN_TOKEN --form email=$GITLAB_USER_EMAIL
--form file=@$(ls apparmor-*-cov-int.tar.gz) --form version="$(git describe --tags)"
--form description="$(git describe --tags) / $CI_COMMIT_TITLE / $CI_COMMIT_REF_NAME:$CI_PIPELINE_ID"
coverity:
stage: .post
extends:
- .ubuntu-before_script
only:
refs:
- master
- .ubuntu-common
script:
- apt-get install --no-install-recommends -y curl git texlive-latex-recommended
- *install-c-build-deps
- curl -o /tmp/cov-analysis-linux64.tgz https://scan.coverity.com/download/linux64
--form project=$COVERITY_SCAN_PROJECT_NAME --form token=$COVERITY_SCAN_TOKEN
- tar xfz /tmp/cov-analysis-linux64.tgz
- COV_VERSION=$(ls -dt cov-analysis-linux64-* | head -1)
- PATH=$PATH:$(pwd)/$COV_VERSION/bin
- make coverity
- *send-to-coverity
- printf '\e[0K%s:%s:%s[collapsed=true]\r\e[0K%s\n' section_start "$(date +%s)" install_extra_deps "Installing additional dependencies..."
- apt-get install --no-install-recommends -y curl git texlive-latex-recommended
- printf '\e[0K%s:%s:%s\r\e[0K\n' section_end "$(date +%s)" install_extra_deps
- curl -o /tmp/cov-analysis-linux64.tgz https://scan.coverity.com/download/linux64
--form project=$COVERITY_SCAN_PROJECT_NAME --form token=$COVERITY_SCAN_TOKEN
- tar xfz /tmp/cov-analysis-linux64.tgz
- COV_VERSION=$(ls -dt cov-analysis-linux64-* | head -1)
- PATH=$PATH:$(pwd)/$COV_VERSION/bin
- make coverity
- curl https://scan.coverity.com/builds?project=$COVERITY_SCAN_PROJECT_NAME
--form token=$COVERITY_SCAN_TOKEN --form email=$GITLAB_USER_EMAIL
--form file=@$(ls apparmor-*-cov-int.tar.gz) --form version="$(git describe --tags)"
--form description="$(git describe --tags) / $CI_COMMIT_TITLE / $CI_COMMIT_REF_NAME:$CI_PIPELINE_ID"
artifacts:
paths:
- "apparmor-*.tar.gz"
rules:
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH && $CI_PROJECT_PATH == "apparmor/apparmor"

.image-garden.mk Normal file

@@ -0,0 +1,110 @@
# This file is read by image-garden when spread is allocating test machines.
# All the package installation happens through cloud-init profiles defined
# below.
# This is the cloud-init user-data profile for all Debian systems. Note that it
# is an extension of the default profile necessary for operation of
# image-garden.
define DEBIAN_CLOUD_INIT_USER_DATA_TEMPLATE
$(CLOUD_INIT_USER_DATA_TEMPLATE)
packages:
- apache2-dev
- attr
- autoconf
- autoconf-archive
- automake
- bison
- build-essential
- dejagnu
- dosfstools
- flake8
- flex
- fuse-overlayfs
- gdb
- gettext
- libdbus-1-dev
- libpam0g-dev
- libtool
- liburing-dev
- pkg-config
- python3-all-dev
- python3-gi
- python3-notify2
- python3-psutil
- python3-setuptools
- python3-tk
- python3-ttkthemes
- swig
- toybox
endef
# Ubuntu shares cloud-init profile with Debian.
UBUNTU_CLOUD_INIT_USER_DATA_TEMPLATE=$(DEBIAN_CLOUD_INIT_USER_DATA_TEMPLATE)
# This is the cloud-init user-data profile for openSUSE Tumbleweed.
define OPENSUSE_tumbleweed_CLOUD_INIT_USER_DATA_TEMPLATE
$(CLOUD_INIT_USER_DATA_TEMPLATE)
- sed -i -e 's/security=selinux/security=apparmor/g' /etc/default/grub
- update-bootloader
packages:
- apache2-devel
- attr
- autoconf
- autoconf-archive
- automake
- bison
- dbus-1-devel
- dejagnu
- dosfstools
- flex
- fuse-overlayfs
- gcc
- gcc-c++
- gdb
- gettext
- gobject-introspection
- libtool
- liburing2-devel
- make
- pam-devel
- pkg-config
- python3-devel
- python3-flake8
- python3-notify2
- python3-psutil
- python3-setuptools
- python3-setuptools
- python3-tk
- python311
- python311-devel
- swig
endef
define FEDORA_CLOUD_INIT_USER_DATA_TEMPLATE
$(CLOUD_INIT_USER_DATA_TEMPLATE)
packages:
- attr
- autoconf
- autoconf-archive
- automake
- bison
- dbus-devel
- dejagnu
- dosfstools
- flex
- gdb
- gettext
- httpd-devel
- libstdc++-static
- libtool
- liburing-devel
- pam-devel
- perl
- pkg-config
- python3-devel
- python3-flake8
- python3-gobject-base
- python3-notify2
- python3-tkinter
- swig
endef


@@ -59,7 +59,7 @@ coverity: snapshot
mv $(COVERITY_DIR)/build-log.txt $(COVERITY_DIR)/build-log-python-$(subst /,.,$(dir)).txt ;)
cov-build --dir $(COVERITY_DIR) -- sh -c \
"$(foreach dir, $(filter-out utils profiles tests, $(DIRS)), \
$(MAKE) -C $(SNAPSHOT_NAME)/$(dir);) "
$(MAKE) -j $$(nproc) -C $(SNAPSHOT_NAME)/$(dir);) "
tar -cvzf $(SNAPSHOT_NAME)-$(COVERITY_DIR).tar.gz $(COVERITY_DIR)
.PHONY: export_dir


@@ -111,13 +111,21 @@ $ export PYTHON_VERSION=3
$ export PYTHON_VERSIONS=python3
```
Note that, in general, the build steps can be run in parallel, while the test
steps do not gain much speedup from being run in parallel. This is because the
test steps spawn a handful of long-lived test runner processes that mostly
run their tests sequentially and do not use `make`'s jobserver. Moreover,
process spawning overhead constitutes a significant part of test runtime, so
reworking the test harnesses to add parallelism (which would be a major undertaking
for the harnesses that do not have it already) would not produce much of a speedup.
### libapparmor:
```
$ cd ./libraries/libapparmor
$ sh ./autogen.sh
$ sh ./configure --prefix=/usr --with-perl --with-python # see below
$ make
$ make -j $(nproc)
$ make check
$ make install
```
@@ -130,7 +138,7 @@ generate Ruby bindings to libapparmor.]
```
$ cd binutils
$ make
$ make -j $(nproc)
$ make check
$ make install
```
@@ -139,7 +147,8 @@ $ make install
```
$ cd parser
$ make # depends on libapparmor having been built first
$ make -j $(nproc) # depends on libapparmor having been built first
$ make -j $(nproc) tst_binaries # a build step of make check that can be parallelized
$ make check
$ make install
```
@@ -149,7 +158,7 @@ $ make install
```
$ cd utils
$ make
$ make -j $(nproc)
$ make check PYFLAKES=/usr/bin/pyflakes3
$ make install
```
@@ -158,7 +167,7 @@ $ make install
```
$ cd changehat/mod_apparmor
$ make # depends on libapparmor having been built first
$ make -j $(nproc) # depends on libapparmor having been built first
$ make install
```
@@ -167,7 +176,7 @@ $ make install
```
$ cd changehat/pam_apparmor
$ make # depends on libapparmor having been built first
$ make -j $(nproc) # depends on libapparmor having been built first
$ make install
```
@@ -197,6 +206,33 @@ usage and how to update and add tests. Below is a quick overview of their
location and how to run them.
Using spread with local virtual machines
----------------------------------------
It may be convenient to use the spread tool to provision and run the test suite
in an ephemeral virtual machine. This allows testing in isolation from the
host, as well as testing across different commonly used distributions and their
real kernels.
```sh
sudo apt install git golang whois ovmf genisoimage qemu-utils qemu-system
go install github.com/snapcore/spread/cmd/spread@latest
git clone https://gitlab.com/zygoon/image-garden
make -C image-garden
sudo make -C image-garden install
image-garden make ubuntu-cloud-24.10.x86_64.run
cd $APPARMOR_PATH
git clean -xdf
~/go/bin/spread -artifacts ./spread-artifacts -v ubuntu-cloud-24.10
# or ~/go/bin/spread -v garden:ubuntu-cloud-24.04:tests/regression/apparmor:at_secure
```
Running the `run_spread.sh` script with `spread` on `PATH` will run all the
tests across several supported systems (Debian, Ubuntu and openSUSE).
If you include a `bzImage` file in the root of the repository then that kernel
will be used in the integration test. Please look at `spread.yaml` for details.
Regression tests
----------------
For details on structure and adding tests, see
@@ -207,7 +243,7 @@ To run:
### Regression tests - using apparmor userspace installed on host
```
$ cd tests/regression/apparmor (requires root)
$ make USE_SYSTEM=1
$ make -j $(nproc) USE_SYSTEM=1
$ sudo make tests USE_SYSTEM=1
$ sudo bash open.sh -r # runs and saves the last testcase from open.sh
```
@@ -220,7 +256,7 @@ $ sudo bash open.sh -r # runs and saves the last testcase from open.sh
```
$ cd tests/regression/apparmor (requires root)
$ make
$ make -j $(nproc)
$ sudo make tests
$ sudo bash open.sh -r # runs and saves the last testcase from open.sh
```
@@ -354,6 +390,7 @@ The aa-notify tool's Python dependencies can be satisfied by installing the
following packages (Debian package names, other distros may vary):
* python3-notify2
* python3-psutil
* python3-sqlite (part of the python3.NN-stdlib package)
* python3-tk
* python3-ttkthemes
* python3-gi


@@ -21,7 +21,7 @@ DESTDIR=/
BINDIR=${DESTDIR}/usr/bin
SBINDIR=${DESTDIR}/usr/sbin
LOCALEDIR=/usr/share/locale
MANPAGES=aa-enabled.1 aa-exec.1 aa-features-abi.1 aa-status.8
MANPAGES=aa-enabled.1 aa-exec.1 aa-features-abi.1 aa-load.8 aa-status.8
WARNINGS = -Wall
CPP_WARNINGS =

binutils/aa-load.pod Normal file

@@ -0,0 +1,77 @@
# This publication is intellectual property of Canonical Ltd. Its contents
# can be duplicated, either in part or in whole, provided that a copyright
# label is visibly located on each copy.
#
# All information found in this book has been compiled with utmost
# attention to detail. However, this does not guarantee complete accuracy.
# Neither Canonical Ltd, the authors, nor the translators shall be held
# liable for possible errors or the consequences thereof.
#
# Many of the software and hardware descriptions cited in this book
# are registered trademarks. All trade names are subject to copyright
# restrictions and may be registered trade marks. Canonical Ltd
# essentially adheres to the manufacturer's spelling.
#
# Names of products and trademarks appearing in this book (with or without
# specific notation) are likewise subject to trademark and trade protection
# laws and may thus fall under copyright restrictions.
#
=pod
=head1 NAME
aa-load - load precompiled AppArmor policy from cache location(s)
=head1 SYNOPSIS
B<aa-load> [options] (cache file|cache dir|cache base dir)+
=head1 DESCRIPTION
B<aa-load> loads precompiled AppArmor policy from the specified locations.
=head1 OPTIONS
B<aa-load> accepts the following arguments:
=over 4
=item -f, --force
Force B<aa-load> to load a policy even if its abi does not match the kernel abi.
=item -d, --debug
Display debug messages.
=item -v, --verbose
Display progress and error messages.
=item -n, --dry-run
Do not actually load the specified policy/policies into the kernel.
=item -h, --help
Display a brief usage guide.
=back
=head1 EXIT STATUS
Upon exiting, B<aa-load> returns 0 upon success and 1 upon an error loading
the precompiled policy.
=head1 BUGS
If you find any bugs, please report them at
L<https://gitlab.com/apparmor/apparmor/-/issues>.
=head1 SEE ALSO
apparmor(7), apparmor.d(5), apparmor_parser(8), and L<https://wiki.apparmor.net>.
=cut


@@ -309,9 +309,8 @@ static int load_arg(char *arg)
static void print_usage(const char *command)
{
printf("Usage: %s [OPTIONS] (cache file|cache dir|cache base dir)]*\n"
"Load Precompiled AppArmor policy from a cache location or \n"
"locations.\n\n"
printf("Usage: %s [OPTIONS] (cache file|cache dir|cache base dir)+\n"
"Load precompiled AppArmor policy from cache location(s)\n\n"
"Options:\n"
" -f, --force load policy even if abi does not match the kernel\n"
" -d, --debug display debug messages\n"


@@ -20,6 +20,8 @@
#include <ctype.h>
#include <dirent.h>
#include <regex.h>
#include <libintl.h>
#define _(s) gettext(s)
#include <sys/apparmor.h>
#include <sys/apparmor_private.h>
@@ -131,7 +133,7 @@ const char *process_statuses[] = {"enforce", "complain", "prompt", "kill", "unco
#define eprintf(...) \
do { \
if (!quiet) \
fprintf(stderr, __VA_ARGS__); \
fprintf(stderr, __VA_ARGS__); \
} while (0)
#define dprintf(...) \
@@ -156,14 +158,14 @@ static int open_profiles(FILE **fp)
ret = stat("/sys/module/apparmor", &st);
if (ret != 0) {
eprintf("apparmor not present.\n");
eprintf(_("apparmor not present.\n"));
return AA_EXIT_DISABLED;
}
dprintf("apparmor module is loaded.\n");
dprintf(_("apparmor module is loaded.\n"));
ret = aa_find_mountpoint(&apparmorfs);
if (ret == -1) {
eprintf("apparmor filesystem is not mounted.\n");
eprintf(_("apparmor filesystem is not mounted.\n"));
return AA_EXIT_NO_CONTROL;
}
@@ -176,9 +178,9 @@ static int open_profiles(FILE **fp)
*fp = fopen(apparmor_profiles, "r");
if (*fp == NULL) {
if (errno == EACCES) {
eprintf("You do not have enough privilege to read the profile set.\n");
eprintf(_("You do not have enough privilege to read the profile set.\n"));
} else {
eprintf("Could not open %s: %s", apparmor_profiles, strerror(errno));
eprintf(_("Could not open %s: %s"), apparmor_profiles, strerror(errno));
}
return AA_EXIT_NO_PERM;
}
@@ -351,7 +353,7 @@ static int get_processes(struct profile *profiles,
continue;
} else if (rc == -1 ||
asprintf(&exe, "/proc/%s/exe", entry->d_name) == -1) {
eprintf("ERROR: Failed to allocate memory\n");
eprintf(_("ERROR: Failed to allocate memory\n"));
ret = AA_EXIT_INTERNAL_ERROR;
goto exit;
} else if (mode) {
@@ -374,7 +376,7 @@ static int get_processes(struct profile *profiles,
// ensure enough space for NUL terminator
real_exe = calloc(PATH_MAX + 1, sizeof(char));
if (real_exe == NULL) {
eprintf("ERROR: Failed to allocate memory\n");
eprintf(_("ERROR: Failed to allocate memory\n"));
ret = AA_EXIT_INTERNAL_ERROR;
goto exit;
}
@@ -488,7 +490,7 @@ static int filter_processes(struct process *processes,
*
* Return: 0 on success, else shell error code
*/
static int simple_filtered_count(FILE *outf, filters_t *filters,
static int simple_filtered_count(FILE *outf, filters_t *filters, bool json,
struct profile *profiles, size_t nprofiles)
{
struct profile *filtered = NULL;
@@ -497,7 +499,13 @@ static int simple_filtered_count(FILE *outf, filters_t *filters,
ret = filter_profiles(profiles, nprofiles, filters,
&filtered, &nfiltered);
fprintf(outf, "%zd\n", nfiltered);
if (!json) {
fprintf(outf, "%zd\n", nfiltered);
} else {
fprintf(outf, "\"profile_count\": %zd", nfiltered);
}
free_profiles(filtered, nfiltered);
return ret;
@@ -512,7 +520,7 @@ static int simple_filtered_count(FILE *outf, filters_t *filters,
*
* Return: 0 on success, else shell error code
*/
static int simple_filtered_process_count(FILE *outf, filters_t *filters,
static int simple_filtered_process_count(FILE *outf, filters_t *filters, bool json,
struct process *processes, size_t nprocesses) {
struct process *filtered = NULL;
size_t nfiltered;
@@ -520,7 +528,12 @@ static int simple_filtered_process_count(FILE *outf, filters_t *filters,
ret = filter_processes(processes, nprocesses, filters, &filtered,
&nfiltered);
fprintf(outf, "%zd\n", nfiltered);
if (!json) {
fprintf(outf, "%zd\n", nfiltered);
} else {
fprintf(outf, "\"process_count\": %zd", nfiltered);
}
free_processes(filtered, nfiltered);
return ret;
@@ -539,7 +552,12 @@ static int compare_processes_by_executable(const void *a, const void *b) {
static void json_header(FILE *outf)
{
fprintf(outf, "{\"version\": \"%s\", ", aa_status_json_version);
fprintf(outf, "{\"version\": \"%s\"", aa_status_json_version);
}
static void json_seperator(FILE *outf)
{
fprintf(outf, ", ");
}
static void json_footer(FILE *outf)
@@ -582,7 +600,7 @@ static int detailed_profiles(FILE *outf, filters_t *filters, bool json,
*/
subfilters.mode = &mode_filter;
if (regcomp(&mode_filter, profile_statuses[i], REG_NOSUB) != 0) {
eprintf("Error: failed to compile sub filter '%s'\n",
eprintf(_("Error: failed to compile sub filter '%s'\n"),
profile_statuses[i]);
return AA_EXIT_INTERNAL_ERROR;
}
@@ -607,7 +625,7 @@ static int detailed_profiles(FILE *outf, filters_t *filters, bool json,
free_profiles(filtered, nfiltered);
}
if (json)
fprintf(outf, "}, ");
fprintf(outf, "}");
return AA_EXIT_ENABLED;
}
@@ -648,7 +666,7 @@ static int detailed_processes(FILE *outf, filters_t *filters, bool json,
*/
subfilters.mode = &mode_filter;
if (regcomp(&mode_filter, process_statuses[i], REG_NOSUB) != 0) {
eprintf("Error: failed to compile sub filter '%s'\n",
eprintf(_("Error: failed to compile sub filter '%s'\n"),
profile_statuses[i]);
return AA_EXIT_INTERNAL_ERROR;
}
@@ -700,7 +718,7 @@ static int detailed_processes(FILE *outf, filters_t *filters, bool json,
fprintf(outf, "]");
}
fprintf(outf, "}\n");
fprintf(outf, "}");
}
exit:
@@ -710,7 +728,7 @@ exit:
static int print_legacy(const char *command)
{
printf("Usage: %s [OPTIONS]\n"
printf(_("Usage: %s [OPTIONS]\n"
"Legacy options and their equivalent command\n"
" --profiled --count --profiles\n"
" --enforced --count --profiles --mode=enforced\n"
@@ -718,8 +736,8 @@ static int print_legacy(const char *command)
" --kill --count --profiles --mode=kill\n"
" --prompt --count --profiles --mode=prompt\n"
" --special-unconfined --count --profiles --mode=unconfined\n"
" --process-mixed --count --ps --mode=mixed\n",
command);
" --process-mixed --count --ps --mode=mixed\n"),
command);
exit(0);
return 0;
@@ -729,7 +747,7 @@ static int usage_filters(void)
{
long unsigned int i;
printf("Usage of filters\n"
printf(_("Usage of filters\n"
"Filters are used to reduce the output of information to only\n"
"those entries that will match the filter. Filters use posix\n"
"regular expression syntax. The possible values for exes that\n"
@@ -739,7 +757,7 @@ static int usage_filters(void)
" --filter.profiles: regular expression to match displayed profile names\n"
" --filter.pid: regular expression to match displayed processes pids\n"
" --filter.exe: regular expression to match executable\n"
);
));
for (i = 0; i < ARRAY_SIZE(process_statuses); i++) {
printf("%s%s", i ? ", " : "", process_statuses[i]);
}
@@ -757,7 +775,7 @@ static int print_usage(const char *command, bool error)
status = EXIT_FAILURE;
}
printf("Usage: %s [OPTIONS]\n"
printf(_("Usage: %s [OPTIONS]\n"
"Displays various information about the currently loaded AppArmor policy.\n"
"Default if no options given\n"
" --show=all\n\n"
@@ -774,8 +792,8 @@ static int print_usage(const char *command, bool error)
" --verbose (default) displays data points about loaded policy set\n"
" --quiet don't output error messages\n"
" -h[(legacy|filters)] this message, or info on the specified option\n"
" --help[=(legacy|filters)] this message, or info on the specified option\n",
command);
" --help[=(legacy|filters)] this message, or info on the specified option\n"),
command);
exit(status);
@@ -851,7 +869,7 @@ static int parse_args(int argc, char **argv)
} else if (strcmp(optarg, "filters") == 0) {
usage_filters();
} else {
eprintf("Error: Invalid --help option '%s'.\n", optarg);
eprintf(_("Error: Invalid --help option '%s'.\n"), optarg);
print_usage(argv[0], true);
break;
}
@@ -919,7 +937,7 @@ static int parse_args(int argc, char **argv)
} else if (strcmp(optarg, "processes") == 0) {
opt_show = SHOW_PROCESSES;
} else {
eprintf("Error: Invalid --show option '%s'.\n", optarg);
eprintf(_("Error: Invalid --show option '%s'.\n"), optarg);
print_usage(argv[0], true);
break;
}
@@ -941,7 +959,7 @@ static int parse_args(int argc, char **argv)
break;
default:
eprintf("Error: Invalid command.\n");
eprintf(_("Error: Invalid command.\n"));
print_usage(argv[0], true);
break;
}
@@ -966,7 +984,7 @@ int main(int argc, char **argv)
if (argc > 1) {
int pos = parse_args(argc, argv);
if (pos < argc) {
eprintf("Error: Unknown options.\n");
eprintf(_("Error: Unknown options.\n"));
print_usage(progname, true);
}
} else {
@@ -978,24 +996,24 @@ int main(int argc, char **argv)
init_filters(&filters, &filter_set);
if (regcomp(filters.mode, opt_mode, REG_NOSUB) != 0) {
eprintf("Error: failed to compile mode filter '%s'\n",
eprintf(_("Error: failed to compile mode filter '%s'\n"),
opt_mode);
return AA_EXIT_INTERNAL_ERROR;
}
if (regcomp(filters.profile, opt_profiles, REG_NOSUB) != 0) {
eprintf("Error: failed to compile profiles filter '%s'\n",
eprintf(_("Error: failed to compile profiles filter '%s'\n"),
opt_profiles);
ret = AA_EXIT_INTERNAL_ERROR;
goto out;
}
if (regcomp(filters.pid, opt_pid, REG_NOSUB) != 0) {
eprintf("Error: failed to compile ps filter '%s'\n",
eprintf(_("Error: failed to compile ps filter '%s'\n"),
opt_pid);
ret = AA_EXIT_INTERNAL_ERROR;
goto out;
}
if (regcomp(filters.exe, opt_exe, REG_NOSUB) != 0) {
eprintf("Error: failed to compile exe filter '%s'\n",
eprintf(_("Error: failed to compile exe filter '%s'\n"),
opt_exe);
ret = AA_EXIT_INTERNAL_ERROR;
goto out;
@@ -1010,7 +1028,7 @@ int main(int argc, char **argv)
outf_save = outf;
outf = open_memstream(&buffer, &buffer_size);
if (!outf) {
eprintf("Failed to open memstream: %m\n");
eprintf(_("Failed to open memstream: %m\n"));
return AA_EXIT_INTERNAL_ERROR;
}
}
@@ -1021,15 +1039,17 @@ int main(int argc, char **argv)
*/
ret = get_profiles(fp, &profiles, &nprofiles);
if (ret != 0) {
eprintf("Failed to get profiles: %d....\n", ret);
eprintf(_("Failed to get profiles: %d....\n"), ret);
goto out;
}
if (opt_json)
json_header(outf);
if (opt_show & SHOW_PROFILES) {
if (opt_json)
json_seperator(outf);
if (opt_count) {
ret = simple_filtered_count(outf, &filters,
ret = simple_filtered_count(outf, &filters, opt_json,
profiles, nprofiles);
} else {
ret = detailed_profiles(outf, &filters, opt_json,
@@ -1040,14 +1060,17 @@ int main(int argc, char **argv)
}
if (opt_show & SHOW_PROCESSES) {
if (opt_json)
json_seperator(outf);
struct process *processes = NULL;
size_t nprocesses = 0;
ret = get_processes(profiles, nprofiles, &processes, &nprocesses);
if (ret != 0) {
eprintf("Failed to get processes: %d....\n", ret);
eprintf(_("Failed to get processes: %d....\n"), ret);
} else if (opt_count) {
ret = simple_filtered_process_count(outf, &filters,
ret = simple_filtered_process_count(outf, &filters, opt_json,
processes, nprocesses);
} else {
ret = detailed_processes(outf, &filters, opt_json,
@@ -1071,14 +1094,14 @@ int main(int argc, char **argv)
outf = outf_save;
json = cJSON_Parse(buffer);
if (!json) {
eprintf("Failed to parse json output");
eprintf(_("Failed to parse json output"));
ret = AA_EXIT_INTERNAL_ERROR;
goto out;
}
pretty = cJSON_Print(json);
if (!pretty) {
eprintf("Failed to print pretty json");
eprintf(_("Failed to print pretty json"));
ret = AA_EXIT_INTERNAL_ERROR;
goto out;
}


@@ -1,14 +1,14 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) YEAR Canonical Ltd
# This file is distributed under the same license as the PACKAGE package.
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
# Translations for aa_enabled
# Copyright (C) 2024 Canonical Ltd
# This file is distributed under the same license as the AppArmor package.
# John Johansen <john.johansen@canonical.com>, 2020.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: PACKAGE VERSION\n"
"Report-Msgid-Bugs-To: apparmor@lists.ubuntu.com\n"
"POT-Creation-Date: 2020-10-14 03:52-0700\n"
"POT-Creation-Date: 2024-08-31 15:59-0700\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"


@@ -1,14 +1,14 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) YEAR Canonical Ltd
# This file is distributed under the same license as the PACKAGE package.
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
# Translations for aa_exec
# Copyright (C) 2024 Canonical Ltd
# This file is distributed under the same license as the AppArmor package.
# John Johansen <john.johansen@canonical.com>, 2020.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: PACKAGE VERSION\n"
"Report-Msgid-Bugs-To: apparmor@lists.ubuntu.com\n"
"POT-Creation-Date: 2020-10-14 03:52-0700\n"
"POT-Creation-Date: 2024-08-31 15:59-0700\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"


@@ -1,14 +1,14 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) YEAR Canonical Ltd
# This file is distributed under the same license as the PACKAGE package.
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
# Translations for aa_features_abi
# Copyright (C) 2024 Canonical Ltd
# This file is distributed under the same license as the AppArmor package.
# John Johansen <john.johansen@canonical.com>, 2011.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: PACKAGE VERSION\n"
"Report-Msgid-Bugs-To: apparmor@lists.ubuntu.com\n"
"POT-Creation-Date: 2020-10-14 03:52-0700\n"
"POT-Creation-Date: 2024-08-31 15:59-0700\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"

binutils/po/aa_load.pot Normal file

@@ -0,0 +1,34 @@
# Translations for aa_load
# Copyright (C) 2024 Canonical Ltd
# This file is distributed under the same license as the AppArmor package.
# John Johansen <john.johansen@canonical.com>, 2020.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: PACKAGE VERSION\n"
"Report-Msgid-Bugs-To: apparmor@lists.ubuntu.com\n"
"POT-Creation-Date: 2024-08-31 15:59-0700\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"Language: \n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=CHARSET\n"
"Content-Transfer-Encoding: 8bit\n"
#: ../aa_load.c:40
msgid "aa-load: WARN: "
msgstr ""
#: ../aa_load.c:41
msgid "aa-load: ERROR: "
msgstr ""
#: ../aa_load.c:51
msgid "\n"
msgstr ""
#: ../aa_load.c:52
msgid "aa-load: DEBUG: "
msgstr ""

binutils/po/aa_status.pot Normal file

@@ -0,0 +1,165 @@
# Translations for aa_status
# Copyright (C) 2024 Canonical Ltd
# This file is distributed under the same license as the AppArmor package.
# John Johansen <john.johansen@canonical.com>, 2024.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: PACKAGE VERSION\n"
"Report-Msgid-Bugs-To: apparmor@lists.ubuntu.com\n"
"POT-Creation-Date: 2024-08-31 17:49-0700\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"Language: \n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=CHARSET\n"
"Content-Transfer-Encoding: 8bit\n"
#: ../aa_status.c:161
msgid "apparmor not present.\n"
msgstr ""
#: ../aa_status.c:164
msgid "apparmor module is loaded.\n"
msgstr ""
#: ../aa_status.c:168
msgid "apparmor filesystem is not mounted.\n"
msgstr ""
#: ../aa_status.c:181
msgid "You do not have enough privilege to read the profile set.\n"
msgstr ""
#: ../aa_status.c:183
#, c-format
msgid "Could not open %s: %s"
msgstr ""
#: ../aa_status.c:356 ../aa_status.c:379
msgid "ERROR: Failed to allocate memory\n"
msgstr ""
#: ../aa_status.c:587 ../aa_status.c:653
#, c-format
msgid "Error: failed to compile sub filter '%s'\n"
msgstr ""
#: ../aa_status.c:715
#, c-format
msgid ""
"Usage: %s [OPTIONS]\n"
"Legacy options and their equivalent command\n"
" --profiled --count --profiles\n"
" --enforced --count --profiles --mode=enforced\n"
" --complaining --count --profiles --mode=complain\n"
" --kill --count --profiles --mode=kill\n"
" --prompt --count --profiles --mode=prompt\n"
" --special-unconfined --count --profiles --mode=unconfined\n"
" --process-mixed --count --ps --mode=mixed\n"
msgstr ""
#: ../aa_status.c:734
#, c-format
msgid ""
"Usage of filters\n"
"Filters are used to reduce the output of information to only\n"
"those entries that will match the filter. Filters use posix\n"
"regular expression syntax. The possible values for exes that\n"
"support filters are below\n"
"\n"
" --filter.mode: regular expression to match the profile "
"mode modes: enforce, complain, kill, unconfined, mixed\n"
" --filter.profiles: regular expression to match displayed profile names\n"
" --filter.pid: regular expression to match displayed processes pids\n"
" --filter.exe: regular expression to match executable\n"
msgstr ""
#: ../aa_status.c:762
#, c-format
msgid ""
"Usage: %s [OPTIONS]\n"
"Displays various information about the currently loaded AppArmor policy.\n"
"Default if no options given\n"
" --show=all\n"
"\n"
"OPTIONS (one only):\n"
" --enabled returns error code if AppArmor not enabled\n"
" --show=X What information to show. {profiles,processes,all}\n"
" --count print the number of entries. Implies --quiet\n"
" --filter.mode=filter see filters\n"
" --filter.profiles=filter see filters\n"
" --filter.pid=filter see filters\n"
" --filter.exe=filter see filters\n"
" --json displays multiple data points in machine-readable JSON "
"format\n"
" --pretty-json same data as --json, formatted for human consumption as "
"well\n"
" --verbose (default) displays data points about loaded policy set\n"
" --quiet don't output error messages\n"
" -h[(legacy|filters)] this message, or info on the specified option\n"
" --help[=(legacy|filters)] this message, or info on the specified option\n"
msgstr ""
#: ../aa_status.c:856
#, c-format
msgid "Error: Invalid --help option '%s'.\n"
msgstr ""
#: ../aa_status.c:924
#, c-format
msgid "Error: Invalid --show option '%s'.\n"
msgstr ""
#: ../aa_status.c:946
msgid "Error: Invalid command.\n"
msgstr ""
#: ../aa_status.c:971
msgid "Error: Unknown options.\n"
msgstr ""
#: ../aa_status.c:983
#, c-format
msgid "Error: failed to compile mode filter '%s'\n"
msgstr ""
#: ../aa_status.c:988
#, c-format
msgid "Error: failed to compile profiles filter '%s'\n"
msgstr ""
#: ../aa_status.c:994
#, c-format
msgid "Error: failed to compile ps filter '%s'\n"
msgstr ""
#: ../aa_status.c:1000
#, c-format
msgid "Error: failed to compile exe filter '%s'\n"
msgstr ""
#: ../aa_status.c:1015
#, c-format
msgid "Failed to open memstream: %m\n"
msgstr ""
#: ../aa_status.c:1026
#, c-format
msgid "Failed to get profiles: %d....\n"
msgstr ""
#: ../aa_status.c:1050
#, c-format
msgid "Failed to get processes: %d....\n"
msgstr ""
#: ../aa_status.c:1076
msgid "Failed to parse json output"
msgstr ""
#: ../aa_status.c:1083
msgid "Failed to print pretty json"
msgstr ""


@@ -35,10 +35,7 @@ VERSION=$(shell cat $(COMMONDIR)/Version)
pathsearch = $(firstword $(wildcard $(addsuffix /$(1),$(subst :, ,$(PATH)))))
map = $(foreach a,$(2),$(call $(1),$(a)))
AWK:=$(shell which awk)
ifndef AWK
$(error awk utility required for build but not available)
endif
AWK?=$(or $(shell command -v awk),$(error awk utility required for build but not available))
define nl


@@ -92,6 +92,8 @@ if test "$ac_cv_prog_cc_c99" = "no"; then
AC_MSG_ERROR([C99 mode is required to build libapparmor])
fi
AC_PROG_CXX
m4_ifndef([AX_CHECK_COMPILE_FLAG], [AC_MSG_ERROR(['autoconf-archive' missing])])
EXTRA_CFLAGS="-Wall $EXTRA_WARNINGS -fPIC"
AX_CHECK_COMPILE_FLAG([-flto-partition=none], , , [-Werror])
@@ -99,6 +101,7 @@ AS_VAR_IF([ax_cv_check_cflags__Werror__flto_partition_none], [yes],
[EXTRA_CFLAGS="$EXTRA_CFLAGS -flto-partition=none"]
,)
AC_SUBST([AM_CFLAGS], ["$EXTRA_CFLAGS"])
AC_SUBST([AM_CXXFLAGS], ["$EXTRA_CFLAGS"])
AC_OUTPUT(
Makefile


@@ -22,15 +22,15 @@
=head1 NAME
aa_change_hat - change to or from a "hat" within a AppArmor profile
aa_change_hat - change to or from a "hat" within an AppArmor profile
=head1 SYNOPSIS
B<#include E<lt>sys/apparmor.hE<gt>>
B<int aa_change_hat (char *subprofile, unsigned long magic_token);>
B<int aa_change_hat (const char *subprofile, unsigned long magic_token);>
B<int aa_change_hatv (char *subprofiles[], unsigned long magic_token);>
B<int aa_change_hatv (const char *subprofiles[], unsigned long magic_token);>
B<int aa_change_hat_vargs (unsigned long magic_token, ...);>
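
For reference, a minimal sketch of the call pattern these prototypes document;
the hat name "handler" and the token value below are illustrative and not part
of this change:

```c
/* Sketch of the aa_change_hat() call pattern; names and token are illustrative. */
#include <stdio.h>
#include <sys/apparmor.h>

int handle_request(void)
{
	/* In real code the token must be an unguessable random value. */
	unsigned long token = 0x4a9d3c27UL;

	if (aa_change_hat("handler", token) == -1) {
		perror("aa_change_hat");
		return -1;
	}

	/* ... do work confined by the ^handler hat ... */

	/* Passing NULL with the same token returns to the parent profile. */
	if (aa_change_hat(NULL, token) == -1) {
		perror("aa_change_hat");
		return -1;
	}
	return 0;
}
```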


@@ -22,7 +22,7 @@
=head1 NAME
aa_change_profile, aa_change_onexec - change a tasks profile
aa_change_profile, aa_change_onexec - change a task's profile
=head1 SYNOPSIS
@@ -58,8 +58,8 @@ The aa_change_onexec() function is like the aa_change_profile() function
except it specifies that the profile transition should take place on the
next exec instead of immediately. The delayed profile change takes
precedence over any exec transition rules within the confining profile.
Delaying the profile boundary has a couple of advantages, it removes the
need for stub transition profiles and the exec boundary is a natural security
Delaying the profile boundary has a couple of advantages: it removes the
need for stub transition profiles, and the exec boundary is a natural security
layer where potentially sensitive memory is unmapped.
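
A small sketch of the delayed transition described above; the profile name
"sandbox-worker" and the exec'd binary are hypothetical:

```c
/* Sketch: request a profile transition on the next exec. */
#include <stdio.h>
#include <unistd.h>
#include <sys/apparmor.h>

int spawn_confined(void)
{
	if (aa_change_onexec("sandbox-worker") == -1) {
		perror("aa_change_onexec");
		return -1;
	}
	/* The transition happens here, at the exec boundary. */
	execl("/usr/bin/worker", "worker", (char *)NULL);
	perror("execl");	/* only reached if exec failed */
	return -1;
}
```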
=head1 RETURN VALUE


@@ -54,7 +54,7 @@ B<typedef struct aa_features aa_features;>
B<int aa_features_new(aa_features **features, int dirfd, const char *path);>
B<int aa_features_new_from_file(aa_features **features, int fd);>
B<int aa_features_new_from_file(aa_features **features, int file);>
B<int aa_features_new_from_string(aa_features **features, const char *string, size_t size);>


@@ -58,6 +58,9 @@ appropriately.
=head1 ERRORS
# podchecker warns about duplicate link targets for EACCES, EBUSY, ENOENT,
# and ENOMEM, but this is a warning that is safe to ignore.
B<aa_is_enabled>
=over 4


@@ -41,7 +41,7 @@ result is an intersection of all profiles which are stacked. Stacking profiles
together is desirable when wanting to ensure that confinement will never become
more permissive. When changing between two profiles, as performed with
aa_change_profile(2), there is always the possibility that the new profile is
more permissive than the old profile but that possibility is eliminated when
more permissive than the old profile, but that possibility is eliminated when
using aa_stack_profile().
To stack a profile with the current confinement context, a task can use the
@@ -68,7 +68,7 @@ The aa_stack_onexec() function is like the aa_stack_profile() function
except it specifies that the stacking should take place on the next exec
instead of immediately. The delayed profile change takes precedence over any
exec transition rules within the confining profile. Delaying the stacking
boundary has a couple of advantages, it removes the need for stub transition
boundary has a couple of advantages: it removes the need for stub transition
profiles and the exec boundary is a natural security layer where potentially
sensitive memory is unmapped.
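
A brief sketch of the stacking call described above (the "auditor" profile
name is hypothetical); aa_stack_onexec() follows the same pattern when the
stacking should happen at the next exec instead:

```c
/* Sketch: stack an extra profile so confinement can only get stricter. */
#include <stdio.h>
#include <sys/apparmor.h>

int tighten_confinement(void)
{
	if (aa_stack_profile("auditor") == -1) {
		perror("aa_stack_profile");
		return -1;
	}
	/* From here on, accesses must be allowed by both the original
	 * confinement and the "auditor" profile. */
	return 0;
}
```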


@@ -19,6 +19,10 @@
#ifndef __LIBAALOGPARSE_H_
#define __LIBAALOGPARSE_H_
#ifdef __cplusplus
extern "C" {
#endif
#define AA_RECORD_EXEC_MMAP 1
#define AA_RECORD_READ 2
#define AA_RECORD_WRITE 4
@@ -26,10 +30,10 @@
#define AA_RECORD_LINK 16
/**
* This is just for convenience now that we have two
* wildly different grammars.
* Enum representing which syntax version the log entry used.
* Support for V1 parsing was completely removed in 2011 and that enum entry
* is only still there for API compatibility reasons.
*/
typedef enum
{
AA_RECORD_SYNTAX_V1,
@@ -48,70 +52,23 @@ typedef enum
AA_RECORD_STATUS /* Configuration change */
} aa_record_event_type;
/**
* With the sole exception of active_hat, this is a 1:1
* mapping from the keys that the new syntax uses.
/*
* Use this preprocessor dance to maintain backcompat for field names
* This will break C code that used the C++ reserved keywords "namespace"
* and "class" as identifiers, but this is bad practice anyways, and we
* hope that we are the only ones in a given C file that messed up this way
*
* Some examples of the old syntax and how they're mapped with the aa_log_record struct:
*
* "PERMITTING r access to /path (program_name(12345) profile /profile active hat)"
* - operation: access
* - requested_mask: r
* - pid: 12345
* - profile: /profile
* - name: /path
* - info: program_name
* - active_hat: hat
*
* "REJECTING mkdir on /path/to/something (bash(23415) profile /bin/freak-aa-out active /bin/freak-aa-out"
* - operation: mkdir
* - name: /path/to/something
* - info: bash
* - pid: 23415
* - profile: /bin/freak-aa-out
* - active_hat: /bin/freak-aa-out
*
* "REJECTING xattr set on /path/to/something (bash(23415) profile /bin/freak-aa-out active /bin/freak-aa-out)"
* - operation: xattr
* - attribute: set
* - name: /path/to/something
* - info: bash
* - pid: 23415
* - profile: /bin/freak-aa-out
* - active_hat: /bin/freak-aa-out
*
* "PERMITTING attribute (something) change to /else (bash(23415) profile /bin/freak-aa-out active /bin/freak-aa-out)"
* - operation: setattr
* - attribute: something
* - name: /else
* - info: bash
* - pid: 23415
* - profile: /bin/freak-aa-out
* - active_hat: /bin/freak-aa-out
*
* "PERMITTING access to capability 'cap' (bash(23415) profile /bin/freak-aa-out active /bin/freak-aa-out)"
* - operation: capability
* - name: cap
* - info: bash
* - pid: 23415
* - profile: /bin/freak-aa-out
* - active_hat: /bin/freak-aa-out
*
* "LOGPROF-HINT unknown_hat TESTHAT pid=27764 profile=/change_hat_test/test_hat active=/change_hat_test/test_hat"
* - operation: change_hat
* - name: TESTHAT
* - info: unknown_hat
* - pid: 27764
* - profile: /change_hat_test/test_hat
* - active_hat: /change_hat_test/test_hat
*
* "LOGPROF-HINT fork pid=27764 child=38229"
* - operation: clone
* - task: 38229
* - pid: 27764
**/
* TODO: document this in a man page for aalogparse?
*/
#if defined(SWIG) && defined(__cplusplus)
#error "SWIG and __cplusplus are defined together"
#elif !defined(SWIG) && !defined(__cplusplus)
/* Use SWIG's %rename feature to preserve backcompat */
#define class rule_class
#define namespace aa_namespace
#endif
typedef struct
typedef struct aa_log_record
{
aa_record_syntax_version version;
aa_record_event_type event; /* Event type */
@@ -134,7 +91,7 @@ typedef struct
char *comm; /* Command that triggered msg */
char *name;
char *name2;
char *namespace;
char *aa_namespace;
char *attribute;
unsigned long parent;
char *info;
@@ -149,8 +106,6 @@ typedef struct
char *net_foreign_addr;
unsigned long net_foreign_port;
char *execpath;
char *dbus_bus;
char *dbus_path;
char *dbus_interface;
@@ -163,10 +118,11 @@ typedef struct
char *flags;
char *src_name;
char *class;
char *rule_class;
char *net_addr;
char *peer_addr;
char *execpath;
} aa_log_record;
/**
@@ -177,7 +133,7 @@ typedef struct
* @return Parsed data.
*/
aa_log_record *
parse_record(char *str);
parse_record(const char *str);
/**
* Frees all struct data.
@@ -186,5 +142,9 @@ parse_record(char *str);
void
free_record(aa_log_record *record);
#ifdef __cplusplus
}
#endif
#endif
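
For C callers the renamed fields are used directly (the compatibility defines
above keep the old spellings working outside C++). A minimal sketch using the
same log line as the new regression tests further down in this change:

```c
/* Sketch: parse one audit line and read the renamed fields. */
#include <stdio.h>
#include <aalogparse.h>

int main(void)
{
	const char *line = "[23342.075380] audit: type=1400 audit(1725487203.971:1831): "
		"apparmor=\"DENIED\" operation=\"open\" class=\"file\" "
		"profile=\"snap-update-ns.firmware-updater\" name=\"/proc/202964/maps\" "
		"pid=202964 comm=\"5\" requested_mask=\"r\" denied_mask=\"r\" fsuid=1000 ouid=0";
	aa_log_record *rec = parse_record(line);

	if (rec == NULL)
		return 1;
	printf("class=%s namespace=%s\n",
	       rec->rule_class ? rec->rule_class : "(none)",
	       rec->aa_namespace ? rec->aa_namespace : "(none)");
	free_record(rec);
	return 0;
}
```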


@@ -105,8 +105,8 @@ extern int aa_getpeercon(int fd, char **label, char **mode);
#define AA_QUERY_CMD_LABEL "label"
#define AA_QUERY_CMD_LABEL_SIZE sizeof(AA_QUERY_CMD_LABEL)
extern int aa_query_label(uint32_t mask, char *query, size_t size, int *allow,
int *audit);
extern int aa_query_label(uint32_t mask, char *query, size_t size, int *allowed,
int *audited);
extern int aa_query_file_path_len(uint32_t mask, const char *label,
size_t label_len, const char *path,
size_t path_len, int *allowed, int *audited);


@@ -44,7 +44,7 @@ include $(COMMONDIR)/Make.rules
BUILT_SOURCES = grammar.h scanner.h af_protos.h
AM_LFLAGS = -v
AM_YFLAGS = -d -p aalogparse_
AM_YFLAGS = -Wno-yacc -d -p aalogparse_
AM_CPPFLAGS = -D_GNU_SOURCE -I$(top_srcdir)/include/
scanner.h: scanner.l
$(LEX) -v $<
@@ -52,7 +52,7 @@ scanner.h: scanner.l
scanner.c: scanner.l
af_protos.h:
echo '#include <netinet/in.h>' | $(CC) $(CPPFLAGS) -E -dM - | LC_ALL=C sed -n -e "/IPPROTO_MAX/d" -e "s/^\#define[ \\t]\\+IPPROTO_\\([A-Z0-9_]\\+\\)\\(.*\\)$$/AA_GEN_PROTO_ENT(\\UIPPROTO_\\1, \"\\L\\1\")/p" > $@
echo '#include <netinet/in.h>' | $(CC) $(CPPFLAGS) -E -dD - | LC_ALL=C sed -n -e "/IPPROTO_MAX/d" -e "s/^\#define[ \\t]\\+IPPROTO_\\([A-Z0-9_]\\+\\)\\(.*\\)$$/AA_GEN_PROTO_ENT(\\UIPPROTO_\\1, \"\\L\\1\")/p" > $@
lib_LTLIBRARIES = libapparmor.la
noinst_HEADERS = grammar.h parser.h scanner.h af_protos.h private.h PMurHash.h
@@ -73,6 +73,16 @@ CLEANFILES = libapparmor.pc
tst_aalogmisc_SOURCES = tst_aalogmisc.c
tst_aalogmisc_LDADD = .libs/libapparmor.a
tst_aalogparse_cpp_SOURCES = tst_aalogparse_cpp.cpp
tst_aalogparse_cpp_LDADD = .libs/libapparmor.a
tst_aalogparse_oldname_SOURCES = tst_aalogparse_oldname.c
tst_aalogparse_oldname_LDADD = .libs/libapparmor.a
tst_aalogparse_reentrancy_SOURCES = tst_aalogparse_reentrancy.c
tst_aalogparse_reentrancy_LDADD = .libs/libapparmor.a
tst_aalogparse_reentrancy_LDFLAGS = -pthread
tst_features_SOURCES = tst_features.c
tst_features_LDADD = .libs/libapparmor.a
@@ -80,7 +90,7 @@ tst_kernel_SOURCES = tst_kernel.c
tst_kernel_LDADD = .libs/libapparmor.a
tst_kernel_LDFLAGS = -pthread
check_PROGRAMS = tst_aalogmisc tst_features tst_kernel
check_PROGRAMS = tst_aalogmisc tst_aalogparse_cpp tst_aalogparse_reentrancy tst_aalogparse_oldname tst_features tst_kernel
TESTS = $(check_PROGRAMS)
.PHONY: check-local


@@ -15,17 +15,15 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
/* aalogparse_error now requires visibility of the aa_log_record type
* Also include in a %code requires block to add it to the header
*/
%code requires{
#include <aalogparse.h>
}
%{
/* set the following to non-zero to get bison to emit debugging
* information about tokens given and rules matched.
* Also:
* Uncomment the %defines
* parse.error
* parse.trace
*/
#define YYDEBUG 0
#include <string.h>
#include <aalogparse.h>
#include "parser.h"
@@ -41,12 +39,10 @@
#define debug_unused_ unused_
#endif
aa_log_record *ret_record;
/* Since we're a library, on any errors we don't want to print out any
* error messages. We should probably add a debug interface that does
* emit messages when asked for. */
void aalogparse_error(unused_ void *scanner, debug_unused_ char const *s)
void aalogparse_error(unused_ void *scanner, aa_log_record *ret_record, debug_unused_ char const *s)
{
#if (YYDEBUG != 0)
printf("ERROR: %s\n", s);
@@ -89,9 +85,10 @@ aa_record_event_type lookup_aa_event(unsigned int type)
%define parse.trace
*/
%define api.pure
%define api.pure full
%lex-param{void *scanner}
%parse-param{void *scanner}
%parse-param{aa_log_record *ret_record}
%union
{
@@ -284,8 +281,9 @@ audit_user_msg: TOK_KEY_MSG TOK_EQUALS audit_id audit_user_msg_tail
audit_id: TOK_AUDIT TOK_OPEN_PAREN TOK_AUDIT_DIGITS TOK_PERIOD TOK_AUDIT_DIGITS TOK_COLON TOK_AUDIT_DIGITS TOK_CLOSE_PAREN TOK_COLON
{
if (!asprintf(&ret_record->audit_id, "%s.%s:%s", $3, $5, $7))
yyerror(scanner, YY_("Out of memory"));
if (!asprintf(&ret_record->audit_id, "%s.%s:%s", $3, $5, $7)) {
yyerror(scanner, ret_record, YY_("Out of memory"));
}
ret_record->epoch = atol($3);
ret_record->audit_sub_id = atoi($7);
free($3);
@@ -308,7 +306,7 @@ key: TOK_KEY_OPERATION TOK_EQUALS TOK_QUOTED_STRING
| TOK_KEY_NAME TOK_EQUALS safe_string
{ ret_record->name = $3;}
| TOK_KEY_NAMESPACE TOK_EQUALS safe_string
{ ret_record->namespace = $3;}
{ ret_record->aa_namespace = $3;}
| TOK_KEY_NAME2 TOK_EQUALS safe_string
{ ret_record->name2 = $3;}
| TOK_KEY_MASK TOK_EQUALS TOK_QUOTED_STRING
@@ -440,7 +438,7 @@ key: TOK_KEY_OPERATION TOK_EQUALS TOK_QUOTED_STRING
ret_record->info = $1;
}
| TOK_KEY_CLASS TOK_EQUALS TOK_QUOTED_STRING
{ ret_record->class = $3; }
{ ret_record->rule_class = $3; }
;
apparmor_event:
@@ -477,31 +475,3 @@ protocol: TOK_QUOTED_STRING
}
;
%%
aa_log_record *
_parse_yacc(char *str)
{
/* yydebug = 1; */
YY_BUFFER_STATE lex_buf;
yyscan_t scanner;
ret_record = NULL;
ret_record = malloc(sizeof(aa_log_record));
_init_log_record(ret_record);
if (ret_record == NULL)
return NULL;
#if (YYDEBUG != 0)
yydebug = 1;
#endif
aalogparse_lex_init(&scanner);
lex_buf = aalogparse__scan_string(str, scanner);
/* Ignore return value to return an AA_RECORD_INVALID event */
(void)aalogparse_parse(scanner);
aalogparse__delete_buffer(lex_buf, scanner);
aalogparse_lex_destroy(scanner);
return ret_record;
}


@@ -34,13 +34,42 @@
#include <aalogparse.h>
#include "parser.h"
#include "grammar.h"
#include "scanner.h"
/* This is mostly just a wrapper around the code in grammar.y */
aa_log_record *parse_record(char *str)
aa_log_record *parse_record(const char *str)
{
YY_BUFFER_STATE lex_buf;
yyscan_t scanner;
aa_log_record *ret_record;
if (str == NULL)
return NULL;
return _parse_yacc(str);
ret_record = malloc(sizeof(aa_log_record));
_init_log_record(ret_record);
if (ret_record == NULL)
return NULL;
struct string_buf string_buf = {.buf = NULL, .buf_len = 0, .buf_alloc = 0};
#if (YYDEBUG != 0)
/* Warning: this is still a global even in reentrant parsers */
aalogparse_debug = 1;
#endif
aalogparse_lex_init_extra(&string_buf, &scanner);
lex_buf = aalogparse__scan_string(str, scanner);
/* Ignore return value to return an AA_RECORD_INVALID event */
(void)aalogparse_parse(scanner, ret_record);
aalogparse__delete_buffer(lex_buf, scanner);
aalogparse_lex_destroy(scanner);
/* free(NULL) is a no-op */
free(string_buf.buf);
return ret_record;
}
void free_record(aa_log_record *record)
@@ -63,8 +92,8 @@ void free_record(aa_log_record *record)
free(record->name);
if (record->name2 != NULL)
free(record->name2);
if (record->namespace != NULL)
free(record->namespace);
if (record->aa_namespace != NULL)
free(record->aa_namespace);
if (record->attribute != NULL)
free(record->attribute);
if (record->info != NULL)
@@ -110,8 +139,8 @@ void free_record(aa_log_record *record)
if (record->execpath != NULL)
free(record->execpath);
if (record->class != NULL)
free(record->class);
if (record->rule_class != NULL)
free(record->rule_class);
free(record);
}


@@ -19,8 +19,14 @@
#ifndef __AA_LOG_PARSER_H__
#define __AA_LOG_PARSER_H__
// Internal-only type
struct string_buf {
char *buf;
unsigned int buf_len;
unsigned int buf_alloc;
};
extern void _init_log_record(aa_log_record *record);
extern aa_log_record *_parse_yacc(char *str);
extern char *hex_to_string(char *str);
extern char *ipproto_to_string(unsigned int proto);


@@ -19,6 +19,7 @@
%option nounput
%option noyy_top_state
%option reentrant
%option extra-type="struct string_buf*"
%option prefix="aalogparse_"
%option bison-bridge
%option header-file="scanner.h"
@@ -34,40 +35,37 @@
#define YY_NO_INPUT
unsigned int string_buf_alloc = 0;
unsigned int string_buf_len = 0;
char *string_buf = NULL;
void string_buf_reset()
void string_buf_reset(struct string_buf* char_buf)
{
/* rewind buffer to zero, possibly doing initial allocation too */
string_buf_len = 0;
if (string_buf == NULL) {
string_buf_alloc = 128;
string_buf = malloc(string_buf_alloc);
assert(string_buf != NULL);
char_buf->buf_len = 0;
if (char_buf->buf == NULL) {
char_buf->buf_alloc = 128;
char_buf->buf = malloc(char_buf->buf_alloc);
assert(char_buf->buf != NULL);
}
/* always start with a valid but empty string */
string_buf[0] = '\0';
char_buf->buf[0] = '\0';
}
void string_buf_append(unsigned int length, char *text)
void string_buf_append(struct string_buf* char_buf, unsigned int length, char *text)
{
unsigned int current_length = string_buf_len;
unsigned int current_length = char_buf->buf_len;
/* handle calling ..._append before ..._reset */
if (string_buf == NULL) string_buf_reset();
if (char_buf->buf == NULL) string_buf_reset(char_buf);
string_buf_len += length;
char_buf->buf_len += length;
/* expand allocation if this append would exceed the allocation */
while (string_buf_len >= string_buf_alloc) {
string_buf_alloc *= 2;
string_buf = realloc(string_buf, string_buf_alloc);
assert(string_buf != NULL);
while (char_buf->buf_len >= char_buf->buf_alloc) {
// TODO: overflow?
char_buf->buf_alloc *= 2;
char_buf->buf = realloc(char_buf->buf, char_buf->buf_alloc);
assert(char_buf->buf != NULL);
}
/* copy and unconditionally terminate */
memcpy(string_buf+current_length, text, length);
string_buf[string_buf_len] = '\0';
memcpy(char_buf->buf+current_length, text, length);
char_buf->buf[char_buf->buf_len] = '\0';
}
%}
@@ -232,7 +230,7 @@ yy_flex_debug = 0;
{open_paren} { return(TOK_OPEN_PAREN); }
{close_paren} { BEGIN(INITIAL); return(TOK_CLOSE_PAREN); }
{ws} { }
\" { string_buf_reset(); BEGIN(quoted_string); }
\" { string_buf_reset(yyextra); BEGIN(quoted_string); }
{ID}+ {
yylval->t_str = strdup(yytext);
BEGIN(INITIAL);
@@ -241,20 +239,20 @@ yy_flex_debug = 0;
{equals} { return(TOK_EQUALS); }
}
\" { string_buf_reset(); BEGIN(quoted_string); }
\" { string_buf_reset(yyextra); BEGIN(quoted_string); }
<quoted_string>\" { /* End of the quoted string */
BEGIN(INITIAL);
yylval->t_str = strdup(string_buf);
yylval->t_str = strdup(yyextra->buf);
return(TOK_QUOTED_STRING);
}
<quoted_string>\\(.|\n) { string_buf_append(1, &yytext[1]); }
<quoted_string>\\(.|\n) { string_buf_append(yyextra, 1, &yytext[1]); }
<quoted_string>[^\\\n\"]+ { string_buf_append(yyleng, yytext); }
<quoted_string>[^\\\n\"]+ { string_buf_append(yyextra, yyleng, yytext); }
<safe_string>{
\" { string_buf_reset(); BEGIN(quoted_string); }
\" { string_buf_reset(yyextra); BEGIN(quoted_string); }
{hexstring} { yylval->t_str = hex_to_string(yytext); BEGIN(INITIAL); return(TOK_HEXSTRING);}
{equals} { return(TOK_EQUALS); }
. { /* eek, error! try another state */ BEGIN(INITIAL); yyless(0); }


@@ -0,0 +1,20 @@
#include <aalogparse.h>
#include <string.h>
#include "private.h"
const char* log_line = "[23342.075380] audit: type=1400 audit(1725487203.971:1831): apparmor=\"DENIED\" operation=\"open\" class=\"file\" profile=\"snap-update-ns.firmware-updater\" name=\"/proc/202964/maps\" pid=202964 comm=\"5\" requested_mask=\"r\" denied_mask=\"r\" fsuid=1000 ouid=0";
int main(void) {
int rc = 0;
/* Very basic test to ensure we can do aalogparse stuff in C++ */
aa_log_record *record = parse_record(log_line);
MY_TEST(record != NULL, "Log failed to parse");
MY_TEST(record->version == AA_RECORD_SYNTAX_V2, "Log should have parsed as v2 form");
MY_TEST(record->aa_namespace == NULL, "Log should have NULL namespace");
MY_TEST((record->rule_class != NULL) && (strcmp(record->rule_class, "file") == 0), "Log should have file class");
free_record(record);
return rc;
}


@@ -0,0 +1,20 @@
#include <aalogparse.h>
#include <string.h>
#include "private.h"
const char* log_line = "[23342.075380] audit: type=1400 audit(1725487203.971:1831): apparmor=\"DENIED\" operation=\"open\" class=\"file\" profile=\"snap-update-ns.firmware-updater\" name=\"/proc/202964/maps\" pid=202964 comm=\"5\" requested_mask=\"r\" denied_mask=\"r\" fsuid=1000 ouid=0";
int main(void) {
int rc = 0;
/* Very basic test to ensure we can use the C++-incompatible field names */
aa_log_record *record = parse_record(log_line);
MY_TEST(record != NULL, "Log failed to parse");
MY_TEST(record->version == AA_RECORD_SYNTAX_V2, "Log should have parsed as v2 form");
MY_TEST(record->namespace == NULL, "Log should have NULL namespace");
MY_TEST((record->class != NULL) && (strcmp(record->class, "file") == 0), "Log should have file class");
free_record(record);
return rc;
}


@@ -0,0 +1,154 @@
#include <pthread.h>
#include <string.h>
#include <aalogparse.h>
#include "private.h"
const char* log_line = "[23342.075380] audit: type=1400 audit(1725487203.971:1831): apparmor=\"DENIED\" operation=\"open\" class=\"file\" profile=\"snap-update-ns.firmware-updater\" name=\"/proc/202964/maps\" pid=202964 comm=\"5\" requested_mask=\"r\" denied_mask=\"r\" fsuid=1000 ouid=0";
const char* log_line_2 = "[ 4074.372559] audit: type=1400 audit(1725553393.143:793): apparmor=\"DENIED\" operation=\"capable\" class=\"cap\" profile=\"/usr/lib/snapd/snap-confine\" pid=19034 comm=\"snap-confine\" capability=12 capname=\"net_admin\"";
static int pthread_barrier_ok(int barrier_result) {
return barrier_result == 0 || barrier_result == PTHREAD_BARRIER_SERIAL_THREAD;
}
static int nullcmp_and_strcmp(const void *s1, const void *s2)
{
/* Return 0 if both pointers are NULL & non-zero if only one is NULL */
if (!s1 || !s2)
return s1 != s2;
return strcmp(s1, s2);
}
int aa_log_record_eq(aa_log_record *record1, aa_log_record *record2) {
int are_eq = 1;
are_eq &= (record1->version == record2->version);
are_eq &= (record1->event == record2->event);
are_eq &= (record1->pid == record2->pid);
are_eq &= (record1->peer_pid == record2->peer_pid);
are_eq &= (record1->task == record2->task);
are_eq &= (record1->magic_token == record2->magic_token);
are_eq &= (record1->epoch == record2->epoch);
are_eq &= (record1->audit_sub_id == record2->audit_sub_id);
are_eq &= (record1->bitmask == record2->bitmask);
are_eq &= (nullcmp_and_strcmp(record1->audit_id, record2->audit_id) == 0);
are_eq &= (nullcmp_and_strcmp(record1->operation, record2->operation) == 0);
are_eq &= (nullcmp_and_strcmp(record1->denied_mask, record2->denied_mask) == 0);
are_eq &= (nullcmp_and_strcmp(record1->requested_mask, record2->requested_mask) == 0);
are_eq &= (record1->fsuid == record2->fsuid);
are_eq &= (record1->ouid == record2->ouid);
are_eq &= (nullcmp_and_strcmp(record1->profile, record2->profile) == 0);
are_eq &= (nullcmp_and_strcmp(record1->peer_profile, record2->peer_profile) == 0);
are_eq &= (nullcmp_and_strcmp(record1->comm, record2->comm) == 0);
are_eq &= (nullcmp_and_strcmp(record1->name, record2->name) == 0);
are_eq &= (nullcmp_and_strcmp(record1->name2, record2->name2) == 0);
are_eq &= (nullcmp_and_strcmp(record1->namespace, record2->namespace) == 0);
are_eq &= (nullcmp_and_strcmp(record1->attribute, record2->attribute) == 0);
are_eq &= (record1->parent == record2->parent);
are_eq &= (nullcmp_and_strcmp(record1->info, record2->info) == 0);
are_eq &= (nullcmp_and_strcmp(record1->peer_info, record2->peer_info) == 0);
are_eq &= (record1->error_code == record2->error_code);
are_eq &= (nullcmp_and_strcmp(record1->active_hat, record2->active_hat) == 0);
are_eq &= (nullcmp_and_strcmp(record1->net_family, record2->net_family) == 0);
are_eq &= (nullcmp_and_strcmp(record1->net_protocol, record2->net_protocol) == 0);
are_eq &= (nullcmp_and_strcmp(record1->net_sock_type, record2->net_sock_type) == 0);
are_eq &= (nullcmp_and_strcmp(record1->net_local_addr, record2->net_local_addr) == 0);
are_eq &= (record1->net_local_port == record2->net_local_port);
are_eq &= (nullcmp_and_strcmp(record1->net_foreign_addr, record2->net_foreign_addr) == 0);
are_eq &= (record1->net_foreign_port == record2->net_foreign_port);
are_eq &= (nullcmp_and_strcmp(record1->execpath, record2->execpath) == 0);
are_eq &= (nullcmp_and_strcmp(record1->dbus_bus, record2->dbus_bus) == 0);
are_eq &= (nullcmp_and_strcmp(record1->dbus_path, record2->dbus_path) == 0);
are_eq &= (nullcmp_and_strcmp(record1->dbus_interface, record2->dbus_interface) == 0);
are_eq &= (nullcmp_and_strcmp(record1->dbus_member, record2->dbus_member) == 0);
are_eq &= (nullcmp_and_strcmp(record1->signal, record2->signal) == 0);
are_eq &= (nullcmp_and_strcmp(record1->peer, record2->peer) == 0);
are_eq &= (nullcmp_and_strcmp(record1->fs_type, record2->fs_type) == 0);
are_eq &= (nullcmp_and_strcmp(record1->flags, record2->flags) == 0);
are_eq &= (nullcmp_and_strcmp(record1->src_name, record2->src_name) == 0);
are_eq &= (nullcmp_and_strcmp(record1->class, record2->class) == 0);
are_eq &= (nullcmp_and_strcmp(record1->net_addr, record2->net_addr) == 0);
are_eq &= (nullcmp_and_strcmp(record1->peer_addr, record2->peer_addr) == 0);
return are_eq;
}
typedef struct {
const char* log;
pthread_barrier_t *barrier;
} pthread_parse_args;
void* pthread_parse_log(void* args) {
pthread_parse_args *args_real = (pthread_parse_args *) args;
int barrier_wait_result = pthread_barrier_wait(args_real->barrier);
/* Return NULL and fail test if barrier wait fails */
if (!pthread_barrier_ok(barrier_wait_result)) {
return NULL;
}
aa_log_record *record = parse_record(args_real->log);
return (void*) record;
}
#define NUM_THREADS 16
int main(void) {
pthread_t thread_ids[NUM_THREADS];
pthread_barrier_t barrier;
int barrier_wait_result;
aa_log_record* parsed_logs[NUM_THREADS];
int rc = 0;
/* Set up arguments to be passed to threads */
pthread_parse_args args = {.log=log_line, .barrier=&barrier};
pthread_parse_args args2 = {.log=log_line_2, .barrier=&barrier};
MY_TEST(NUM_THREADS > 2, "Test requires more than 2 threads");
/* Use barrier to synchronize the start of log parsing among all the threads
* This increases the likelihood of tickling race conditions, if there are any
*/
MY_TEST(pthread_barrier_init(&barrier, NULL, NUM_THREADS+1) == 0,
"Could not init pthread barrier");
for (int i=0; i<NUM_THREADS; i++) {
if (i%2 == 0) {
pthread_create(&thread_ids[i], NULL, pthread_parse_log, (void *) &args);
} else {
pthread_create(&thread_ids[i], NULL, pthread_parse_log, (void *) &args2);
}
}
/* Final barrier_wait to set off the thread race */
barrier_wait_result = pthread_barrier_wait(&barrier);
MY_TEST(pthread_barrier_ok(barrier_wait_result), "Could not wait on pthread barrier");
/* Wait for threads to finish parsing the logs */
for (int i=0; i<NUM_THREADS; i++) {
MY_TEST(pthread_join(thread_ids[i], (void*) &parsed_logs[i]) == 0, "Could not join thread");
}
/* Check that all logs parsed and are equal */
for (int i=0; i<NUM_THREADS; i++) {
MY_TEST(parsed_logs[i] != NULL, "Log failed to parse");
MY_TEST(parsed_logs[i]->version == AA_RECORD_SYNTAX_V2, "Log should have parsed as v2 form");
MY_TEST(parsed_logs[i]->event == AA_RECORD_DENIED, "Log should have parsed as denied");
/* Also check i==0 and i==1 as a sanity check for aa_log_record_eq */
if (i%2 == 0) {
MY_TEST(aa_log_record_eq(parsed_logs[0], parsed_logs[i]), "Log 0 != Log even");
} else {
MY_TEST(aa_log_record_eq(parsed_logs[1], parsed_logs[i]), "Log 1 != Log odd");
}
}
MY_TEST(!aa_log_record_eq(parsed_logs[0], parsed_logs[1]), "Log 0 and log 1 shouldn't be equal");
/* Clean up */
MY_TEST(pthread_barrier_destroy(&barrier) == 0, "Could not destroy pthread barrier");
for (int i=0; i<NUM_THREADS; i++) {
free_record(parsed_logs[i]);
}
return rc;
}


@@ -1,3 +1,3 @@
SUBDIRS = perl python ruby
EXTRA_DIST = SWIG/*.i java/Makefile.am
EXTRA_DIST = SWIG/*.i


@@ -5,9 +5,98 @@
#include <sys/apparmor.h>
#include <sys/apparmor_private.h>
// Include static_assert if the C compiler supports it
// static_assert standardized since C11, assert.h not needed since C23
#if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L && __STDC_VERSION__ < 202311L
#include <assert.h>
#endif
%}
%include "typemaps.i"
%include <cstring.i>
%include <stdint.i>
%include <exception.i>
/*
* SWIG 4.3 included https://github.com/swig/swig/pull/2907 to distinguish
* between Py_None being returned as a default void and Py_None being returned
* as the equivalent of C NULL. Unfortunately, this turns into an API breaking
* change with our use of %append_output when we want the Python function to
* return something even when the C function has a void return type. Thus, we
* need an additional macro to smooth over the differences. Include all affected
* languages, even ones we don't build bindings for, for completeness.
*/
#if SWIG_VERSION >= 0x040300
#ifdef SWIGPYTHON
#define ISVOID_APPEND_OUTPUT(value) {$result = SWIG_Python_AppendOutput($result, value, 1);}
#elif defined(SWIGRUBY)
#define ISVOID_APPEND_OUTPUT(value) {$result = SWIG_Ruby_AppendOutput($result, value, 1);}
#elif defined(SWIGPHP)
#define ISVOID_APPEND_OUTPUT(value) {$result = SWIG_Php_AppendOutput($result, value, 1);}
#else
#define ISVOID_APPEND_OUTPUT(value) %append_output(value)
#endif
#else
#define ISVOID_APPEND_OUTPUT(value) %append_output(value)
#endif
%newobject parse_record;
%delobject free_record;
/*
* Despite its name, %delobject does not hook up destructors to language
* deletion mechanisms. Instead, it sets flags so that manually calling the
* free function and then deleting by language mechanisms doesn't cause a
* double-free.
*
* Additionally, we can manually extend the struct with a C++-like
* destructor. This ensures that the record struct is freed
* automatically when the high-level object goes out of scope.
*/
%extend aa_log_record {
~aa_log_record() {
free_record($self);
}
}
/*
* Generate a no-op free_record wrapper to avoid making a double-free footgun.
* Use rename directive to avoid colliding with the actual free_record, which
* we use above to clean up when the higher-level language deletes the object.
*
* Ideally we would not expose a free_record at all, but we need to maintain
* backwards compatibility with the existing high-level code that uses it.
*/
%rename(free_record) noop_free_record;
#ifdef SWIGPYTHON
%pythonprepend noop_free_record %{
import warnings
warnings.warn("free_record is now a no-op as the record's memory is handled automatically", DeprecationWarning)
%}
#endif
%feature("autodoc",
"This function used to free aa_log_record objects. Freeing is now handled "
"automatically, so this no-op function remains for backwards compatibility.") noop_free_record;
%inline %{
void noop_free_record(aa_log_record *record) {(void) record;}
%}
/*
* Do not autogenerate a wrapper around free_record. This does not prevent us
* from calling it ourselves in %extend C code.
*/
%ignore free_record;
/*
* Map names to preserve backwards compatibility
*/
#ifdef SWIGPYTHON
%rename("_class") aa_log_record::rule_class;
#else
%rename("class") aa_log_record::rule_class;
#endif
%rename("namespace") aa_log_record::aa_namespace;
%include <aalogparse.h>
/**
@@ -21,18 +110,75 @@
/* apparmor.h */
/*
* label is a heap-allocated pointer, but when label and mode occur together,
* the freeing of label must be deferred because mode points into label.
*
* %cstring_output_allocate((char **label, char **mode), free(*$1))
* does not handle multi-argument typemaps correctly, so we write our own
* typemap based on it instead.
*/
%typemap(in,noblock=1,numinputs=0) (char **label, char **mode) ($*1_ltype temp_label = 0, $*2_ltype temp_mode = 0) {
$1 = &temp_label;
$2 = &temp_mode;
}
%typemap(freearg,match="in") (char **label, char **mode) ""
%typemap(argout,noblock=1,fragment="SWIG_FromCharPtr") (char **label, char **mode) {
ISVOID_APPEND_OUTPUT(SWIG_FromCharPtr(*$1));
ISVOID_APPEND_OUTPUT(SWIG_FromCharPtr(*$2));
free(*$1);
}
/*
* mode also occurs in combination with con in aa_splitcon
* typemap based on %cstring_mutable but with substantial modifications
*/
%typemap(in,numinputs=1,fragment="SWIG_AsCharPtrAndSize") (char *con, char **mode) ($*2_ltype temp_mode = 0) {
int alloc_status = 0;
$1_ltype con_ptr = NULL;
size_t con_len = 0;
int char_ptr_res = SWIG_AsCharPtrAndSize($input, &con_ptr, &con_len, &alloc_status);
if (!SWIG_IsOK(char_ptr_res)) {
%argument_fail(char_ptr_res, "char *con", $symname, $argnum);
}
if (alloc_status != SWIG_NEWOBJ) {
// Unconditionally copy because the C function modifies the string in place
$1 = %new_copy_array(con_ptr, con_len+1, char);
} else {
$1 = con_ptr;
}
$2 = &temp_mode;
}
%typemap(freearg,noblock=1,match="in") (char *con, char **mode) {
%delete_array($1);
}
%typemap(argout,noblock=1,fragment="SWIG_FromCharPtr") (char *con, char **mode) {
/*
* aa_splitcon returns either con or NULL so we don't need to explicitly
* append it to the output, and we don't need the ISVOID helper here
*
* SWIG_FromCharPtr does NULL checks for us
*/
%append_output(SWIG_FromCharPtr(*$2));
}
%exception aa_splitcon {
$action
if (result == NULL) {
SWIG_exception_fail(SWIG_ValueError, "received invalid confinement context");
}
}
extern char *aa_splitcon(char *con, char **mode);
/* apparmor_private.h */
extern int _aa_is_blacklisted(const char *name);
#ifdef SWIGPYTHON
%exception {
$action
if (result < 0) {
// Unfortunately SWIG_exception does not support OSError
PyErr_SetFromErrno(PyExc_OSError);
return NULL;
SWIG_fail;
}
}
#endif
@@ -41,33 +187,206 @@ extern int _aa_is_blacklisted(const char *name);
/* apparmor.h */
/*
* aa_is_enabled returns a boolean as an int with failure reason in errno
* Therefore, aa_is_enabled either returns True or throws an exception
*
* Keep that behavior for backwards compatibility, but return a boolean in Python
* where it makes more sense, which isn't a breaking change because a boolean is
* a subclass of int
*/
#ifdef SWIGPYTHON
%typemap(out) int {
$result = PyBool_FromLong($1);
}
#endif
extern int aa_is_enabled(void);
extern int aa_find_mountpoint(char **mnt);
#ifdef SWIGPYTHON
// Based on SWIG's argcargv.i but we don't have an argc
%typemap(in,fragment="SWIG_AsCharPtr") const char *subprofiles[] (Py_ssize_t seq_len=0, int* alloc_tracking = NULL) {
void* arg_as_ptr = NULL;
int res_convertptr = SWIG_ConvertPtr($input, &arg_as_ptr, $descriptor(char*[]), 0);
if (SWIG_IsOK(res_convertptr)) {
$1 = %static_cast(arg_as_ptr, $1_ltype);
} else {
// Clear error that would be set if ptr conversion failed
PyErr_Clear();
int is_list = PyList_Check($input);
if (is_list || PyTuple_Check($input)) {
seq_len = PySequence_Length($input);
/*
* %new_array zero-inits for cleaner error handling and memory cleanup
* %delete_array(NULL) is no-op (either free or delete), and
* alloc_tracking of 0 is uninit
*
* Further note: SWIG_exception_fail jumps to the freearg typemap
*/
$1 = %new_array(seq_len+1, char *);
if ($1 == NULL) {
SWIG_exception_fail(SWIG_MemoryError, "could not allocate C subprofiles");
}
alloc_tracking = %new_array(seq_len, int);
if (alloc_tracking == NULL) {
SWIG_exception_fail(SWIG_MemoryError, "could not allocate C alloc track arr");
}
for (Py_ssize_t i=0; i<seq_len; i++) {
PyObject *o = is_list ? PyList_GetItem($input, i) : PyTuple_GetItem($input, i);
if (o == NULL) {
// Failed to get item; Python already set exception info
SWIG_fail;
} else if (o == Py_None) {
// SWIG_AsCharPtr(Py_None, ...) succeeds with ptr output being NULL
SWIG_exception_fail(SWIG_ValueError, "sequence contains a None object");
}
int res = SWIG_AsCharPtr(o, &$1[i], &alloc_tracking[i]);
if (!SWIG_IsOK(res)) {
// Could emit idx of error here, maybe?
SWIG_exception_fail(SWIG_ArgError(res), "sequence does not contain all strings");
}
}
} else {
SWIG_exception_fail(SWIG_TypeError, "subprofiles is not a list or tuple");
}
}
}
%typemap(freearg,noblock=1) const char *subprofiles[] {
/*
* If static_assert is present, use it to verify the assumption that
* allocation uninitialized (0) != SWIG_NEWOBJ
*/
%#if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L
static_assert(SWIG_NEWOBJ != 0);
%#endif
if ($1 != NULL && alloc_tracking$argnum != NULL) {
for (Py_ssize_t i=0; i<seq_len$argnum; i++) {
if (alloc_tracking$argnum[i] == SWIG_NEWOBJ) {
%delete_array($1[i]);
}
}
}
%delete_array(alloc_tracking$argnum);
%delete_array($1);
}
#endif
/* These should not receive the VOID_Object typemap */
extern int aa_change_hat(const char *subprofile, unsigned long magic_token);
extern int aa_change_profile(const char *profile);
extern int aa_change_onexec(const char *profile);
extern int aa_change_hatv(const char *subprofiles[], unsigned long token);
extern int aa_change_hat_vargs(unsigned long token, int count, ...);
extern int aa_stack_profile(const char *profile);
extern int aa_stack_onexec(const char *profile);
extern int aa_getprocattr_raw(pid_t tid, const char *attr, char *buf, int len,
char **mode);
extern int aa_getprocattr(pid_t tid, const char *attr, char **buf, char **mode);
/*
* aa_find_mountpoint mnt is an output pointer to a heap-allocated string
*
* This is a replica of %cstring_output_allocate(char **mnt, free(*$1))
* that uses the ISVOID helper to work correctly on SWIG 4.3 or later.
*/
%typemap(in,noblock=1,numinputs=0) (char **mnt) ($*1_ltype temp_mnt = 0) {
$1 = &temp_mnt;
}
%typemap(freearg,match="in") (char **mnt) ""
%typemap(argout,noblock=1,fragment="SWIG_FromCharPtr") (char **mnt) {
ISVOID_APPEND_OUTPUT(SWIG_FromCharPtr(*$1));
free(*$1);
}
/* The other errno-based functions should not always be returning the int value:
* - Python exceptions signal success/failure status instead via the %exception
* handler above.
* - Perl (the other binding) has $! for accessing errno but would check the int
* return status first.
*
* The generated C code for (out) resets the return value to None
* before appending the returned data (argout generated by %cstring stuff)
*/
#ifdef SWIGPYTHON
%typemap(out,noblock=1) int {
#if defined(VOID_Object)
$result = VOID_Object;
#endif
}
#endif
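As a small, hypothetical illustration of the convention described above (same assumed libapparmor import as in the test suite): on Python a failing call raises OSError carrying errno, while output pointers become ordinary return values, e.g. aa_find_mountpoint() handing back the apparmorfs path as a str.

    import errno
    import libapparmor   # assumed import alias

    try:
        mountpoint = libapparmor.aa_find_mountpoint()   # str on success
        print("apparmorfs mounted at", mountpoint)
    except OSError as exc:
        # errno is preserved, e.g. ENOENT when no apparmorfs mountpoint exists
        print("aa_find_mountpoint failed:", errno.errorcode.get(exc.errno, exc.errno))

    try:
        # No such hat exists, so the call fails and the wrapper raises OSError
        # (the test suite below relies on exactly this behaviour).
        libapparmor.aa_change_hat("nonexistent_hat", 0x12345678)
    except OSError as exc:
        print("aa_change_hat failed as expected:", exc)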
/*
* We can't use "typedef int pid_t" because we still support systems
* with 16-bit PIDs and SWIG can't find sys/types.h
*
* Capture the passed-in value as an intmax_t because pid_t is guaranteed
* to be a signed integer
*/
%typemap(in,noblock=1,fragment="SWIG_AsVal_long") pid_t (int conv_pid, intmax_t pid_large) {
conv_pid = SWIG_AsVal_long($input, &pid_large);
if (!SWIG_IsOK(conv_pid)) {
%argument_fail(conv_pid, "pid_t", $symname, $argnum);
}
/*
* Cast the long to a pid_t and then cast back to check for overflow
* Technically this is implementation-defined behaviour but we should be fine
*/
$1 = (pid_t) pid_large;
if ((intmax_t) $1 != pid_large) {
SWIG_exception_fail(SWIG_OverflowError, "pid_t is too large");
}
}
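A brief, hypothetical sketch of the pid_t typemap above (same assumed libapparmor import): an in-range pid is converted and passed through to the pid_t-taking functions declared below, while a value that cannot round-trip through pid_t is rejected with an OverflowError before the C function is ever called.

    import os
    import libapparmor   # assumed import alias

    # Mirrors the test suite below: an unconfined caller querying itself gets
    # ("unconfined", None) back from aa_gettaskcon (requires AppArmor enabled).
    label, mode = libapparmor.aa_gettaskcon(os.getpid())
    print(label, mode)

    # 2**40 fits in a C long but not in the usual 32-bit pid_t, so the
    # round-trip check in the typemap raises OverflowError.
    try:
        libapparmor.aa_gettaskcon(2**40)
    except OverflowError:
        print("pid does not fit in pid_t")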
extern int aa_find_mountpoint(char **mnt);
extern int aa_getprocattr(pid_t tid, const char *attr, char **label, char **mode);
extern int aa_gettaskcon(pid_t target, char **label, char **mode);
extern int aa_getcon(char **label, char **mode);
extern int aa_getpeercon_raw(int fd, char *buf, socklen_t *len, char **mode);
extern int aa_getpeercon(int fd, char **label, char **mode);
extern int aa_query_label(uint32_t mask, char *query, size_t size, int *allow,
int *audit);
extern int aa_query_file_path_len(uint32_t mask, const char *label,
size_t label_len, const char *path,
size_t path_len, int *allowed, int *audited);
/*
* Typemaps for the boolean outputs of the query functions
* Use boolean types for Python and int types elsewhere
*/
#ifdef SWIGPYTHON
// TODO: find a way to deduplicate these
%typemap(in, numinputs=0) int *allowed (int temp) {
$1 = &temp;
}
%typemap(argout) int *allowed {
ISVOID_APPEND_OUTPUT(PyBool_FromLong(*$1));
}
%typemap(in, numinputs=0) int *audited (int temp) {
$1 = &temp;
}
%typemap(argout) int *audited {
ISVOID_APPEND_OUTPUT(PyBool_FromLong(*$1));
}
#else
%apply int *OUTPUT { int *allowed };
%apply int *OUTPUT { int *audited };
#endif
/* Sync this with the apparmor.h */
/* Permission flags for the AA_CLASS_FILE mediation class */
#define AA_MAY_EXEC (1 << 0)
#define AA_MAY_WRITE (1 << 1)
#define AA_MAY_READ (1 << 2)
#define AA_MAY_APPEND (1 << 3)
#define AA_MAY_CREATE (1 << 4)
#define AA_MAY_DELETE (1 << 5)
#define AA_MAY_OPEN (1 << 6)
#define AA_MAY_RENAME (1 << 7)
#define AA_MAY_SETATTR (1 << 8)
#define AA_MAY_GETATTR (1 << 9)
#define AA_MAY_SETCRED (1 << 10)
#define AA_MAY_GETCRED (1 << 11)
#define AA_MAY_CHMOD (1 << 12)
#define AA_MAY_CHOWN (1 << 13)
#define AA_MAY_LOCK 0x8000
#define AA_EXEC_MMAP 0x10000
#define AA_MAY_LINK 0x40000
#define AA_MAY_ONEXEC 0x20000000
#define AA_MAY_CHANGE_PROFILE 0x40000000
extern int aa_query_file_path(uint32_t mask, const char *label,
const char *path, int *allowed, int *audited);
extern int aa_query_link_path_len(const char *label, size_t label_len,
const char *target, size_t target_len,
const char *link, size_t link_len,
int *allowed, int *audited);
extern int aa_query_link_path(const char *label, const char *target,
const char *link, int *allowed, int *audited);

View File

@@ -1,21 +0,0 @@
WRAPPERFILES = apparmorlogparse_wrap.c
BUILT_SOURCES = apparmorlogparse_wrap.c
all-local: apparmorlogparse_wrap.o
$(CC) -module apparmorlogparse_wrap.o -o libaalogparse.so
apparmorlogparse_wrap.o: apparmorlogparse_wrap.c
$(CC) -c apparmorlogparse_wrap.c $(CFLAGS) -I../../src -I/usr/include/classpath -fno-strict-aliasing -o apparmorlogparse_wrap.o
clean-local:
rm -rf org
apparmorlogparse_wrap.c: org/aalogparse ../SWIG/*.i
$(SWIG) -java -I../SWIG -I../../src -outdir org/aalogparse \
-package org.aalogparse -o apparmorlogparse_wrap.c libaalogparse.i
org/aalogparse:
mkdir -p org/aalogparse
EXTRA_DIST = $(BUILT_SOURCES)

View File

@@ -7,7 +7,7 @@ import sysconfig
import setuptools
if tuple(map(int, setuptools.__version__.split("."))) >= (62, 1):
if tuple(map(int, setuptools.__version__.split(".")[:2])) >= (62, 1):
identifier = sys.implementation.cache_tag
else:
identifier = "%d.%d" % sys.version_info[:2]

View File

@@ -55,10 +55,100 @@ NO_VALUE_MAP = {
'fsuid': int(ctypes.c_ulong(-1).value),
'ouid': int(ctypes.c_ulong(-1).value),
}
class AAPythonBindingsTests(unittest.TestCase):
def setUp(self):
# REPORT ALL THE OUTPUT
self.maxDiff = None
def test_aa_splitcon(self):
AA_SPLITCON_EXPECT = [
("unconfined", "unconfined", None),
("unconfined\n", "unconfined", None),
("/bin/ping (enforce)", "/bin/ping", "enforce"),
("/bin/ping (enforce)\n", "/bin/ping", "enforce"),
("/usr/sbin/rsyslog (complain)", "/usr/sbin/rsyslog", "complain"),
]
for context, expected_label, expected_mode in AA_SPLITCON_EXPECT:
actual_label, actual_mode = libapparmor.aa_splitcon(context)
if expected_label is None:
self.assertIsNone(actual_label)
else:
self.assertIsInstance(actual_label, str)
self.assertEqual(expected_label, actual_label)
if expected_mode is None:
self.assertIsNone(actual_mode)
else:
self.assertIsInstance(actual_mode, str)
self.assertEqual(expected_mode, actual_mode)
with self.assertRaises(ValueError):
libapparmor.aa_splitcon("")
def test_aa_is_enabled(self):
aa_enabled = libapparmor.aa_is_enabled()
self.assertIsInstance(aa_enabled, bool)
@unittest.skipUnless(libapparmor.aa_is_enabled(), "AppArmor is not enabled")
def test_aa_find_mountpoint(self):
mount_point = libapparmor.aa_find_mountpoint()
self.assertIsInstance(mount_point, str)
self.assertGreater(len(mount_point), 0, "mount point should not be empty")
self.assertTrue(os.path.isdir(mount_point))
# TODO: test commented out functions (or at least their prototypes)
# extern int aa_change_profile(const char *profile);
# extern int aa_change_onexec(const char *profile);
@unittest.skipUnless(libapparmor.aa_is_enabled(), "AppArmor is not enabled")
def test_change_hats(self):
# Changing hats will fail because we have no valid hats to change to
# However, we still verify that we get an OSError instead of a TypeError
with self.assertRaises(OSError):
libapparmor.aa_change_hat("nonexistent_profile", 12345678)
with self.assertRaises(OSError):
libapparmor.aa_change_hatv(["nonexistent_1", "nonexistent_2"], 0xabcdef)
libapparmor.aa_change_hatv(("nonexistent_1", "nonexistent_2"), 0xabcdef)
# extern int aa_stack_profile(const char *profile);
# extern int aa_stack_onexec(const char *profile);
# extern int aa_getprocattr(pid_t tid, const char *attr, char **label, char **mode);
# extern int aa_gettaskcon(pid_t target, char **label, char **mode);
@unittest.skipUnless(libapparmor.aa_is_enabled(), "AppArmor is not enabled")
def test_aa_gettaskcon(self):
# Our test harness should be running us as unconfined
# Get our own pid and this should be equivalent to aa_getcon
pid = os.getpid()
label, mode = libapparmor.aa_gettaskcon(pid)
self.assertEqual(label, "unconfined", "aa_gettaskcon label should be unconfined")
self.assertIsNone(mode, "aa_gettaskcon mode should be unconfined")
@unittest.skipUnless(libapparmor.aa_is_enabled(), "AppArmor is not enabled")
def test_aa_getcon(self):
# Our test harness should be running us as unconfined
label, mode = libapparmor.aa_getcon()
self.assertEqual(label, "unconfined", "aa_getcon label should be unconfined")
self.assertIsNone(mode, "aa_getcon mode should be unconfined")
# extern int aa_getpeercon(int fd, char **label, char **mode);
# extern int aa_query_file_path(uint32_t mask, const char *label,
# const char *path, int *allowed, int *audited);
@unittest.skipUnless(libapparmor.aa_is_enabled(), "AppArmor is not enabled")
def test_aa_query_file_path(self):
aa_query_mask = libapparmor.AA_MAY_EXEC | libapparmor.AA_MAY_READ | libapparmor.AA_MAY_WRITE
allowed, audited = libapparmor.aa_query_file_path(aa_query_mask, "unconfined", "/tmp/hello")
self.assertTrue(allowed)
self.assertFalse(audited)
# extern int aa_query_link_path(const char *label, const char *target,
# const char *link, int *allowed, int *audited);
class AALogParsePythonBindingsTests(unittest.TestCase):
def setUp(self):
# REPORT ALL THE OUTPUT
self.maxDiff = None
@@ -118,6 +208,9 @@ class AAPythonBindingsTests(unittest.TestCase):
# FIXME: out files should report log version?
# FIXME: or can we just deprecate v1 logs?
continue
elif key == "thisown":
# SWIG generates this key to track memory allocation
continue
elif key in NO_VALUE_MAP:
if NO_VALUE_MAP[key] == value:
continue
@@ -142,7 +235,7 @@ def main():
def stub_test(self, testname=f):
self._runtest(testname)
stub_test.__doc__ = "test " + f
setattr(AAPythonBindingsTests, 'test_' + f, stub_test)
setattr(AALogParsePythonBindingsTests, 'test_' + f, stub_test)
return unittest.main(verbosity=2)

View File

@@ -107,7 +107,7 @@ int print_results(aa_log_record *record)
print_string("Name", record->name);
print_string("Command", record->comm);
print_string("Name2", record->name2);
print_string("Namespace", record->namespace);
print_string("Namespace", record->aa_namespace);
print_string("Attribute", record->attribute);
print_long("Task", record->task, 0);
print_long("Parent", record->parent, 0);
@@ -142,7 +142,7 @@ int print_results(aa_log_record *record)
print_string("Execpath", record->execpath);
print_string("Class", record->class);
print_string("Class", record->rule_class);
print_long("Epoch", record->epoch, 0);
print_long("Audit subid", (long) record->audit_sub_id, 0);

View File

@@ -1,2 +1,4 @@
/home/cb/bin/hello.sh {
/usr/bin/rm mrix,
}

View File

@@ -1,2 +1,4 @@
/usr/bin/wireshark {
/usr/lib64/wireshark/extcap/androiddump mrix,
}

View File

@@ -1,4 +1,4 @@
/bin/ping {
ping2 ix,
/bin/ping mrix,
}

View File

@@ -1,4 +1,4 @@
/bin/ping {
/bin/ping ix,
/bin/ping mrix,
}

View File

@@ -1,4 +1,4 @@
/bin/ping {
/bin/ping ix,
/bin/ping mrix,
}

View File

@@ -0,0 +1,4 @@
/home/steve/aa-regression-tests/link {
/tmp/sdtest.8236-29816-IN8243/target l,
}

View File

@@ -0,0 +1 @@
2025-01-27T13:01:36.226987+05:30 sec-plucky-amd64 kernel: audit: type=1400 audit(1737963096.225:3240): apparmor="AUDIT" operation="getattr" class="file" profile="/usr/sbin/mosquitto" name="/etc/mosquitto/pwfile" pid=8119 comm="mosquitto" requested_mask="r" fsuid=122 ouid=122

View File

@@ -0,0 +1,15 @@
START
File: testcase36.in
Event type: AA_RECORD_AUDIT
Audit ID: 1737963096.225:3240
Operation: getattr
Mask: r
fsuid: 122
ouid: 122
Profile: /usr/sbin/mosquitto
Name: /etc/mosquitto/pwfile
Command: mosquitto
PID: 8119
Class: file
Epoch: 1737963096
Audit subid: 3240

View File

@@ -0,0 +1,4 @@
/usr/sbin/mosquitto {
/etc/mosquitto/pwfile r,
}

View File

@@ -1,3 +1,4 @@
/tmp/apparmor-2.8.0/tests/regression/apparmor/dbus_service {
dbus send bus=system path=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=LookupDynamicUserByName peer=( name=org.freedesktop.systemd1, label=unconfined),
dbus send bus=system path=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=LookupDynamicUserByName peer=(label=unconfined),
}

View File

@@ -0,0 +1 @@
[ 310.308737] audit: type=1400 audit(1724847289.985:631): apparmor="ALLOWED" operation="getsockname" class="net" profile="/usr/bin/wg" pid=15374 comm="wg" laddr=::ffff:127.0.0.1 lport=53131 faddr=::ffff:127.0.0.1 fport=51821 saddr=::ffff:127.0.0.1 src=53131 family="inet6" sock_type="dgram" protocol=17 requested="getattr" denied="getattr"

View File

@@ -0,0 +1,20 @@
START
File: testcase_network_12.in
Event type: AA_RECORD_ALLOWED
Audit ID: 1724847289.985:631
Operation: getsockname
Mask: getattr
Denied Mask: getattr
Profile: /usr/bin/wg
Command: wg
PID: 15374
Network family: inet6
Socket type: dgram
Protocol: udp
Local addr: ::ffff:127.0.0.1
Foreign addr: ::ffff:127.0.0.1
Local port: 53131
Foreign port: 51821
Class: net
Epoch: 1724847289
Audit subid: 631

View File

@@ -0,0 +1,4 @@
/usr/bin/wg {
network (getattr) inet6 dgram ip=::ffff:127.0.0.1 port=53131,
}

View File

@@ -38,7 +38,7 @@ MANPAGES=apparmor.d.5 apparmor.7 apparmor_parser.8 aa-teardown.8 apparmor_xattrs
# parse.error=verbose supported from 3.0 so just test on that
# TODO move to autoconf
BISON_MAJOR:=$(shell bison --version | awk '/^bison/ { print ($$NF) }' | awk -F. '{print $$1 }')
USE_PARSE_ERROR:=$(shell test ${BISON_MAJOR} -ge 3 && echo true)
USE_PARSE_ERROR:=$(shell test "${BISON_MAJOR}" -ge 3 && echo true)
YACC := bison
YFLAGS := -d
@@ -365,6 +365,9 @@ cap_names.h: generated_cap_names.h base_cap_names.h
exit 1; \
fi
.PHONY: tst_binaries
tst_binaries: $(TESTS)
tst_lib: lib.c parser.h $(filter-out lib.o, ${TEST_OBJECTS})
$(CXX) $(TEST_CFLAGS) -o $@ $< $(filter-out $(<:.c=.o), ${TEST_OBJECTS}) $(TEST_LDFLAGS) $(TEST_LDLIBS)
tst_%: parser_%.c parser.h $(filter-out parser_%.o, ${TEST_OBJECTS})

View File

@@ -175,7 +175,7 @@ B<NETWORK PEER EXPR> = 'peer' '=' '(' ( I<NETWORK IP COND> | I<NETWORK PORT COND
B<NETWORK IP COND> = 'ip' '=' ( 'none' | I<NETWORK IPV4> | I<NETWORK IPV6> )
B<NETWORK PORT COND> = 'port' '=' ( I<NETWORK PORT> )
B<NETWORK PORT COND> = 'port' '=' ( I<NETWORK PORT> | I<NETWORK PORT> '-' I<NETWORK PORT> )
B<NETWORK IPV4> = IPv4, represented by four 8-bit decimal numbers separated by '.'
@@ -388,7 +388,7 @@ aa_change_hat(2) can take advantage of subprofiles to run under different
confinements, dependent on program logic. Several aa_change_hat(2)-aware
applications exist, including an Apache module, mod_apparmor(5); a PAM
module, pam_apparmor; and a Tomcat valve, tomcat_apparmor. Applications
written or modified to use change_profile(2) transition permanently to the
written or modified to use aa_change_profile(2) transition permanently to the
specified profile. libvirt is one such application.
=head2 Profile Head
@@ -604,7 +604,7 @@ modes:
=item B<Ux>
- unconfined execute -- scrub the environment
- unconfined execute -- use ld.so(8) secure-execution mode
=item B<px>
@@ -612,7 +612,7 @@ modes:
=item B<Px>
- discrete profile execute -- scrub the environment
- discrete profile execute -- use ld.so(8) secure-execution mode
=item B<cx>
@@ -620,7 +620,7 @@ modes:
=item B<Cx>
- transition to subprofile on execute -- scrub the environment
- transition to subprofile on execute -- use ld.so(8) secure-execution mode
=item B<ix>
@@ -632,7 +632,7 @@ modes:
=item B<Pix>
- discrete profile execute with inherit fallback -- scrub the environment
- discrete profile execute with inherit fallback -- use ld.so(8) secure-execution mode
=item B<cix>
@@ -640,7 +640,7 @@ modes:
=item B<Cix>
- transition to subprofile on execute with inherit fallback -- scrub the environment
- transition to subprofile on execute with inherit fallback -- use ld.so(8) secure-execution mode
=item B<pux>
@@ -648,7 +648,7 @@ modes:
=item B<PUx>
- discrete profile execute with fallback to unconfined -- scrub the environment
- discrete profile execute with fallback to unconfined -- use ld.so(8) secure-execution mode
=item B<cux>
@@ -656,7 +656,7 @@ modes:
=item B<CUx>
- transition to subprofile on execute with fallback to unconfined -- scrub the environment
- transition to subprofile on execute with fallback to unconfined -- use ld.so(8) secure-execution mode
=item B<deny x>
@@ -715,20 +715,20 @@ constrained, see the apparmor(7) man page.
B<WARNING> 'ux' should only be used in very special cases. It enables the
designated child processes to be run without any AppArmor protection.
'ux' does not scrub the environment of variables such as LD_PRELOAD;
as a result, the calling domain may have an undue amount of influence
over the callee. Use this mode only if the child absolutely must be
'ux' does not use ld.so(8) secure-execution mode to clear variables such as
LD_PRELOAD; as a result, the calling domain may have an undue amount of
influence over the callee. Use this mode only if the child absolutely must be
run unconfined and LD_PRELOAD must be used. Any profile using this mode
provides negligible security. Use at your own risk.
Incompatible with other exec transition modes and the deny qualifier.
=item B<Ux - unconfined execute -- scrub the environment>
=item B<Ux - unconfined execute -- use ld.so(8) secure-execution mode>
'Ux' allows the named program to run in 'ux' mode, but AppArmor
will invoke the Linux Kernel's B<unsafe_exec> routines to scrub
the environment, similar to setuid programs. (See ld.so(8) for some
information on setuid/setgid environment scrubbing.)
will invoke the Linux Kernel's B<unsafe_exec> routines to set ld.so(8)
secure-execution mode and clear environment variables such as LD_PRELOAD,
similar to setuid programs. (See ld.so(8) for more information.)
B<WARNING> 'Ux' should only be used in very special cases. It enables the
designated child processes to be run without any AppArmor protection.
@@ -743,18 +743,18 @@ This mode requires that a discrete security profile is defined for a
program executed and forces an AppArmor domain transition. If there is
no profile defined then the access will be denied.
B<WARNING> 'px' does not scrub the environment of variables such as
LD_PRELOAD; as a result, the calling domain may have an undue amount of
B<WARNING> 'px' does not use ld.so(8) secure-execution mode to clear variables
such as LD_PRELOAD; as a result, the calling domain may have an undue amount of
influence over the callee.
Incompatible with other exec transition modes and the deny qualifier.
=item B<Px - Discrete Profile execute mode -- scrub the environment>
=item B<Px - Discrete Profile execute mode -- use ld.so(8) secure-execution mode>
'Px' allows the named program to run in 'px' mode, but AppArmor
will invoke the Linux Kernel's B<unsafe_exec> routines to scrub
the environment, similar to setuid programs. (See ld.so(8) for some
information on setuid/setgid environment scrubbing.)
will invoke the Linux Kernel's B<unsafe_exec> routines to set ld.so(8)
secure-execution mode and clear environment variables such as LD_PRELOAD,
similar to setuid programs. (See ld.so(8) for more information.)
Incompatible with other exec transition modes and the deny qualifier.
@@ -764,18 +764,18 @@ This mode requires that a local security profile is defined and forces an
AppArmor domain transition to the named profile. If there is no profile
defined then the access will be denied.
B<WARNING> 'cx' does not scrub the environment of variables such as
LD_PRELOAD; as a result, the calling domain may have an undue amount of
B<WARNING> 'cx' does not use ld.so(8) secure-execution mode to clear variables
such as LD_PRELOAD; as a result, the calling domain may have an undue amount of
influence over the callee.
Incompatible with other exec transition modes and the deny qualifier.
=item B<Cx - Transition to Subprofile execute mode -- scrub the environment>
=item B<Cx - Transition to Subprofile execute mode -- use ld.so(8) secure-execution mode>
'Cx' allows the named program to run in 'cx' mode, but AppArmor
will invoke the Linux Kernel's B<unsafe_exec> routines to scrub
the environment, similar to setuid programs. (See ld.so(8) for some
information on setuid/setgid environment scrubbing.)
will invoke the Linux Kernel's B<unsafe_exec> routines to set ld.so(8)
secure-execution mode and clear environment variables such as LD_PRELOAD,
similar to setuid programs. (See ld.so(8) for more information.)
Incompatible with other exec transition modes and the deny qualifier.
@@ -788,7 +788,7 @@ will inherit the current profile.
This mode is useful when a confined program needs to call another
confined program without gaining the permissions of the target's
profile, or losing the permissions of the current profile. There is no
version to scrub the environment because 'ix' executions don't change
version to set secure-execution mode because 'ix' executions don't change
privileges.
Incompatible with other exec transition modes and the deny qualifier.
@@ -996,13 +996,14 @@ can be represented by:
network ip=0.0.0.0; #allow INADDR_ANY
The network rules support the specification of local and remote IP
addresses and ports.
addresses, ports, and port ranges.
network ip=127.0.0.1 port=8080,
network peer=(ip=10.139.15.23 port=8081),
network ip=fd74:1820:b03a:b361::cf32 peer=(ip=fd74:1820:b03a:b361::a0f9),
network port=8080 peer=(port=8081),
network ip=127.0.0.1 port=8080 peer=(ip=10.139.15.23 port=8081),
network ip=127.0.0.1 port=8080-8084,
=head2 Mount Rules
@@ -1023,7 +1024,7 @@ If a conditional is specified using '=', then the rule only grants permission
for mounts matching the exactly specified options. For example, an AppArmor
policy with the following rule:
mount options=ro /dev/foo -E<gt> /mnt/,
mount options=ro /dev/foo -> /mnt/,
Would match:
@@ -1070,7 +1071,7 @@ grants permission for each set of options. This provides a shorthand when
writing mount rules which might help to logically break up a conditional. For
example, if an AppArmor policy has the following rule:
mount options=ro options=atime
mount options=ro options=atime,
both of these mount commands will match:
@@ -1318,7 +1319,7 @@ Example IO_URING rules:
=over 4
# Allow io_uring operations
io_ring,
io_uring,
# Allow creation of a polling thread
io_uring sqpoll,
@@ -1338,8 +1339,9 @@ pivot_root(2) is optionally specified in the 'pivot_root' rule using the
'oldroot=' prefix.
AppArmor 'pivot_root' rules can specify a profile transition to occur during
the pivot_root(2) system call. Note that AppArmor will only transition the
process calling pivot_root(2) to the new profile.
the pivot_root(2) system call. Note that this feature is currently not
supported by any kernel. Once it is supported, AppArmor will only
transition the process calling pivot_root(2) to the new profile.
The paths specified in 'pivot_root' rules must end with '/' since they are
directories.
@@ -1688,11 +1690,11 @@ rule set. Eg.
change_profile /bin/bash -> {new_profile1,new_profile2,new_profile3},
The exec mode dictates whether or not the Linux Kernel's B<unsafe_exec>
routines should be used to scrub the environment, similar to setuid programs.
(See ld.so(8) for some information on setuid/setgid environment scrubbing.) The
B<safe> mode sets up environment scrubbing to occur when the new application is
executed and B<unsafe> mode disables AppArmor's requirement for environment
scrubbing (the kernel and/or libc may still require environment scrubbing). An
routines should be used to set ld.so(8) secure-execution mode and clear
environment variables such as LD_PRELOAD, similar to setuid programs.
(See ld.so(8) for more information.) The B<safe> mode sets up secure-execution
mode for the new application, and B<unsafe> mode disables AppArmor's
requirement for it (the kernel and/or libc may still turn it on). An
exec mode can only be specified when an exec condition is present.
change_profile safe /bin/bash -> new_profile,
@@ -2157,7 +2159,7 @@ negative values match when specifying one or the other. Eg, 'rw' matches when
=head1 SEE ALSO
apparmor(7), apparmor_parser(8), apparmor_xattrs(7), aa-complain(1),
aa-enforce(1), aa_change_hat(2), mod_apparmor(5), and
aa-enforce(1), aa_change_hat(2), aa_change_profile(2), mod_apparmor(5), and
L<https://wiki.apparmor.net>.
=cut

View File

@@ -206,8 +206,8 @@ which can help debugging profiles.
=head2 Enable debug mode
When debug mode is enabled, AppArmor will log a few extra messages to
dmesg (not via the audit subsystem). For example, the logs will tell
whether environment scrubbing has been applied.
dmesg (not via the audit subsystem). For example, the logs will state when
ld.so(8) secure-execution mode has been applied in a profile transition.
To enable debug mode, run:

View File

@@ -63,7 +63,6 @@ typedef enum capability_flags {
} capability_flags;
int name_to_capability(const char *keyword);
void capabilities_init(void);
void __debug_capabilities(uint64_t capset, const char *name);
bool add_cap_feature_mask(struct aa_features *features, capability_flags flags);
void clear_cap_flag(capability_flags flags);

View File

@@ -43,7 +43,13 @@ optflag_table_t dumpflag_table[] = {
{ 1, "dfa-progress", "Dump dfa creation as in progress",
DUMP_DFA_PROGRESS | DUMP_DFA_STATS },
{ 1, "dfa-stats", "Dump dfa creation stats", DUMP_DFA_STATS },
{ 1, "dfa-states", "Dump dfa state diagram", DUMP_DFA_STATES },
{ 1, "dfa-states", "Dump final dfa state information", DUMP_DFA_STATES },
{ 1, "dfa-compressed-states", "Dump compressed dfa state information", DUMP_DFA_COMPTRESSED_STATES },
{ 1, "dfa-states-initial", "Dump dfa state immediately after initial build", DUMP_DFA_STATES_INIT },
{ 1, "dfa-states-post-filter", "Dump dfa state immediately after filtering deny", DUMP_DFA_STATES_POST_FILTER },
{ 1, "dfa-states-post-minimize", "Dump dfa state immediately after initial build", DUMP_DFA_STATES_POST_MINIMIZE },
{ 1, "dfa-states-post-unreachable", "Dump dfa state immediately after filtering deny", DUMP_DFA_STATES_POST_UNREACHABLE },
{ 1, "dfa-perms-build", "Dump permission being built from accept node", DUMP_DFA_PERMS },
{ 1, "dfa-graph", "Dump dfa dot (graphviz) graph", DUMP_DFA_GRAPH },
{ 1, "dfa-minimize", "Dump dfa minimization", DUMP_DFA_MINIMIZE },
{ 1, "dfa-unreachable", "Dump dfa unreachable states",

View File

@@ -95,6 +95,12 @@
#define ALL_USER_EXEC (AA_USER_EXEC | AA_USER_EXEC_TYPE)
#define ALL_OTHER_EXEC (AA_OTHER_EXEC | AA_OTHER_EXEC_TYPE)
#define AA_USER_EXEC_INHERIT (AA_EXEC_INHERIT << AA_USER_SHIFT)
#define AA_OTHER_EXEC_INHERIT (AA_EXEC_INHERIT << AA_OTHER_SHIFT)
#define AA_USER_EXEC_MMAP (AA_OLD_EXEC_MMAP << AA_USER_SHIFT)
#define AA_OTHER_EXEC_MMAP (AA_OLD_EXEC_MMAP << AA_OTHER_SHIFT)
#define AA_LINK_BITS ((AA_OLD_MAY_LINK << AA_USER_SHIFT) | \
(AA_OLD_MAY_LINK << AA_OTHER_SHIFT))
@@ -175,6 +181,28 @@ static inline int is_merged_x_consistent(int a, int b)
return 1;
}
/* Arbitrary maximum and minimum priority that userspace can specify.
* Internally we handle up to MAX_INTERNAL_PRIORITY and down to
* MIN_INTERNAL_PRIORITY. Never allow INT_MAX or INT_MIN, because cmp
* uses subtraction and that can overflow. To ensure we don't
* over/underflow, make the internal max/min one more than what is
* allowed on rules.
*
* See the note on mediates_priority.
*/
#define MIN_POLICY_PRIORITY (-1000)
#define MAX_POLICY_PRIORITY (1000)
/* internally we need a priority that any policy based rule can override
* and a priority that no policy based rule can override. These are
* used on rules encoding what abi/classes are supported by the
* compiled policy.
*/
#define MIN_INTERNAL_PRIORITY (MIN_POLICY_PRIORITY - 1)
#define MAX_INTERNAL_PRIORITY (MAX_POLICY_PRIORITY + 1)
#endif /* ! _IMMUNIX_H */
/* LocalWords: MMAP

View File

@@ -127,6 +127,13 @@ int io_uring_rule::gen_policy_re(Profile &prof)
audit == AUDIT_FORCE ? perms : 0,
parseopts))
goto fail;
/* add a mediates_io_uring rule for every rule added. It
* needs to be the same priority
*/
if (!prof.policy.rules->add_rule(buf.c_str(), priority,
RULE_ALLOW, AA_MAY_READ, 0,
parseopts))
goto fail;
if (perms & AA_IO_URING_OVERRIDE_CREDS) {
buf = buffer.str(); /* update buf to have label */

View File

@@ -14,6 +14,14 @@ AR ?= ar
CFLAGS ?= -g -Wall -O2 ${EXTRA_CFLAGS} -std=gnu++0x
CXXFLAGS := ${CFLAGS} ${INCLUDE_APPARMOR}
LIB_HDRS = aare_rules.h flex-tables.h apparmor_re.h hfa.h chfa.h parse.h \
expr-tree.h policy_compat.h
OTHER_HDRS = ../common_optarg.h ../common_flags.h ../immunix.h \
../policydb.h ../perms.h ../rule.h
HDRS = ${LIB_HDRS} ${OTHER_HDRS}
ARFLAGS=-rcs
BISON := bison
@@ -27,17 +35,17 @@ libapparmor_re.a: parse.o expr-tree.o hfa.o chfa.o aare_rules.o policy_compat.o
expr-tree.o: expr-tree.cc expr-tree.h
hfa.o: hfa.cc apparmor_re.h hfa.h ../immunix.h policy_compat.h
hfa.o: hfa.cc ${HDRS}
aare_rules.o: aare_rules.cc aare_rules.h apparmor_re.h expr-tree.h hfa.h chfa.h parse.h ../immunix.h
aare_rules.o: aare_rules.cc ${HDRS}
chfa.o: chfa.cc chfa.h ../immunix.h
chfa.o: chfa.cc ${HDRS}
policy_compat.o: policy_compat.cc policy_compat.h ../perms.h ../immunix.h
policy_compat.o: policy_compat.cc ${HDRS}
parse.o : parse.cc apparmor_re.h expr-tree.h
parse.o : parse.cc ${HDRS}
parse.cc : parse.y parse.h flex-tables.h ../immunix.h
parse.cc : parse.y ${HDRS}
${BISON} -o $@ $<
clean:

View File

@@ -10,6 +10,199 @@ aare_rules.{h,cc} - code to that binds parse -> expr-tree -> hfa generation
-> chfa generation into a basic interface for converting
rules to a runtime ready state machine.
Notes on the compiler pipeline order
============================================
Front End: Program driver logic and policy text parsing into an
abstract syntax tree.
Middle Layer: Transforms and operations on the abstract syntax tree.
Converts syntax tree into expression tree for back end.
Back End: transforms of syntax tree, and creation of policy HFA from
expression trees and HFAs.
Basic order of the back end of the compiler pipeline and where the
dump information occurs in the pipeline.
===== Front End (parse -> AST) ================
|
v
yyparse
|
+--->--+-->-+
| |
| +-->---- +---------------------------<-----------------------+
| | | |
| | v |
| | yylex |
| | | |
| ^ token match |
| | | |
| | +----------------------------+ |
| | | | ^
| | v v |
| +-<- rule match? preprocess |
| | | |
| early var expansion +----------+-----------+ |
| | | | | |
^ v v v v |
| new rule() / new ent include variable conditional |
| | | | | |
| v +---->-----+----->-----+----->----+
| new rule semantic check
| |
+-----<-----+
|
----------- | ------ End of Parse --------------------
|
v
post_parse_profile semantic check
|
v
post_process
|
v
add implied rules()
|
v
process_profile_variables()
|
v
rule->expand_variables()
|
+--------+
|
v
replace aliases (to be moved to backend rewrite)
|
v
merge rules
|
v
profile->merge_rules()
|
v
+-->--rule->is_mergeable()
| |
^ v
| add to table
| |
+-------+--------+
|
v
sort->cmp()/oper<()
|
rule->merge()
|
+------------+
|
v
process_profile_rules
|
v
rule->gen_policy_re()
|
v
===== Mid layer (AST -> expr tree) =================
|
+-> add_rule() (aare_rules.{h,cc})
| |
| v
| rule parse (parse.y)
| | |
| | v
| | expr tree (expr-tree.{h,cc})
| | |
| v |
| unique perms | (aare_rules.{h,cc})
| | |
| +------ +
| |
| v
| add to rules expr tree (aare_rules.{h,c})
| |
+------+
|
+------------------+
|
v
create_dfablob()
|
v
expr tree
|
v
create_chfa() (aare_rules.cc)
|
v
expr normalization (expr-tree.{h,cc})
|
v
expr simplification (expr-tree.{h,c})
|
+- D expr-tree
|
+- D expr-simplified
|
==== Back End - Create cHFA out of expr tree and other HFAs ====
v
hfa creation (hfa.{h,cc})
|
+- D dfa-node-map
|
+- D dfa-uniq-perms
|
+- D dfa-states-initial
|
v
hfa rewrite (not yet implemented)
|
v
filter deny (hfa.{h,cc})
|
+- D dfa-states-post-filter
|
v
minimization (hfa.{h,cc})
|
+- D dfa-minimize-partitions
|
+- D dfa-minimize-uniq-perms
|
+- D dfa-states-post-minimize
|
v
unreachable state removal (hfa.{h,cc})
|
+- D dfa-states-post-unreachable
|
+- D dfa-states constructed hfa
|
+- D dfa-graph
|
v
equivalence class construction
|
+- D equiv
|
diff encode (hfa.{h,cc})
|
+- D diff-encode
|
compute perms table
|
+- D compressed-dfa == perm table dump
|
compressed hfa (chfa.{h,cc}
|
+- D compressed-dfa == transition tables
|
+- D dfa-compressed-states - compress HFA in state form
|
v
Return to Mid Layer
Notes on the compress hfa file format (chfa)
==============================================

View File

@@ -258,6 +258,9 @@ CHFA *aare_rules::create_chfa(int *min_match_len,
if (opts.dump & DUMP_DFA_UNIQ_PERMS)
dfa.dump_uniq_perms("dfa");
if (opts.dump & DUMP_DFA_STATES_INIT)
dfa.dump(cerr, NULL);
/* since we are building a chfa, use the info about
* whether the chfa supports extended perms to help
* determine whether we clear the deny info.
@@ -265,25 +268,26 @@ CHFA *aare_rules::create_chfa(int *min_match_len,
* information supported by the backend
*/
if (!extended_perms ||
// TODO: we should drop DFA_MINIMIZE check here but doing
// so changes behavior. Do as a separate patch and fixup
// tests, etc.
((opts.control & CONTROL_DFA_FILTER_DENY) &&
(opts.control & CONTROL_DFA_MINIMIZE)))
((opts.control & CONTROL_DFA_FILTER_DENY))) {
dfa.apply_and_clear_deny();
if (opts.dump & DUMP_DFA_STATES_POST_FILTER)
dfa.dump(cerr, NULL);
}
if (opts.control & CONTROL_DFA_MINIMIZE) {
dfa.minimize(opts);
if (opts.dump & DUMP_DFA_MIN_UNIQ_PERMS)
dfa.dump_uniq_perms("minimized dfa");
if (opts.dump & DUMP_DFA_STATES_POST_MINIMIZE)
dfa.dump(cerr, NULL);
}
if (opts.control & CONTROL_DFA_REMOVE_UNREACHABLE)
if (opts.control & CONTROL_DFA_REMOVE_UNREACHABLE) {
dfa.remove_unreachable(opts);
if (opts.dump & DUMP_DFA_STATES_POST_UNREACHABLE)
dfa.dump(cerr, NULL);
}
if (opts.dump & DUMP_DFA_STATES)
dfa.dump(cerr);
dfa.dump(cerr, NULL);
if (opts.dump & DUMP_DFA_GRAPH)
dfa.dump_dot_graph(cerr);
@@ -310,11 +314,25 @@ CHFA *aare_rules::create_chfa(int *min_match_len,
//cerr << "Checking extended perms " << extended_perms << "\n";
if (extended_perms) {
//cerr << "creating permstable\n";
dfa.compute_perms_table(perms_table, prompt);
dfa.compute_perms_table(perms_table, prompt);
// TODO: move perms table to a class
if (opts.dump & DUMP_DFA_TRANS_TABLE && perms_table.size()) {
cerr << "Perms Table size: " << perms_table.size() << "\n";
perms_table[0].dump_header(cerr);
for (size_t i = 0; i < perms_table.size(); i++) {
perms_table[i].dump(cerr);
cerr << "accept1: 0x";
cerr << ", accept2: 0x";
cerr << "\n";
}
cerr << "\n";
}
}
chfa = new CHFA(dfa, eq, opts, extended_perms, prompt);
if (opts.dump & DUMP_DFA_TRANS_TABLE)
chfa->dump(cerr);
if (opts.dump & DUMP_DFA_COMPTRESSED_STATES)
dfa.dump(cerr, &chfa->num);
}
catch(int error) {
return NULL;

View File

@@ -90,8 +90,10 @@ public:
else
node = new MatchFlag(priority, perms, audit);
pair<iterator, bool> val = nodes.insert(make_pair(tmp, node));
if (val.second == false)
if (val.second == false) {
delete node;
return val.first->second;
}
return node;
}
return res->second;

View File

@@ -60,5 +60,11 @@
#define DUMP_RULE_MERGE (1 << 22)
#define DUMP_DFA_STATE32 (1 << 23)
#define DUMP_DFA_FLAGS_TABLE (1 << 24)
#define DUMP_DFA_STATES_INIT (1 << 25)
#define DUMP_DFA_STATES_POST_FILTER (1 << 26)
#define DUMP_DFA_STATES_POST_MINIMIZE (1 << 27)
#define DUMP_DFA_STATES_POST_UNREACHABLE (1 << 28)
#define DUMP_DFA_COMPTRESSED_STATES (1 << 29)
#define DUMP_DFA_PERMS (1 << 30)
#endif /* APPARMOR_RE_H */

View File

@@ -25,6 +25,8 @@
#include <iostream>
#include <fstream>
#include <limits>
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
@@ -307,11 +309,16 @@ void CHFA::dump(ostream &os)
st.insert(make_pair(i->second, i->first));
}
os << "size=" << default_base.size() << " (accept, default, base): {state} -> {default state}" << "\n";
os << "size=" << default_base.size() << " (accept, accept2, default, base): {state} -> {default state}" << "\n";
for (size_t i = 0; i < default_base.size(); i++) {
os << i << ": ";
os << "(" << accept[i] << ", " << num[default_base[i].first]
<< ", " << default_base[i].second << ")";
os << "(" << accept[i] << ", ";
if (accept2.size() > 0)
os << accept2[i];
else
os << "---, ";
os << num[default_base[i].first] << ", " <<
default_base[i].second << ")";
if (st[i])
os << " " << *st[i];
if (default_base[i].first)
@@ -587,10 +594,11 @@ void CHFA::weld_file_to_policy(CHFA &file_chfa, size_t &new_start,
// to repeat
assert(accept.size() == old_base_size);
accept.resize(accept.size() + file_chfa.accept.size());
size_t size = policy_perms.size();
assert(policy_perms.size() < std::numeric_limits<ssize_t>::max());
ssize_t size = (ssize_t) policy_perms.size();
policy_perms.resize(size*2 + file_perms.size());
// shift and double the policy perms
for (size_t i = size - 1; size >= 0; i--) {
for (ssize_t i = size - 1; i >= 0; i--) {
policy_perms[i*2] = policy_perms[i];
policy_perms[i*2 + 1] = policy_perms[i];
}

View File

@@ -63,7 +63,7 @@ class CHFA {
DefaultBase default_base;
NextCheck next_check;
const State *start;
map<const State *, size_t> num;
Renumber_Map num;
map<transchar, transchar> eq;
unsigned int chfaflags;
private:

View File

@@ -245,6 +245,7 @@ ostream &operator<<(ostream &os, Node &node);
#define NODE_TYPE_MATCHFLAG (1 << 18)
#define NODE_TYPE_EXACTMATCHFLAG (1 << 19)
#define NODE_TYPE_DENYMATCHFLAG (1 << 20)
#define NODE_TYPE_PROMPTMATCHFLAG (1 << 21)
/* An abstract node in the syntax tree. */
class Node {
@@ -890,7 +891,7 @@ public:
{
type_flags |= NODE_TYPE_MATCHFLAG;
}
ostream &dump(ostream &os) { return os << "< 0x" << hex << perms << '>'; }
ostream &dump(ostream &os) { return os << "< 0x" << hex << perms << std::dec << '>'; }
int priority;
perm32_t perms;
@@ -915,7 +916,10 @@ public:
class PromptMatchFlag: public MatchFlag {
public:
PromptMatchFlag(int priority, perm32_t prompt, perm32_t audit): MatchFlag(priority, prompt, audit) {}
PromptMatchFlag(int priority, perm32_t prompt, perm32_t audit): MatchFlag(priority, prompt, audit)
{
type_flags |= NODE_TYPE_PROMPTMATCHFLAG;
}
};

View File

@@ -83,6 +83,21 @@ ostream &operator<<(ostream &os, State &state)
return os;
}
ostream &operator<<(ostream &os,
const std::pair<State * const, Renumber_Map *> &p)
{
/* dump the state label */
if (p.second && (*p.second)[p.first] != (size_t) p.first->label) {
os << '{';
os << (*p.second)[p.first];
os << " == " << *(p.first);
os << '}';
} else {
os << *(p.first);
}
return os;
}
/**
* diff_weight - Find differential compression distance between @rel and @this
* @rel: State to compare too
@@ -302,7 +317,8 @@ static void split_node_types(NodeSet *nodes, NodeSet **anodes, NodeSet **nnodes
*nnodes = nodes;
}
State *DFA::add_new_state(NodeSet *anodes, NodeSet *nnodes, State *other)
State *DFA::add_new_state(optflags const &opts, NodeSet *anodes,
NodeSet *nnodes, State *other)
{
NodeVec *nnodev, *anodev;
nnodev = nnodes_cache.insert(nnodes);
@@ -310,7 +326,7 @@ State *DFA::add_new_state(NodeSet *anodes, NodeSet *nnodes, State *other)
ProtoState proto;
proto.init(nnodev, anodev);
State *state = new State(node_map.size(), proto, other, filedfa);
State *state = new State(opts, node_map.size(), proto, other, filedfa);
pair<NodeMap::iterator,bool> x = node_map.insert(proto, state);
if (x.second == false) {
delete state;
@@ -322,7 +338,7 @@ State *DFA::add_new_state(NodeSet *anodes, NodeSet *nnodes, State *other)
return x.first->second;
}
State *DFA::add_new_state(NodeSet *nodes, State *other)
State *DFA::add_new_state(optflags const &opts, NodeSet *nodes, State *other)
{
/* The splitting of nodes should probably get pushed down into
* follow(), ie. put in separate lists from the start
@@ -330,12 +346,12 @@ State *DFA::add_new_state(NodeSet *nodes, State *other)
NodeSet *anodes, *nnodes;
split_node_types(nodes, &anodes, &nnodes);
State *state = add_new_state(anodes, nnodes, other);
State *state = add_new_state(opts, anodes, nnodes, other);
return state;
}
void DFA::update_state_transitions(State *state)
void DFA::update_state_transitions(optflags const &opts, State *state)
{
/* Compute possible transitions for state->nodes. This is done by
* iterating over all the nodes in state->nodes and combining the
@@ -358,7 +374,8 @@ void DFA::update_state_transitions(State *state)
/* check the default transition first */
if (cases.otherwise)
state->otherwise = add_new_state(cases.otherwise, nonmatching);
state->otherwise = add_new_state(opts, cases.otherwise,
nonmatching);
else
state->otherwise = nonmatching;
@@ -367,7 +384,7 @@ void DFA::update_state_transitions(State *state)
*/
for (Cases::iterator j = cases.begin(); j != cases.end(); j++) {
State *target;
target = add_new_state(j->second, nonmatching);
target = add_new_state(opts, j->second, nonmatching);
/* Don't insert transition that the otherwise transition
* already covers
@@ -414,7 +431,7 @@ void DFA::process_work_queue(const char *header, optflags const &opts)
/* Update 'from's transitions, and if it transitions to any
* unknown State create it and add it to the work_queue
*/
update_state_transitions(from);
update_state_transitions(opts, from);
} /* while (!work_queue.empty()) */
}
@@ -444,8 +461,8 @@ DFA::DFA(Node *root, optflags const &opts, bool buildfiledfa): root(root), filed
(*i)->compute_followpos();
}
nonmatching = add_new_state(new NodeSet, NULL);
start = add_new_state(new NodeSet(root->firstpos), nonmatching);
nonmatching = add_new_state(opts, new NodeSet, NULL);
start = add_new_state(opts, new NodeSet(root->firstpos), nonmatching);
/* the work_queue contains the states that need to have their
* transitions computed. This could be done with a recursive
@@ -493,11 +510,6 @@ DFA::DFA(Node *root, optflags const &opts, bool buildfiledfa): root(root), filed
*/
nnodes_cache.clear();
node_map.clear();
/* once created the priority information is no longer needed and
* can prevent sets with the same perms and different priorities
* from being merged during minimization
*/
clear_priorities();
}
DFA::~DFA()
@@ -546,6 +558,14 @@ void DFA::dump_uniq_perms(const char *s)
//TODO: add prompt
}
// make sure work_queue and reachable insertion are always done together
static void push_reachable(set<State *> &reachable, list<State *> &work_queue,
State *state)
{
work_queue.push_back(state);
reachable.insert(state);
}
/* Remove dead or unreachable states */
void DFA::remove_unreachable(optflags const &opts)
{
@@ -553,19 +573,18 @@ void DFA::remove_unreachable(optflags const &opts)
/* find the set of reachable states */
reachable.insert(nonmatching);
work_queue.push_back(start);
push_reachable(reachable, work_queue, start);
while (!work_queue.empty()) {
State *from = work_queue.front();
work_queue.pop_front();
reachable.insert(from);
if (from->otherwise != nonmatching &&
reachable.find(from->otherwise) == reachable.end())
work_queue.push_back(from->otherwise);
push_reachable(reachable, work_queue, from->otherwise);
for (StateTrans::iterator j = from->trans.begin(); j != from->trans.end(); j++) {
if (reachable.find(j->second) == reachable.end())
work_queue.push_back(j->second);
push_reachable(reachable, work_queue, j->second);
}
}
@@ -651,13 +670,6 @@ int DFA::apply_and_clear_deny(void)
return c;
}
void DFA::clear_priorities(void)
{
for (Partition::iterator i = states.begin(); i != states.end(); i++)
(*i)->perms.priority = 0;
}
/* minimize the number of dfa states */
void DFA::minimize(optflags const &opts)
@@ -1075,11 +1087,11 @@ void DFA::dump_diff_encode(ostream &os)
/**
* text-dump the DFA (for debugging).
*/
void DFA::dump(ostream & os)
void DFA::dump(ostream &os, Renumber_Map *renum)
{
for (Partition::iterator i = states.begin(); i != states.end(); i++) {
if (*i == start || (*i)->perms.is_accept()) {
os << **i;
os << make_pair(*i, renum);
if (*i == start) {
os << " <== ";
(*i)->perms.dump_header(os);
@@ -1102,7 +1114,7 @@ void DFA::dump(ostream & os)
} else {
if (first) {
first = false;
os << **i << " perms: ";
os << make_pair(*i, renum) << " perms: ";
if ((*i)->perms.is_accept())
(*i)->perms.dump(os);
else
@@ -1110,7 +1122,7 @@ void DFA::dump(ostream & os)
os << "\n";
}
os << " "; j->first.dump(os) << " -> " <<
*(j)->second;
make_pair(j->second, renum);
if ((j)->second->perms.is_accept())
os << " ", (j->second)->perms.dump(os);
os << "\n";
@@ -1120,7 +1132,7 @@ void DFA::dump(ostream & os)
if ((*i)->otherwise != nonmatching) {
if (first) {
first = false;
os << **i << " perms: ";
os << make_pair(*i, renum) << " perms: ";
if ((*i)->perms.is_accept())
(*i)->perms.dump(os);
else
@@ -1135,7 +1147,7 @@ void DFA::dump(ostream & os)
os << *k;
}
}
os << "] -> " << *(*i)->otherwise;
os << "] -> " << make_pair((*i)->otherwise, renum);
if ((*i)->otherwise->perms.is_accept())
os << " ", (*i)->otherwise->perms.dump(os);
os << "\n";
@@ -1334,8 +1346,7 @@ void DFA::compute_perms_table(vector <aa_perms> &perms_table, bool prompt)
perms_table.resize(states.size() * mult);
// nonmatching and start need to be 0 and 1 so handle outside of loop
if (filedfa)
compute_perms_table_ent(nonmatching, 0, perms_table, prompt);
compute_perms_table_ent(nonmatching, 0, perms_table, prompt);
compute_perms_table_ent(start, 1, perms_table, prompt);
for (Partition::iterator i = states.begin(); i != states.end(); i++) {
@@ -1382,84 +1393,186 @@ static inline int diff_qualifiers(perm32_t perm1, perm32_t perm2)
(perm1 & AA_EXEC_TYPE) != (perm2 & AA_EXEC_TYPE));
}
/* update a single permission based on priority - only called if the match->perms | match->audit bit is set */
static int pri_update_perm(optflags const &opts, vector<int> &priority, int i,
MatchFlag *match, perms_t &perms, perms_t &exact,
bool filedfa)
{
if (priority[i] > match->priority) {
if (opts.dump & DUMP_DFA_PERMS)
cerr << " " << match << "[" << i << "]=" << priority[i] << " > " << match->priority << " SKIPPING " << hex << (match->perms) << "/" << (match->audit) << dec << "\n";
return 0;
}
perm32_t xmask = 0;
perm32_t mask = 1 << i;
perm32_t amask = mask;
// drop once we move the xindex out of the perms in the front end
if (filedfa) {
if (mask & AA_USER_EXEC) {
xmask = AA_USER_EXEC_TYPE;
// ix implies EXEC_MMAP
if (match->perms & AA_EXEC_INHERIT) {
xmask |= AA_USER_EXEC_MMAP;
//USER_EXEC_MAP = 6
if (priority[6] < match->priority)
priority[6] = match->priority;
}
amask = mask | xmask;
} else if (mask & AA_OTHER_EXEC) {
xmask = AA_OTHER_EXEC_TYPE;
// ix implies EXEC_MMAP
if (match->perms & AA_OTHER_EXEC_INHERIT) {
xmask |= AA_OTHER_EXEC_MMAP;
//OTHER_EXEC_MAP = 20
if (priority[20] < match->priority)
priority[20] = match->priority;
}
amask = mask | xmask;
} else if (((mask & AA_USER_EXEC_MMAP) &&
(match->perms & AA_USER_EXEC_INHERIT)) ||
((mask & AA_OTHER_EXEC_MMAP) &&
(match->perms & AA_OTHER_EXEC_INHERIT))) {
// if exec && ix we handled mmp above
if (opts.dump & DUMP_DFA_PERMS)
cerr << " " << match << "[" << i << "]=" << priority[i] << " <= " << match->priority << " SKIPPING mmap unmasked " << hex << match->perms << "/" << match->audit << " masked " << (match->perms & amask) << "/" << (match->audit & amask) << " data " << (perms.allow & mask) << "/" << (perms.audit & mask) << " exact " << (exact.allow & mask) << "/" << (exact.audit & mask) << dec << "\n";
return 0;
}
}
if (opts.dump & DUMP_DFA_PERMS)
cerr << " " << match << "[" << i << "]=" << priority[i] << " vs. " << match->priority << " mask: " << hex << mask << " xmask: " << xmask << " amask: " << amask << dec << "\n";
if (priority[i] < match->priority) {
if (opts.dump & DUMP_DFA_PERMS)
cerr << " " << match << "[" << i << "]=" << priority[i] << " < " << match->priority << " clearing " << hex << (perms.allow & amask) << "/" << (perms.audit & amask) << " -> " << dec;
priority[i] = match->priority;
perms.clear_bits(amask);
exact.clear_bits(amask);
if (opts.dump & DUMP_DFA_PERMS)
cerr << hex << (perms.allow & amask) << "/" << (perms.audit & amask) << dec << "\n";
}
// the if conditions in order of permission priority
if (match->is_type(NODE_TYPE_DENYMATCHFLAG)) {
if (opts.dump & DUMP_DFA_PERMS)
cerr << " " << match << "[" << i << "]=" << priority[i] << " <= " << match->priority << " deny " << hex << (match->perms & amask) << "/" << (match->audit & amask) << dec << "\n";
perms.deny |= match->perms & amask;
perms.quiet |= match->audit & amask;
perms.allow &= ~amask;
perms.audit &= ~amask;
perms.prompt &= ~amask;
} else if (match->is_type(NODE_TYPE_EXACTMATCHFLAG)) {
/* exact match only asserts dominance on the XTYPE */
if (opts.dump & DUMP_DFA_PERMS)
cerr << " " << match << "[" << i << "]=" << priority[i] << " <= " << match->priority << " exact " << hex << (match->perms & amask) << "/" << (match->audit & amask) << dec << "\n";
if (filedfa &&
!is_merged_x_consistent(exact.allow, match->perms & amask)) {
if (opts.dump & DUMP_DFA_PERMS)
cerr << " " << match << "[" << i << "]=" << priority[i] << " <= " << match->priority << " exact match conflict" << "\n";
return 1;
}
exact.allow |= match->perms & amask;
exact.audit |= match->audit & amask;
// dominance is only done for XTYPE so only clear that
// note xmask only set if setting x perm bit, so this
// won't clear for other bit types
perms.allow &= ~xmask;
perms.audit &= ~xmask;
perms.prompt &= ~xmask;
perms.allow |= match->perms & amask;
perms.audit |= match->audit & amask;
// can't specify exact prompt atm
} else if (!match->is_type(NODE_TYPE_PROMPTMATCHFLAG)) {
// allow perms, if exact has been encountered will already be set
// if overlaps x here, don't conflict, because exact will override
if (opts.dump & DUMP_DFA_PERMS)
cerr << " " << match << "[" << i << "]=" << priority[i] << " <= " << match->priority << " allow " << hex << (match->perms & amask) << "/" << (match->audit & amask) << dec << "\n";
if (filedfa && !(exact.allow & mask) &&
!is_merged_x_consistent(perms.allow, match->perms & amask)) {
if (opts.dump & DUMP_DFA_PERMS)
cerr << " " << match << "[" << i << "]=" << priority[i] << " <= " << match->priority << " allow match conflict" << "\n";
return 1;
}
// mask off if XTYPE in xmatch
if ((exact.allow | exact.audit) & mask) {
// mask == amask & ~xmask
perms.allow |= match->perms & mask;
perms.audit |= match->audit & mask;
} else {
perms.allow |= match->perms & amask;
perms.audit |= match->audit & amask;
}
} else { // if (match->is_type(NODE_TYPE_PROMPTMATCHFLAG)) {
if (opts.dump & DUMP_DFA_PERMS)
cerr << " " << match << "[" << i << "]=" << priority[i] << " <= " << match->priority << " promt " << hex << (match->perms & amask) << "/" << (match->audit & amask) << dec << "\n";
if (filedfa && !((exact.allow | perms.allow) & mask) &&
!is_merged_x_consistent(perms.allow, match->perms & amask)) {
if (opts.dump & DUMP_DFA_PERMS)
cerr << " " << match << "[" << i << "]=" << priority[i] << " <= " << match->priority << " prompt match conflict" << "\n";
return 1;
}
if ((exact.allow | exact.audit | perms.allow | perms.audit) & mask) {
// mask == amask & ~xmask
perms.prompt |= match->perms & mask;
perms.audit |= match->audit & mask;
} else {
perms.prompt |= match->perms & amask;
perms.audit |= match->audit & amask;
}
}
return 0;
}
/**
* Compute the permission flags that this state corresponds to. If we
* have any exact matches, then they override the execute and safe
* execute flags.
*/
int accept_perms(NodeVec *state, perms_t &perms, bool filedfa)
int accept_perms(optflags const &opts, NodeVec *state, perms_t &perms,
bool filedfa)
{
int error = 0;
perms_t exact;
// size of vector needs to be number of bits in the data type
// being used for the permission set.
std::vector<int> priority(sizeof(perm32_t)*8, MIN_INTERNAL_PRIORITY);
perms.clear();
if (!state)
return error;
if (opts.dump & DUMP_DFA_PERMS)
cerr << "Building\n";
for (NodeVec::iterator i = state->begin(); i != state->end(); i++) {
if (!(*i)->is_type(NODE_TYPE_MATCHFLAG))
continue;
MatchFlag *match = static_cast<MatchFlag *>(*i);
if (perms.priority > match->priority)
continue;
if (perms.priority < match->priority) {
perms.clear(match->priority);
exact.clear(match->priority);
}
if (match->is_type(NODE_TYPE_EXACTMATCHFLAG)) {
/* exact match only ever happens with x */
if (filedfa &&
!is_merged_x_consistent(exact.allow, match->perms))
error = 1;
exact.allow |= match->perms;
exact.audit |= match->audit;
} else if (match->is_type(NODE_TYPE_DENYMATCHFLAG)) {
perms.deny |= match->perms;
perms.quiet |= match->audit;
} else if (dynamic_cast<PromptMatchFlag *>(match)) {
perms.prompt |= match->perms;
perms.audit |= match->audit;
} else {
if (filedfa &&
!is_merged_x_consistent(perms.allow, match->perms))
error = 1;
perms.allow |= match->perms;
perms.audit |= match->audit;
perm32_t bit = 1;
perm32_t check = match->perms | match->audit;
if (filedfa)
check &= ~ALL_AA_EXEC_TYPE;
for (int i = 0; check; i++) {
if (check & bit) {
error = pri_update_perm(opts, priority, i, match, perms, exact, filedfa);
if (error)
goto out;
}
check &= ~bit;
bit <<= 1;
}
}
if (filedfa) {
perms.allow |= exact.allow & ~(ALL_AA_EXEC_TYPE);
perms.prompt |= exact.prompt & ~(ALL_AA_EXEC_TYPE);
perms.audit |= exact.audit & ~(ALL_AA_EXEC_TYPE);
} else {
perms.allow |= exact.allow;
perms.prompt |= exact.prompt;
perms.audit |= exact.audit;
if (opts.dump & DUMP_DFA_PERMS) {
cerr << " computed: "; perms.dump(cerr); cerr << "\n";
}
if (exact.allow & AA_USER_EXEC) {
perms.allow = (exact.allow & AA_USER_EXEC_TYPE) |
(perms.allow & ~AA_USER_EXEC_TYPE);
perms.exact = AA_USER_EXEC_TYPE;
}
if (exact.allow & AA_OTHER_EXEC) {
perms.allow = (exact.allow & AA_OTHER_EXEC_TYPE) |
(perms.allow & ~AA_OTHER_EXEC_TYPE);
perms.exact |= AA_OTHER_EXEC_TYPE;
}
if (filedfa && (AA_USER_EXEC & perms.deny))
perms.deny |= AA_USER_EXEC_TYPE;
if (filedfa && (AA_OTHER_EXEC & perms.deny))
perms.deny |= AA_OTHER_EXEC_TYPE;
perms.allow &= ~perms.deny;
perms.quiet &= perms.deny;
perms.prompt &= ~perms.deny;
perms.prompt &= ~perms.allow;
out:
if (error)
fprintf(stderr, "profile has merged rule with conflicting x modifiers\n");
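The per-bit priority vector above drives how permissions from rules of different priorities combine: each permission bit remembers the priority of the rule that last contributed it, and a later match only displaces the bits it actually carries. A minimal standalone sketch of that accumulation, with hypothetical names rather than the parser's MatchFlag/perms_t types:

// Standalone sketch, assuming 32-bit permission sets; illustrates only the
// per-bit accumulation idea, not the parser's API.
#include <climits>
#include <cstdint>
#include <cstdio>
#include <vector>

struct bit_acc {
	uint32_t allow = 0, deny = 0;
	std::vector<int> prio = std::vector<int>(32, INT_MIN);

	void add(int priority, uint32_t allow_bits, uint32_t deny_bits)
	{
		for (int i = 0; i < 32; i++) {
			uint32_t bit = 1u << i;
			if (!((allow_bits | deny_bits) & bit))
				continue;	/* rule says nothing about this bit */
			if (priority < prio[i])
				continue;	/* higher-priority owner keeps the bit */
			if (priority > prio[i]) {
				/* new owner: clear only this bit, not the whole set */
				allow &= ~bit;
				deny &= ~bit;
				prio[i] = priority;
			}
			allow |= allow_bits & bit;
			deny |= deny_bits & bit;
		}
	}
};

int main()
{
	bit_acc acc;
	acc.add(0, 0x3, 0x0);	/* lower priority allows bits 0 and 1 */
	acc.add(1, 0x0, 0x2);	/* higher priority denies only bit 1 */
	printf("allow=0x%x deny=0x%x\n", acc.allow & ~acc.deny, acc.deny);
	return 0;
}

Bit 0 survives at the lower priority while bit 1 is taken over by the higher-priority deny, so the result is allow=0x1 deny=0x2.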

View File

@@ -52,37 +52,37 @@ ostream &operator<<(ostream &os, State &state);
class perms_t {
public:
perms_t(void): priority(INT_MIN), allow(0), deny(0), prompt(0), audit(0), quiet(0), exact(0) { };
perms_t(void): allow(0), deny(0), prompt(0), audit(0), quiet(0), exact(0) { };
bool is_accept(void) { return (allow | deny | prompt | audit | quiet); }
void dump_header(ostream &os)
{
os << "priority (allow/deny/prompt/audit/quiet)";
os << "(allow/deny/prompt/audit/quiet)";
}
void dump(ostream &os)
{
os << " " << priority << " (0x " << hex
os << "(0x " << hex
<< allow << "/" << deny << "/" << "/" << prompt << "/" << audit << "/" << quiet
<< ')' << dec;
}
void clear(void) {
priority = INT_MIN;
allow = deny = prompt = audit = quiet = exact = 0;
}
void clear(int p) {
priority = p;
allow = deny = prompt = audit = quiet = exact = 0;
void clear_bits(perm32_t bits)
{
allow &= ~bits;
deny &= ~bits;
prompt &= ~bits;
audit &= ~bits;
quiet &= ~bits;
exact &= ~bits;
}
void add(perms_t &rhs, bool filedfa)
{
if (priority > rhs.priority)
return;
if (priority < rhs.priority) {
*this = rhs;
return;
} //else if (rhs.priority == priority) {
deny |= rhs.deny;
if (filedfa && !is_merged_x_consistent(allow & ALL_USER_EXEC,
@@ -131,12 +131,23 @@ public:
*/
}
/* returns true if perm is no longer accept */
int apply_and_clear_deny(void)
{
if (deny) {
allow &= ~deny;
exact &= ~deny;
prompt &= ~deny;
quiet &= deny;
/* don't change audit or quiet based on clearing
* deny at this stage. This was made unique in
* accept_perms, and the info about whether
* we are auditing or quieting based on the explicit
* deny has been discarded and can only be inferred.
* But we know it is correct from accept_perms()
* audit &= deny;
* quiet &= deny;
*/
deny = 0;
return !is_accept();
}
@@ -145,8 +156,6 @@ public:
bool operator<(perms_t const &rhs)const
{
if (priority < rhs.priority)
return priority < rhs.priority;
if (allow < rhs.allow)
return allow < rhs.allow;
if (deny < rhs.deny)
@@ -158,11 +167,11 @@ public:
return quiet < rhs.quiet;
}
int priority;
perm32_t allow, deny, prompt, audit, quiet, exact;
};
int accept_perms(NodeVec *state, perms_t &perms, bool filedfa);
int accept_perms(optflags const &opts, NodeVec *state, perms_t &perms,
bool filedfa);
/*
* ProtoState - NodeSet and ancillery information used to create a state
@@ -226,7 +235,8 @@ struct DiffDag {
*/
class State {
public:
State(int l, ProtoState &n, State *other, bool filedfa):
State(optflags const &opts, int l, ProtoState &n, State *other,
bool filedfa):
label(l), flags(0), idx(0), perms(), trans()
{
int error;
@@ -239,7 +249,7 @@ public:
proto = n;
/* Compute permissions associated with the State. */
error = accept_perms(n.anodes, perms, filedfa);
error = accept_perms(opts, n.anodes, perms, filedfa);
if (error) {
//cerr << "Failing on accept perms " << error << "\n";
throw error;
@@ -265,7 +275,7 @@ public:
ostream &dump(ostream &os)
{
cerr << *this << "\n";
os << *this << "\n";
for (StateTrans::iterator i = trans.begin(); i != trans.end(); i++) {
os << " " << i->first.c << " -> " << *i->second << "\n";
}
@@ -282,9 +292,9 @@ public:
{
accept1 = perms.allow;
if (prompt && prompt_compat_mode == PROMPT_COMPAT_DEV)
accept2 = PACK_AUDIT_CTL(perms.prompt, perms.quiet & perms.deny);
accept2 = PACK_AUDIT_CTL(perms.prompt, perms.quiet);
else
accept2 = PACK_AUDIT_CTL(perms.audit, perms.quiet & perms.deny);
accept2 = PACK_AUDIT_CTL(perms.audit, perms.quiet);
accept3 = perms.prompt;
}
@@ -338,12 +348,16 @@ public:
}
};
typedef map<const State *, size_t> Renumber_Map;
/* Transitions in the DFA. */
class DFA {
void dump_node_to_dfa(void);
State *add_new_state(NodeSet *nodes, State *other);
State *add_new_state(NodeSet *anodes, NodeSet *nnodes, State *other);
void update_state_transitions(State *state);
State *add_new_state(optflags const &opts, NodeSet *nodes,
State *other);
State *add_new_state(optflags const &opts,NodeSet *anodes,
NodeSet *nnodes, State *other);
void update_state_transitions(optflags const &opts, State *state);
void process_work_queue(const char *header, optflags const &);
void dump_diff_chain(ostream &os, map<State *, Partition> &relmap,
Partition &chain, State *state,
@@ -374,7 +388,7 @@ public:
void undiff_encode(void);
void dump_diff_encode(ostream &os);
void dump(ostream &os);
void dump(ostream &os, Renumber_Map *renum);
void dump_dot_graph(ostream &os);
void dump_uniq_perms(const char *s);

View File

@@ -182,6 +182,8 @@ struct aa_perms compute_perms_entry(uint32_t accept1, uint32_t accept2,
perms.prompt = dfa_user_allow(accept3);
perms.audit = dfa_user_audit(accept1, accept2);
perms.quiet = dfa_user_quiet(accept1, accept2);
if (accept1 & AA_COMPAT_CONT_MATCH)
perms.allow |= AA_CONT_MATCH;
/*
* This mapping is convoluted due to history.

View File

@@ -247,8 +247,8 @@ static struct mnt_keyword_table mnt_opts_table[] = {
{"nodev", MS_NODEV, 0},
{"exec", 0, MS_NOEXEC},
{"noexec", MS_NOEXEC, 0},
{"sync", MS_SYNC, 0},
{"async", 0, MS_SYNC},
{"sync", MS_SYNCHRONOUS, 0},
{"async", 0, MS_SYNCHRONOUS},
{"remount", MS_REMOUNT, 0},
{"mand", MS_MAND, 0},
{"nomand", 0, MS_MAND},
@@ -313,10 +313,16 @@ static struct mnt_keyword_table mnt_conds_table[] = {
static ostream &dump_flags(ostream &os,
pair <unsigned int, unsigned int> flags)
{
bool is_first = true;
for (int i = 0; mnt_opts_table[i].keyword; i++) {
if ((flags.first & mnt_opts_table[i].set) ||
(flags.second & mnt_opts_table[i].clear))
(flags.second & mnt_opts_table[i].clear)) {
if (!is_first) {
os << ", ";
}
is_first = false;
os << mnt_opts_table[i].keyword;
}
}
return os;
}
@@ -349,7 +355,8 @@ int is_valid_mnt_cond(const char *name, int src)
static unsigned int extract_flags(struct value_list **list, unsigned int *inv)
{
unsigned int flags = 0, invflags = 0;
*inv = 0;
if (inv)
*inv = 0;
struct value_list *entry, *tmp, *prev = NULL;
list_for_each_safe(*list, entry, tmp) {
@@ -362,11 +369,7 @@ static unsigned int extract_flags(struct value_list **list, unsigned int *inv)
" => req: 0x%x inv: 0x%x\n",
entry->value, mnt_opts_table[i].set,
mnt_opts_table[i].clear, flags, invflags);
if (prev)
prev->next = tmp;
if (entry == *list)
*list = tmp;
entry->next = NULL;
list_remove_at(*list, prev, entry);
free_value_list(entry);
} else
prev = entry;
@@ -677,7 +680,7 @@ int mnt_rule::cmp(rule_t const &rhs) const {
return cmp_vec_int(opt_flagsv, rhs_mnt.opt_flagsv);
}
static int build_mnt_flags(char *buffer, int size, unsigned int flags,
static bool build_mnt_flags(char *buffer, int size, unsigned int flags,
unsigned int opt_flags)
{
char *p = buffer;
@@ -687,8 +690,8 @@ static int build_mnt_flags(char *buffer, int size, unsigned int flags,
/* all flags are optional */
len = snprintf(p, size, "%s", default_match_pattern);
if (len < 0 || len >= size)
return FALSE;
return TRUE;
return false;
return true;
}
for (i = 0; i <= 31; ++i) {
if ((opt_flags) & (1 << i))
@@ -699,7 +702,7 @@ static int build_mnt_flags(char *buffer, int size, unsigned int flags,
continue;
if (len < 0 || len >= size)
return FALSE;
return false;
p += len;
size -= len;
}
@@ -710,15 +713,15 @@ static int build_mnt_flags(char *buffer, int size, unsigned int flags,
* like the empty string
*/
if (size < 9)
return FALSE;
return false;
strcpy(p, "(\\xfe|)");
}
return TRUE;
return true;
}
static int build_mnt_opts(std::string& buffer, struct value_list *opts)
static bool build_mnt_opts(std::string& buffer, struct value_list *opts)
{
struct value_list *ent;
pattern_t ptype;
@@ -726,19 +729,19 @@ static int build_mnt_opts(std::string& buffer, struct value_list *opts)
if (!opts) {
buffer.append(default_match_pattern);
return TRUE;
return true;
}
list_for_each(opts, ent) {
ptype = convert_aaregex_to_pcre(ent->value, 0, glob_default, buffer, &pos);
if (ptype == ePatternInvalid)
return FALSE;
return false;
if (ent->next)
buffer.append(",");
}
return TRUE;
return true;
}
void mnt_rule::warn_once(const char *name)

View File

@@ -35,7 +35,7 @@
#define MS_DEV 0
#define MS_NOEXEC (1 << 3)
#define MS_EXEC 0
#define MS_SYNC (1 << 4)
#define MS_SYNCHRONOUS (1 << 4)
#define MS_ASYNC 0
#define MS_REMOUNT (1 << 5)
#define MS_MAND (1 << 6)
@@ -78,7 +78,7 @@
#define MS_RSHARED (MS_SHARED | MS_REC)
#define MS_ALL_FLAGS (MS_RDONLY | MS_NOSUID | MS_NODEV | MS_NOEXEC | \
MS_SYNC | MS_REMOUNT | MS_MAND | MS_DIRSYNC | \
MS_SYNCHRONOUS | MS_REMOUNT | MS_MAND | MS_DIRSYNC | \
MS_NOSYMFOLLOW | \
MS_NOATIME | MS_NODIRATIME | MS_BIND | MS_RBIND | \
MS_MOVE | MS_VERBOSE | MS_ACL | \
@@ -108,7 +108,13 @@
#define MS_MOVE_FLAGS (MS_MOVE)
#define MS_CMDS (MS_MOVE | MS_REMOUNT | MS_BIND | MS_RBIND | MS_MAKE_CMDS)
#define MS_REMOUNT_FLAGS (MS_ALL_FLAGS & ~(MS_CMDS & ~MS_REMOUNT & ~MS_BIND & ~MS_RBIND))
/*
* This allows MS_MAKE_CMDS, by design: while remount and make-* shouldn't be
* used together, real-world applications do use them together, and the Linux
* kernel ignores the make-* flags when doing a remount instead of returning
* EINVAL. See https://bugs.launchpad.net/apparmor/+bug/2091424 for an example.
*/
#define MS_REMOUNT_FLAGS (MS_ALL_FLAGS & ~MS_MOVE_FLAGS)
#define MS_NEW_FLAGS (MS_ALL_FLAGS & ~MS_CMDS)
#define MNT_SRC_OPT 1
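As a quick check of the arithmetic, the new definition only subtracts the move flags from the full set, so the make-* command bits stay inside the remount mask. A small self-contained sketch with hypothetical flag values (not the real MS_* constants):

// Hypothetical flag values, only to illustrate the mask arithmetic above.
#include <cstdio>

#define F_MOVE		(1u << 0)
#define F_REMOUNT	(1u << 1)
#define F_MAKE_SHARED	(1u << 2)	/* stand-in for the make-* commands */
#define F_ALL		(F_MOVE | F_REMOUNT | F_MAKE_SHARED)
#define F_REMOUNT_MASK	(F_ALL & ~F_MOVE)

int main()
{
	unsigned int request = F_REMOUNT | F_MAKE_SHARED;
	/* nothing outside the mask is requested, so this is accepted */
	printf("accepted: %s\n", (request & ~F_REMOUNT_MASK) ? "no" : "yes");
	return 0;
}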

View File

@@ -352,10 +352,41 @@ bool parse_port_number(const char *port_entry, uint16_t *port) {
return false;
}
bool parse_range(const char *range, uint16_t *from, uint16_t *to)
{
char *range_tmp = strdup(range);
char *dash = strchr(range_tmp, '-');
bool ret = false;
if (dash == NULL)
goto out;
*dash = '\0';
if (parse_port_number(range_tmp, from)) {
if (parse_port_number(dash + 1, to)) {
if (*from > *to) {
goto out;
}
ret = true;
goto out;
}
}
out:
free(range_tmp);
return ret;
}
bool network_rule::parse_port(ip_conds &entry)
{
entry.is_port = true;
return parse_port_number(entry.sport, &entry.port);
if (parse_range(entry.sport, &entry.from_port, &entry.to_port))
return true;
if (parse_port_number(entry.sport, &entry.from_port)) {
/* if range is not used, from and to have the same value */
entry.to_port = entry.from_port;
return true;
}
return false;
}
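The same from-to handling can be sketched outside the parser: split at the dash, parse both halves, and fall back to a single port as a degenerate range where from and to are equal. Hypothetical helpers below, not the parser's parse_port_number/parse_range themselves:

// Standalone sketch of the port/range parsing shape, hypothetical helpers.
#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <cstring>

static bool parse_port_num(const char *s, uint16_t *out)
{
	char *end;
	unsigned long v = strtoul(s, &end, 10);
	if (end == s || *end != '\0' || v > 65535)
		return false;
	*out = (uint16_t)v;
	return true;
}

static bool parse_port_or_range(const char *s, uint16_t *from, uint16_t *to)
{
	char *tmp = strdup(s);
	char *dash = strchr(tmp, '-');
	bool ok = false;

	if (dash) {
		*dash = '\0';
		ok = parse_port_num(tmp, from) &&
		     parse_port_num(dash + 1, to) && *from <= *to;
	} else if (parse_port_num(tmp, from)) {
		*to = *from;	/* single port: degenerate range */
		ok = true;
	}
	free(tmp);
	return ok;
}

int main()
{
	uint16_t from, to;
	if (parse_port_or_range("1024-2048", &from, &to))
		printf("range %u-%u\n", from, to);
	return 0;
}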
bool network_rule::parse_address(ip_conds &entry)
@@ -650,23 +681,23 @@ std::list<std::ostringstream> copy_streams_list(std::list<std::ostringstream> &s
return streams_copy;
}
bool network_rule::gen_ip_conds(Profile &prof, std::list<std::ostringstream> &streams, ip_conds &entry, bool is_peer, bool is_cmd)
bool network_rule::gen_ip_conds(Profile &prof, std::list<std::ostringstream> &streams, ip_conds &entry, bool is_peer, uint16_t port, bool is_port, bool is_cmd)
{
std::string buf;
perm32_t cond_perms;
std::list<std::ostringstream> ip_streams;
for (auto &oss : streams) {
if (entry.is_port && !(entry.is_ip && entry.is_none)) {
if (is_port && !(entry.is_ip && entry.is_none)) {
/* encode port type (privileged - 1, remote - 2, unprivileged - 0) */
if (!is_peer && perms & AA_NET_BIND && entry.port < IPPORT_RESERVED)
if (!is_peer && perms & AA_NET_BIND && port < IPPORT_RESERVED)
oss << "\\x01";
else if (is_peer)
oss << "\\x02";
else
oss << "\\x00";
oss << gen_port_cond(entry.port);
oss << gen_port_cond(port);
} else {
/* port type + port number */
oss << "...";
@@ -690,7 +721,7 @@ bool network_rule::gen_ip_conds(Profile &prof, std::list<std::ostringstream> &st
cond_perms = map_perms(perms);
if (!is_cmd && (label || is_peer))
cond_perms = (AA_CONT_MATCH << 1);
cond_perms = AA_COMPAT_CONT_MATCH;
for (auto &oss : streams) {
oss << "\\x00"; /* null transition */
@@ -764,68 +795,83 @@ bool network_rule::gen_net_rule(Profile &prof, u16 family, unsigned int type_mas
}
if (perms & AA_PEER_NET_PERMS) {
for (int peer_port = peer.from_port; peer_port <= peer.to_port; peer_port++) {
std::list<std::ostringstream> streams;
std::ostringstream cmd_buffer;
cmd_buffer << buffer.str();
streams.push_back(std::move(cmd_buffer));
if (!gen_ip_conds(prof, streams, peer, true, peer_port, peer.is_port, false))
return false;
for (auto &oss : streams) {
oss << "\\x" << std::setfill('0') << std::setw(2) << std::hex << CMD_ADDR;
}
for (int local_port = local.from_port; local_port <= local.to_port; local_port++) {
std::list<std::ostringstream> localstreams;
for (auto &oss : streams) {
/* we need to copy streams because each local_port should be a unique entry */
std::ostringstream local_buffer;
local_buffer << oss.str();
localstreams.push_back(std::move(local_buffer));
}
if (!gen_ip_conds(prof, localstreams, local, false, local_port, local.is_port, true))
return false;
}
}
}
for (int local_port = local.from_port; local_port <= local.to_port; local_port++) {
std::list<std::ostringstream> streams;
std::ostringstream cmd_buffer;
std::ostringstream common_buffer;
cmd_buffer << buffer.str();
streams.push_back(std::move(cmd_buffer));
if (!gen_ip_conds(prof, streams, peer, true, false))
common_buffer << buffer.str();
streams.push_back(std::move(common_buffer));
if (!gen_ip_conds(prof, streams, local, false, local_port, local.is_port, false))
return false;
for (auto &oss : streams) {
oss << "\\x" << std::setfill('0') << std::setw(2) << std::hex << CMD_ADDR;
if (perms & AA_NET_LISTEN) {
std::list<std::ostringstream> cmd_streams;
cmd_streams = copy_streams_list(streams);
for (auto &cmd_buffer : streams) {
std::ostringstream listen_buffer;
listen_buffer << cmd_buffer.str();
listen_buffer << "\\x" << std::setfill('0') << std::setw(2) << std::hex << CMD_LISTEN;
/* length of queue allowed - not used for now */
listen_buffer << "..";
buf = listen_buffer.str();
if (!prof.policy.rules->add_rule(buf.c_str(), priority,
rule_mode, map_perms(perms),
dedup_perms_rule_t::audit == AUDIT_FORCE ? map_perms(perms) : 0,
parseopts))
return false;
}
}
if (perms & AA_NET_OPT) {
std::list<std::ostringstream> cmd_streams;
cmd_streams = copy_streams_list(streams);
if (!gen_ip_conds(prof, streams, local, false, true))
return false;
}
std::list<std::ostringstream> streams;
std::ostringstream common_buffer;
common_buffer << buffer.str();
streams.push_back(std::move(common_buffer));
if (!gen_ip_conds(prof, streams, local, false, false))
return false;
if (perms & AA_NET_LISTEN) {
std::list<std::ostringstream> cmd_streams;
cmd_streams = copy_streams_list(streams);
for (auto &cmd_buffer : streams) {
std::ostringstream listen_buffer;
listen_buffer << cmd_buffer.str();
listen_buffer << "\\x" << std::setfill('0') << std::setw(2) << std::hex << CMD_LISTEN;
/* length of queue allowed - not used for now */
listen_buffer << "..";
buf = listen_buffer.str();
if (!prof.policy.rules->add_rule(buf.c_str(), priority,
rule_mode, map_perms(perms),
dedup_perms_rule_t::audit == AUDIT_FORCE ? map_perms(perms) : 0,
parseopts))
return false;
}
}
if (perms & AA_NET_OPT) {
std::list<std::ostringstream> cmd_streams;
cmd_streams = copy_streams_list(streams);
for (auto &cmd_buffer : streams) {
std::ostringstream opt_buffer;
opt_buffer << cmd_buffer.str();
opt_buffer << "\\x" << std::setfill('0') << std::setw(2) << std::hex << CMD_OPT;
/* level - not used for now */
opt_buffer << "..";
/* socket mapping - not used for now */
opt_buffer << "..";
buf = opt_buffer.str();
if (!prof.policy.rules->add_rule(buf.c_str(), priority,
rule_mode, map_perms(perms),
dedup_perms_rule_t::audit == AUDIT_FORCE ? map_perms(perms) : 0,
parseopts))
return false;
for (auto &cmd_buffer : streams) {
std::ostringstream opt_buffer;
opt_buffer << cmd_buffer.str();
opt_buffer << "\\x" << std::setfill('0') << std::setw(2) << std::hex << CMD_OPT;
/* level - not used for now */
opt_buffer << "..";
/* socket mapping - not used for now */
opt_buffer << "..";
buf = opt_buffer.str();
if (!prof.policy.rules->add_rule(buf.c_str(), priority,
rule_mode, map_perms(perms),
dedup_perms_rule_t::audit == AUDIT_FORCE ? map_perms(perms) : 0,
parseopts))
return false;
}
}
}

View File

@@ -130,7 +130,9 @@ public:
bool is_ip = false;
bool is_port = false;
uint16_t port;
uint16_t from_port = 0;
uint16_t to_port = 0;
struct ip_address ip;
bool is_none = false;
@@ -187,7 +189,7 @@ public:
}
};
bool gen_ip_conds(Profile &prof, std::list<std::ostringstream> &streams, ip_conds &entry, bool is_peer, bool is_cmd);
bool gen_ip_conds(Profile &prof, std::list<std::ostringstream> &streams, ip_conds &entry, bool is_peer, uint16_t port, bool is_port, bool is_cmd);
bool gen_net_rule(Profile &prof, u16 family, unsigned int type_mask, unsigned int protocol);
void set_netperm(unsigned int family, unsigned int type, unsigned int protocol);
void update_compat_net(void);

View File

@@ -53,12 +53,6 @@ using namespace std;
*/
extern int parser_token;
/* Arbitrary max and minimum priority that userspace can specify, internally
* we handle up to INT_MAX and INT_MIN. Do not ever allow INT_MAX, see
* note on mediates_priority
*/
#define MAX_PRIORITY 1000
#define MIN_PRIORITY -1000
#define WARN_RULE_NOT_ENFORCED 0x1
#define WARN_RULE_DOWNGRADED 0x2
@@ -185,8 +179,6 @@ struct var_string {
#define OPTION_STDOUT 4
#define OPTION_OFILE 5
#define BOOL int
extern int preprocess_only;
#define PATH_CHROOT_REL 0x1
@@ -219,13 +211,6 @@ do { \
errno = perror_error; \
} while (0)
#ifndef TRUE
#define TRUE (1)
#endif
#ifndef FALSE
#define FALSE (0)
#endif
#define MIN_PORT 0
#define MAX_PORT 65535
@@ -257,17 +242,6 @@ do { \
len; \
})
#define list_find_prev(LIST, ENTRY) \
({ \
typeof(ENTRY) tmp, prev = NULL; \
list_for_each((LIST), tmp) { \
if (tmp == (ENTRY)) \
break; \
prev = tmp; \
} \
prev; \
})
#define list_pop(LIST) \
({ \
typeof(LIST) _entry = (LIST); \
@@ -285,12 +259,6 @@ do { \
(LIST) = (ENTRY)->next; \
(ENTRY)->next = NULL; \
#define list_remove(LIST, ENTRY) \
do { \
typeof(ENTRY) prev = list_find_prev((LIST), (ENTRY)); \
list_remove_at((LIST), prev, (ENTRY)); \
} while (0)
#define DUP_STRING(orig, new, field, fail_target) \
do { \
@@ -429,10 +397,10 @@ extern const char *basedir;
#define glob_null 1
extern pattern_t convert_aaregex_to_pcre(const char *aare, int anchor, int glob,
std::string& pcre, int *first_re_pos);
extern int build_list_val_expr(std::string& buffer, struct value_list *list);
extern int convert_entry(std::string& buffer, char *entry);
extern bool build_list_val_expr(std::string& buffer, struct value_list *list);
extern bool convert_entry(std::string& buffer, char *entry);
extern int clear_and_convert_entry(std::string& buffer, char *entry);
extern int convert_range(std::string& buffer, bignum start, bignum end);
extern bool convert_range(std::string& buffer, bignum start, bignum end);
extern int process_regex(Profile *prof);
extern int post_process_entry(struct cod_entry *entry);
@@ -451,7 +419,6 @@ extern void free_var_string(struct var_string *var);
extern void warn_uppercase(void);
extern int is_blacklisted(const char *name, const char *path);
extern struct value_list *new_value_list(char *value);
extern struct value_list *dup_value_list(struct value_list *list);
extern void free_value_list(struct value_list *list);
extern void print_value_list(struct value_list *list);
extern struct cond_entry *new_cond_entry(char *name, int eq, struct value_list *list);

View File

@@ -142,8 +142,10 @@ static void process_entries(const void *nodep, VISIT value, int level unused)
}
if (dup) {
dup->alias_ignore = true;
/* adds to the front of the list, list iteratition
* will skip it
/* The original entry->next is in dup->next, so we don't lose
* any of the original elements of the linked list. Also, by
* setting dup->alias_ignore, we trigger the check at the start
* of the loop, skipping the new entry we just inserted.
*/
entry->next = dup;
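The splice described in that comment can be shown in isolation: the duplicate is inserted directly after the original entry, keeping the original next chain intact, and a flag makes the surrounding iteration skip it. A minimal sketch with hypothetical types, not the cod_entry structures themselves:

// Standalone sketch of splicing a flagged duplicate after an entry.
#include <cstdio>

struct node {
	int value;
	bool skip;
	struct node *next;
};

static void insert_dup_after(struct node *orig, struct node *dup)
{
	dup->skip = true;	/* iteration will ignore the duplicate */
	dup->next = orig->next;	/* the original chain is preserved */
	orig->next = dup;
}

int main()
{
	struct node b = {2, false, nullptr};
	struct node a = {1, false, &b};
	struct node dup = {10, false, nullptr};

	insert_dup_after(&a, &dup);
	for (struct node *e = &a; e; e = e->next)
		if (!e->skip)
			printf("%d ", e->value);
	printf("\n");	/* prints: 1 2 */
	return 0;
}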

View File

@@ -110,7 +110,12 @@ FILE *ofile = NULL;
IncludeCache_t *g_includecache;
optflags parseopts = {
.control = (optflags_t)(CONTROL_DFA_TREE_NORMAL | CONTROL_DFA_TREE_SIMPLE | CONTROL_DFA_MINIMIZE | CONTROL_DFA_DIFF_ENCODE | CONTROL_RULE_MERGE),
.control = (optflags_t)(CONTROL_DFA_TREE_NORMAL | CONTROL_DFA_TREE_SIMPLE | CONTROL_DFA_MINIMIZE | CONTROL_DFA_DIFF_ENCODE | CONTROL_RULE_MERGE |
/* TODO: remove when we have better auto
* selection on when/which explicit denies
* to remove
*/
CONTROL_DFA_FILTER_DENY),
.dump = 0,
.warn = DEFAULT_WARNINGS,
.Werror = 0

View File

@@ -202,7 +202,7 @@ static void start_include_position(const char *filename)
current_lineno = 1;
}
void push_include_stack(char *filename)
void push_include_stack(const char *filename)
{
struct include_stack_t *include = NULL;

View File

@@ -29,7 +29,7 @@ extern void parse_default_paths(void);
extern int do_include_preprocessing(char *profilename);
FILE *search_path(char *filename, char **fullpath, bool *skip);
extern void push_include_stack(char *filename);
extern void push_include_stack(const char *filename);
extern void pop_include_stack(void);
extern void reset_include_stack(const char *filename);

View File

@@ -245,7 +245,7 @@ static inline void sd_write_uint64(std::ostringstream &buf, u64 b)
static inline void sd_write_name(std::ostringstream &buf, const char *name)
{
PDEBUG("Writing name '%s'\n", name);
PDEBUG("Writing name '%s'\n", name ? name : "(null)");
if (name) {
sd_write8(buf, SD_NAME);
sd_write16(buf, strlen(name) + 1);

View File

@@ -1620,7 +1620,6 @@ int main(int argc, char *argv[])
progname = argv[0];
init_base_dir();
capabilities_init();
process_early_args(argc, argv);
process_config_file(config_file);

View File

@@ -54,6 +54,9 @@ static int file_comp(const void *c1, const void *c2)
if ((*e1)->audit != (*e2)->audit)
return (*e1)->audit < (*e2)->audit ? -1 : 1;
if ((*e1)->priority != (*e2)->priority)
return (*e2)->priority - (*e1)->priority;
return strcmp((*e1)->name, (*e2)->name);
}

View File

@@ -35,6 +35,7 @@
#include <sys/apparmor_private.h>
#include <algorithm>
#include <unordered_map>
#include "capability.h"
#include "lib.h"
@@ -61,6 +62,10 @@ void *reallocarray(void *ptr, size_t nmemb, size_t size)
}
#endif
#ifndef NULL
#define NULL nullptr
#endif
int is_blacklisted(const char *name, const char *path)
{
int retval = _aa_is_blacklisted(name);
@@ -71,12 +76,7 @@ int is_blacklisted(const char *name, const char *path)
return !retval ? 0 : 1;
}
struct keyword_table {
const char *keyword;
unsigned int token;
};
static struct keyword_table keyword_table[] = {
static const unordered_map<string, int> keyword_table = {
/* network */
{"network", TOK_NETWORK},
{"unix", TOK_UNIX},
@@ -132,11 +132,9 @@ static struct keyword_table keyword_table[] = {
{"sqpoll", TOK_SQPOLL},
{"all", TOK_ALL},
{"priority", TOK_PRIORITY},
/* terminate */
{NULL, 0}
};
static struct keyword_table rlimit_table[] = {
static const unordered_map<string, int> rlimit_table = {
{"cpu", RLIMIT_CPU},
{"fsize", RLIMIT_FSIZE},
{"data", RLIMIT_DATA},
@@ -162,37 +160,33 @@ static struct keyword_table rlimit_table[] = {
#ifdef RLIMIT_RTTIME
{"rttime", RLIMIT_RTTIME},
#endif
/* terminate */
{NULL, 0}
};
/* for alpha matches, check for keywords */
static int get_table_token(const char *name unused, struct keyword_table *table,
const char *keyword)
static int get_table_token(const char *name unused, const unordered_map<string, int> &table,
const string &keyword)
{
int i;
for (i = 0; table[i].keyword; i++) {
PDEBUG("Checking %s %s\n", name, table[i].keyword);
if (strcmp(keyword, table[i].keyword) == 0) {
PDEBUG("Found %s %s\n", name, table[i].keyword);
return table[i].token;
}
auto token_entry = table.find(keyword);
if (token_entry == table.end()) {
PDEBUG("Unable to find %s %s\n", name, keyword.c_str());
return -1;
} else {
PDEBUG("Found %s %s\n", name, keyword.c_str());
return token_entry->second;
}
PDEBUG("Unable to find %s %s\n", name, keyword);
return -1;
}
/* for alpha matches, check for keywords */
int get_keyword_token(const char *keyword)
{
return get_table_token("keyword", keyword_table, keyword);
// Can't use string_view because that requires C++17
return get_table_token("keyword", keyword_table, string(keyword));
}
int get_rlimit(const char *name)
{
return get_table_token("rlimit", rlimit_table, name);
// Can't use string_view because that requires C++17
return get_table_token("rlimit", rlimit_table, string(name));
}
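The shape of the lookup after this conversion, reduced to a standalone sketch with hypothetical table contents rather than the real token values: the linear scan over a NULL-terminated array becomes an unordered_map::find.

// Standalone sketch of the keyword-table lookup pattern above.
#include <cstdio>
#include <string>
#include <unordered_map>

static const std::unordered_map<std::string, int> demo_table = {
	{"network", 1},
	{"unix", 2},
};

static int lookup(const std::string &keyword)
{
	auto it = demo_table.find(keyword);
	return it == demo_table.end() ? -1 : it->second;
}

int main()
{
	printf("network -> %d, bogus -> %d\n",
	       lookup("network"), lookup("bogus"));
	return 0;
}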
@@ -208,55 +202,164 @@ struct capability_table {
capability_flags flags;
};
/*
* Enum for the results of adding a capability, with values assigned to match
* the int values returned by the old capable_add_cap function:
*
* -1: error
* 0: no change - capability already in table
* 1: added flag to capability in table
* 2: added new capability
*/
enum add_cap_result {
ERROR = -1, // Was only used for OOM conditions
ALREADY_EXISTS = 0,
FLAG_ADDED = 1,
CAP_ADDED = 2
};
static struct capability_table base_capability_table[] = {
/* capabilities */
#include "cap_names.h"
};
static const size_t BASE_CAP_TABLE_SIZE = sizeof(base_capability_table)/sizeof(struct capability_table);
/* terminate */
{NULL, 0, 0, CAPFLAGS_CLEAR}
class capability_lookup {
vector<capability_table> cap_table;
// Use unordered_map to avoid pulling in two map implementations
// We may want to switch to boost::multi_index to avoid duplication
unordered_map<string, capability_table&> name_cap_map;
unordered_map<unsigned int, capability_table&> int_cap_map;
private:
void add_capability_table_entry_raw(capability_table entry) {
cap_table.push_back(entry);
capability_table &entry_ref = cap_table.back();
name_cap_map.emplace(string(entry_ref.name), entry_ref);
int_cap_map.emplace(entry_ref.cap, entry_ref);
}
public:
capability_lookup() :
cap_table(vector<capability_table>()),
name_cap_map(unordered_map<string, capability_table&>(BASE_CAP_TABLE_SIZE)),
int_cap_map(unordered_map<unsigned int, capability_table&>(BASE_CAP_TABLE_SIZE)) {
cap_table.reserve(BASE_CAP_TABLE_SIZE);
for (size_t i=0; i<BASE_CAP_TABLE_SIZE; i++) {
add_capability_table_entry_raw(base_capability_table[i]);
}
}
capability_table* find_cap_entry_by_name(string const & name) const {
auto map_entry = this->name_cap_map.find(name);
if (map_entry == this->name_cap_map.end()) {
return NULL;
} else {
PDEBUG("Found %s %s\n", name.c_str(), map_entry->second.name);
return &map_entry->second;
}
}
capability_table* find_cap_entry_by_num(unsigned int cap) const {
auto map_entry = this->int_cap_map.find(cap);
if (map_entry == this->int_cap_map.end()) {
return NULL;
} else {
PDEBUG("Found %d %d\n", cap, map_entry->second.cap);
return &map_entry->second;
}
}
int name_to_capability(string const &cap) const {
auto map_entry = this->name_cap_map.find(cap);
if (map_entry == this->name_cap_map.end()) {
PDEBUG("Unable to find %s %s\n", "capability", cap.c_str());
return -1;
} else {
return map_entry->second.cap;
}
}
const char *capability_to_name(unsigned int cap) const {
auto map_entry = this->int_cap_map.find(cap);
if (map_entry == this->int_cap_map.end()) {
return "invalid-capability";
} else {
return map_entry->second.name;
}
}
int capability_backmap(unsigned int cap) const {
auto map_entry = this->int_cap_map.find(cap);
if (map_entry == this->int_cap_map.end()) {
return NO_BACKMAP_CAP;
} else {
return map_entry->second.backmap;
}
}
bool capability_in_kernel(unsigned int cap) const {
auto map_entry = this->int_cap_map.find(cap);
if (map_entry == this->int_cap_map.end()) {
return false;
} else {
return map_entry->second.flags & CAPFLAG_KERNEL_FEATURE;
}
}
void __debug_capabilities(uint64_t capset, const char *name) const {
printf("%s:", name);
for (auto it = this->cap_table.cbegin(); it != this->cap_table.cend(); it++) {
if ((1ull << it->cap) & capset)
printf (" %s", it->name);
}
printf("\n");
}
add_cap_result capable_add_cap(string const & str, unsigned int cap,
capability_flags flag) {
struct capability_table *ent = this->find_cap_entry_by_name(str);
if (ent) {
if (ent->cap != cap) {
pwarn(WARN_UNEXPECTED, "feature capability '%s:%d' does not equal expected %d. Ignoring ...\n", str.c_str(), cap, ent->cap);
/* TODO: make warn to error config */
return add_cap_result::ALREADY_EXISTS;
}
if (ent->flags & flag)
return add_cap_result::ALREADY_EXISTS;
ent->flags = (capability_flags) (ent->flags | flag);
return add_cap_result::FLAG_ADDED;
} else {
struct capability_table new_entry;
new_entry.name = strdup(str.c_str());
if (!new_entry.name) {
yyerror(_("Out of memory"));
return add_cap_result::ERROR;
}
new_entry.cap = cap;
new_entry.backmap = 0;
new_entry.flags = flag;
try {
this->add_capability_table_entry_raw(new_entry);
} catch (const std::bad_alloc &_e) {
yyerror(_("Out of memory"));
return add_cap_result::ERROR;
}
// TODO: exception catching for causes other than OOM
return add_cap_result::CAP_ADDED;
}
}
void clear_cap_flag(capability_flags flags)
{
for (auto it = this->cap_table.begin(); it != this->cap_table.end(); it++) {
PDEBUG("Clearing capability flag for capability \"%s\"\n", it->name);
it->flags = (capability_flags) (it->flags & ~flags);
}
}
};
static struct capability_table *cap_table;
static int cap_table_size;
void capabilities_init(void)
{
cap_table = (struct capability_table *) malloc(sizeof(base_capability_table));
if (!cap_table)
yyerror(_("Memory allocation error."));
memcpy(cap_table, base_capability_table, sizeof(base_capability_table));
cap_table_size = sizeof(base_capability_table)/sizeof(struct capability_table);
}
struct capability_table *find_cap_entry_by_name(const char *name)
{
int i;
for (i = 0; cap_table[i].name; i++) {
PDEBUG("Checking %s %s\n", name, cap_table[i].name);
if (strcmp(name, cap_table[i].name) == 0) {
PDEBUG("Found %s %s\n", name, cap_table[i].name);
return &cap_table[i];
}
}
return NULL;
}
struct capability_table *find_cap_entry_by_num(unsigned int cap)
{
int i;
for (i = 0; cap_table[i].name; i++) {
PDEBUG("Checking %d %d\n", cap, cap_table[i].cap);
if (cap == cap_table[i].cap) {
PDEBUG("Found %d %d\n", cap, cap_table[i].cap);
return &cap_table[i];
}
}
return NULL;
}
static capability_lookup cap_table;
/* don't mark up str with \0 */
static const char *strn_token(const char *str, size_t &len)
@@ -294,59 +397,6 @@ bool strcomp (const char *lhs, const char *rhs)
return null_strcmp(lhs, rhs) < 0;
}
/*
* Returns: -1: error
* 0: no change - capability already in table
* 1: added flag to capability in table
* 2: added new capability
*/
static int capable_add_cap(const char *str, int len, unsigned int cap,
capability_flags flag)
{
/* extract name from str so we can treat as a string */
autofree char *name = strndup(str, len);
if (!name) {
yyerror(_("Out of memory"));
return -1;
}
struct capability_table *ent = find_cap_entry_by_name(name);
if (ent) {
if (ent->cap != cap) {
pwarn(WARN_UNEXPECTED, "feature capability '%s:%d' does not equal expected %d. Ignoring ...\n", name, cap, ent->cap);
/* TODO: make warn to error config */
return 0;
}
if (ent->flags & flag)
return 0; /* no change */
ent->flags = (capability_flags) (ent->flags | flag);
return 1; /* modified */
} else {
struct capability_table *tmp;
tmp = (struct capability_table *) reallocarray(cap_table, sizeof(struct capability_table), cap_table_size+1);
if (!tmp) {
yyerror(_("Out of memory"));
/* TODO: change away from yyerror */
return -1;
}
cap_table = tmp;
ent = &cap_table[cap_table_size - 1]; /* overwrite null */
ent->name = strndup(name, len);
if (!ent->name) {
/* TODO: change away from yyerror */
yyerror(_("Out of memory"));
return -1;
}
ent->cap = cap;
ent->flags = flag;
cap_table[cap_table_size].name = NULL; /* new null */
cap_table_size++;
}
return 2; /* added */
}
bool add_cap_feature_mask(struct aa_features *features, capability_flags flags)
{
autofree char *value = NULL;
@@ -363,7 +413,8 @@ bool add_cap_feature_mask(struct aa_features *features, capability_flags flags)
for (capstr = strn_token(value, len);
capstr;
capstr = strn_token(capstr + len, len)) {
if (capable_add_cap(capstr, len, n, flags) < 0)
string capstr_as_str = string(capstr, len);
if (cap_table.capable_add_cap(capstr_as_str, n, flags) < 0)
return false;
n++;
if (len > valuelen) {
@@ -379,70 +430,32 @@ bool add_cap_feature_mask(struct aa_features *features, capability_flags flags)
void clear_cap_flag(capability_flags flags)
{
int i;
for (i = 0; cap_table[i].name; i++) {
PDEBUG("Clearing capability flag for capability \"%s\"\n", cap_table[i].name);
cap_table[i].flags = (capability_flags) (cap_table[i].flags & ~flags);
}
cap_table.clear_cap_flag(flags);
}
int name_to_capability(const char *cap)
{
struct capability_table *ent;
ent = find_cap_entry_by_name(cap);
if (ent)
return ent->cap;
PDEBUG("Unable to find %s %s\n", "capability", cap);
return -1;
return cap_table.name_to_capability(string(cap));
}
const char *capability_to_name(unsigned int cap)
{
struct capability_table *ent;
ent = find_cap_entry_by_num(cap);
if (ent)
return ent->name;
return "invalid-capability";
return cap_table.capability_to_name(cap);
}
int capability_backmap(unsigned int cap)
{
struct capability_table *ent;
ent = find_cap_entry_by_num(cap);
if (ent)
return ent->backmap;
return NO_BACKMAP_CAP;
return cap_table.capability_backmap(cap);
}
bool capability_in_kernel(unsigned int cap)
{
struct capability_table *ent;
ent = find_cap_entry_by_num(cap);
if (ent)
return ent->flags & CAPFLAG_KERNEL_FEATURE;
return false;
return cap_table.capability_in_kernel(cap);
}
void __debug_capabilities(uint64_t capset, const char *name)
{
unsigned int i;
printf("%s:", name);
for (i = 0; cap_table[i].name; i++) {
if ((1ull << cap_table[i].cap) & capset)
printf (" %s", cap_table[i].name);
}
printf("\n");
cap_table.__debug_capabilities(capset, name);
}
char *processunquoted(const char *string, int len)
@@ -1079,6 +1092,8 @@ void debug_cod_entries(struct cod_entry *list)
debug_base_perm_mask(SHIFT_TO_BASE(item->perms, AA_USER_SHIFT));
printf(":");
debug_base_perm_mask(SHIFT_TO_BASE(item->perms, AA_OTHER_SHIFT));
printf(" priority=%d ", item->priority);
if (item->name)
printf("\tName:\t(%s)\n", item->name);
else
@@ -1122,6 +1137,8 @@ bool entry_add_prefix(struct cod_entry *entry, const prefixes &p, const char *&e
else if (p.owner == 2)
entry->perms &= (AA_OTHER_PERMS | AA_SHARED_PERMS);
entry->priority = p.priority;
/* implied audit modifier */
if (p.audit == AUDIT_FORCE && (entry->rule_mode != RULE_DENY))
entry->audit = AUDIT_FORCE;
@@ -1196,37 +1213,6 @@ void free_value_list(struct value_list *list)
}
}
struct value_list *dup_value_list(struct value_list *list)
{
struct value_list *entry, *dup, *head = NULL;
char *value;
list_for_each(list, entry) {
value = NULL;
if (list->value) {
value = strdup(list->value);
if (!value)
goto fail2;
}
dup = new_value_list(value);
if (!dup)
goto fail;
if (head)
list_append(head, dup);
else
head = dup;
}
return head;
fail:
free(value);
fail2:
free_value_list(head);
return NULL;
}
void print_value_list(struct value_list *list)
{
struct value_list *entry;

View File

@@ -50,7 +50,7 @@ enum error_type {
void filter_slashes(char *path)
{
char *sptr, *dptr;
BOOL seen_slash = 0;
bool seen_slash = false;
if (!path || (strlen(path) < 2))
return;
@@ -69,7 +69,7 @@ void filter_slashes(char *path)
++sptr;
} else {
*dptr++ = *sptr++;
seen_slash = TRUE;
seen_slash = true;
}
} else {
seen_slash = 0;
@@ -111,14 +111,14 @@ pattern_t convert_aaregex_to_pcre(const char *aare, int anchor, int glob,
#define MAX_ALT_DEPTH 50
*first_re_pos = 0;
int ret = TRUE;
int ret = 1;
/* flag to indicate input error */
enum error_type error;
const char *sptr;
pattern_t ptype;
BOOL bEscape = 0; /* flag to indicate escape */
bool bEscape = false; /* flag to indicate escape */
int ingrouping = 0; /* flag to indicate {} context */
int incharclass = 0; /* flag to indicate [ ] context */
int grouping_count[MAX_ALT_DEPTH] = {0};
@@ -150,7 +150,7 @@ pattern_t convert_aaregex_to_pcre(const char *aare, int anchor, int glob,
if (bEscape) {
pcre.append("\\\\");
} else {
bEscape = TRUE;
bEscape = true;
++sptr;
continue; /*skip turning bEscape off */
} /* bEscape */
@@ -393,7 +393,7 @@ pattern_t convert_aaregex_to_pcre(const char *aare, int anchor, int glob,
break;
} /* switch (*sptr) */
bEscape = FALSE;
bEscape = false;
++sptr;
} /* while error == e_no_error && *sptr) */
@@ -419,12 +419,12 @@ pattern_t convert_aaregex_to_pcre(const char *aare, int anchor, int glob,
PERROR(_("%s: Unable to parse input line '%s'\n"),
progname, aare);
ret = FALSE;
ret = 0;
goto out;
}
out:
if (ret == FALSE)
if (ret == 0)
ptype = ePatternInvalid;
if (parseopts.dump & DUMP_DFA_RULE_EXPR)
@@ -464,7 +464,7 @@ static void warn_once_xattr(const char *name)
common_warn_once(name, "xattr attachment conditional ignored", &warned_name);
}
static int process_profile_name_xmatch(Profile *prof)
static bool process_profile_name_xmatch(Profile *prof)
{
std::string tbuf;
pattern_t ptype;
@@ -479,7 +479,7 @@ static int process_profile_name_xmatch(Profile *prof)
/* don't filter_slashes for profile names, do on attachment */
name = strdup(local_name(prof->name));
if (!name)
return FALSE;
return false;
}
filter_slashes(name);
ptype = convert_aaregex_to_pcre(name, 0, glob_default, tbuf,
@@ -491,7 +491,7 @@ static int process_profile_name_xmatch(Profile *prof)
PERROR(_("%s: Invalid profile name '%s' - bad regular expression\n"), progname, name);
if (!prof->attachment)
free(name);
return FALSE;
return false;
}
if (!prof->attachment)
@@ -506,11 +506,11 @@ static int process_profile_name_xmatch(Profile *prof)
/* build a dfa */
aare_rules *rules = new aare_rules();
if (!rules)
return FALSE;
return false;
if (!rules->add_rule(tbuf.c_str(), 0, RULE_ALLOW,
AA_MAY_EXEC, 0, parseopts)) {
delete rules;
return FALSE;
return false;
}
if (prof->altnames) {
struct alt_name *alt;
@@ -525,7 +525,7 @@ static int process_profile_name_xmatch(Profile *prof)
RULE_ALLOW, AA_MAY_EXEC,
0, parseopts)) {
delete rules;
return FALSE;
return false;
}
}
}
@@ -567,7 +567,7 @@ static int process_profile_name_xmatch(Profile *prof)
&len);
if (!rules->append_rule(tbuf.c_str(), true, true, parseopts)) {
delete rules;
return FALSE;
return false;
}
}
}
@@ -581,10 +581,10 @@ build:
prof->xmatch = rules->create_dfablob(&prof->xmatch_size, &prof->xmatch_len, prof->xmatch_perms_table, parseopts, false, false, false);
delete rules;
if (!prof->xmatch)
return FALSE;
return false;
}
return TRUE;
return true;
}
static int warn_change_profile = 1;
@@ -606,21 +606,21 @@ static bool is_change_profile_perms(perm32_t perms)
return perms & AA_CHANGE_PROFILE;
}
static int process_dfa_entry(aare_rules *dfarules, struct cod_entry *entry)
static bool process_dfa_entry(aare_rules *dfarules, struct cod_entry *entry)
{
std::string tbuf;
pattern_t ptype;
int pos;
if (!entry) /* shouldn't happen */
return TRUE;
return false;
if (!is_change_profile_perms(entry->perms))
filter_slashes(entry->name);
ptype = convert_aaregex_to_pcre(entry->name, 0, glob_default, tbuf, &pos);
if (ptype == ePatternInvalid)
return FALSE;
return false;
entry->pattern_type = ptype;
@@ -649,13 +649,13 @@ static int process_dfa_entry(aare_rules *dfarules, struct cod_entry *entry)
entry->perms & ~(AA_LINK_BITS | AA_CHANGE_PROFILE),
entry->audit == AUDIT_FORCE ? entry->perms & ~(AA_LINK_BITS | AA_CHANGE_PROFILE) : 0,
parseopts))
return FALSE;
return false;
} else if (!is_change_profile_perms(entry->perms)) {
if (!dfarules->add_rule(tbuf.c_str(), entry->priority,
entry->rule_mode, entry->perms,
entry->audit == AUDIT_FORCE ? entry->perms : 0,
parseopts))
return FALSE;
return false;
}
if (entry->perms & (AA_LINK_BITS)) {
@@ -669,7 +669,7 @@ static int process_dfa_entry(aare_rules *dfarules, struct cod_entry *entry)
filter_slashes(entry->link_name);
ptype = convert_aaregex_to_pcre(entry->link_name, 0, glob_default, lbuf, &pos);
if (ptype == ePatternInvalid)
return FALSE;
return false;
if (entry->subset)
perms |= LINK_TO_LINK_SUBSET(perms);
vec[1] = lbuf.c_str();
@@ -681,7 +681,7 @@ static int process_dfa_entry(aare_rules *dfarules, struct cod_entry *entry)
entry->rule_mode, perms,
entry->audit == AUDIT_FORCE ? perms & AA_LINK_BITS : 0,
2, vec, parseopts, false))
return FALSE;
return false;
}
if (is_change_profile_perms(entry->perms)) {
const char *vec[3];
@@ -702,7 +702,7 @@ static int process_dfa_entry(aare_rules *dfarules, struct cod_entry *entry)
if (entry->onexec) {
ptype = convert_aaregex_to_pcre(entry->onexec, 0, glob_default, xbuf, &pos);
if (ptype == ePatternInvalid)
return FALSE;
return false;
vec[0] = xbuf.c_str();
} else
/* allow change_profile for all execs */
@@ -713,14 +713,14 @@ static int process_dfa_entry(aare_rules *dfarules, struct cod_entry *entry)
if (!parse_label(&stack, &ns, &name,
tbuf.c_str(), false)) {
return FALSE;
return false;
}
if (stack) {
fprintf(stderr,
_("The current kernel does not support stacking of named transitions: %s\n"),
tbuf.c_str());
return FALSE;
return false;
}
if (ns)
@@ -734,13 +734,13 @@ static int process_dfa_entry(aare_rules *dfarules, struct cod_entry *entry)
if (!dfarules->add_rule_vec(entry->priority, entry->rule_mode,
AA_CHANGE_PROFILE | onexec_perms,
0, index - 1, &vec[1], parseopts, false))
return FALSE;
return false;
/* onexec rules - both rules are needed for onexec */
if (!dfarules->add_rule_vec(entry->priority, entry->rule_mode,
onexec_perms,
0, 1, vec, parseopts, false))
return FALSE;
return false;
/**
* pick up any exec bits, from the frontend parser, related to
@@ -750,19 +750,19 @@ static int process_dfa_entry(aare_rules *dfarules, struct cod_entry *entry)
if (!dfarules->add_rule_vec(entry->priority, entry->rule_mode,
onexec_perms, 0, index, vec,
parseopts, false))
return FALSE;
return false;
}
return TRUE;
return true;
}
int post_process_entries(Profile *prof)
bool post_process_entries(Profile *prof)
{
int ret = TRUE;
int ret = true;
struct cod_entry *entry;
list_for_each(prof->entries, entry) {
if (!process_dfa_entry(prof->dfa.rules, entry))
ret = FALSE;
ret = false;
}
return ret;
@@ -815,7 +815,7 @@ out:
return error;
}
int build_list_val_expr(std::string& buffer, struct value_list *list)
bool build_list_val_expr(std::string& buffer, struct value_list *list)
{
struct value_list *ent;
pattern_t ptype;
@@ -823,7 +823,7 @@ int build_list_val_expr(std::string& buffer, struct value_list *list)
if (!list) {
buffer.append(default_match_pattern);
return TRUE;
return true;
}
buffer.append("(");
@@ -840,12 +840,12 @@ int build_list_val_expr(std::string& buffer, struct value_list *list)
}
buffer.append(")");
return TRUE;
return true;
fail:
return FALSE;
return false;
}
int convert_entry(std::string& buffer, char *entry)
bool convert_entry(std::string& buffer, char *entry)
{
pattern_t ptype;
int pos;
@@ -853,12 +853,12 @@ int convert_entry(std::string& buffer, char *entry)
if (entry) {
ptype = convert_aaregex_to_pcre(entry, 0, glob_default, buffer, &pos);
if (ptype == ePatternInvalid)
return FALSE;
return false;
} else {
buffer.append(default_match_pattern);
}
return TRUE;
return true;
}
int clear_and_convert_entry(std::string& buffer, char *entry)
@@ -959,7 +959,7 @@ static std::string generate_regex_range(bignum start, bignum end)
return result.str();
}
int convert_range(std::string& buffer, bignum start, bignum end)
bool convert_range(std::string& buffer, bignum start, bignum end)
{
pattern_t ptype;
int pos;
@@ -969,24 +969,24 @@ int convert_range(std::string& buffer, bignum start, bignum end)
if (!regex_range.empty()) {
ptype = convert_aaregex_to_pcre(regex_range.c_str(), 0, glob_default, buffer, &pos);
if (ptype == ePatternInvalid)
return FALSE;
return false;
} else {
buffer.append(default_match_pattern);
}
return TRUE;
return true;
}
int post_process_policydb_ents(Profile *prof)
bool post_process_policydb_ents(Profile *prof)
{
for (RuleList::iterator i = prof->rule_ents.begin(); i != prof->rule_ents.end(); i++) {
if ((*i)->skip())
continue;
if ((*i)->gen_policy_re(*prof) == RULE_ERROR)
return FALSE;
return false;
}
return TRUE;
return true;
}
@@ -1093,9 +1093,21 @@ static const char *deny_file = ".*";
*
* Note: it turns out the above bug does exist for dbus rules in parsers
* that do not support priority, and we don't have a way to fix it.
* We fix it here by capping user specified priority to be < INT_MAX.
* We fix it here by capping user specified priority to be less than
* MAX_INTERNAL_PRIORITY.
*/
static int mediates_priority = INT_MAX;
static int mediates_priority = MAX_INTERNAL_PRIORITY;
/* some rule types unfortunately encoded permissions on the class byte;
* to fix the above bug, they need a different solution. The generic
* mediates rule will get encoded at the minimum priority, and then
* for every rule of those classes a mediates rule of the same priority
* will be added. This way the mediates rule never has higher priority,
* which would wipe out the rule permissions encoded on the class state,
* and it is guaranteed to have the same priority as the highest priority
* rule.
*/
static int perms_onclass_mediates_priority = MIN_INTERNAL_PRIORITY;
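The scheme from the comment can be sketched abstractly: emit the generic class marker at the lowest priority and, for each concrete rule of such a class, emit another marker at that rule's own priority, so a marker never outranks the rules it accompanies. Hypothetical names below, not the parser's add_rule interface:

// Abstract sketch of the "marker at rule priority" scheme described above.
#include <cstdio>
#include <vector>

struct emitted {
	const char *pattern;
	int priority;
};

static void emit_class_rules(const std::vector<int> &rule_priorities,
			     std::vector<emitted> &out, int min_priority)
{
	out.push_back({"class-marker", min_priority});
	for (int p : rule_priorities) {
		out.push_back({"class-marker", p});	/* same priority as the rule */
		out.push_back({"rule", p});
	}
}

int main()
{
	std::vector<emitted> out;
	emit_class_rules({0, 5}, out, -1000);
	for (auto &e : out)
		printf("%-12s priority=%d\n", e.pattern, e.priority);
	return 0;
}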
int process_profile_policydb(Profile *prof)
{
@@ -1112,7 +1124,7 @@ int process_profile_policydb(Profile *prof)
* to be supported
*/
if (features_supports_userns &&
!prof->policy.rules->add_rule(mediates_ns, mediates_priority, RULE_ALLOW, AA_MAY_READ, 0, parseopts))
!prof->policy.rules->add_rule(mediates_ns, perms_onclass_mediates_priority, RULE_ALLOW, AA_MAY_READ, 0, parseopts))
goto out;
/* don't add mediated classes to unconfined profiles */
@@ -1148,7 +1160,7 @@ int process_profile_policydb(Profile *prof)
!prof->policy.rules->add_rule(mediates_sysv_mqueue, mediates_priority, RULE_ALLOW, AA_MAY_READ, 0, parseopts))
goto out;
if (features_supports_io_uring &&
!prof->policy.rules->add_rule(mediates_io_uring, mediates_priority, RULE_ALLOW, AA_MAY_READ, 0, parseopts))
!prof->policy.rules->add_rule(mediates_io_uring, perms_onclass_mediates_priority, RULE_ALLOW, AA_MAY_READ, 0, parseopts))
goto out;
}

View File

@@ -79,7 +79,7 @@ struct var_string *split_out_var(const char *string)
{
struct var_string *n = NULL;
const char *sptr;
BOOL bEscape = 0; /* flag to indicate escape */
bool bEscape = false; /* flag to indicate escape */
if (!string) /* shouldn't happen */
return NULL;
@@ -89,15 +89,11 @@ struct var_string *split_out_var(const char *string)
while (!n && *sptr) {
switch (*sptr) {
case '\\':
if (bEscape) {
bEscape = FALSE;
} else {
bEscape = TRUE;
}
bEscape = !bEscape;
break;
case '@':
if (bEscape) {
bEscape = FALSE;
bEscape = false;
} else if (*(sptr + 1) == '{') {
const char *eptr = get_var_end(sptr + 2);
if (!eptr)
@@ -111,8 +107,7 @@ struct var_string *split_out_var(const char *string)
}
break;
default:
if (bEscape)
bEscape = FALSE;
bEscape = false;
}
sptr++;
}

View File

@@ -640,10 +640,10 @@ opt_priority: { $$ = 0; }
yyerror("invalid priority %s", $3);
free($3);
/* see note on mediates_priority */
if (tmp > MAX_PRIORITY)
yyerror("invalid priority %l > %d", tmp, MAX_PRIORITY);
if (tmp < MIN_PRIORITY)
yyerror("invalid priority %l > %d", tmp, MIN_PRIORITY);
if (tmp > MAX_POLICY_PRIORITY)
yyerror("invalid priority %ld > %d", tmp, MAX_POLICY_PRIORITY);
if (tmp < MIN_POLICY_PRIORITY)
yyerror("invalid priority %ld < %d", tmp, MIN_POLICY_PRIORITY);
$$ = tmp;
}
@@ -704,7 +704,7 @@ rules: rules opt_prefix block
if (($2).priority != 0) {
yyerror(_("priority is not allowed on rule blocks"));
}
PDEBUG("matched: %s%s%sblock\n",
PDEBUG("matched: %s%s%s%sblock\n",
$2.audit == AUDIT_FORCE ? "audit " : "",
$2.rule_mode == RULE_DENY ? "deny " : "",
$2.rule_mode == RULE_PROMPT ? "prompt " : "",

View File

@@ -24,6 +24,11 @@
* older versions
*/
#include <ostream>
#include <iostream>
using std::ostream;
using std::cerr;
#include <stdint.h>
#include <sys/apparmor.h>
@@ -65,6 +70,9 @@
#define AA_MAY_DELEGATE
#define AA_CONT_MATCH 0x08000000
// TODO: move into a reworked immunix.h that is dependent on perms.h
#define AA_COMPAT_CONT_MATCH (AA_CONT_MATCH << 1)
#define AA_MAY_STACK 0x10000000
#define AA_MAY_ONEXEC 0x20000000 /* either stack or change_profile */
#define AA_MAY_CHANGE_PROFILE 0x40000000
@@ -79,7 +87,7 @@
* - exec type - which determines how the executable name and index are used
* - flags - which modify how the destination name is applied
*/
#define AA_X_INDEX_MASK AA_INDEX_MASK
#define AA_X_INDEX_MASK 0xffffff
#define AA_X_TYPE_MASK 0x0c000000
#define AA_X_NONE AA_INDEX_NONE
@@ -93,7 +101,8 @@
typedef uint32_t perm32_t;
struct aa_perms {
class aa_perms {
public:
perm32_t allow;
perm32_t deny; /* explicit deny, or conflict if allow also set */
@@ -112,6 +121,33 @@ struct aa_perms {
uint32_t xindex;
uint32_t tag; /* tag string index, if present */
uint32_t label; /* label string index, if present */
void dump_header(ostream &os)
{
os << "(allow/deny/prompt//audit/quiet//xindex)\n";
}
void dump(ostream &os)
{
os << std::hex << "(0x" << allow << "/0x" << deny << "/0x"
<< prompt << "//0x" << audit << "/0x" << quiet
<< std::dec << "//";
if (xindex & AA_X_UNSAFE)
os << "unsafe ";
if (xindex & AA_X_TYPE_MASK) {
if (xindex & AA_X_CHILD)
os << "c";
else
os << "p";
}
if (xindex & AA_X_INHERIT)
os << "i";
if (xindex & AA_X_UNCONFINED)
os << "u";
os << (xindex & AA_X_INDEX_MASK);
os << ")";
}
};
#endif /* __AA_PERM_H */

View File

@@ -1,14 +1,14 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) YEAR Canonical Ltd
# This file is distributed under the same license as the PACKAGE package.
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
# Translations for apparmor_parser
# Copyright (C) 2024 Canonical Ltd
# This file is distributed under the same license as the AppArmor package.
# John Johansen <john.johansen@canonical.com>, 2011.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: PACKAGE VERSION\n"
"Report-Msgid-Bugs-To: apparmor@lists.ubuntu.com\n"
"POT-Creation-Date: 2020-10-14 03:51-0700\n"
"POT-Creation-Date: 2024-08-31 15:55-0700\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
@@ -86,37 +86,41 @@ msgid "Permission denied; attempted to load a profile while confined?\n"
msgstr ""
#: ../parser_interface.c:99 ../parser_interface.c:102 ../parser_interface.c:79
#: ../parser_interface.c:82
#: ../parser_interface.c:82 ../parser_interface.c:84
#, c-format
msgid "Unknown error (%d): %s\n"
msgstr ""
#: ../parser_interface.c:116 ../parser_interface.c:119 ../parser_interface.c:96
#: ../parser_interface.c:100
#: ../parser_interface.c:100 ../parser_interface.c:102
#, c-format
msgid "%s: Unable to add \"%s\". "
msgstr ""
#: ../parser_interface.c:121 ../parser_interface.c:124
#: ../parser_interface.c:101 ../parser_interface.c:105
#: ../parser_interface.c:107
#, c-format
msgid "%s: Unable to replace \"%s\". "
msgstr ""
#: ../parser_interface.c:126 ../parser_interface.c:129
#: ../parser_interface.c:106 ../parser_interface.c:110
#: ../parser_interface.c:112
#, c-format
msgid "%s: Unable to remove \"%s\". "
msgstr ""
#: ../parser_interface.c:131 ../parser_interface.c:134
#: ../parser_interface.c:111 ../parser_interface.c:115
#: ../parser_interface.c:117
#, c-format
msgid "%s: Unable to write to stdout\n"
msgstr ""
#: ../parser_interface.c:135 ../parser_interface.c:138
#: ../parser_interface.c:115 ../parser_interface.c:119
#: ../parser_interface.c:121
#, c-format
msgid "%s: Unable to write to output file\n"
msgstr ""
@@ -125,24 +129,28 @@ msgstr ""
#: ../parser_interface.c:141 ../parser_interface.c:165
#: ../parser_interface.c:118 ../parser_interface.c:142
#: ../parser_interface.c:123 ../parser_interface.c:147
#: ../parser_interface.c:125 ../parser_interface.c:149
#, c-format
msgid "%s: ASSERT: Invalid option: %d\n"
msgstr ""
#: ../parser_interface.c:147 ../parser_interface.c:150
#: ../parser_interface.c:127 ../parser_interface.c:132
#: ../parser_interface.c:134
#, c-format
msgid "Addition succeeded for \"%s\".\n"
msgstr ""
#: ../parser_interface.c:151 ../parser_interface.c:154
#: ../parser_interface.c:131 ../parser_interface.c:136
#: ../parser_interface.c:138
#, c-format
msgid "Replacement succeeded for \"%s\".\n"
msgstr ""
#: ../parser_interface.c:155 ../parser_interface.c:158
#: ../parser_interface.c:135 ../parser_interface.c:140
#: ../parser_interface.c:142
#, c-format
msgid "Removal succeeded for \"%s\".\n"
msgstr ""
@@ -154,6 +162,7 @@ msgstr ""
#: ../parser_interface.c:656 ../parser_interface.c:658
#: ../parser_interface.c:446 ../parser_interface.c:476
#: ../parser_interface.c:542
#, c-format
msgid "profile %s network rules not enforced\n"
msgstr ""
@@ -199,16 +208,18 @@ msgstr ""
#: ../parser_interface.c:839 ../parser_interface.c:831
#: ../parser_interface.c:593 ../parser_interface.c:579
#: ../parser_interface.c:673
#, c-format
msgid "%s: Unable to write entire profile entry to cache\n"
msgstr ""
#: parser_lex.l:100 parser_lex.l:163 parser_lex.l:169
#: parser_lex.l:100 parser_lex.l:163 parser_lex.l:169 parser_lex.l:192
#, c-format
msgid "Could not open '%s'"
msgstr ""
#: parser_lex.l:104 parser_lex.l:167 parser_lex.l:173 parser_lex.l:174
#: parser_lex.l:197
#, c-format
msgid "fstat failed for '%s'"
msgstr ""
@@ -223,7 +234,7 @@ msgstr ""
msgid "stat failed for '%s'"
msgstr ""
#: parser_lex.l:155 parser_lex.l:133 parser_lex.l:139
#: parser_lex.l:155 parser_lex.l:133 parser_lex.l:139 parser_lex.l:148
#, c-format
msgid "Could not open '%s' in '%s'"
msgstr ""
@@ -235,6 +246,7 @@ msgid "Found unexpected character: '%s'"
msgstr ""
#: parser_lex.l:386 parser_lex.l:418 parser_lex.l:428 parser_lex.l:474
#: parser_lex.l:516
msgid "Variable declarations do not accept trailing commas"
msgstr ""
@@ -254,7 +266,7 @@ msgid "%s: Could not allocate memory for subdomainbase mount point\n"
msgstr ""
#: ../parser_main.c:577 ../parser_main.c:616 ../parser_main.c:479
#: ../parser_main.c:1444
#: ../parser_main.c:1444 ../parser_main.c:1654
#, c-format
msgid ""
"Warning: unable to find a suitable fs in %s, is it mounted?\n"
@@ -270,7 +282,7 @@ msgid ""
msgstr ""
#: ../parser_main.c:604 ../parser_main.c:642 ../parser_main.c:505
#: ../parser_main.c:828
#: ../parser_main.c:828 ../parser_main.c:891
#, c-format
msgid ""
"%s: Warning! You've set this program setuid root.\n"
@@ -280,6 +292,7 @@ msgstr ""
#: ../parser_main.c:704 ../parser_main.c:813 ../parser_main.c:836
#: ../parser_main.c:946 ../parser_main.c:860 ../parser_main.c:1038
#: ../parser_main.c:1127
#, c-format
msgid "Error: Could not read profile %s: %s.\n"
msgstr ""
@@ -308,29 +321,36 @@ msgstr ""
#: parser_yacc.y:1278 parser_yacc.y:1288 parser_yacc.y:1382 parser_yacc.y:1460
#: parser_yacc.y:1592 parser_yacc.y:1597 parser_yacc.y:1674 parser_yacc.y:1692
#: parser_yacc.y:1699 parser_yacc.y:1748 ../network.c:315 ../af_unix.cc:194
#: ../parser_misc.c:226 ../parser_misc.c:970 parser_yacc.y:379
#: parser_yacc.y:403 parser_yacc.y:571 parser_yacc.y:581 parser_yacc.y:673
#: parser_yacc.y:744 parser_yacc.y:1073 parser_yacc.y:1160 parser_yacc.y:1169
#: parser_yacc.y:1173 parser_yacc.y:1183 parser_yacc.y:1193 parser_yacc.y:1287
#: parser_yacc.y:1365 parser_yacc.y:1561 parser_yacc.y:1569 parser_yacc.y:1619
#: parser_yacc.y:1624 parser_yacc.y:1701 parser_yacc.y:1750 ../network.cc:899
#: ../af_unix.cc:197 ../all_rule.cc:102 ../all_rule.cc:131
msgid "Memory allocation error."
msgstr ""
#: ../parser_main.c:740 ../parser_main.c:872 ../parser_main.c:757
#: ../parser_main.c:975
#: ../parser_main.c:975 ../parser_main.c:1062
#, c-format
msgid "Cached load succeeded for \"%s\".\n"
msgstr ""
#: ../parser_main.c:744 ../parser_main.c:876 ../parser_main.c:761
#: ../parser_main.c:979
#: ../parser_main.c:979 ../parser_main.c:1066
#, c-format
msgid "Cached reload succeeded for \"%s\".\n"
msgstr ""
#: ../parser_main.c:910 ../parser_main.c:1058 ../parser_main.c:967
#: ../parser_main.c:1132
#: ../parser_main.c:1132 ../parser_main.c:1221
#, c-format
msgid "%s: Errors found in file. Aborting.\n"
msgstr ""
#: ../parser_misc.c:426 ../parser_misc.c:597 ../parser_misc.c:339
#: ../parser_misc.c:532
#: ../parser_misc.c:532 ../parser_misc.c:563
msgid ""
"Uppercase qualifiers \"RWLIMX\" are deprecated, please convert to lowercase\n"
"See the apparmor.d(5) manpage for details.\n"
@@ -338,17 +358,18 @@ msgstr ""
#: ../parser_misc.c:467 ../parser_misc.c:474 ../parser_misc.c:638
#: ../parser_misc.c:645 ../parser_misc.c:380 ../parser_misc.c:387
#: ../parser_misc.c:573 ../parser_misc.c:580
#: ../parser_misc.c:573 ../parser_misc.c:580 ../parser_misc.c:604
#: ../parser_misc.c:611
msgid "Conflict 'a' and 'w' perms are mutually exclusive."
msgstr ""
#: ../parser_misc.c:491 ../parser_misc.c:662 ../parser_misc.c:404
#: ../parser_misc.c:597
#: ../parser_misc.c:597 ../parser_misc.c:628
msgid "Exec qualifier 'i' invalid, conflicting qualifier already specified"
msgstr ""
#: ../parser_misc.c:502 ../parser_misc.c:673 ../parser_misc.c:415
#: ../parser_misc.c:608
#: ../parser_misc.c:608 ../parser_misc.c:639
#, c-format
msgid ""
"Unconfined exec qualifier (%c%c) allows some dangerous environment variables "
@@ -357,14 +378,16 @@ msgstr ""
#: ../parser_misc.c:510 ../parser_misc.c:551 ../parser_misc.c:681
#: ../parser_misc.c:722 ../parser_misc.c:423 ../parser_misc.c:464
#: ../parser_misc.c:616 ../parser_misc.c:657
#: ../parser_misc.c:616 ../parser_misc.c:657 ../parser_misc.c:647
#: ../parser_misc.c:688
#, c-format
msgid "Exec qualifier '%c' invalid, conflicting qualifier already specified"
msgstr ""
#: ../parser_misc.c:537 ../parser_misc.c:545 ../parser_misc.c:708
#: ../parser_misc.c:716 ../parser_misc.c:450 ../parser_misc.c:458
#: ../parser_misc.c:643 ../parser_misc.c:651
#: ../parser_misc.c:643 ../parser_misc.c:651 ../parser_misc.c:674
#: ../parser_misc.c:682
#, c-format
msgid "Exec qualifier '%c%c' invalid, conflicting qualifier already specified"
msgstr ""
@@ -376,7 +399,7 @@ msgid "Internal: unexpected mode character '%c' in input"
msgstr ""
#: ../parser_misc.c:615 ../parser_misc.c:786 ../parser_misc.c:528
#: ../parser_misc.c:721
#: ../parser_misc.c:721 ../parser_misc.c:752
#, c-format
msgid "Internal error generated invalid perm 0x%llx\n"
msgstr ""
@@ -388,12 +411,12 @@ msgid "AppArmor parser error: %s\n"
msgstr ""
#: ../parser_merge.c:92 ../parser_merge.c:91 ../parser_merge.c:83
#: ../parser_merge.c:71
#: ../parser_merge.c:71 ../parser_merge.c:74
msgid "Couldn't merge entries. Out of Memory\n"
msgstr ""
#: ../parser_merge.c:111 ../parser_merge.c:113 ../parser_merge.c:105
#: ../parser_merge.c:93
#: ../parser_merge.c:93 ../parser_merge.c:97
#, c-format
msgid "profile %s: has merged rule %s with conflicting x modifiers\n"
msgstr ""
@@ -403,28 +426,34 @@ msgid "Profile attachment must begin with a '/'."
msgstr ""
#: parser_yacc.y:260 parser_yacc.y:302 parser_yacc.y:348 parser_yacc.y:407
#: parser_yacc.y:446
msgid ""
"Profile names must begin with a '/', namespace or keyword 'profile' or 'hat'."
msgstr ""
#: parser_yacc.y:296 parser_yacc.y:338 parser_yacc.y:384 parser_yacc.y:449
#: parser_yacc.y:487
#, c-format
msgid "Failed to create alias %s -> %s\n"
msgstr ""
#: parser_yacc.y:417 parser_yacc.y:460 parser_yacc.y:506 parser_yacc.y:581
#: ../profile.h:272
msgid "Profile flag chroot_relative conflicts with namespace_relative"
msgstr ""
#: parser_yacc.y:421 parser_yacc.y:464 parser_yacc.y:510 parser_yacc.y:585
#: ../profile.h:276
msgid "Profile flag mediate_deleted conflicts with delegate_deleted"
msgstr ""
#: parser_yacc.y:424 parser_yacc.y:467 parser_yacc.y:513 parser_yacc.y:588
#: ../profile.h:279
msgid "Profile flag attach_disconnected conflicts with no_attach_disconnected"
msgstr ""
#: parser_yacc.y:427 parser_yacc.y:470 parser_yacc.y:516 parser_yacc.y:591
#: ../profile.h:282
msgid "Profile flag chroot_attach conflicts with chroot_no_attach"
msgstr ""
@@ -433,12 +462,13 @@ msgid "Profile flag 'debug' is no longer valid."
msgstr ""
#: parser_yacc.y:463 parser_yacc.y:506 parser_yacc.y:552 parser_yacc.y:629
#: ../profile.h:220
#, c-format
msgid "Invalid profile flag: %s."
msgstr ""
#: parser_yacc.y:498 parser_yacc.y:520 parser_yacc.y:548 parser_yacc.y:594
#: parser_yacc.y:673
#: parser_yacc.y:673 parser_yacc.y:687
msgid "Assert: `rule' returned NULL."
msgstr ""
@@ -464,55 +494,66 @@ msgid "Assert: `network_rule' return invalid protocol."
msgstr ""
#: parser_yacc.y:649 parser_yacc.y:696 parser_yacc.y:786 parser_yacc.y:867
#: parser_yacc.y:776
msgid "Assert: `change_profile' returned NULL."
msgstr ""
#: parser_yacc.y:680 parser_yacc.y:720 parser_yacc.y:810 parser_yacc.y:905
#: parser_yacc.y:814
msgid "Assert: 'hat rule' returned NULL."
msgstr ""
#: parser_yacc.y:689 parser_yacc.y:729 parser_yacc.y:819 parser_yacc.y:914
#: parser_yacc.y:823
msgid "Assert: 'local_profile rule' returned NULL."
msgstr ""
#: parser_yacc.y:824 parser_yacc.y:885 parser_yacc.y:992 parser_yacc.y:1077
#: ../cond_expr.cc:36
#, c-format
msgid "Unset boolean variable %s used in if-expression"
msgstr ""
#: parser_yacc.y:882 parser_yacc.y:986 parser_yacc.y:1092 parser_yacc.y:1181
#: parser_yacc.y:1083
msgid "unsafe rule missing exec permissions"
msgstr ""
#: parser_yacc.y:901 parser_yacc.y:954 parser_yacc.y:1060 parser_yacc.y:1148
#: parser_yacc.y:1050
msgid "subset can only be used with link rules."
msgstr ""
#: parser_yacc.y:903 parser_yacc.y:956 parser_yacc.y:1062 parser_yacc.y:1150
#: parser_yacc.y:1052
msgid "link and exec perms conflict on a file rule using ->"
msgstr ""
#: parser_yacc.y:905 parser_yacc.y:958 parser_yacc.y:1064 parser_yacc.y:1152
#: parser_yacc.y:1054
msgid "link perms are not allowed on a named profile transition.\n"
msgstr ""
#: parser_yacc.y:921 parser_yacc.y:1003 parser_yacc.y:1109 parser_yacc.y:1198
#: parser_yacc.y:1100
#, c-format
msgid "missing an end of line character? (entry: %s)"
msgstr ""
#: parser_yacc.y:975 parser_yacc.y:985 parser_yacc.y:1057 parser_yacc.y:1067
#: parser_yacc.y:1145 parser_yacc.y:1155 parser_yacc.y:1234 parser_yacc.y:1244
#: ../network.cc:484
msgid "Invalid network entry."
msgstr ""
#: parser_yacc.y:1039 parser_yacc.y:1048 parser_yacc.y:1254 parser_yacc.y:1510
#: parser_yacc.y:1617
#: parser_yacc.y:1617 parser_yacc.y:1644
#, c-format
msgid "Invalid capability %s."
msgstr ""
#: parser_yacc.y:1066 parser_yacc.y:1269 parser_yacc.y:1525 parser_yacc.y:1637
#: parser_yacc.y:1664
#, c-format
msgid "AppArmor parser error for %s%s%s at line %d: %s\n"
msgstr ""
@@ -560,13 +601,13 @@ msgid "%s: Unable to parse input line '%s'\n"
msgstr ""
#: ../parser_regex.c:397 ../parser_regex.c:405 ../parser_regex.c:421
#: ../parser_regex.c:487
#: ../parser_regex.c:487 ../parser_regex.c:491
#, c-format
msgid "%s: Invalid profile name '%s' - bad regular expression\n"
msgstr ""
#: ../parser_policy.c:202 ../parser_policy.c:402 ../parser_policy.c:375
#: ../parser_policy.c:383
#: ../parser_policy.c:383 ../parser_policy.c:231
#, c-format
msgid "ERROR merging rules for profile %s, failed to load\n"
msgstr ""
@@ -580,19 +621,19 @@ msgid ""
msgstr ""
#: ../parser_policy.c:279 ../parser_policy.c:359 ../parser_policy.c:332
#: ../parser_policy.c:340
#: ../parser_policy.c:340 ../parser_policy.c:193
#, c-format
msgid "ERROR processing regexs for profile %s, failed to load\n"
msgstr ""
#: ../parser_policy.c:306 ../parser_policy.c:389 ../parser_policy.c:362
#: ../parser_policy.c:370
#: ../parser_policy.c:370 ../parser_policy.c:218
#, c-format
msgid "ERROR expanding variables for profile %s, failed to load\n"
msgstr ""
#: ../parser_policy.c:390 ../parser_policy.c:382 ../parser_policy.c:355
#: ../parser_policy.c:363
#: ../parser_policy.c:363 ../profile.cc:366
#, c-format
msgid "ERROR adding hat access rule for profile %s\n"
msgstr ""
@@ -622,7 +663,7 @@ msgstr ""
msgid "%s: Errors found in combining rules postprocessing. Aborting.\n"
msgstr ""
#: parser_lex.l:180 parser_lex.l:186 parser_lex.l:187
#: parser_lex.l:180 parser_lex.l:186 parser_lex.l:187 parser_lex.l:211
#, c-format
msgid "Could not process include directory '%s' in '%s'"
msgstr ""
@@ -634,6 +675,8 @@ msgstr ""
#: ../parser_main.c:1115 ../parser_main.c:1132 ../parser_main.c:1024
#: ../parser_main.c:1041 ../parser_main.c:1332 ../parser_main.c:1354
#: ../parser_misc.c:280 ../parser_misc.c:299 ../parser_misc.c:308
#: ../parser_main.c:1511 ../parser_main.c:1535 ../parser_misc.c:310
#: ../parser_misc.c:329 ../parser_misc.c:338
msgid "Out of memory"
msgstr ""
@@ -682,20 +725,20 @@ msgstr ""
msgid "owner prefix not allow on capability rules"
msgstr ""
#: parser_yacc.y:1357 parser_yacc.y:1613 parser_yacc.y:1722
#: parser_yacc.y:1357 parser_yacc.y:1613 parser_yacc.y:1722 parser_yacc.y:1724
#, c-format
msgid "invalid mount conditional %s%s"
msgstr ""
#: parser_yacc.y:1374 parser_yacc.y:1628 parser_yacc.y:1737
#: parser_yacc.y:1374 parser_yacc.y:1628 parser_yacc.y:1737 parser_yacc.y:1739
msgid "bad mount rule"
msgstr ""
#: parser_yacc.y:1381 parser_yacc.y:1635 parser_yacc.y:1744
#: parser_yacc.y:1381 parser_yacc.y:1635 parser_yacc.y:1744 parser_yacc.y:1746
msgid "mount point conditions not currently supported"
msgstr ""
#: parser_yacc.y:1398 parser_yacc.y:1650 parser_yacc.y:1759
#: parser_yacc.y:1398 parser_yacc.y:1650 parser_yacc.y:1759 parser_yacc.y:1761
#, c-format
msgid "invalid pivotroot conditional '%s'"
msgstr ""
@@ -712,11 +755,13 @@ msgid "%s: Regex grouping error: Exceeded maximum nesting of {}\n"
msgstr ""
#: ../parser_policy.c:366 ../parser_policy.c:339 ../parser_policy.c:347
#: ../parser_policy.c:200
#, c-format
msgid "ERROR processing policydb rules for profile %s, failed to load\n"
msgstr ""
#: ../parser_policy.c:396 ../parser_policy.c:369 ../parser_policy.c:377
#: ../parser_policy.c:225
#, c-format
msgid "ERROR replacing aliases for profile %s, failed to load\n"
msgstr ""
@@ -741,7 +786,7 @@ msgstr ""
msgid "Internal: unexpected %s mode character '%c' in input"
msgstr ""
#: ../parser_misc.c:599 ../parser_misc.c:792
#: ../parser_misc.c:599 ../parser_misc.c:792 ../parser_misc.c:823
#, c-format
msgid "Internal error generated invalid %s perm 0x%x\n"
msgstr ""
@@ -766,16 +811,16 @@ msgstr ""
msgid "owner prefix not allowed on unix rules"
msgstr ""
#: parser_yacc.y:794 parser_yacc.y:885
#: parser_yacc.y:794 parser_yacc.y:885 ../all_rule.cc:141
msgid "owner prefix not allowed on capability rules"
msgstr ""
#: parser_yacc.y:1293 parser_yacc.y:1377
#: parser_yacc.y:1293 parser_yacc.y:1377 parser_yacc.y:1282
#, c-format
msgid "dbus rule: invalid conditional group %s=()"
msgstr ""
#: parser_yacc.y:1371 parser_yacc.y:1455
#: parser_yacc.y:1371 parser_yacc.y:1455 parser_yacc.y:1360
#, c-format
msgid "unix rule: invalid conditional group %s=()"
msgstr ""
@@ -785,183 +830,183 @@ msgstr ""
msgid "%s: Regex error: trailing '\\' escape character\n"
msgstr ""
#: ../parser_common.c:112
#: ../parser_common.c:112 ../parser_common.c:134
#, c-format
msgid "%s from %s (%s%sline %d): %s"
msgstr ""
#: ../parser_common.c:113
#: ../parser_common.c:113 ../parser_common.c:135
msgid "Warning converted to Error"
msgstr ""
#: ../parser_common.c:113
#: ../parser_common.c:113 ../parser_common.c:135
msgid "Warning"
msgstr ""
#: ../parser_interface.c:524
#: ../parser_interface.c:524 ../parser_interface.c:618
#, c-format
msgid "Unable to open stdout - %s\n"
msgstr ""
#: ../parser_interface.c:533
#: ../parser_interface.c:533 ../parser_interface.c:627
#, c-format
msgid "Unable to open output file - %s\n"
msgstr ""
#: parser_lex.l:326
#: parser_lex.l:326 parser_lex.l:363
msgid "Failed to process filename\n"
msgstr ""
#: parser_lex.l:720
#: parser_lex.l:720 parser_lex.l:799
#, c-format
msgid "Lexer found unexpected character: '%s' (0x%x) in state: %s"
msgstr ""
#: ../parser_main.c:915
#: ../parser_main.c:915 ../parser_main.c:1002
#, c-format
msgid "Unable to print the cache directory: %m\n"
msgstr ""
#: ../parser_main.c:951
#: ../parser_main.c:951 ../parser_main.c:1038
#, c-format
msgid "Error: Could not load profile %s: %s\n"
msgstr ""
#: ../parser_main.c:961
#: ../parser_main.c:961 ../parser_main.c:1048
#, c-format
msgid "Error: Could not replace profile %s: %s\n"
msgstr ""
#: ../parser_main.c:966
#: ../parser_main.c:966 ../parser_main.c:1053
#, c-format
msgid "Error: Invalid load option specified: %d\n"
msgstr ""
#: ../parser_main.c:1077
#: ../parser_main.c:1077 ../parser_main.c:1166
#, c-format
msgid "Could not get cachename for '%s'\n"
msgstr ""
#: ../parser_main.c:1434
#: ../parser_main.c:1434 ../parser_main.c:1644
msgid "Kernel features abi not found"
msgstr ""
#: ../parser_main.c:1438
#: ../parser_main.c:1438 ../parser_main.c:1648
msgid "Failed to add kernel capabilities to known capabilities set"
msgstr ""
#: ../parser_main.c:1465
#: ../parser_main.c:1465 ../parser_main.c:1675
#, c-format
msgid "Failed to clear cache files (%s): %s\n"
msgstr ""
#: ../parser_main.c:1474
#: ../parser_main.c:1474 ../parser_main.c:1684
msgid ""
"The --create-cache-dir option is deprecated. Please use --write-cache.\n"
msgstr ""
#: ../parser_main.c:1479
#: ../parser_main.c:1479 ../parser_main.c:1689
#, c-format
msgid "Failed setting up policy cache (%s): %s\n"
msgstr ""
#: ../parser_misc.c:904
#: ../parser_misc.c:904 ../parser_misc.c:935
#, c-format
msgid "Namespace not terminated: %s\n"
msgstr ""
#: ../parser_misc.c:906
#: ../parser_misc.c:906 ../parser_misc.c:937
#, c-format
msgid "Empty namespace: %s\n"
msgstr ""
#: ../parser_misc.c:908
#: ../parser_misc.c:908 ../parser_misc.c:939
#, c-format
msgid "Empty named transition profile name: %s\n"
msgstr ""
#: ../parser_misc.c:910
#: ../parser_misc.c:910 ../parser_misc.c:941
#, c-format
msgid "Unknown error while parsing label: %s\n"
msgstr ""
#: parser_yacc.y:306
#: parser_yacc.y:306 parser_yacc.y:334
msgid "Failed to setup default policy feature abi"
msgstr ""
#: parser_yacc.y:308
#: parser_yacc.y:308 parser_yacc.y:336
#, c-format
msgid ""
"%s: File '%s' missing feature abi, falling back to default policy feature "
"abi\n"
msgstr ""
#: parser_yacc.y:313
#: parser_yacc.y:313 parser_yacc.y:341
msgid "Failed to add policy capabilities to known capabilities set"
msgstr ""
#: parser_yacc.y:350
#: parser_yacc.y:350 parser_yacc.y:386
msgid "Profile names must begin with a '/' or a namespace"
msgstr ""
#: parser_yacc.y:372
#: parser_yacc.y:372 parser_yacc.y:408
msgid "Profile attachment must begin with a '/' or variable."
msgstr ""
#: parser_yacc.y:375
#: parser_yacc.y:375 parser_yacc.y:411
#, c-format
msgid "profile id: invalid conditional group %s=()"
msgstr ""
#: parser_yacc.y:404
#: parser_yacc.y:404 parser_yacc.y:443
msgid ""
"The use of file paths as profile names is deprecated. See man apparmor.d for "
"more information\n"
msgstr ""
#: parser_yacc.y:573
#: parser_yacc.y:573 ../profile.h:264
#, c-format
msgid "Profile flag '%s' conflicts with '%s'"
msgstr ""
#: parser_yacc.y:954
#: parser_yacc.y:954 parser_yacc.y:862
msgid "RLIMIT 'cpu' no units specified using default units of seconds\n"
msgstr ""
#: parser_yacc.y:966
#: parser_yacc.y:966 parser_yacc.y:874
msgid ""
"RLIMIT 'rttime' no units specified using default units of microseconds\n"
msgstr ""
#: parser_yacc.y:1582
#: parser_yacc.y:1582 parser_yacc.y:1609
msgid "Exec condition is required when unsafe or safe keywords are present"
msgstr ""
#: parser_yacc.y:1584
#: parser_yacc.y:1584 parser_yacc.y:1611
msgid "Exec condition must begin with '/'."
msgstr ""
#: parser_yacc.y:1643
#: parser_yacc.y:1643 parser_yacc.y:1670
#, c-format
msgid "AppArmor parser error at line %d: %s\n"
msgstr ""
#: parser_yacc.y:1790
#: parser_yacc.y:1790 parser_yacc.y:1797
#, c-format
msgid "Could not open '%s': %m"
msgstr ""
#: parser_yacc.y:1795
#: parser_yacc.y:1795 parser_yacc.y:1802
#, c-format
msgid "fstat failed for '%s': %m"
msgstr ""
#: parser_yacc.y:1809
#: parser_yacc.y:1809 parser_yacc.y:1816
#, c-format
msgid "failed to find features abi '%s': %m"
msgstr ""
#: parser_yacc.y:1813
#: parser_yacc.y:1813 parser_yacc.y:1821
#, c-format
msgid ""
"%s: %s features abi '%s' differs from policy declared feature abi, using the "
@@ -973,7 +1018,124 @@ msgstr ""
msgid "%s: Invalid glob type %d\n"
msgstr ""
#: ../parser_regex.c:693
#: ../parser_regex.c:693 ../parser_regex.c:721
#, c-format
msgid "The current kernel does not support stacking of named transitions: %s\n"
msgstr ""
#: ../parser_interface.c:76
#, c-format
msgid ""
"%s: Permission denied. You need policy admin privileges to manage profiles.\n"
"\n"
msgstr ""
#: ../parser_interface.c:80
#, c-format
msgid ""
"%s: Access denied. You need policy admin privileges to manage profiles.\n"
"\n"
msgstr ""
#: parser_lex.l:653 parser_lex.l:668
msgid "deprecated use of '#include'\n"
msgstr ""
#: ../parser_misc.c:730
#, c-format
msgid "Internal: unexpected perms character '%c' in input"
msgstr ""
#: ../parser_misc.c:799
#, c-format
msgid "Internal: unexpected %s perms character '%c' in input"
msgstr ""
#: ../parser_misc.c:1098
msgid ""
"Invalid perms, in deny rules 'x' must not be preceded by exec qualifier 'i', "
"'p', or 'u'"
msgstr ""
#: ../parser_misc.c:1102
msgid "Invalid perms, 'x' must be preceded by exec qualifier 'i', 'p', or 'u'"
msgstr ""
#: parser_yacc.y:689 parser_yacc.y:716 ../all_rule.cc:105 ../all_rule.cc:134
#, c-format
msgid "%s"
msgstr ""
#: parser_yacc.y:705
msgid "priority is not allowed on rule blocks"
msgstr ""
#: parser_yacc.y:778
msgid "owner conditional not allowed on unix rules"
msgstr ""
#: parser_yacc.y:794
msgid "owner conditional not allowed on capability rules"
msgstr ""
#: parser_yacc.y:1119 parser_yacc.y:1132 parser_yacc.y:1146
#, c-format
msgid "network rule: invalid conditional group %s=()"
msgstr ""
#: parser_yacc.y:1827
msgid "failed features abi not set but include cache skipped\n"
msgstr ""
#: ../parser_variable.c:295
msgid "attach_disconnected_path value must begin with a /"
msgstr ""
#: ../mount.cc:897
msgid ""
"The use of source as mount point for propagation type flags is deprecated.\n"
msgstr ""
#: ../network.h:200
msgid "priority prefix not allowed on network rules"
msgstr ""
#: ../network.h:204
msgid "owner prefix not allowed on network rules"
msgstr ""
#: ../profile.h:287
#, c-format
msgid ""
"Profile flag attach_disconnected set to conflicting values: '%s' and '%s'"
msgstr ""
#: ../profile.h:297
#, c-format
msgid "Profile flag kill.signal set to conflicting values: '%d' and '%d'"
msgstr ""
#: ../profile.h:307
#, c-format
msgid "Profile flag error set to conflicting values: '%s' and '%s'"
msgstr ""
#: ../userns.h:36
msgid "owner prefix not allowed on userns rules"
msgstr ""
#: ../mqueue.h:106
msgid "owner prefix not allowed on mqueue rules"
msgstr ""
#: ../io_uring.h:42
msgid "owner prefix not allowed on io_uring rules"
msgstr ""
#: ../all_rule.h:36
msgid "priority prefix not allowed on all rules"
msgstr ""
#: ../all_rule.h:40
msgid "owner prefix not allowed on all rules"
msgstr ""

View File

@@ -226,13 +226,13 @@ static bool add_proc_access(Profile *prof, const char *rule)
char *buffer = strdup("/proc/*/attr/apparmor/");
if (!buffer) {
PERROR("Memory allocation error\n");
return FALSE;
return false;
}
new_ent = new_entry(buffer, AA_MAY_READ, NULL);
if (!new_ent) {
free(buffer);
PERROR("Memory allocation error\n");
return FALSE;
return false;
}
add_entry_to_policy(prof, new_ent);
@@ -240,13 +240,13 @@ static bool add_proc_access(Profile *prof, const char *rule)
buffer = strdup("/sys/module/apparmor/parameters/enabled");
if (!buffer) {
PERROR("Memory allocation error\n");
return FALSE;
return false;
}
new_ent = new_entry(buffer, AA_MAY_READ, NULL);
if (!new_ent) {
free(buffer);
PERROR("Memory allocation error\n");
return FALSE;
return false;
}
add_entry_to_policy(prof, new_ent);
@@ -254,17 +254,17 @@ static bool add_proc_access(Profile *prof, const char *rule)
buffer = strdup(rule);
if (!buffer) {
PERROR("Memory allocation error\n");
return FALSE;
return false;
}
new_ent = new_entry(buffer, AA_MAY_WRITE, NULL);
if (!new_ent) {
free(buffer);
PERROR("Memory allocation error\n");
return FALSE;
return false;
}
add_entry_to_policy(prof, new_ent);
return TRUE;
return true;
}
#define CHANGEPROFILE_PATH "/proc/*/attr/{apparmor/,}{current,exec}"

View File

@@ -363,7 +363,7 @@ public:
struct cond_entry_list xattrs;
/* char *sub_name; */ /* subdomain name or NULL */
/* int default_deny; */ /* TRUE or FALSE */
/* bool default_deny; */
bool local;
Profile *parent;

View File

@@ -253,7 +253,7 @@ remove_profiles() {
retval=0
# We filter child profiles as removing the parent will remove
# the children
sed -e "s/ (\(enforce\|complain\|unconfined\))$//" "$SFS_MOUNTPOINT/profiles" | \
sed -e "s/ (\(enforce\|complain\|prompt\|kill\|unconfined\))$//" "$SFS_MOUNTPOINT/profiles" | \
LC_COLLATE=C sort | grep -v // | {
while read -r profile ; do
printf "%s" "$profile" > "$SFS_MOUNTPOINT/.remove"

View File

@@ -182,6 +182,8 @@ public:
{
bool output = true;
if (priority != 0)
os << "priority=" << priority << " ";
switch (audit) {
case AUDIT_FORCE:
os << "audit";
@@ -252,9 +254,9 @@ public:
tmp = (int) rule_mode - (int) rhs.rule_mode;
if (tmp != 0)
return tmp;
if ((uint) owner < (uint) rhs.owner)
if ((unsigned int) owner < (unsigned int) rhs.owner)
return -1;
if ((uint) owner > (uint) rhs.owner)
if ((unsigned int) owner > (unsigned int) rhs.owner)
return 1;
return 0;
}

View File

@@ -23,7 +23,7 @@
#include <iomanip>
#include <string>
#include <sstream>
#include <map>
#include <unordered_map>
#include "parser.h"
#include "profile.h"
@@ -35,7 +35,7 @@
#define MAXRT_SIG 32 /* Max RT above MINRT_SIG */
/* Signal names mapped to an internal ordering */
static struct signal_map { const char *name; int num; } signal_map[] = {
static unordered_map<string, int> signal_map = {
{"hup", 1},
{"int", 2},
{"quit", 3},
@@ -55,7 +55,8 @@ static struct signal_map { const char *name; int num; } signal_map[] = {
{"chld", 17},
{"cont", 18},
{"stop", 19},
{"stp", 20},
{"stp", 20}, // parser's previous name for SIGTSTP
{"tstp", 20},
{"ttin", 21},
{"ttou", 22},
{"urg", 23},
@@ -64,14 +65,12 @@ static struct signal_map { const char *name; int num; } signal_map[] = {
{"vtalrm", 26},
{"prof", 27},
{"winch", 28},
{"io", 29},
{"io", 29}, // SIGIO == SIGPOLL
{"poll", 29},
{"pwr", 30},
{"sys", 31},
{"emt", 32},
{"exists", 35},
/* terminate */
{NULL, 0}
};
/* this table is ordered post sig_map[sig] mapping */
@@ -96,7 +95,7 @@ static const char *const sig_names[MAXMAPPED_SIG + 1] = {
"chld",
"cont",
"stop",
"stp",
"tstp",
"ttin",
"ttou",
"urg",
@@ -105,7 +104,7 @@ static const char *const sig_names[MAXMAPPED_SIG + 1] = {
"vtalrm",
"prof",
"winch",
"io",
"io", // SIGIO == SIGPOLL
"pwr",
"sys",
"emt",
@@ -130,12 +129,14 @@ int find_signal_mapping(const char *sig)
return -1;
return MINRT_SIG + n;
} else {
for (int i = 0; signal_map[i].name; i++) {
if (strcmp(sig, signal_map[i].name) == 0)
return signal_map[i].num;
// Can't use string_view because that requires C++17
auto sigmap = signal_map.find(string(sig));
if (sigmap != signal_map.end()) {
return sigmap->second;
} else {
return -1;
}
}
return -1;
}
void signal_rule::extract_sigs(struct value_list **list)

View File

@@ -6,7 +6,7 @@ PARSER_BIN=apparmor_parser
PARSER=$(PARSER_DIR)/$(PARSER_BIN)
# parser.conf to use in tests. Note that some test scripts have the parser options hardcoded, so passing PARSER_ARGS=... is not enough to override it.
PARSER_ARGS=--config-file=./parser.conf
PROVE_ARG=-f --directives
PROVE_ARG=-f --directives -j2
ifeq ($(VERBOSE),1)
PROVE_ARG+=-v
@@ -37,6 +37,11 @@ error_output: $(PARSER)
parser_sanity: $(PARSER) gen_xtrans gen_dbus
$(Q)LANG=C APPARMOR_PARSER="$(PARSER)" ${PROVE} ${PROVE_ARG} ${TESTS}
# use this target for faster manual testing if you don't want/need to test all the profiles generated by gen-*.py
parser_sanity-no-gen: clean $(PARSER)
@echo WARNING: not creating the profiles using the gen-*.py scripts
$(Q)LANG=C APPARMOR_PARSER="$(PARSER)" ${PROVE} ${PROVE_ARG} ${TESTS}
caching: $(PARSER)
LANG=C ./caching.py -p "$(PARSER)" $(PYTEST_ARG)

View File

@@ -28,11 +28,104 @@ APPARMOR_PARSER="${APPARMOR_PARSER:-${_SCRIPTDIR}/../apparmor_parser}"
fails=0
errors=0
verbose="${VERBOSE:-}"
default_features_file="features.all"
features_file=$default_features_file
retain=0
dumpdfa=0
testtype=""
description="Manually run test"
tmpdir=$(mktemp -d /tmp/eq.$$-XXXXXX)
chmod 755 ${tmpdir}
export tmpdir
map_priority()
{
if [ -z "$1" -o "$1" == "priority=0" ] ; then
echo "0";
elif [ "$1" == "priority=-1" ] ; then
echo "-1"
elif [ "$1" == "priority=1" ] ;then
echo "1"
else
echo "unknown priority '$1'"
exit 1
fi
}
priority_eq()
{
local p1=$(map_priority "$1")
local p2=$(map_priority "$2")
if [ $p1 -eq $p2 ] ; then
return 0
fi
return 1
}
priority_lt()
{
local p1=$(map_priority "$1")
local p2=$(map_priority "$2")
if [ $p1 -lt $p2 ] ; then
return 0
fi
return 1
}
priority_gt()
{
local p1=$(map_priority "$1")
local p2=$(map_priority "$2")
if [ $p1 -gt $p2 ] ; then
return 0
fi
return 1
}
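# Sketch of how these helpers compare prefixes (values per map_priority above;
# the calls are illustrative only, not part of the test flow):
#   priority_eq ""            "priority=0"   # returns 0: empty prefix == priority=0
#   priority_gt "priority=1"  ""             # returns 0: 1 > default priority 0
#   priority_lt "priority=-1" "priority=0"   # returns 0: -1 < 0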
hash_binary_policy()
{
printf %s "$1" | ${APPARMOR_PARSER} --features-file "${_SCRIPTDIR}/features_files/features.all" -qS 2>/dev/null| md5sum | cut -d ' ' -f 1
return $?
local hash="parser_failure"
local dump="/dev/null"
local flags="-QKSq"
local rc=0
if [ $dumpdfa -ne 0 ] ; then
flags="$flags -D rule-exprs -D dfa-states"
dump="${tmpdir}/$1.state"
fi
printf %s "$2" | ${APPARMOR_PARSER} --features-file "${_SCRIPTDIR}/features_files/$features_file" ${flags} > "$tmpdir/$1.bin" 2>"$dump"
rc=$?
if [ $rc -eq 0 ] ; then
hash=$(sha256sum "${tmpdir}/$1.bin" | cut -d ' ' -f 1)
rc=$?
fi
printf %s $hash
if [ $retain -eq 0 -a $rc -ne 0 ] ; then
rm ${tmpdir}/*
else
mv "${tmpdir}/$1.bin" "${tmpdir}/$1.bin.$hash"
if [ $dumpdfa -ne 0 ] ; then
mv "${tmpdir}/$1.state" "$tmpdir/$1.state.$hash"
fi
fi
return $rc
}
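# Illustrative call (the "known"/"test" labels mirror how verify_binary uses
# this helper below; the profile text is just an example):
#   hash=$(hash_binary_policy "test" "/t { /f r, }") || echo "parser failed"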
check_retain()
{
if [ ${retain} -ne 0 ] ; then
printf " files retained in \"%s/\"\n" ${tmpdir} 1>&2
exit $ret
fi
}
# verify_binary - compares the binary policy of multiple profiles
@@ -55,26 +148,28 @@ verify_binary()
shift
shift
if [ "$t" != "equality" ] && [ "$t" != "inequality" ]
if [ "$t" != "equality" ] && [ "$t" != "inequality" ] && \
[ "$t" != "xequality" ] && [ "$t" != "xinequality" ]
then
printf "\nERROR: Unknown test mode:\n%s\n\n" "$t" 1>&2
((errors++))
return $((ret + 1))
fi
rm -f $tmpdir/*
if [ -n "$verbose" ] ; then printf "Binary %s %s" "$t" "$desc" ; fi
if ! good_hash=$(hash_binary_policy "$good_profile")
then
if ! good_hash=$(hash_binary_policy "known" "$good_profile") ; then
if [ -z "$verbose" ] ; then printf "Binary %s %s" "$t" "$desc" ; fi
printf "\nERROR: Error hashing the following \"known-good\" profile:\n%s\n\n" \
"$good_profile" 1>&2
((errors++))
rm -f ${tmpdir}/*
return $((ret + 1))
fi
for profile in "$@"
do
if ! hash=$(hash_binary_policy "$profile")
if ! hash=$(hash_binary_policy "test" "$profile")
then
if [ -z "$verbose" ] ; then printf "Binary %s %s" "$t" "$desc" ; fi
printf "\nERROR: Error hashing the following profile:\n%s\n\n" \
@@ -84,20 +179,52 @@ verify_binary()
elif [ "$t" == "equality" ] && [ "$hash" != "$good_hash" ]
then
if [ -z "$verbose" ] ; then printf "Binary %s %s" "$t" "$desc" ; fi
printf "\nFAIL: Hash values do not match\n" 2>&1
printf "known-good (%s) != profile-under-test (%s) for the following profile:\n%s\n\n" \
"$good_hash" "$hash" "$profile" 1>&2
printf "\nFAIL: Hash values do not match\n" 1>&2
printf "parser: %s -QKSq --features-file=%s\n" "${APPARMOR_PARSER}" "${_SCRIPTDIR}/features_files/$features_file" 1>&2
printf "known-good (%s) != profile-under-test (%s) for the following profiles:\nknown-good %s\nprofile-under-test %s\n\n" \
"$good_hash" "$hash" "$good_profile" "$profile" 1>&2
((fails++))
((ret++))
check_retain
elif [ "$t" == "xequality" ] && [ "$hash" == "$good_hash" ]
then
if [ -z "$verbose" ] ; then printf "Binary %s %s" "$t" "$desc" ; fi
printf "\nunexpected PASS: equality test with known problem, Hash values match\n" 1>&2
printf "parser: %s -QKSq --features-file=%s\n" "${APPARMOR_PARSER}" "${_SCRIPTDIR}/features_files/$features_file" 1>&2
printf "known-good (%s) == profile-under-test (%s) for the following profile:\nknown-good %s\nprofile-under-test %s\n\n" \
"$good_hash" "$hash" "$good_profile" "$profile" 1>&2
((fails++))
((ret++))
check_retain
elif [ "$t" == "xequality" ] && [ "$hash" != "$good_hash" ]
then
printf "\nknown problem %s %s: unchanged" "$t" "$desc" 1>&2
elif [ "$t" == "inequality" ] && [ "$hash" == "$good_hash" ]
then
if [ -z "$verbose" ] ; then printf "Binary %s %s" "$t" "$desc" ; fi
printf "\nFAIL: Hash values match\n" 2>&1
printf "known-good (%s) == profile-under-test (%s) for the following profile:\n%s\n\n" \
"$good_hash" "$hash" "$profile" 1>&2
printf "\nFAIL: Hash values match\n" 1>&2
printf "parser: %s -QKSq --features-file=%s\n" "${APPARMOR_PARSER}" "${_SCRIPTDIR}/features_files/$features_file" 1>&2
printf "known-good (%s) == profile-under-test (%s) for the following profiles:\nknown-good %s\nprofile-under-test %s\n\n" \
"$good_hash" "$hash" "$good_profile" "$profile" 1>&2
((fails++))
((ret++))
check_retain
elif [ "$t" == "xinequality" ] && [ "$hash" != "$good_hash" ]
then
if [ -z "$verbose" ] ; then printf "Binary %s %s" "$t" "$desc" ; fi
printf "\nunexpected PASS: inequality test with known problem, Hash values do not match\n" 1>&2
printf "parser: %s -QKSq --features-file %s\n" "${APPARMOR_PARSER}" "${_SCRIPTDIR}/features_files/$features_file" 1>&2
printf "known-good (%s) != profile-under-test (%s) for the following profile:\nknown-good %s\nprofile-under-test %s\n\n" \
"$good_hash" "$hash" "$good_profile" "$profile" 1>&2
((fails++))
((ret++))
check_retain
elif [ "$t" == "xinequality" ] && [ "$hash" == "$good_hash" ]
then
printf "\nknown problem %s %s: unchanged" "$t" "$desc" 1>&2
printf "parser: %s -QKSq --features-file=%s\n" "${APPARMOR_PARSER}" "${_SCRIPTDIR}/features_files/$features_file" 1>&2
fi
rm -f ${tmpdir}/test*
done
if [ $ret -eq 0 ]
@@ -117,11 +244,56 @@ verify_binary_equality()
verify_binary "equality" "$@"
}
# test we want to be equal but is currently a known problem
verify_binary_xequality()
{
verify_binary "xequality" "$@"
}
verify_binary_inequality()
{
verify_binary "inequality" "$@"
}
# test we want to be not equal but is currently a know problem
verify_binary_xinequality()
{
verify_binary "xinequality" "$@"
}
# kernel_features - test whether path(s) are present
# $@: feature path(s) to test
# Returns: 0 and outputs "true" if all paths exist
# 1 and error message if features dir is not available
# 2 and error message if the feature is not present in the file
# 3 and error message if the path does not exist
kernel_features()
{
features_dir="/sys/kernel/security/apparmor/features/"
if [ ! -e "$features_dir" ] ; then
echo "Kernel feature masks not supported."
return 1;
fi
for f in $@ ; do
if [ ! -e "$features_dir/$f" ] ; then
# check if feature is in file
feature=$(basename "$features_dir/$f")
file=$(dirname "$features_dir/$f")
if [ -f $file ]; then
if ! grep -q $feature $file; then
echo "Required feature '$f' not available."
return 2;
fi
else
echo "Required feature '$f' not available."
return 3;
fi
fi
done
echo "true"
return 0;
}
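# Illustrative use (mirrors the af_inet gate further down; the feature path
# and variable name are examples only):
#   have_af_unix=$(kernel_features network/af_unix)
#   [ "$have_af_unix" = "true" ] || echo "Skipping af_unix tests. $have_af_unix"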
##########################################################################
### wrapper fn, should be indented but isn't to reduce wrap
@@ -129,7 +301,7 @@ verify_set()
{
local p1="$1"
local p2="$2"
echo -e "\n equality $e of '$p1' vs '$p2'\n"
[ -n "${verbose}" ] && echo -e "\n equality $e of '$p1' vs '$p2'\n"
verify_binary_equality "'$p1'x'$p2' dbus send" \
"/t { $p1 dbus send, }" \
@@ -477,37 +649,86 @@ do
"pix -> b" "Pix -> b" "cux -> b" "Cux -> b" \
"cix -> b" "Cix -> b"
do
if [ "$perm1" == "$perm2" ] ; then
# Fixme: have to do special handling for -> b, as this
# creates an entry in the transition table. However
# priority rules can make it so the reference to the
# transition table is removed, but the parser still keeps
# the transition. This can lead to a situation where the
# test dfa with a "-> b" transition is functionally equivalent
# but will fail equality comparison.
# fix this by adding two non-overlapping x rules to add
# xtable entries
# /c -> /t//b, for cx rules being converted to px -> /t//b
# /a -> b, for px rules
# the rules must come last to guarantee xtable order
if [ "$perm1" == "$perm2" ] || priority_gt "$p1" "" ; then
verify_binary_equality "'$p1'x'$p2' Exec perm \"${perm1}\" - most specific match: same as glob" \
"/t { $p1 /* ${perm1}, /f ${perm2}, }" \
"/t { $p2 /* ${perm1}, }"
"/t { $p1 /f* ${perm1}, /f ${perm2}, /a px -> b, /c px -> /t//b, }" \
"/t { $p2 /f* ${perm1}, /a px -> b, /c px -> /t//b, }"
else
verify_binary_inequality "'$p1'x'$p2' Exec \"${perm1}\" vs \"${perm2}\" - most specific match: different from glob" \
"/t { $p1 /* ${perm1}, /f ${perm2}, }" \
"/t { $p2 /* ${perm1}, }"
"/t { $p1 /f* ${perm1}, /f ${perm2}, /a px -> b, /c px -> /t//b, }" \
"/t { $p2 /f* ${perm1}, /a px -> b, /c px -> /t//b, }"
fi
done
verify_binary_inequality "'$p1'x'$p2' Exec \"${perm1}\" vs deny x - most specific match: different from glob" \
"/t { $p1 /* ${perm1}, audit deny /f x, }" \
"/t { $p2 /* ${perm1}, }"
if priority_gt "$p1" "" ; then
# priority stops permission carve out
verify_binary_equality "'$p1'x'$p2' Exec \"${perm1}\" vs deny x - most specific match: different from glob" \
"/t { $p1 /* ${perm1}, audit deny /f x, }" \
"/t { $p2 /* ${perm1}, }"
else
# deny rule carves out some of the match
verify_binary_inequality "'$p1'x'$p2' Exec \"${perm1}\" vs deny x - most specific match: different from glob" \
"/t { $p1 /* ${perm1}, audit deny /f x, }" \
"/t { $p2 /* ${perm1}, }"
fi
done
#Test deny carves out permission
verify_binary_inequality "'$p1'x'$p2' Deny removes r perm" \
if priority_gt "$p1" "" ; then
verify_binary_equality "'$p1'x'$p2' Deny removes r perm" \
"/t { $p1 /foo/[abc] r, audit deny /foo/b r, }" \
"/t { $p2 /foo/[abc] r, }"
verify_binary_equality "'$p1'x'$p2' Deny removes r perm" \
verify_binary_inequality "'$p1'x'$p2' Deny removes r perm" \
"/t { $p1 /foo/[abc] r, audit deny /foo/b r, }" \
"/t { $p2 /foo/[ac] r, }"
#this one may not be true in the future depending on if the compiled profile
#is explicitly including deny permissions for dynamic composition
verify_binary_equality "'$p1'x'$p2' Deny of ungranted perm" \
verify_binary_equality "'$p1'x'$p2' Deny of ungranted perm" \
"/t { $p1 /foo/[abc] r, audit deny /foo/b w, }" \
"/t { $p2 /foo/[abc] r, }"
elif priority_eq "$p1" "" ; then
verify_binary_inequality "'$p1'x'$p2' Deny removes r perm" \
"/t { $p1 /foo/[abc] r, audit deny /foo/b r, }" \
"/t { $p2 /foo/[abc] r, }"
verify_binary_equality "'$p1'x'$p2' Deny removes r perm" \
"/t { $p1 /foo/[abc] r, audit deny /foo/b r, }" \
"/t { $p2 /foo/[ac] r, }"
#this one may not be true in the future depending on if the compiled profile
#is explicitly including deny permissions for dynamic composition
verify_binary_equality "'$p1'x'$p2' Deny of ungranted perm" \
"/t { $p1 /foo/[abc] r, audit deny /foo/b w, }" \
"/t { $p2 /foo/[abc] r, }"
else
verify_binary_inequality "'$p1'x'$p2' Deny removes r perm" \
"/t { $p1 /foo/[abc] r, audit deny /foo/b r, }" \
"/t { $p2 /foo/[abc] r, }"
verify_binary_equality "'$p1'x'$p2' Deny removes r perm" \
"/t { $p1 /foo/[abc] r, audit deny /foo/b r, }" \
"/t { $p2 /foo/[ac] r, }"
#this one may not be true in the future depending on if the compiled profile
#is explicitly including deny permissions for dynamic composition
verify_binary_equality "'$p1'x'$p2' Deny of ungranted perm" \
"/t { $p1 /foo/[abc] r, audit deny /foo/b w, }" \
"/t { $p2 /foo/[abc] r, }"
fi
verify_binary_equality "'$p1'x'$p2' change_profile == change_profile -> **" \
"/t { $p1 change_profile, }" \
@@ -587,9 +808,25 @@ verify_binary_equality "'$p1'x'$p2' @{profile_name} is literal in peer with esc
# the "write" permission in the second profile and the test will fail.
# If the parser is adding the change_hat proc attr rules then the
# rules should merge and be equivalent.
verify_binary_equality "'$p1'x'$p2' change_hat rules automatically inserted"\
"/t { $p1 owner /proc/[0-9]*/attr/{apparmor/,}current a, ^test { $p2 owner /proc/[0-9]*/attr/{apparmor/,}current a, /f r, }}" \
#
# if priorities are different than the implied rule priority then the
# implied rule will completely override or completely be overridden.
# (the change_hat implied rule has a priority of 0)
# because of the difference in 'a' vs 'w' permission the two rules should
# only be equal when the append rule has the same priority as the implied
# rule (allowing them to combine) AND the other rule is not overridden by
# the implied rule, or both being overridden by the implied rule
if { priority_lt "$p1" "" && priority_lt "$p2" "" ; } ||
{ priority_eq "$p1" "" && ! priority_lt "$p2" "" ; }; then
verify_binary_equality "'$p1'x'$p2' change_hat rules automatically inserted"\
"/t { $p1 owner /proc/[0-9]*/attr/{apparmor/,}current a, ^test { $p1 owner /proc/[0-9]*/attr/{apparmor/,}current a, /f r, }}" \
"/t { $p2 owner /proc/[0-9]*/attr/{apparmor/,}current w, ^test { $p2 owner /proc/[0-9]*/attr/{apparmor/,}current w, /f r, }}"
else
verify_binary_inequality "'$p1'x'$p2' change_hat rules automatically inserted"\
"/t { $p1 owner /proc/[0-9]*/attr/{apparmor/,}current a, ^test { $p1 owner /proc/[0-9]*/attr/{apparmor/,}current a, /f r, }}" \
"/t { $p2 owner /proc/[0-9]*/attr/{apparmor/,}current w, ^test { $p2 owner /proc/[0-9]*/attr/{apparmor/,}current w, /f r, }}"
fi
# verify slash filtering for unix socket address paths.
# see https://bugs.launchpad.net/apparmor/+bug/1856738
@@ -677,7 +914,7 @@ verify_binary_equality "'$p1'x'$p2' mount specific deny doesn't affect non-overl
if [ $fails -ne 0 ] || [ $errors -ne 0 ]
then
printf "ERRORS: %d\nFAILS: %d\n" $errors $fails 2>&1
printf "ERRORS: %d\nFAILS: %d\n" $errors $fails 1>&2
exit $((fails + errors))
fi
@@ -758,12 +995,14 @@ verify_binary_equality "'$p1'x'$p2' dbus slash filtering for paths" \
}
printf "Equality Tests:\n"
run_tests()
{
printf "Equality Tests:\n"
#rules that don't support priority
#rules that don't support priority
# verify rlimit data conversions
verify_binary_equality "set rlimit rttime <= 12 weeks" \
# verify rlimit data conversions
verify_binary_equality "set rlimit rttime <= 12 weeks" \
"/t { set rlimit rttime <= 12 weeks, }" \
"/t { set rlimit rttime <= $((12 * 7)) days, }" \
"/t { set rlimit rttime <= $((12 * 7 * 24)) hours, }" \
@@ -773,7 +1012,7 @@ verify_binary_equality "set rlimit rttime <= 12 weeks" \
"/t { set rlimit rttime <= $((12 * 7 * 24 * 60 * 60 * 1000 * 1000)) us, }" \
"/t { set rlimit rttime <= $((12 * 7 * 24 * 60 * 60 * 1000 * 1000)), }"
verify_binary_equality "set rlimit cpu <= 42 weeks" \
verify_binary_equality "set rlimit cpu <= 42 weeks" \
"/t { set rlimit cpu <= 42 weeks, }" \
"/t { set rlimit cpu <= $((42 * 7)) days, }" \
"/t { set rlimit cpu <= $((42 * 7 * 24)) hours, }" \
@@ -781,39 +1020,209 @@ verify_binary_equality "set rlimit cpu <= 42 weeks" \
"/t { set rlimit cpu <= $((42 * 7 * 24 * 60 * 60)) seconds, }" \
"/t { set rlimit cpu <= $((42 * 7 * 24 * 60 * 60)), }"
verify_binary_equality "set rlimit memlock <= 2GB" \
verify_binary_equality "set rlimit memlock <= 2GB" \
"/t { set rlimit memlock <= 2GB, }" \
"/t { set rlimit memlock <= $((2 * 1024)) MB, }" \
"/t { set rlimit memlock <= $((2 * 1024 * 1024)) KB, }" \
"/t { set rlimit memlock <= $((2 * 1024 * 1024 * 1024)) , }"
# verify combinations of different priority levels
# for single rule comparisons, rules should keep same expected result
# even when the priorities are different.
# different priorities within a profile comparison can result in
# different permissions, which could affect the expected results
priorities="none 0 1 -1"
for pri1 in $priorities ; do
if [ "$pri1" = "none" ] ; then
priority1=""
else
priority1="priority=$pri1"
fi
for pri2 in $priorities ; do
if [ "$pri2" = "none" ] ; then
priority2=""
run_port_range=$(kernel_features network_v8/af_inet)
if [ "$run_port_range" != "true" ]; then
echo -e "\nSkipping network af_inet tests. $run_port_range\n"
else
priority2="priority=$pri2"
# network port range
# select features file that contains netv8 af_inet
features_file="features.af_inet"
verify_binary_equality "network port range" \
"/t { network port=3456-3460, }" \
"/t { network port=3456, \
network port=3457, \
network port=3458, \
network port=3459, \
network port=3460, }"
verify_binary_equality "network peer port range" \
"/t { network peer=(port=3456-3460), }" \
"/t { network peer=(port=3456), \
network peer=(port=3457), \
network peer=(port=3458), \
network peer=(port=3459), \
network peer=(port=3460), }"
verify_binary_inequality "network port range allows more than single port" \
"/t { network port=3456-3460, }" \
"/t { network port=3456, }"
verify_binary_inequality "network peer port range allows more than single port" \
"/t { network peer=(port=3456-3460), }" \
"/t { network peer=(port=3456), }"
# return to default
features_file=$default_features_file
fi
verify_set "$priority1" "$priority2"
done
# Equality tests that set explicit priority level
# TODO: priority handling for file paths is currently broken
# This test is not actually correct due to two subtle interactions:
#  - /* is special-cased to expand to /[^/\x00]+ with at least one character
#  - Quieting of [^a] in the DFA is different and cannot be manually fixed
#verify_binary_xequality "file rule carveout regex vs priority" \
# "/t { deny /[^a]* rwxlk, /a r, }" \
# "/t { priority=-1 deny /* rwxlk, /a r, }" \
# Not grouping all three together because parser correctly handles
# the equivalence of carveout regex and default audit deny
verify_binary_equality "file rule carveout regex vs priority (audit)" \
"/t { audit deny /[^a]* rwxlk, /a r, }" \
"/t { priority=-1 audit deny /* rwxlk, /a r, }"
verify_binary_equality "file rule default audit deny vs audit priority carveout" \
"/t { /a r, }" \
"/t { priority=-1 audit deny /* rwxlk, /a r, }"
# verify combinations of different priority levels
# for single rule comparisons, rules should keep same expected result
# even when the priorities are different.
# different priorities within a profile comparison can result in
# different permissions, which could affect the expected results
priorities="none 0 1 -1"
for pri1 in $priorities ; do
if [ "$pri1" = "none" ] ; then
priority1=""
else
priority1="priority=$pri1"
fi
for pri2 in $priorities ; do
if [ "$pri2" = "none" ] ; then
priority2=""
else
priority2="priority=$pri2"
fi
verify_set "$priority1" "$priority2"
done
done
[ -z "${verbose}" ] && printf "\n"
printf "PASS\n"
exit 0
}
usage()
{
local progname="$0"
local rc="$1"
local msg="usage: ${progname} [Options]
Run the equality tests if no options given, otherwise run as directed
by the options.
Options:
-h, --help display this help
-e base args run an equality test on the following args
-n base args run an inequality test on the following args
--xequality run a known problem equality test
--xinequality run a known problem inequality test
-r on failure retain failed test output and abort
-d include dfa dumps with failed test output
-f arg features file to use
-p arg parser to invoke
--description description to print with test
-v verbose
examples:
$ equality.sh
...
$ equality.sh -r
....
inary equality 'priority=1'x'' Exec perm \"ux\" - most specific match: same as glob
FAIL: Hash values do not match
parser: ../apparmor_parser --config-file=./parser.conf --features-file=./features_files/features.all
known-good (0344cd377ccb239aba4cce768b818010961d68091d8c7fae72c755cfcb48d4a2) != profile-under-test (33fdf4575322a036c2acb75f93a7154179036f1189ef68ab9f1ae98e7f865780) for the following profiles:
known-good /t { priority=1 /* ux, /f px -> b, }
profile-under-test /t { /* ux, }
$ equality.sh -e \"/t { priority=1 /* Px -> b, /f Px, }\" \"/t { /* Px, }\""
echo "$msg"
}
POSITIONAL_ARGS=()
while [[ $# -gt 0 ]]; do
case $1 in
-h|--help)
usage
exit 0
;;
-e|--equality)
testtype="equality"
shift # past argument
;;
--xequality)
testtype="xequality"
shift # past argument
;;
-n|--inequality)
testtype="inequality"
shift # past argument
;;
--xinequality)
testtype="xinequality"
shift # past argument
;;
-d|--dfa)
dumpdfa=1
shift # past argument
;;
-r|--retain)
retain=1
shift # past argument
;;
-v|--verbose)
verbose=1
shift # past argument
;;
-f|--feature-file)
features_file="$2"
shift # past argument
shift # past option
;;
--description)
description="$2"
shift # past argument
shift # past option
;;
-p|--parser)
APPARMOR_PARSER="$2"
shift # past argument
shift # past option
;;
-*|--*)
echo "Unknown option $1"
exit 1
;;
*)
POSITIONAL_ARGS+=("$1") # save positional arg
shift # past argument
;;
esac
done
[ -z "${verbose}" ] && printf "\n"
printf "PASS\n"
exit 0
set -- "${POSITIONAL_ARGS[@]}" # restore positional parameters
if [ $# -eq 0 -o -z "$testtype" ] ; then
run_tests "$@"
exit $?
fi
known="$1"
shift
for profile in "$@" ; do
verify_binary "$testtype" "$description" "$known" "$profile"
done

View File

@@ -0,0 +1,117 @@
capability {0xffffff
}
caps {extended {yes
}
mask {chown dac_override dac_read_search fowner fsetid kill setgid setuid setpcap linux_immutable net_bind_service net_broadcast net_admin net_raw ipc_lock ipc_owner sys_module sys_rawio sys_chroot sys_ptrace sys_pacct sys_admin sys_boot sys_nice sys_resource sys_time sys_tty_config mknod lease audit_write audit_control setfcap mac_override mac_admin syslog wake_alarm block_suspend audit_read perfmon bpf checkpoint_restore
}
}
dbus {mask {acquire send receive
}
}
domain {attach_conditions {xattr {yes
}
}
change_hat {yes
}
change_hatv {yes
}
change_onexec {yes
}
change_profile {yes
}
computed_longest_left {yes
}
disconnected.path {yes
}
fix_binfmt_elf_mmap {yes
}
interruptible {yes
}
kill.signal {yes
}
post_nnp_subset {yes
}
stack {yes
}
unconfined_allowed_children {yes
}
version {1.2
}
}
file {mask {create read write exec append mmap_exec link lock
}
}
io_uring {mask {sqpoll override_creds
}
}
ipc {posix_mqueue {create read write open delete setattr getattr
}
}
mount {mask {mount umount pivot_root
}
move_mount {detached
}
}
namespaces {mask {userns_create
}
pivot_root {no
}
profile {yes
}
userns_create {pciu&
}
}
network {af_mask {unspec unix inet ax25 ipx appletalk netrom bridge atmpvc x25 inet6 rose netbeui security key netlink packet ash econet atmsvc rds sna irda pppox wanpipe llc ib mpls can tipc bluetooth iucv rxrpc isdn phonet ieee802154 caif alg nfc vsock kcm qipcrtr smc xdp mctp
}
af_unix {yes
}
}
network_v8 {af_inet {yes
}
af_mask {unspec unix inet ax25 ipx appletalk netrom bridge atmpvc x25 inet6 rose netbeui security key netlink packet ash econet atmsvc rds sna irda pppox wanpipe llc ib mpls can tipc bluetooth iucv rxrpc isdn phonet ieee802154 caif alg nfc vsock kcm qipcrtr smc xdp mctp
}
}
policy {outofband {0x000001
}
permstable32 {allow deny subtree cond kill complain prompt audit quiet hide xindex tag label
}
permstable32_version {0x000003
}
set_load {yes
}
unconfined_restrictions {change_profile {yes
}
io_uring {0
}
userns {1
}
}
versions {v5 {yes
}
v6 {yes
}
v7 {yes
}
v8 {yes
}
v9 {yes
}
}
}
ptrace {mask {read trace
}
}
query {label {data {yes
}
multi_transaction {yes
}
perms {allow deny audit quiet
}
}
}
rlimit {mask {cpu fsize data stack core rss nproc nofile memlock as locks sigpending msgqueue nice rtprio rttime
}
}
signal {mask {hup int quit ill trap abrt bus fpe kill usr1 segv usr2 pipe alrm term stkflt chld cont stop stp ttin ttou urg xcpu xfsz vtalrm prof winch io pwr sys emt lost
}
}

View File

@@ -78,7 +78,7 @@ APPARMOR_PARSER="${APPARMOR_PARSER:-../apparmor_parser}"
# {a} (0x 40030/0/0/0)
echo -n "Minimize profiles basic perms "
if [ "$(echo "/t { /a r, /b w, /c a, /d l, /e k, /f m, /** w, }" | ${APPARMOR_PARSER} -M features_files/features.nopolicydb -QT -O minimize -D dfa-states 2>&1 | grep -v '<==' | grep -c '^{.*} 0 (.*)$')" -ne 6 ] ; then
if [ "$(echo "/t { /a r, /b w, /c a, /d l, /e k, /f m, /** w, }" | ${APPARMOR_PARSER} -M features_files/features.nopolicydb -QT -O minimize -D dfa-states 2>&1 | grep -v '<==' | grep -c '^{.*}(.*)$')" -ne 6 ] ; then
echo "failed"
exit 1;
fi
@@ -93,7 +93,7 @@ echo "ok"
# {9} (0x 12804a/0/2800a/0)
# {c} (0x 40030/0/0/0)
echo -n "Minimize profiles audit perms "
if [ "$(echo "/t { /a r, /b w, /c a, /d l, /e k, /f m, audit /** w, }" | ${APPARMOR_PARSER} -M features_files/features.nopolicydb -QT -O minimize -D dfa-states 2>&1 | grep -v '<==' | grep -c '^{.*} 0 (.*)$')" -ne 6 ] ; then
if [ "$(echo "/t { /a r, /b w, /c a, /d l, /e k, /f m, audit /** w, }" | ${APPARMOR_PARSER} -M features_files/features.nopolicydb -QT -O minimize -D dfa-states 2>&1 | grep -v '<==' | grep -c '^{.*}(.*)$')" -ne 6 ] ; then
echo "failed"
exit 1;
fi
@@ -112,7 +112,7 @@ echo "ok"
# {c} (0x 40030/0/0/0)
echo -n "Minimize profiles deny perms "
if [ "$(echo "/t { /a r, /b w, /c a, /d l, /e k, /f m, deny /** w, }" | ${APPARMOR_PARSER} -M features_files/features.nopolicydb -QT -O minimize -D dfa-states 2>&1 | grep -v '<==' | grep -c '^{.*} 0 (.*)$')" -ne 6 ] ; then
if [ "$(echo "/t { /a r, /b w, /c a, /d l, /e k, /f m, deny /** w, }" | ${APPARMOR_PARSER} -M features_files/features.nopolicydb -QT -O minimize -D dfa-states 2>&1 | grep -v '<==' | grep -c '^{.*}(.*)$')" -ne 6 ] ; then
echo "failed"
exit 1;
fi
@@ -130,7 +130,7 @@ echo "ok"
# {c} (0x 40030/0/0/0)
echo -n "Minimize profiles audit deny perms "
if [ "$(echo "/t { /a r, /b w, /c a, /d l, /e k, /f m, audit deny /** w, }" | ${APPARMOR_PARSER} -M features_files/features.nopolicydb -QT -O minimize -O filter-deny -D dfa-states 2>&1 | grep -v '<==' | grep -c '^{.*} 0 (.*)$')" -ne 5 ] ; then
if [ "$(echo "/t { /a r, /b w, /c a, /d l, /e k, /f m, audit deny /** w, }" | ${APPARMOR_PARSER} -M features_files/features.nopolicydb -QT -O minimize -O filter-deny -D dfa-states 2>&1 | grep -v '<==' | grep -c '^{.*}(.*)$')" -ne 5 ] ; then
echo "failed"
exit 1;
fi
@@ -155,7 +155,7 @@ echo "ok"
## NOTE: change count from 6 to 7 when extend perms is not dependent on
## prompt rules being present
echo -n "Minimize profiles extended no-filter audit deny perms "
if [ "$(echo "/t { /a r, /b w, /c a, /d l, /e k, /f m, audit deny /** w, }" | ${APPARMOR_PARSER} -M features_files/features.extended-perms-no-policydb -QT -O minimize -O no-filter-deny -D dfa-states 2>&1 | grep -v '<==' | grep -c '^{.*} 0 (.*)$')" -ne 7 ] ; then
if [ "$(echo "/t { /a r, /b w, /c a, /d l, /e k, /f m, audit deny /** w, }" | ${APPARMOR_PARSER} -M features_files/features.extended-perms-no-policydb -QT -O minimize -O no-filter-deny -D dfa-states 2>&1 | grep -v '<==' | grep -c '^{.*}(.*)$')" -ne 7 ] ; then
echo "failed"
exit 1;
fi
@@ -173,7 +173,7 @@ echo "ok"
# {2} (0x 4/0//0/0/0) <- from policydb still showing up bug
echo -n "Minimize profiles extended filter audit deny perms "
if [ "$(echo "/t { /a r, /b w, /c a, /d l, /e k, /f m, audit deny /** w, }" | ${APPARMOR_PARSER} -M features_files/features.extended-perms-no-policydb -QT -O minimize -O filter-deny -D dfa-states 2>&1 | grep -v '<==' | grep -c '^{.*} 0 (.*)$')" -ne 6 ] ; then
if [ "$(echo "/t { /a r, /b w, /c a, /d l, /e k, /f m, audit deny /** w, }" | ${APPARMOR_PARSER} -M features_files/features.extended-perms-no-policydb -QT -O minimize -O filter-deny -D dfa-states 2>&1 | grep -v '<==' | grep -c '^{.*}(.*)$')" -ne 6 ] ; then
echo "failed"
exit 1;
fi
@@ -208,7 +208,7 @@ echo "ok"
#
echo -n "Minimize profiles xtrans "
if [ "$(echo "/t { /b px, /* Pixr, /a Cx -> foo, }" | ${APPARMOR_PARSER} -M features_files/features.nopolicydb -QT -O minimize -D dfa-states 2>&1 | grep -v '<==' | grep -c '^{.*} 0 (.*)$')" -ne 3 ] ; then
if [ "$(echo "/t { /b px, /* Pixr, /a Cx -> foo, }" | ${APPARMOR_PARSER} -M features_files/features.nopolicydb -QT -O minimize -D dfa-states 2>&1 | grep -v '<==' | grep -c '^{.*}(.*)$')" -ne 3 ] ; then
echo "failed"
exit 1;
fi
@@ -216,7 +216,7 @@ echo "ok"
# same test as above + audit
echo -n "Minimize profiles audit xtrans "
if [ "$(echo "/t { /b px, audit /* Pixr, /a Cx -> foo, }" | ${APPARMOR_PARSER} -M features_files/features.nopolicydb -QT -O minimize -D dfa-states 2>&1 | grep -v '<==' | grep -c '^{.*} 0 (.*)$')" -ne 3 ] ; then
if [ "$(echo "/t { /b px, audit /* Pixr, /a Cx -> foo, }" | ${APPARMOR_PARSER} -M features_files/features.nopolicydb -QT -O minimize -D dfa-states 2>&1 | grep -v '<==' | grep -c '^{.*}(.*)$')" -ne 3 ] ; then
echo "failed"
exit 1;
fi
@@ -229,7 +229,7 @@ echo "ok"
# {3} (0x 0/fe17f85/0/14005)
echo -n "Minimize profiles deny xtrans "
if [ "$(echo "/t { /b px, deny /* xr, /a Cx -> foo, }" | ${APPARMOR_PARSER} -M features_files/features.nopolicydb -QT -O minimize -D dfa-states 2>&1 | grep -v '<==' | grep -c '^{.*} 0 (.*)$')" -ne 1 ] ; then
if [ "$(echo "/t { /b px, deny /* xr, /a Cx -> foo, }" | ${APPARMOR_PARSER} -M features_files/features.nopolicydb -QT -O minimize -D dfa-states 2>&1 | grep -v '<==' | grep -c '^{.*}(.*)$')" -ne 1 ] ; then
echo "failed"
exit 1;
fi
@@ -241,7 +241,7 @@ echo "ok"
# {3} (0x 0/fe17f85/0/0)
echo -n "Minimize profiles audit deny xtrans "
if [ "$(echo "/t { /b px, audit deny /* xr, /a Cx -> foo, }" | ${APPARMOR_PARSER} -M features_files/features.nopolicydb -QT -O minimize -O no-filter-deny -D dfa-states 2>&1 | grep -v '<==' | grep -c '^{.*} 0 (.*)$')" -ne 0 ] ; then
if [ "$(echo "/t { /b px, audit deny /* xr, /a Cx -> foo, }" | ${APPARMOR_PARSER} -M features_files/features.nopolicydb -QT -O minimize -O no-filter-deny -D dfa-states 2>&1 | grep -v '<==' | grep -c '^{.*}(.*)$')" -ne 0 ] ; then
echo "failed"
exit 1;
fi

View File

@@ -0,0 +1,7 @@
#
#=Description basic file rule priority out of low end of range
#=EXRESULT FAIL
#
/usr/bin/foo {
priority=-1001 /usr/bin/foo r,
}

View File

@@ -0,0 +1,7 @@
#
#=Description basic file rule priority outside high end of range.
#=EXRESULT FAIL
#
/usr/bin/foo {
priority=1001 /usr/bin/foo r,
}

View File

@@ -0,0 +1,8 @@
#
#=DESCRIPTION network port range conditional test - missing end of range
#=EXRESULT FAIL
#
/usr/bin/foo {
network port=22-,
}

Some files were not shown because too many files have changed in this diff.