mirror of https://gitlab.com/apparmor/apparmor synced 2025-09-02 15:25:27 +00:00

Compare commits


477 Commits

Author SHA1 Message Date
John Johansen
6e0b7ca20a parser: change priority so that it accumulates based on permissions
The current behavior of priority rules can be non-intuitive, with
higher priority rules completely overriding lower priority rules even in
permissions not held in common. This behavior does have use cases, but
it can be very confusing and does not match normal policy behavior.

Eg.
  priority=0 allow r /**,
  priority=1 deny  w /**,

will result in no allowed permissions, even though the deny rule is
only removing the w permission, because the higher priority rule
completely overrides lower priority permission sets (including
permissions that are not shared).

Instead, move to tracking priority at a per-permission level. This
allows the deny of the w permission to still win at priority 1, while the
read permission is allowed at priority 0.
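
As a rough sketch of the per-permission model described above (illustrative Python, not the parser's internal representation; it assumes deny wins over allow at equal priority):

```
# Illustrative model: track priority per permission instead of per rule.
# Higher priority wins for a given permission; at equal priority, deny wins.

def combine(rules):
    """rules: list of (priority, 'allow'|'deny', set_of_perms)."""
    state = {}  # perm -> (priority, decision)
    for priority, decision, perms in rules:
        for perm in perms:
            prev = state.get(perm)
            if prev is None or priority > prev[0]:
                state[perm] = (priority, decision)
            elif priority == prev[0] and decision == "deny":
                state[perm] = (priority, "deny")
    return {p for p, (_, d) in state.items() if d == "allow"}

# priority=0 allow r /**,   priority=1 deny w /**,
print(combine([(0, "allow", {"r"}), (1, "deny", {"w"})]))  # -> {'r'}
```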

The final constructed state will still drop priority for the final
permission set on the state.

Note: this patch updates the equality tests for the cases where
the complete override behavior was being tested for.

The complete override behavior will be reintroduced in a future
patch with a keyword extension, enabling that behavior to be used
for ordered blocks etc.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-06 12:24:04 -08:00
John Johansen
e56dbc2084 parser: fix prefix dump to include priority
The original patch adding priority to the set of prefixes failed to
update the prefix dump to include the priority field.

Fixes: e3fca60d1 ("parser: add the ability to specify a priority prefix to rules")

Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-06 10:54:42 -08:00
John Johansen
cc31a0da22 parser: drop priority from state permissions
The priority field is only used during state construction, and can
even prevent later optimizations like minimization. The parser already
explicitly clears the state's priority field as one of the last steps
of construction so that it doesn't prevent minimization optimizations.

This means the state priority not only wastes storage, because it is
unused post construction, but could introduce regressions or other
issues if it were used.

The change to the minimization tests just removes the check for the
priority field, which is no longer reported.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-06 10:53:19 -08:00
John Johansen
9221d291ec parser: stop using dynamic_cast for prompt permission calculations
As was done for the other MatchFlags, switch to using a node type
instead of dynamic_cast, as this results in a performance
improvement.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-06 08:34:02 -08:00
Christian Boltz
002bf1339c Merge spread: Add support for EXPECT_DENIALS in profile tests
This commit adds support for EXPECT_DENIALS in profile tests. Any test
that sets the EXPECT_DENIALS environment variable is expected to trigger
AppArmor denials and will fail if none are generated.

This allows testing that problematic behaviors are correctly blocked.

Signed-off-by: Maxime Bélair <maxime.belair@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1515
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2025-02-05 16:44:42 +00:00
Maxime Bélair
fc3f27e255 spread: Add support for EXPECT_DENIALS in profile tests
Introduce the EXPECT_DENIALS environment variable for profile tests.
Each line of EXPECT_DENIALS is a regex that must match an AppArmor
denial generated by the corresponding test, and conversely every
denial must be matched by one of the regexes.

This ensures that problematic behaviors are correctly blocked and logged.
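
A rough sketch of that check (illustrative Python, not the actual spread harness; the function and variable names are made up):

```
import os
import re

def check_denials(denial_lines):
    """denial_lines: AppArmor 'DENIED' log lines captured for this test."""
    patterns = [p for p in os.environ.get("EXPECT_DENIALS", "").splitlines() if p]
    # every expected regex must match at least one denial...
    unmatched_patterns = [p for p in patterns
                          if not any(re.search(p, d) for d in denial_lines)]
    # ...and every denial must be covered by at least one expected regex
    unexpected = [d for d in denial_lines
                  if not any(re.search(p, d) for p in patterns)]
    return not unmatched_patterns and not unexpected
```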

Signed-off-by: Maxime Bélair <maxime.belair@canonical.com>
2025-02-05 10:02:21 +01:00
John Johansen
4831a854fe Merge Initial profile for tar binary
Profile for `tar` package.

In order to test this, I've diffed the output of the `tar` testsuite with and without the profile:

```
sudo apt build-dep tar
apt source tar
cd tar-*/
./configure
cd tests/
./testsuite > without_profile.log
apparmor_parser ~/tar
./testsuite > with_profile.log
diff without_profile.log with_profile.log # should not output anything
echo $? # should be zero
```

Additionally, [the testsuite available on QRT](https://git.launchpad.net/qa-regression-testing/tree/scripts/test-tar.py) for the `tar` package should continue to pass after loading the profile.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1453
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2025-02-03 21:46:36 +00:00
Octavio Galland
c9cfbb4668 restrict networking to localhost 2025-02-03 16:33:13 -03:00
Octavio Galland
38399e7720 disallow ${HOME}/bin 2025-02-03 16:32:58 -03:00
John Johansen
4765bcd7bc Merge parser: misc fixes on apparmor.d man page
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1516
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2025-01-31 21:54:08 +00:00
Georgia Garcia
998ee0595e parser: misc fixes on apparmor.d man page
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2025-01-31 18:23:14 -03:00
Zygmunt Krynicki
54561af112 Merge tests/spread: fix debian system name
Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1511
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Zygmunt Krynicki <me@zygoon.pl>
2025-01-30 16:22:26 +00:00
Zygmunt Krynicki
39cd3f6f21 Merge tests: unify formatting of .gitlab-ci.yml
We had some mixture of indent styles.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1510
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Zygmunt Krynicki <me@zygoon.pl>
2025-01-30 16:22:12 +00:00
Zygmunt Krynicki
d4582f232f tests: unify formatting of .gitlab-ci.yml
We had some mixture of indent styles.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-30 08:02:33 +01:00
Zygmunt Krynicki
8967dee5b9 tests/spread: fix debian system name
Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-29 19:45:45 +01:00
John Johansen
d482aab419 Merge tests: mark more regression test as known-failures
A number of tests are failing, and since spread does not contain a native
XFAIL facility, we have to maintain silent-failure handling
ourselves. A few of those have been fixed since the first iteration of
this patch. The remaining known failures are being fixed.

Later on I would like to separate XFAIL from SKIP so that if a test is
known to exercise a kernel feature unavailable on the given system, the
test is just not executed.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1483
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2025-01-29 09:05:19 +00:00
Zygmunt Krynicki
219626c503 Merge utils: adjusts aa-notify tests to handle Python 3.13+
Python 3.13 changes the formatting of long-short option pairs that use a
meta-variable. Up until 3.13 the meta-variable was repeated; since the
Python change [1] the meta-variable is only printed once.

[1] https://github.com/python/cpython/pull/103372

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1495
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Zygmunt Krynicki <me@zygoon.pl>
2025-01-28 22:24:25 +00:00
Zygmunt Krynicki
0acc138712 utils: abbreviate delta for Python 3.12 argparse
Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-28 23:01:42 +01:00
Zygmunt Krynicki
6336465edf utils: adjusts aa-notify tests to handle Python 3.13+
Python 3.13 changes the formatting of long-short option pairs that use a
meta-variable. Up until 3.13 the meta-variable was repeated; since the
Python change [1] the meta-variable is only printed once.

[1] https://github.com/python/cpython/pull/103372
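
As a hedged illustration of the difference (the option and metavar below are made up, not aa-notify's real options): Python <= 3.12 renders an option with a metavar as `-w NUM, --wait NUM`, while 3.13+ renders it as `-w, --wait NUM`, so a test asserting on help output can accept either form depending on the running interpreter:

```
import re
import sys

# Hypothetical -w/--wait option with metavar NUM, for illustration only.
expected = (r"-w NUM, --wait NUM" if sys.version_info < (3, 13)
            else r"-w, --wait NUM")

def help_mentions_wait(help_text):
    return re.search(expected, help_text) is not None
```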

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-28 20:49:16 +01:00
Zygmunt Krynicki
32bf95bb1e tests: exclude debian systems from toybox test
This is so that we get a baseline that passes, to enable testing in CI/CD,
but also to spark a discussion around what to do with a profile that
indirectly relies on a kernel feature that is not available on a given
system.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-28 14:57:32 +01:00
Zygmunt Krynicki
b0422d5572 tests: mark more regression test as known-failures
A number of tests are failing, and since spread does not contain a native
XFAIL facility, we have to maintain silent-failure handling
ourselves. A few of those have been fixed since the first iteration of
this patch. The remaining known failures are being fixed.

Later on I would like to separate XFAIL from SKIP so that if a test is
known to exercise a kernel feature unavailable on the given system, the
test is just not executed.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-28 14:56:02 +01:00
Georgia Garcia
6405608442 Merge tests: add fuse-overlayfs to cloud-init
This is a dependency of the overlayfs_fuse regression test.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1509
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2025-01-28 12:10:03 +00:00
Zygmunt Krynicki
237b5c0f73 tests: add fuse-overlayfs to cloud-init
This is a dependency of the overlayfs_fuse regression test.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-28 11:28:44 +01:00
John Johansen
3b7ee81f04 Merge utils: test: account for last cmd format change in test-aa-notify
The "last" command, which was supplied by util-linux in older Ubuntu
versions, is now supplied by wtmpdb in Oracular and Plucky. Unfortunately,
this changed the output format and broke our column based parsing.

While the wtmpdb upstream has added json support at
https://github.com/thkukuk/wtmpdb/issues/20, we cannot use it because
we need to support systems that do not have this new feature added.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1508
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>
2025-01-27 19:55:48 +00:00
John Johansen
265a1656d1 Merge libapparmor: fixes to the SWIG bindings for SWIG 4.3 and later
Unfortunately we are affected by the backwards-incompatible change introduced by https://github.com/swig/swig/pull/2907

This MR contains fixes to keep the Python-side API the same on systems using SWIG 4.3 or later, e.g. Ubuntu Plucky.

Fixes https://gitlab.com/apparmor/apparmor/-/issues/475.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

Closes #475
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1504
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2025-01-27 19:51:20 +00:00
Georgia Garcia
5f06df3868 Merge utils: look for 'file' class when parsing logs
Since kernel commit 8c4b785a86be the class field is available to check
which class a log entry belongs to. This fixes cases where the logparser
is not able to distinguish between network and file operations.

This issue does not manifest in apparmor-4.0 and earlier
because we did not process audit logs then.

Fixes: https://gitlab.com/apparmor/apparmor/-/issues/478
Reported-by: vyomydv vyom.yadav@canonical.com
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>

This patch should be cherry-picked to apparmor-4.1

Closes #478
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1507
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2025-01-27 19:34:25 +00:00
Georgia Garcia
af6dfe5b81 utils: look for 'file' class when parsing logs
Since kernel commit 8c4b785a86be the class field is available to check
which class a log entry belongs to. This fixes cases where the logparser
is not able to distinguish between network and file operations.

This issue does not manifest in apparmor-4.0 and earlier
because we did not process audit logs then.
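
A hedged sketch of the idea (the `class="file"` key matches the audit records quoted elsewhere in this log, but the real logparser code may differ):

```
def is_file_event(event):
    """event: dict of key=value fields parsed from an AppArmor audit record."""
    cls = event.get("class")
    if cls is not None:
        # Newer kernels (since 8c4b785a86be) report the rule class directly.
        return cls == "file"
    # Fall back to older heuristics when the class field is absent.
    return "name" in event and "requested_mask" in event
```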

Fixes: https://gitlab.com/apparmor/apparmor/-/issues/478
Reported-by: vyomydv vyom.yadav@canonical.com
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2025-01-27 15:43:23 -03:00
Ryan Lee
afd6aa0581 utils: test: account for last cmd format change in test-aa-notify
The "last" command, which was supplied by util-linux in older Ubuntu
versions, is now supplied by wtmpdb in Oracular and Plucky. Unfortunately,
this changed the output format and broke our column based parsing.

While the wtmpdb upstream has added json support at
https://github.com/thkukuk/wtmpdb/issues/20, we cannot use it because
we need to support systems that do not have this new feature added.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-27 10:15:45 -08:00
Octavio Galland
76647b33b1 typo in tar test 2025-01-27 10:10:29 -03:00
John Johansen
c81eacacac Merge aa-load documentation improvements
This MR includes copyediting of the `aa-load --help` text as well as a man page based on the help text.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1505
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2025-01-25 01:39:42 +00:00
Ryan Lee
ee8300545e Write a man page for aa-load based on the help text
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-24 16:03:20 -08:00
Ryan Lee
6592daff90 Copyedit the help text for aa-load
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-24 16:02:05 -08:00
Ryan Lee
3fa40935f5 Replace aa_find_mountpoint cstring_output_allocate due to $isvoid issue
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-24 15:00:15 -08:00
Ryan Lee
1620887463 Replace simple %append_output uses with ISVOID helpers for SWIG 4.3
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-24 14:59:39 -08:00
Ryan Lee
1b46ab10fd Create %append_output compatibility wrappers for SWIG 4.3
Unfortunately we are affected by the backwards-incompatible change introduced by https://github.com/swig/swig/pull/2907

These wrappers will be needed to fix tests on systems using SWIG 4.3 or later, e.g. Ubuntu Plucky.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-24 14:59:39 -08:00
Georgia Garcia
dfb7abf2a6 Merge Set up overlayfs_fuse test that uses a FUSE implementation of overlayfs
This also reorganizes the overlayfs tests slightly in order to maximize code reuse between the old test and the new one.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1503
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2025-01-24 20:28:57 +00:00
Ryan Lee
be38da7570 Move most file setup and creation to before the overlay mount call
kernel overlayfs propagates the changes, while fuse_overlayfs doesn't

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-24 09:00:28 -08:00
Ryan Lee
ed8b6cb663 Add fuse_overlayfs to apt dependency list of Gitlab CI test-build-regression
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-24 09:00:26 -08:00
Ryan Lee
9e05668d5a Set up an overlayfs_fuse regression test by using the other path of the overlayfs_common.inc helper
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-24 08:59:06 -08:00
Ryan Lee
a0f551d5b7 Wire up the kernel/fuse argument switch in overlayfs_common.inc regression tests
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-24 08:59:06 -08:00
Ryan Lee
9413658277 Move overlayfs test into include helper and wrap in overlayfs_kernel
By making the test a file to be included as a helper, we can reuse most of the code for a fuse_overlayfs test without copy-pasting

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-24 08:59:06 -08:00
Octavio Galland
8efe442717 tar test fixes 2025-01-24 09:30:49 -03:00
John Johansen
dcce4bc62f Merge Update man apparmor.d to highlight pivot_root limitation
As pointed out by https://bugs.launchpad.net/apparmor/+bug/2087875 ,
profile transitions with pivot_root are currently not supported on any
kernel.

This commit makes this limitation more obvious to users.

Signed-off-by: Maxime Bélair <maxime.belair@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1436
Approved-by: Ryan Lee <rlee287@yahoo.com>
Merged-by: John Johansen <john@jjmx.net>
2025-01-24 11:18:44 +00:00
Zygmunt Krynicki
4c8c4a1d77 Merge tests: unify CI/CD preparation phase
We now have the GitLab CI/CD pipeline co-existing with spread, coupled with
image-garden and the cloud-init profile defined for each distribution.

To avoid duplicating the list of required dependencies, re-use the cloud-init
profile as the reference list of dependencies (the superset of build
and test) to install.

In addition to the dependency list, the build_all job now re-uses the spread
prepare section in a similar fashion. If it builds in spread, it should
build in CI as well.

As a small quality-of-life improvement, a collapsible
section around dependency installation should make reading job logs
easier.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1494
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Zygmunt Krynicki <me@zygoon.pl>
2025-01-24 07:25:09 +00:00
Georgia Garcia
17a09d2987 Merge Allow overrides and preservation of some environment variables in utils make check
Our Ubuntu packaging builds Python-enabled libapparmor in the directories `libapparmor/libapparmor.python[version_identifier]`. In order for the utils' `make check` to pick up the correct libapparmor during the Ubuntu build process, we need the ability to override its search path. This patch introduces a `LIBAPPARMOR_BASEDIR` variable to allow for that.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1497
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2025-01-23 19:11:07 +00:00
Georgia Garcia
625a919bb8 Merge utils: test: various fixes for utils testing in Ubuntu packaging
The first patch fixes a `test-aa-notify.py` `TypeError` when `APPARMOR_NOTIFY` and `__AA_CONFDIR` are both specified, which is something that was broken all this time.

The second patch ensures that `aa-notify` in the test suite is run using the same Python interpreter that the test suite itself is run with, which is necessary for testing the utils under different Pythons.

The third patch does analogous modifications to the minitools tests that launch `aa-audit`, `aa-complain`, etc.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1498
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2025-01-23 19:07:00 +00:00
Ryan Lee
e32c267332 utils: test: use sys.executable when launching minitools in tests
This is analogous to the previous patch's change to the aa-notify tests.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-23 10:37:57 -08:00
Octavio Galland
7a7f88ddf3 tar spread smoke-test 2025-01-23 14:03:23 -03:00
Octavio Galland
e296d5b04c follow rule ordering convention 2025-01-23 13:50:27 -03:00
Octavio Galland
790c795f90 always execute binaries under current profile 2025-01-23 13:50:27 -03:00
Octavio Galland
ef0d5b4cde allow networking 2025-01-23 13:50:27 -03:00
Octavio Galland
667816fe43 explicitly allow binaries from certain directories 2025-01-23 13:50:27 -03:00
Octavio Galland
e7807b3761 allow any program to be executed 2025-01-23 13:50:27 -03:00
Octavio Galland
29637f19c9 allow more binaries and capabilities 2025-01-23 13:50:27 -03:00
Octavio Galland
5271d6a74a Fix syntax error, use l to specify executable link 2025-01-23 13:50:27 -03:00
Octavio Galland
486c8c26fe Initial profile for tar binary 2025-01-23 13:50:27 -03:00
Georgia Garcia
c80ef6fb59 Merge tests: skip profile tests on Fedora
Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1501
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2025-01-23 13:54:06 +00:00
Georgia Garcia
e750c6c66c Merge tests: add tool for observing the profile of a given command
Using gdb in batch mode, put a breakpoint on _start and spawn the
process. Then, using the built-in Python interpreter, print the
confinement label of the process and terminate everything.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1500
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2025-01-23 13:52:24 +00:00
Zygmunt Krynicki
065c1d67ca tests: skip profile tests on Fedora
Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-23 14:31:42 +01:00
Zygmunt Krynicki
f98c1098b0 Merge tests: add httpd-devel and pam-devel to fedora cloud-init profile
Those are needed to build the two extension modules.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1499
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Zygmunt Krynicki <me@zygoon.pl>
2025-01-23 13:07:45 +00:00
Zygmunt Krynicki
ffd38b7ac4 tests: measure toybox with actual-profile-of
This should be a more readable example to follow in other tests.  The
toybox test was special given the fact that it is a shell itself, and is
fairly programmable.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-23 13:53:45 +01:00
Zygmunt Krynicki
23df780544 tests: add tool for observing the profile of a given command
Using gdb in batch mode, put a breakpoint on _start and spawn the
process. Then, using the built-in Python interpreter, print the
confinement label of the process and terminate everything.
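
The tool itself drives gdb; as a rough illustration of what it reports, the confinement label of a process can be read from procfs (a sketch, not the actual script):

```
import os

def apparmor_label(pid):
    """Return the AppArmor confinement label of a running process, or None."""
    for path in (f"/proc/{pid}/attr/apparmor/current", f"/proc/{pid}/attr/current"):
        try:
            with open(path) as f:
                return f.read().strip()
        except OSError:
            continue
    return None

print(apparmor_label(os.getpid()))  # e.g. "unconfined" or "/usr/bin/toybox (enforce)"
```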

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-23 13:53:45 +01:00
Zygmunt Krynicki
a2ace0d5d7 tests: add httpd-devel and pam-devel to fedora cloud-init profile
Those are needed to build the two extension modules.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-23 13:48:18 +01:00
Zygmunt Krynicki
29c618a11b tests: put logs from apt-get in a collapsed section
This is a small quality-of-life improvement when looking at CI/CD logs
on GitLab.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-23 12:37:10 +01:00
Zygmunt Krynicki
f01a40a77c tests: unify CI/CD preparation phase
We now have the GitLab CI/CD pipeline co-existing with spread, coupled with
image-garden and the cloud-init profile defined for each distribution.

To avoid duplicating the list of required dependencies, re-use the cloud-init
profile as the reference list of dependencies (the superset of build
and test) to install.

In addition to the dependency list, the build_all job now re-uses the spread
prepare section in a similar fashion. If it builds in spread, it should
build in CI as well.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-23 12:37:10 +01:00
Georgia Garcia
25676c4694 Merge tests: add integration test for toybox
This is something that was done interactively as a part of a training
session.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1487
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2025-01-22 20:39:15 +00:00
Ryan Lee
77cabf7dba utils: test: use sys.executable when launching aa-notify in tests
If the tests are running under a different Python, then the aa-notify bin should use the same Python

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-22 12:04:35 -08:00
Ryan Lee
3365e492a7 utils: test: test-aa-notify: Ensure aanotify_bin is always a list
os.environ returns a string, but the default value is a list, and the concatenation of __AA_CONFDIR assumes a list.
Thus, if APPARMOR_NOTIFY and __AA_CONFDIR were both specified, this would error out.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-22 11:52:38 -08:00
Ryan Lee
90143494fc Allow overrides and preservation of some environment variables in utils make check
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-22 11:10:41 -08:00
Zygmunt Krynicki
1462e1c4b0 Merge tests: enable build tests on Fedora 41
Tests that interact with the kernel are skipped (tests/regression and
tests/snapd) but everything else is green. Most of the tests are
actually passing. The only exception is the aa-notify test that was
broken by Python 3.13 stdlib change. The fix for that has been posted
separately.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1496
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Zygmunt Krynicki <me@zygoon.pl>
2025-01-22 11:06:31 +00:00
Zygmunt Krynicki
03215f46c4 Merge tests: build PAM and apparmor modules in spread
Those fell under the radar during the initial push to expose all of
the tests to spread.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1493
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Zygmunt Krynicki <me@zygoon.pl>
2025-01-22 11:06:17 +00:00
Zygmunt Krynicki
ef880d325f Merge tests: switch tumbleweed to boot with security=apparmor
The openSUSE project has decided to switch to security=selinux by
default. For the purpose of continuing to test AppArmor on the
distribution, alter the cloud-init profile to switch to booting with
security=apparmor.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1492
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Zygmunt Krynicki <me@zygoon.pl>
2025-01-22 11:06:01 +00:00
Zygmunt Krynicki
7ce6819c53 tests: enable build tests on Fedora 41
Tests that interact with the kernel are skipped (tests/regression and
tests/snapd) but everything else is green. Most of the tests are
actually passing. The only exception is the aa-notify test that was
broken by Python 3.13 stdlib change. The fix for that has been posted
separately.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-21 20:59:40 +01:00
Zygmunt Krynicki
be47567d27 tests: add integration test for toybox
Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-21 12:34:42 +01:00
Zygmunt Krynicki
2ab2c8f8a1 tests: add suite with profile tests
Hopefully more and more profiles will come with smoke tests. Since the
pattern of those tests is likely to be very similar (compile profile,
run some programs, remove profile) it will be good to check if the
profile caused any denials to be logged. Having this at the suite
level should make writing actual tests easier.

The prepare-each and restore-each logic compile the profile, check for
errors and finally remove the profile. The debug-each logic shows the
program name (with full path).

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-21 12:34:42 +01:00
Zygmunt Krynicki
5c17df0219 profiles: attach toybox profile to /usr/bin/toybox
This is the actual path used on Debian and derivatives.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-21 11:16:24 +01:00
Zygmunt Krynicki
42c8745e73 tests: build PAM and apparmor modules in spread
Those fell under the radar during the initial push to expose all of
the tests to spread.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-21 01:54:24 +01:00
Zygmunt Krynicki
2b44cc09a6 tests: switch tumbleweed to boot with security=apparmor
The openSUSE project has decided to switch to security=selinux by
default. For the purpose of continuing to test AppArmor on the
distribution, alter the cloud-init profile to switch to booting with
security=apparmor.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-21 01:52:59 +01:00
Georgia Garcia
85d57b7f06 Merge tests: pair of cleanups for the coverity job
Avoid a deprecated feature and reduce YAML complexity.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1491
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2025-01-20 18:12:56 +00:00
Zygmunt Krynicki
5abbf31ce1 tests: inline .send-to-coverity command
There is no other use of this yaml fragment in the project so inline it
for simplicity.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-20 14:11:17 +01:00
Zygmunt Krynicki
61d75a11ef tests: rewrite coverity job to avoid deprecated "only" feature
The "only" feature has been deprecated for a while. The standard
replacement is the rules:if feature.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-20 14:09:45 +01:00
Christian Boltz
817d5eed1d Merge postfix-showq profile fix
Allow reading queue ID files from /var/spool/postfix/incoming/.

Similar to 3c2aae3.

Example error:

```
type=AVC msg=audit(1737094364.337:12023): apparmor="DENIED" operation="open" profile="postfix-showq" name="/var/spool/postfix/incoming/B7E4C12C784A" pid=17879 comm="showq" requested_mask="r" denied_mask="r" fsuid=91 ouid=91FSUID="postfix" OUID="postfix"
```

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1489
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2025-01-18 13:08:58 +00:00
pyllyukko
ba765e0eab postfix-showq profile fix
Allow reading queue ID files from /var/spool/postfix/incoming/.

Similar to 3c2aae3.
2025-01-18 09:46:24 +02:00
Georgia Garcia
a12004f96c Merge regression tests: fix the overlayfs mv test failures
The file being moved from needs rw permissions and not just w permissions.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1488
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2025-01-17 13:09:38 +00:00
Ryan Lee
63c944a01a regression tests: fix the overlayfs mv test failures
The file being moved from needs rw permissions and not just w permissions

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-16 18:10:06 -08:00
John Johansen
f171f5ebc8 Merge tests: snapd/mount-control: assorted fixes
This makes the snapd/mount-control test pass on all the currently tested systems. Note that there's a somewhat complex problem with the new mount APIs (https://lwn.net/Articles/753473/) from 2018 that are now being used on, for example, Debian 13.

I will need to make similar changes to the profiles generated by snapd, so any insight on what to do there is strongly appreciated.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1479
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2025-01-16 19:35:35 +00:00
John Johansen
2e42c33f48 Merge parser: add backend pipeline ordering info to README
Add a basic overview of the ordering of the backend of the compiler
and which stages specific dump info lines up with.

Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1470
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2025-01-16 19:32:26 +00:00
John Johansen
4fc3aacc8f Merge aa-notify: Use a quieter default behavior
In aa-notify, notifications are now merged by default to reduce the risk
of flooding.

Additionally, we now use an exponential backoff algorithm for the
merging time period. If there are several notifications within a time
period, the period doubles, up to a maximum. The time period shrinks if
there is no notification, and is reset if the user clicks on a
notification.
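
A minimal sketch of such a backoff for the merge window (constants and names are illustrative, not aa-notify's actual implementation):

```
class MergeWindow:
    """Grow the notification-merge period while traffic is high, shrink when quiet."""

    def __init__(self, base=5.0, maximum=300.0):
        self.base = base
        self.maximum = maximum
        self.period = base

    def on_notifications_in_period(self, count):
        if count > 1:
            self.period = min(self.period * 2, self.maximum)   # flood: back off
        elif count == 0:
            self.period = max(self.period / 2, self.base)      # quiet: shrink

    def on_user_click(self):
        self.period = self.base                                # reset on interaction
```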
    
Signed-off-by: Maxime Bélair <maxime.belair@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1468
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2025-01-16 19:31:19 +00:00
Maxime Bélair
7049d7b0c6 aa-notify: Use a quieter default behavior 2025-01-16 19:31:18 +00:00
Christian Boltz
692e6850ba Merge Add support for lastlog2 to get last login
lastlog2 is the 2038-safe replacement for wtmp, and in the meantime
became part of util-linux.

Adjust get_last_login_timestamp() to use the lastlog2 database
(/var/lib/lastlog/lastlog2.db) if it exists, and adjust
get_last_login_timestamp_lastlog2() to actually do that.

(If lastlog2.db doesn't exist, aa-notify will read wtmp as usual.)

Unfortunately lastlog2 doesn't have a way to get machine-readable output
(for example json), therefore - after trying and failing to parse the
lastlog2 output - directly read from lastlog2.db. Let's hope the format
never changes ;-)

Fixes: https://bugzilla.opensuse.org/show_bug.cgi?id=1228378

Fixes: https://bugzilla.opensuse.org/show_bug.cgi?id=1216660

Fixes: https://gitlab.com/apparmor/apparmor/-/issues/372

I propose this patch for 4.0 and master.

Closes #372
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1282
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2025-01-14 18:56:21 +00:00
Christian Boltz
45e4c27cf0 Add support for lastlog2 to get last login
lastlog2 is the 2038-safe replacement for wtmp, and in the meantime
became part of util-linux.

This commit switches from trying to parse the lastlog2 output to
directly reading lastlog2.db with sqlite3.

Adjust get_last_login_timestamp() to use the lastlog2 database
(/var/lib/lastlog/lastlog2.db) if it exists, and adjust
get_last_login_timestamp_lastlog2() to actually do that.

(If lastlog2.db doesn't exist, aa-notify will read wtmp as usual.)

Unfortunately lastlog2 doesn't have a way to get machine-readable output
(for example json), therefore - after trying and failing to parse the
lastlog2 output - directly read from lastlog2.db. Let's hope the format
never changes ;-)
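
A hedged sketch of reading the database directly (the table and column names below are assumptions about the lastlog2 schema, not verified against util-linux):

```
import sqlite3

LASTLOG2_DB = "/var/lib/lastlog/lastlog2.db"

def get_last_login_timestamp_lastlog2(username, db=LASTLOG2_DB):
    """Return the last-login time for a user, or None if not recorded."""
    con = sqlite3.connect(db)
    try:
        # Assumed schema: lastlog2(Name TEXT, Time INT, TTY TEXT, RemoteHost TEXT, ...)
        row = con.execute("SELECT Time FROM lastlog2 WHERE Name = ?",
                          (username,)).fetchone()
        return row[0] if row else None
    finally:
        con.close()
```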

Fixes: https://bugzilla.opensuse.org/show_bug.cgi?id=1228378

Fixes: https://bugzilla.opensuse.org/show_bug.cgi?id=1216660

Fixes: https://gitlab.com/apparmor/apparmor/-/issues/372
2025-01-14 19:36:43 +01:00
Christian Boltz
371a9ff9ec Add support for lastlog2 to get last login
lastlog2 is the 2038-safe replacement for wtmp, and in the meantime
became part of util-linux.

Adjust get_last_login_timestamp() to use lastlog2 if it exists, and add
get_last_login_timestamp_lastlog2() to actually do that.

(If lastlog2 doesn't exist, aa-notify will read wtmp as usual.)

Unfortunately lastlog2 doesn't have a way to get machine-readable output
(for example json), therefore we have to parse the output that is meant
for humans. Let's hope the format never changes ;-)

(The alternative would have been to use sqlite3 to once more read the
data behind the official program's back, but that was already a bad idea
for wtmp, therefore I decided against it.)

Fixes: https://bugzilla.opensuse.org/show_bug.cgi?id=1228378

Fixes: https://bugzilla.opensuse.org/show_bug.cgi?id=1216660

Fixes: https://gitlab.com/apparmor/apparmor/-/issues/372
2025-01-14 19:36:42 +01:00
Christian Boltz
7d537efcb0 Rename get_last_login_timestamp to get_last_login_timestamp_wtmp
... and add a wrapper function with the old name

Also rename the tests to the new name, and create a copy with the
original name. The copy will be adjusted to also check/expect lastlog2
results in a later commit.
2025-01-14 19:36:40 +01:00
Christian Boltz
9629bc8b6f Merge Support unloading profiles in kill and prompt mode
... in aa-teardown (actually everything that uses rc.apparmor.functions)
and aa-remove-unknown.

Fixes: https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/2093797

I propose this fix for 3.0..master, since the apparmor.d manpage in all these branches mentions the `kill` flag.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1484
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Approved-by: Ryan Lee <rlee287@yahoo.com>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2025-01-14 18:24:40 +00:00
Christian Boltz
1c2d79de7f Support unloading profiles in kill and prompt mode
... in aa-teardown (actually everything that uses rc.apparmor.functions)
and aa-remove-unknown.

Fixes: https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/2093797
2025-01-13 18:07:39 +01:00
Zygmunt Krynicki
43355fada5 Merge tests: add dosfstools to image-garden cloud-init
The package is required by the file_unbindable_mount regression test.
To properly re-generate affected images, please update image-garden
to a version containing 9714dc45d0ef06862ffe7037193dc43386db48ea
(Tie .user-data and .meta-data to MAKEFILE_LIST).

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1480
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Zygmunt Krynicki <me@zygoon.pl>
2025-01-12 21:02:39 +00:00
John Johansen
c57d727482 parser: add backend pipeline ordering info to README
Add a basic overview of the ordering of the backend of the compiler
and which stages specific dump info lines up with.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-10 23:35:57 -08:00
Christian Boltz
b4cb33b488 Merge tests: regression: separate bash traces from errors
The BASH_XTRACEFD variable can be used to redirect "set -x" traces
to a dedicated file. We can use it to split the execution trace
(what has actually happened) from the failure messages.

On a failing test this does provide improved clarity when debugging
interactively with "spread -debug". On non-interactive runs the now
shorter error list is also implicitly printed.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1481
Approved-by: Christian Boltz <apparmor@cboltz.de>
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2025-01-10 20:48:16 +00:00
Christian Boltz
7fa4b82235 Merge tests: run autotools test verbosely
Instead of showing just the summary, display the actual test log as well.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1482
Approved-by: Christian Boltz <apparmor@cboltz.de>
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2025-01-10 20:47:54 +00:00
Zygmunt Krynicki
fa33d7199b tests: run autotools test verbosely
Instead of showing just the summary, display the actual test log as well.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-10 14:04:55 +01:00
Zygmunt Krynicki
2c2e0478f8 tests: regression: separate bash traces from errors
The BASH_XTRACEFD variable can be used to redirect "set -x" traces
to a dedicated file. We can use it to split the execution trace
(what has actually happened) from the failure messages.

On a failing test this does provide improved clarity when debugging
interactively with "spread -debug". On non-interactive runs the now
shorter error list is also implicitly printed.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-10 12:40:17 +01:00
Zygmunt Krynicki
699b598593 tests: sort cloud-init package lists
Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-10 12:38:30 +01:00
Zygmunt Krynicki
215fab71a5 tests: add dosfstools to image-garden cloud-init
The package is required by the file_unbindable_mount regression test.
To properly re-generate affected images, please update image-garden
to a version containing 9714dc45d0ef06862ffe7037193dc43386db48ea
(Tie .user-data and .meta-data to MAKEFILE_LIST).

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-10 12:37:49 +01:00
Zygmunt Krynicki
cff25b8d17 tests: snapd/mount-control: allow paths used on openSUSE
In addition allow linking to libeconf, generalize locale paths to cover
values other than C.UTF-8 and allow reading system-wide locale.alias and
gconv modules.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-10 11:09:36 +01:00
Zygmunt Krynicki
8ed810756b tests: snapd/mount-control: stop/start auditd
This is needed on openSUSE Tumbleweed.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-10 11:09:19 +01:00
Zygmunt Krynicki
5556de53c0 tests: snapd/mount-control: allow new mount APIs
This is not the best of fixes, but it seems that on Debian 13, with the new
libmount calling fsopen/fsconfig/move_mount, the current apparmor mount
rule is insufficient to allow the call to go through.

The key problems are:
- the fstype is not visible to the LSM
- the source directory is an empty string
- the mount is moved to its final position

I don't know the extent of "new" mount API coverage by LSM hooks, but
I think we should either synthesize new permissions from old rules,
e.g. match each of the system calls against the mount class
expression, or somehow allow the exceptions better.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-10 11:08:26 +01:00
Zygmunt Krynicki
32116a50b0 tests: snapd/mount-control: fix bash syntax.
This masked failures that were already occurring.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2025-01-10 11:08:07 +01:00
John Johansen
72f9952a5f Merge parser: add a hfa dump that matches the renumbered chfa
Construction of the chfa can reorder states from the numbering given
during hfa construction, because of reordering for better
compression, dead state removal to ensure better packing, etc.

This however means the dfa dump is difficult (though possible using
multiple dumps) to match up to the chfa that the kernel is
using. Make this easier by making the dfa dump able to take
the remapping as input, and provide an option to dump the
chfa-equivalent hfa.

Renumbered states will show up as {new <== {orig}} in the dump

Eg.
```
-D dfa-states
{1} <== priority (allow/deny/prompt/audit/quiet)
{5} 0 (0x 4/0//0/0/0)

{1} perms: none
    0x2 -> {5}  0 (0x 4/0//0/0/0)
    0x4 -> {5}  0 (0x 4/0//0/0/0)
    \a 0x7 -> {5}  0 (0x 4/0//0/0/0)
    \t 0x9 -> {5}  0 (0x 4/0//0/0/0)
    \n 0xa -> {5}  0 (0x 4/0//0/0/0)
    \  0x20 -> {5}  0 (0x 4/0//0/0/0)
    4 0x34 -> {3}
{3} perms: none
    0x0 -> {6}
{6} perms: none
    1 0x31 -> {5}  0 (0x 4/0//0/0/0)
```

```
-D dfa-compressed-states
{1} <== priority (allow/deny/prompt/audit/quiet)
{2 == {5}} 0 (0x 4/0//0/0/0)

{1} perms: none
    0x2 -> {2 == {5}}  0 (0x 4/0//0/0/0)
    0x4 -> {2 == {5}}  0 (0x 4/0//0/0/0)
    \a 0x7 -> {2 == {5}}  0 (0x 4/0//0/0/0)
    \t 0x9 -> {2 == {5}}  0 (0x 4/0//0/0/0)
    \n 0xa -> {2 == {5}}  0 (0x 4/0//0/0/0)
    \  0x20 -> {2 == {5}}  0 (0x 4/0//0/0/0)
    4 0x34 -> {3}
{3} perms: none
    0x0 -> {4 == {6}}
{4 == {6}} perms: none
    1 0x31 -> {2 == {5}}  0 (0x 4/0//0/0/0)
```

Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1474
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>
2025-01-09 19:04:13 +00:00
John Johansen
cd8b75abc0 Merge parser: convert uint to unsigned int
As reported in https://gitlab.com/apparmor/apparmor/-/merge_requests/1475
uint requires the inclusion of sys/types.h for use in musl libc.
Including that would be fine but since it is only used for the
cast for the owner type comparison, just convert to use a more
standard type.

Reported-by: @fossd <fossdd@pwned.life>
Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1478
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2025-01-09 10:40:27 +00:00
John Johansen
ff03702fde parser: convert uint to unsigned int
As reported in https://gitlab.com/apparmor/apparmor/-/merge_requests/1475
uint requires the inclusion of sys/types.h for use in musl libc.
Including that would be fine but since it is only used for the
cast for the owner type comparison, just convert to use a more
standard type.

Reported-by: @fossd <fossdd@pwned.life>
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 02:15:00 -08:00
John Johansen
e5a960a685 Merge cupsd: Add /etc/paperspecs and convert to @etc_ro/rw
I had this message in my log

```
Dez 30 08:14:46 kernel: audit: type=1400 audit(1735542886.787:307): apparmor="DENIED" operation="open" class="file" profile="/usr/sbin/cupsd" name="/etc/paperspecs" pid=317509 comm="cupsd" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
```

If the second commit is bad, I can drop it.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1472
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2025-01-09 09:48:48 +00:00
John Johansen
0eca26c6c2 Merge Allow write access to /run/user/*/dconf/user
Gtk applications like Firefox request write access to the file
`/run/user/1000/dconf/user`. The code in `dconf_shm_open` opens the file
with `O_RDWR | O_CREAT`.

4057f8c84f/shm/dconf-shm.c (L68)

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1471
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2025-01-09 09:46:55 +00:00
John Johansen
50452e1147 parser: add a hfa dump that matches the renumbered chfa
Construction of the chfa can reorder states from the numbering given
during hfa construction, because of reordering for better
compression, dead state removal to ensure better packing, etc.

This however means the dfa dump is difficult (though possible using
multiple dumps) to match up to the chfa that the kernel is
using. Make this easier by making the dfa dump able to take the
remapping as input, and provide an option to dump the chfa-equivalent
hfa.

Renumbered states will show up as {new <== {orig}} in the dump

Eg.
-D dfa-states
{1} <== priority (allow/deny/prompt/audit/quiet)
{5} 0 (0x 4/0//0/0/0)

{1} perms: none
    0x2 -> {5}  0 (0x 4/0//0/0/0)
    0x4 -> {5}  0 (0x 4/0//0/0/0)
    \a 0x7 -> {5}  0 (0x 4/0//0/0/0)
    \t 0x9 -> {5}  0 (0x 4/0//0/0/0)
    \n 0xa -> {5}  0 (0x 4/0//0/0/0)
    \  0x20 -> {5}  0 (0x 4/0//0/0/0)
    4 0x34 -> {3}
{3} perms: none
    0x0 -> {6}
{6} perms: none
    1 0x31 -> {5}  0 (0x 4/0//0/0/0)

-D dfa-compressed-states
{1} <== priority (allow/deny/prompt/audit/quiet)
{2 == {5}} 0 (0x 4/0//0/0/0)

{1} perms: none
    0x2 -> {2 == {5}}  0 (0x 4/0//0/0/0)
    0x4 -> {2 == {5}}  0 (0x 4/0//0/0/0)
    \a 0x7 -> {2 == {5}}  0 (0x 4/0//0/0/0)
    \t 0x9 -> {2 == {5}}  0 (0x 4/0//0/0/0)
    \n 0xa -> {2 == {5}}  0 (0x 4/0//0/0/0)
    \  0x20 -> {2 == {5}}  0 (0x 4/0//0/0/0)
    4 0x34 -> {3}
{3} perms: none
    0x0 -> {4 == {6}}
{4 == {6}} perms: none
    1 0x31 -> {2 == {5}}  0 (0x 4/0//0/0/0)

Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-03 14:18:50 -08:00
John Johansen
3034c0772e Merge parser: improve unreachable state removal
Currently states are added to the reachable set when they are popped
from the workqueue. This however can result in states being
added to the work queue multiple times and reprocessed.

```
Eg. If state 2 has the transitions, and 9 is not in the reachable set
  a -> 9
  b -> 9
  c -> 9
  d -> 9
  e -> 3
```

then 9 will get pushed onto the workqueue 4 times. Even worse, other states
on the workqueue may also add state 9 to the workqueue because it has
not been added to the reachable set.

Instead, add states to the reachable set when they are added to the
workqueue. The first encounter with a state will result in it being
marked reachable, and all other encounters will see that it is already in
the set and not add it to the workqueue.

Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1473
Acked-by: seth.arnold@gmail.com
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2025-01-02 09:36:48 +00:00
John Johansen
4fb3dbc7b3 parser: improve unreachable state removal
Currently states are added to the reachable set when they are popped
from the workqueue. This however can result in states being
added to the work queue multiple times and reprocessed.

Eg. If state 2 has the transitions, and 9 is not in the reachable set
  a -> 9
  b -> 9
  c -> 9
  d -> 9
  e -> 3

then 9 will get pushed onto the workqueue 4 times. Even worse, other states
on the workqueue may also add state 9 to the workqueue because it has
not been added to the reachable set.

Instead, add states to the reachable set when they are added to the
workqueue. The first encounter with a state will result in it being
marked reachable, and all other encounters will see that it is already in
the set and not add it to the workqueue.
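
In graph-traversal terms this is the standard mark-visited-on-enqueue pattern; a small Python sketch (not the parser's C++ code):

```
from collections import deque

def reachable_states(start, transitions):
    """transitions: dict mapping state -> iterable of successor states."""
    reachable = {start}          # add to the set when enqueuing...
    work = deque([start])
    while work:
        state = work.popleft()
        for nxt in transitions.get(state, ()):
            if nxt not in reachable:   # ...so each state is queued at most once
                reachable.add(nxt)
                work.append(nxt)
    return reachable
```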

Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-01 17:16:17 -08:00
Jörg Sommer
318fb30446 Allow write access to /run/user/*/dconf/user
Gtk applications like Firefox request write access to the file
`/run/user/1000/dconf/user`. The code in `dconf_shm_open` opens the file
with `O_RDWR | O_CREAT`.

4057f8c84f/shm/dconf-shm.c (L68)
2024-12-31 10:23:50 +01:00
Jörg Sommer
c3af6228fd cupsd: convert profile to @etc_ro/rw
While cups itself writes to /etc, the others require only read-only access
and might therefore live in /usr/etc.
2024-12-31 10:12:16 +01:00
Jörg Sommer
97d7fa3f5f cupsd: Add /etc/paperspecs read access
Cups uses libpaper which accesses /etc/paperspecs.

ce42216e2e/lib/libpaper.c.in.in (L419)
2024-12-31 10:12:16 +01:00
John Johansen
40e9b2a961 Merge parser: fix priority for file rules.
Fix priority for file rules, add the ability to dump the dfa at different stages, and update and fix the equality tests.

This in particular adds the ability to better debug the equality tests. Instead of just piping the parser output into the hash, it creates a tmp dir and drops the binary files there so they can be manually examined. It adds new options, particularly the -r option, which makes the tests exit on first failure so it is easier to isolate and examine a failure.

Eg.
```
./equality.sh -r -d -v
Equality Tests:
................................................................................................................................................................................................................................
Binary inequality 'priority=-1'x'priority=-1' change_hat rules automatically inserted
FAIL: Hash values match
parser: ./../apparmor_parser -QKSq --features-file=./features_files/features.all
known-good (ee4f926922ecd341f1389a79dd155879) == profile-under-test (ee4f926922ecd341f1389a79dd155879) for the following profiles:
known-good         /t { priority=-1 owner /proc/[0-9]*/attr/{apparmor/,}current a, ^test { priority=-1 owner /proc/[0-9]*/attr/{apparmor/,}current a, /f r, }}
profile-under-test /t { priority=-1 owner /proc/[0-9]*/attr/{apparmor/,}current w, ^test { priority=-1 owner /proc/[0-9]*/attr/{apparmor/,}current w, /f r, }}

  files retained in "/tmp/eq.3240859-deHu10/"
```

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1455
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-12-30 09:26:45 +00:00
John Johansen
8c799f4eec Merge Allow python cache under the @{HOME}/.cache/ dir
Starting with Python 3.8, you can use the PYTHONPYCACHEPREFIX environment
variable to define a cache directory for Python [1]. I think most people would set
this dir to @{HOME}/.cache/python/ , so the python abstraction should allow
writing to this location.

[1]: https://docs.python.org/3/using/cmdline.html#envvar-PYTHONPYCACHEPREFIX

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1467
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-12-24 22:33:28 +00:00
John Johansen
027b508da8 parser: equality tests: convert to using sha256sum for the hashes
There is a general industry-wide effort to move off of md5 and even
sha1 (see recent kernel changes). While in this particular use case it
doesn't make a difference (besides slightly lowering the chance of a
collision), switch to sha256sum to make sure our code doesn't depend on
tools that are deprecated and slated for removal.
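
The comparison itself boils down to hashing the compiled output; a trivial Python equivalent of piping the parser output through sha256sum (flags taken from the equality test output quoted earlier, and assuming the parser reads the profile from stdin as those tests do):

```
import hashlib
import subprocess

def profile_hash(parser, profile_text, features_file=None):
    """Compile a profile to stdout (-S) and hash the binary output."""
    cmd = [parser, "-QKSq"]
    if features_file:
        cmd.append("--features-file=" + features_file)
    out = subprocess.run(cmd, input=profile_text.encode(),
                         capture_output=True, check=True).stdout
    return hashlib.sha256(out).hexdigest()
```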

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-12-23 23:36:55 -08:00
John Johansen
bf7b80c478 parser: equality tests: fix r carve out tests
Similar to the deny x permission tests, the tests that test carving
out r permissions need to be updated to be conditional on what
priority is being used on the rule.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-12-23 23:36:55 -08:00
John Johansen
25f16b239d parser: equality tests: update deny x perm carve out test
With priority rules, deny does not carve out permissions from the
higher priority rule. Technically it doesn't from lower priority either
as it completely overrides them, but that case already results in
an inequality so does not cause the tests to fail.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-12-23 23:36:55 -08:00
John Johansen
369029dc07 parser: equality tests: fix cx specified profile transition
cx rules using a specified profile transition may be emulated by
using px and a hierarchical profile name. That is

  cx -> b

may be transformed into

  px -> profile//b

which will generate an xtable entry of

  profile//b

which means the previous patch using

  pivot_root -> b,

to reliably add b to the xtable will not cover this case.

Transition to using two pivot_root rules to provide the xtable entries:
  pivot_root /a -> b,
  pivot_root /c -> /t//b,

The paths /a and /c are irrelevant as long as they don't overlap
with the generic globbing expression in the test; two table
entries will be generated. We guarantee no overlap by converting the

  /** to /f**

Also, the xtable-reserving rules are moved to the end of the profile so
the table order can be reliably created. A follow-on MR around xtable
improvements should add reliability to xtable order.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-12-23 23:36:55 -08:00
John Johansen
84650beb2f parser: equality tests: fix equality failure due to xtable
Exec rules that specify a specific target profile generate an entry
in the xtable. The test entries containing " -> b" are an example of
this.

Currently the parser allocates the xtable entry before priorities are
applied in the backend or minimization is done. Furthermore, the
parser does not refcount the xtable entry, so it does not know when it
is no longer referenced.

The equality tests generate rules that are designed to completely
override a lower priority rule and remove it. Eg.

  /t { priority=1 /* ux, /f px -> b, }

and then compare the generated profile to the functionally equivalent
profile, eg.

  /t { priority=1 /* ux, }

This verifies the overridden rule has been completely removed.
Unfortunately the compilation does not remove the unused xtable entry
for the specified transition, causing the equality comparison to fail.

Ideally the parser should be fixed so unused xtable entries are removed,
but that should be done in a different MR, and have its own test.

To fix the current tests, add another rule that creates an xtable entry
for the same target and cannot be overridden by the x rule, using
pivot_root. The parser will dedup the xtable entry, resulting in the
known and test profiles both having the same xtable. So the test will
pass and meet the original goal of verifying that the x rule is overridden
and eliminated.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-12-23 23:36:28 -08:00
John Johansen
cca842b897 parser: equality tests: rework and add debug features
Failed equality tests can be hard to debug. The profiles aren't always
enough to figure out what is going on. Add several options that will
help in debugging, and developing new tests.

Add switches and arg parsing.

Add the ability to run tests individually

Add a -r flag to allow retaining the test and output
similar to the regression tests, so the exact output from the
tests can be examined.

Add a -d flag to dump dfa build information.

Allow overriding the parser, features, and description for a given
test run.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-12-22 15:24:46 -08:00
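An illustrative invocation using the switches named above (the flag letters come from the commit message; the exact argument handling may differ):

```
# Retain the generated test files and output (-r) and dump dfa build
# information (-d) for the equality test run.
./equality.sh -r -d
```
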
John Johansen
05ddc61246 parser: equality tests: wrap test run in function
In preparation for some additional abilities wrap the current tests in
a function.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-12-22 15:24:39 -08:00
John Johansen
31e60baab2 parser: equality tests: consistently dump error output to stderr
printf of failure/error info should be going to stderr. Unfortunately
the test has a mix of 2>&1 and 1>&2. Having a mix is just wrong; we
could standardize on either, but since the info is error info, 1>&2
seems to be the better choice.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-12-22 15:02:04 -08:00
John Johansen
57c57f198c parser: equality tests: fix failing overlapping x rule tests
The test was passing because of the file priority always being zero
bug, resulting in the priority rule always being correctly combined
with the specific-match x rule instead of overriding it.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-12-22 15:02:04 -08:00
John Johansen
4b410b67f1 parser: equality tests: fix change_hat priority test
The test was passing because of the file priority always being zero
bug: the supplied rule always had the same priority as the implied
rule, resulting in binary_equality always passing even though the
specified priority should have resulted in a failure.

Fix this by checking whether the priority is equal to that of the
implied rule; otherwise it should result in an inequality.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-12-22 15:02:04 -08:00
John Johansen
d275dfdd42 parser: equality tests: output parser, config and features info
When there is a failure, output the exact call info used to invoke the
parser, to facilitate manually recreating the test.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-12-22 15:02:04 -08:00
John Johansen
fcee32a37e parser: equality tests: convert xequality tests to equality
With the file priority fix the xequality (expected equal but known
failure) tests are now passing. So convert them to regular equality
tests.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-12-22 15:02:04 -08:00
John Johansen
5d2a38e816 parser: add some new dfa dump options.
The dfa goes through several stages during the build. Allow dumping it
at the various stages instead of only at the end.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-12-22 15:02:04 -08:00
John Johansen
9d5b86bc9d parser: fix priority for file rules.
File rules could drop priority info when a rule matched another rule
that was the same except for having a different priority. For now,
fix this by treating them as different rules.

The priority was also being dropped when add_prefix was used to
add the priority during the parse, resulting in file rules always
getting a default priority of 0.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-12-22 15:02:04 -08:00
John Johansen
8e431ebcd9 Merge regression tests: make loop device size more generous
Depending on the system, copying echo to the loop device fails because the echo binary is too large.
Especially on systems where echo is just a symlink to coreutils (e.g. busybox), as opposed to echo being its own binary, 16k is just not enough.
2M seems fine on my system, but this might need an even higher value depending on which coreutils other people actually run.

The crash in question:
```
cp: error writing '/tmp/sdtest.3937422-31490-Bxvi6g/mount_target/echo': No space left on device
Fatal Error (file_unbindable_mount): Unexpected shell error. Run with -x to debug
rm: cannot remove '/tmp/sdtest.3937422-31490-Bxvi6g/mount_target': Device or resource busy
```

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1469
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-12-20 08:53:41 +00:00
John Johansen
cd4bb05f20 Merge Add overlayfs regression tests
These tests exercise various common file operations on files in an overlayfs.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1461
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: John Johansen <john@jjmx.net>
2024-12-20 08:25:58 +00:00
Grimmauld
1cc2a3bd86 regression tests: make loop device size more generous
Depending on the system, copying echo to the loop device fails because the echo binary is too large.
Especially on systems where echo is just a symlink to coreutils (e.g. busybox), 16k is just not enough.
2M seems fine on my system, but this might need an even higher value depending on which coreutils other people actually run.
The actual loop device needs to be larger to properly fit the allocated file size. Testing shows 4M is sufficient, but this is basically arbitrary.
2024-12-19 23:48:43 +01:00
John Johansen
ba60bfff85 Merge Write a regression test for mediating file access in private mounts
This test, as is, emits an execname warning which is due to a bug in the `prologue.inc` infrastructure (see !1450 for a fix to this issue).

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1448
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-12-19 19:44:51 +00:00
John Johansen
c489631770 Merge aa-status: fix json generation
- previously, aa-status --json --show profiles would return non-standard json
- adding the --pretty flag would crash completely
- closes #470

Things done:
- removed trailing ", " in json generation
- generate json separator (", ") for each new json field
  (profiles/processes) after the header if json is enabled

Tested on NixOS and apparmor 4.0.3 base, but should work on any version the patch applies on.

Closes #470
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1451
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-12-19 19:44:13 +00:00
John Johansen
59957aa1d8 Merge fixes on the testing infrastructure
This MR is meant to resolve warnings such as "Warning: execname '/home/username/Documents/apparmor/tests/regression/apparmor/file_unbindable_mount': no such file or directory" when running tests like the one in the current version of !1448.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1450
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-12-19 19:38:00 +00:00
John Johansen
50f260df51 Merge profiles: transmission-gtk needs attach_disconnected
From LP: #2085377, when using ip netns to torrent traffic through a
VPN, attach_disconnected is needed by the policy because ip netns sets
up a mount namespace.

Fixes: https://bugs.launchpad.net/bugs/2085377
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1395
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-12-19 19:15:41 +00:00
John Johansen
3ed5adb665 Merge Allow make-* flags with remount operations
While the mount syscall documentation disallows this, the kernel silently
ignores make-* flags when doing a remount, and real applications were
passing this conflicting set of flags. Because changing the kernel to
reject this combination would break userspace, we should allow them
instead.

For an example: see https://bugs.launchpad.net/apparmor/+bug/2091424.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1466
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-12-19 17:27:53 +00:00
Mikhail Morfikov
03b5a29b05 Allow python cache under the @{HOME}/.cache/ dir
Starting with Python 3.8, you can use the PYTHONPYCACHEPREFIX environment
variable to define a cache directory for Python [1]. I think most people would set
this dir to @{HOME}/.cache/python/, so the python abstraction should allow
writing to this location.

[1]: https://docs.python.org/3/using/cmdline.html#envvar-PYTHONPYCACHEPREFIX
2024-12-19 09:33:13 +01:00
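For illustration, how a user would typically point Python at that cache location (a sketch; the path matches the one discussed above):

```
# Send compiled bytecode to ~/.cache/python/ instead of per-directory
# __pycache__ directories (supported since Python 3.8).
export PYTHONPYCACHEPREFIX="$HOME/.cache/python"
python3 -c 'import sys; print(sys.pycache_prefix)'
```
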
Ryan Lee
83270fcf68 Add a regression test for allowing rprivate with conflicting options
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-18 10:28:49 -08:00
Georgia Garcia
67ee5f8b39 Merge Add separator between mount flags in dump_flags
The previous code would concatenate all of them together without spacing.
While dump_flags and the corresponding operator<< function aren't currently used,
this will help for when dump_flags is used to debug parser problems.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1465
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-12-18 15:26:29 +00:00
Georgia Garcia
a3299ba133 Merge Use "profile//hat" in storage
TL;DR: Replace `aa[profile][hat]` with `active_profiles['profile//hat']` as a preparation to get rid of `aa`'s limits, especially to enable handling nested children.

Since this is an extremely shortened summary, I recommend checking the individual commits for a readable and understandable diff and more details.

Note that this MR is "just" a preparation - nested children are not supported yet. Also, `include` still uses the old structure. Both will be separate MRs - this one is already big enough ;-)

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1360
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-12-18 12:07:01 +00:00
Christian Boltz
58664c106c read_profile: rename active_profile parameter
... to is_active_profile to prevent confusion with active_profiles
2024-12-17 22:51:12 +01:00
Christian Boltz
0551be806e ask_the_questions: use full_profile
... instead of combine_profname([profile, hat])
2024-12-17 22:51:12 +01:00
Christian Boltz
728a8717e9 aa.py: get rid of 'aa'
... which is no longer used - everything is in active_profiles now :-)
2024-12-17 22:51:12 +01:00
Christian Boltz
f66ada256a Drop now-unused split_to_merged() and its tests 2024-12-17 22:51:12 +01:00
Christian Boltz
695e472b2c Switch aa-mergeprof from aa to active_profiles 2024-12-17 22:51:12 +01:00
Christian Boltz
531f47676d ProfileList: add get_all_profiles()
... and a test for it
2024-12-17 22:51:12 +01:00
Christian Boltz
c93d560f89 Switch test-libapparmor-test_multi.py from aa to active_profiles
Note that the old code assigned dummy_prof to aa[profile][hat] and
active_profiles[profile] (= the main/parent profile) - which is
different when testing a log for a child profile.

aa[profile][hat] was the wrong place - but since we used exactly that
again when checking for added exec rules, this error was hidden.

Now that the test is switched to using active_profiles, only check the
main profile for exec rules added by ask_exec(). (This will need to be
adjusted when we add a test for exec rules/events in nested children, but
not earlier ;-)
2024-12-17 22:51:12 +01:00
Christian Boltz
61a7ba2822 Switch usage of 'aa' to active_profiles
This is mostly a search-and-replace patch.

In most cases, that means replacing `aa[profile][hat]` with
`active_profiles[full_profile]`.

In cases where the main/parent profile is meant, switch from
`aa[profile][profile]` to `active_profiles[profile]`.

Checks like `p in apparmor.aa` that check if a (main) profile exists
become `active_profiles.profile_exists(p)`.

write_profile() gets changed to loop over
`active_profiles.get_profile_and_childs()` which makes the code simpler.

`split_to_merged(aa)` becomes just `active_profiles`.

The only change that is not search-and-replace style is in
write_piece(). It expects a dict (not a ProfileList), therefore adjust
serialize_profile() so that it always hands over a dict.
2024-12-17 22:51:12 +01:00
Christian Boltz
c5bbe79338 replace original_aa with original_profiles
This also changes the internal structure - instead of the nested dict
original_aa[profile][hat], we now have a ProfileList original_profiles[profile//hat].
2024-12-17 22:51:12 +01:00
Christian Boltz
c5e495c56d ProfileList: add replace_profile()
... and some tests for it.
2024-12-17 22:51:12 +01:00
Christian Boltz
a37c65957f ProfileList: add __getitem__()
... and add some tests for it.
2024-12-17 22:51:12 +01:00
Christian Boltz
b66dfd8bfb Use active_profiles.profile_exists()
... to test if a given profile or hat exists
2024-12-17 22:51:12 +01:00
Christian Boltz
0da12fe7cb Use extra_profiles.profile_exists()
... instead of accessing the internal storage directly.
2024-12-17 22:51:12 +01:00
Christian Boltz
3a02d6d14c Use temporary object instead of working in aa[profile][hat]
... in ask_addhat() and ask_the_questions().

Also deduplicate some code in ask_the_questions().
2024-12-17 22:51:12 +01:00
Christian Boltz
f5ed9cffe3 serialize_profile(): simplify and cleanup
Drop `comment.replace('\\n', '\n')` because that doesn't make sense and
doesn't change anything - not even a comment that contains the literal
string '\n' (backslash + letter n).

Besides that, get rid of the 'string' variable and store everything in
'data'.
2024-12-17 22:51:11 +01:00
Christian Boltz
2e8a75195c De-duplicate code in read_profile() 2024-12-17 22:51:11 +01:00
Christian Boltz
578ab8da9d Store child profiles and hats in active_profiles
... including just-created child profiles and hats.

Also ensure that serialize_profile() doesn't print them out as child
profiles AND external hats.

This commit includes a bugfix for a rare corner case:
Since create_new_profile() can return more than one profile if the
program has required_hats, add all of them to active_profiles.
(aa only got the expected profile added, but not the required_hats.)
2024-12-17 22:51:11 +01:00
Christian Boltz
fe9b2542ca ProfileList: add profile_exists()
... and extend the existing tests for add_profile to also check
profile_exists().
2024-12-17 22:51:11 +01:00
Christian Boltz
a0e6fbe32a ProfileStorage: store parent profile
... and extend the tests to get some coverage.
2024-12-17 22:51:11 +01:00
Christian Boltz
792d1a5568 ProfileList addProfile(): always hand over ProfileStorage
... and make it non-optional

Note that read_profile() in aa.py skips child profiles and hats,
therefore active_profiles for now only contains the main profiles.
2024-12-17 22:51:11 +01:00
Ryan Lee
52babe8054 Allow make-* flags with remount operations
While the mount syscall documentation disallows this, the kernel silently
ignores make-* flags when doing a remount, and real applications were
passing this conflicting set of flags. Because changing the kernel to
reject this combination would break userspace, we should allow them
instead.

For an example: see https://bugs.launchpad.net/apparmor/+bug/2091424.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-17 11:59:54 -08:00
Ryan Lee
96718ea4d1 Add separator between mount flags in dump_flags
The previous code would concatenate all of them together without spacing.
While dump_flags and the corresponding operator<< function aren't currently used,
this will help for when dump_flags is used to debug parser problems.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-17 11:50:35 -08:00
Georgia Garcia
f9edc7d4c1 profiles: transmission-gtk needs attach_disconnected
From LP: #2085377, when using ip netns to torrent traffic through a
VPN, attach_disconnected is needed by the policy because ip netns sets
up a mount namespace.

Fixes: https://bugs.launchpad.net/bugs/2085377
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-12-17 09:32:18 -03:00
Georgia Garcia
b2f713dd83 Merge Python SWIG binding fixes (API breaking)
Changes to Python SWIG bindings that are breaking changes but that fix bindings that were previously unusable.

This MR also depends on !1334 and !1337 being merged first, though ~~I can rebase this one if necessary~~ this MR has now been rebased after those two were merged.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1338
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-12-17 12:24:15 +00:00
Christian Boltz
f2c398405b Merge Update fs type comment in swap regression test
As per https://gitlab.com/apparmor/apparmor/-/merge_requests/1463#note_2259888640: this really should have been a part of !1463, except that cboltz only pointed this out after the MR was already merged. Better late than never, nevertheless.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1464
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2024-12-16 21:14:00 +00:00
Ryan Lee
5cd3362a81 Update fs type comment in swap regression test
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-16 12:51:30 -08:00
Ryan Lee
1d3d48cc2a Shellcheck pass over overlayfs.sh
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-16 09:52:38 -08:00
Ryan Lee
b24a820e7a Extend overlayfs test with more file ops
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-16 09:52:38 -08:00
Ryan Lee
8212fa8be4 Add more operations to the regression test complain binary
This extra functionality is to be used in a different regression test that reuses the binary

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-16 09:52:38 -08:00
Ryan Lee
e0127767fd Add the overlayfs regression test to task.yaml
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-16 09:52:38 -08:00
Ryan Lee
1cb11f5a89 Add the overlayfs regression test to the Makefile
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-16 09:52:38 -08:00
Ryan Lee
2fdb5c799c Add a basic overlayfs regression test
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-16 09:52:38 -08:00
Ryan Lee
fa58d3611a Shellcheck fix pass over file_unbindable_mount test
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-13 12:37:50 -08:00
Georgia Garcia
6d7b5df947 Merge Fix swap regression test on btrfs
As per !1462 it turns out that the swap regression test on btrfs also needs special casing in order to work properly. This is an analogous patch to check for btrfs.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1463
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-12-13 20:32:01 +00:00
Ryan Lee
c768a7dc79 Add file_unbindable_mount to regression task.yaml
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-13 12:27:47 -08:00
Ryan Lee
049b35dff0 Add file_unbindable_mount to regression test Makefile
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-13 12:27:47 -08:00
Ryan Lee
f249c6d58f Write a regression test for mediating file access in unbindable mounts
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-13 12:27:47 -08:00
Ryan Lee
90c7af69c5 Fix swap regression test on btrfs
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-13 12:13:55 -08:00
Georgia Garcia
e8f1ac4791 Merge fix swap test on zfs file system
Swap on ZFS is *weird*. Getting it working needs some special casing, see e.g. https://askubuntu.com/questions/1198903/can-not-use-swap-file-on-zfs-files-with-holes

Currently, the swap regression test fails on my system (with /tmp in zfs):
```bash
tests/regression/apparmor ❯ ./swap.sh
Error: swap failed. Test 'SWAPON (unconfined)' was expected to 'pass'. Reason for failure 'FAIL: swapon /tmp/sdtest.872368-19048-kN4FN2/swapfile failed - Invalid argument'
Error: swap failed. Test 'SWAPOFF (unconfined)' was expected to 'pass'. Reason for failure 'FAIL: swapoff /tmp/sdtest.872368-19048-kN4FN2/swapfile failed - Invalid argument'
swapon: /tmp/sdtest.872368-19048-kN4FN2/swapfile: skipping - it appears to have holes.
Fatal Error (swap): Unexpected shell error. Run with -x to debug
```

However, just doing a file mount does make the test work on zfs, similar to how it is done with tmpfs. This means we don't need any special-casing for zfs beyond what is already there for working around (similar) tmpfs limitations.

Also, while researching this, it is possible a similar patch is needed for btrfs, but I currently don't have an easy way to test that.
This is non-breaking for anyone *not* using zfs, and it is currently broken with zfs anyway.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1462
Approved-by: Ryan Lee <rlee287@yahoo.com>
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-12-13 20:06:56 +00:00
Grimmauld
9a1b538298 fix swap test on zfs file system 2024-12-13 15:35:47 +01:00
Christian Boltz
b3de4ef022 Merge limit buildpath.py setuptools version check to the relevant bits
Previously, this check would fail if the setuptools version contained non-integers.
On my system, that is the case: `setuptools.__version__` is `'75.1.0.post0'`.
I believe it is entirely fair to just check the relevant bits and refuse to continue if those cannot be checked properly.
Having some extra slug on the version should not immediately cause issues (e.g. the `post0` here, or slugs like `beta`, `alpha` and the like).
Probably only very few systems are running setuptools with weird version info, but supporting this is a simple one-line change I figured I might as well MR.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1460
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2024-12-11 21:25:00 +00:00
Grimmauld
3302ae98e4 limit buildpath.py setuptools version check to the relevant bits
Previously, this check would fail if the setuptools version contained non-integers.
On my system, that is the case: `setuptools.__version__` is `'75.1.0.post0'`.
I believe it is entirely fair to just check the relevant bits and refuse to continue if those cannot be checked properly.
But having something extra on the version should not immediately cause issues (e.g. the `post0` here, or slugs like `beta`, `alpha` and the like).
Probably only very few systems are running setuptools with weird version info, but supporting this doesn't cost much, I believe.
2024-12-11 16:30:19 +01:00
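For illustration, the kind of version string involved, plus a sketch of keeping only the leading numeric components (the extraction command is illustrative, not the actual buildpath.py code):

```
# On some systems the version carries a non-numeric suffix, e.g. '75.1.0.post0'.
python3 -c 'import setuptools; print(setuptools.__version__)'

# Keep only the leading dotted-integer part, e.g. '75.1.0'.
python3 -c 'import setuptools; print(setuptools.__version__)' \
    | grep -oE '^[0-9]+(\.[0-9]+)*'
```
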
Georgia Garcia
8a6eb170e1 Merge postfix-smtp profile fix
Allow locking for /var/spool/postfix/pid/unix.relay.

Example log entry: `type=AVC msg=audit(1733851239.685:8882): apparmor="DENIED" operation="file_lock" profile="postfix-smtp" name="/var/spool/postfix/pid/unix.relay" pid=14222 comm="smtp" requested_mask="k" denied_mask="k" fsuid=91 ouid=0FSUID="postfix" OUID="root"`

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1459
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-12-10 20:55:43 +00:00
pyllyukko
76dcf46d4f postfix-smtp profile fix
Allow locking for /var/spool/postfix/pid/unix.relay.
2024-12-10 19:32:49 +02:00
John Johansen
a315d89a2b Merge smbd: allow capability chown
This is needed for "inherit owner = yes" in smb.conf.

From man smb.conf:

    inherit owner (S)

    The ownership of new files and directories is normally governed by
    effective uid of the connected user. This option allows the Samba
    administrator to specify that the ownership for new files and
    directories should be controlled by the ownership of the parent
    directory.

Fixes: https://bugzilla.suse.com/show_bug.cgi?id=1234327

I propose this fix for 3.x, 4.x and master.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1456
Approved-by: Ryan Lee <rlee287@yahoo.com>
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-12-10 09:34:03 +00:00
John Johansen
60f1b55ab5 Merge Use MS_SYNCHRONOUS instead of MS_SYNC
MS_SYNC is a flag for msync(2) while MS_SYNCHRONOUS is a flag for mount(2).
The header used to define MS_SYNC but IMO this is confusing since that's an
unrelated flag.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1458
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-12-10 08:39:18 +00:00
Zygmunt Krynicki
d164e877f5 Use MS_SYNCHRONOUS instead of MS_SYNC
MS_SYNC is a flag for msync(2) while MS_SYNCHRONOUS is a flag for mount(2).
The header used to define MS_SYNC but IMO this is confusing since that's an
unrelated flag.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2024-12-10 09:09:45 +01:00
John Johansen
239ae21b69 Merge Allow spread to use locally-provided kernel
By placing a bzImage into the top level of the AppArmor git repository one can
instruct spread and image-garden to use that image instead of booting
traditionally with an EFI / full disk image pair.

In addition, make error handling in qemu more robust, so failures are both
surfaced and do not cause endless attempts to allocate.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1452
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-12-10 00:47:48 +00:00
Zygmunt Krynicki
7031b5aeee Allow spread to use locally-provided kernel
By placing a bzImage into the top level of the AppArmor git repository one can
instruct spread and image-garden to use that image instead of booting
traditionally with an EFI / full disk image pair.

In addition, make error handling in qemu more robust, so failures are both
surfaced and do not cause endless attempts to allocate.

Please update image-garden to at least 5a00ead9964df6463e19432ae50e7760fc6da755

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2024-12-10 00:50:02 +01:00
Grimmauld
9967ba9873 aa-status: fix json output with --count flag 2024-12-09 23:58:04 +01:00
Christian Boltz
d305028502 smbd: allow capability chown
This is needed for "inherit owner = yes" in smb.conf.

From man smb.conf:

    inherit owner (S)

    The ownership of new files and directories is normally governed by
    effective uid of the connected user. This option allows the Samba
    administrator to specify that the ownership for new files and
    directories should be controlled by the ownership of the parent
    directory.

Fixes: https://bugzilla.suse.com/show_bug.cgi?id=1234327
2024-12-09 20:45:42 +01:00
Georgia Garcia
11d121409d Merge tests: add regression tests for snapd mount-control
The test adds a very small and simple smoke test that shows that a mount rule
with both fstype and options allows mounts to be performed on a real running
kernel.

The test is structured in a way that should make it easy to extend with new
variants (flags, fstype) in the future.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1445
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-12-09 19:25:14 +00:00
Christian Boltz
dfe771602d Merge postfix-showq profile fix
Allow reading queue ID files from /var/spool/postfix/hold/.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1454
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2024-12-09 18:56:30 +00:00
pyllyukko
3c2aae3a22 postfix-showq profile fix
Allow reading queue ID files from /var/spool/postfix/hold/.
2024-12-09 19:23:34 +02:00
Georgia Garcia
b4adff2ce0 tests: fix profile name when wrapper is specified
When settest was called with two parameters, one for the test name and
the other for the test wrapper/binary, the profile created with
genprofile would show the test name, causing an error if the file
didn't exist.

Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-12-09 12:16:13 -03:00
Georgia Garcia
0307619ed9 tests: add option to append a profile to a profile already generated
Some of the tests using the --stdin option of mkprofile.pl are adding
more than one profile at a time. Whenever a profile is created in the
test, its name is added to the file profile.names so the test
infrastructure can tell if the profile is loaded or removed when
appropriately. The issue is that the name of the second profile
created by --stdin is not added, so these checks are not applied.

This patch adds the option of appending a second profile (not rules).
The option --append was used instead of a short -A because the short
options are arguments of mkprofile.pl, which --append is not.

Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-12-09 12:16:13 -03:00
Zygmunt Krynicki
1f60021979 tests: add regression tests for snapd mount-control
The test adds a very small and simple smoke test that shows that a mount rule
with both fstype and options allows mounts to be performed on a real running
kernel.

The test is structured in a way that should make it easy to extend with new
variants (flags, fstype) in the future.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2024-12-09 14:55:03 +01:00
Grimmauld
4f006a660c aa-status: fix json generation
- previously, aa-status --json --show profiles would return non-standard json
- adding the --pretty flag would crash completely
- closes #470

Things done:
- removed trailing ", " in json generation
- generate json separator (", ") for each new json field
  (profiles/processes) after the header if json is enabled

Tested on NixOS and apparmor 4.0.3 base, but should work on any version the patch applies on.
2024-12-09 10:57:58 +01:00
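A quick way to verify the fix, assuming python3 is available for JSON validation; the aa-status flags are the ones mentioned above:

```
# Fail if aa-status emits non-standard JSON, and exercise --pretty,
# which previously crashed.
aa-status --json --show profiles | python3 -m json.tool > /dev/null && echo "JSON ok"
aa-status --json --pretty > /dev/null && echo "--pretty ok"
```
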
Georgia Garcia
9cc40e2dca tests: remove outdated restriction on image name specification
Due to how the tests were implemented in the past, permissions could
be passed along with the image name, and the permission part would be
discarded. The issue is that permissions are usually separated by ':',
but namespaces also contain ':', which would cause a conflict.

Since permissions are no longer passed as part of the image name,
remove that description so profile names in namespaces can be
supported.

Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-12-06 14:25:34 -03:00
Christian Boltz
5fb91616e3 Merge python 3.13 fixes/workarounds
Fixes/workarounds for python 3.13 support.

fail.py: handle missing cgitb - workaround for https://gitlab.com/apparmor/apparmor/-/issues/447

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1439
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2024-12-05 17:35:13 +00:00
Georgia Garcia
d9304c7653 Merge Allow running tests with spread
Spread is a full-system, or integration test suite runner initially developed
to test snapd. Over time it has spread to other projects where it provides a
structured way to organize, run and debug complex full-system interactions.
Spread is documented on https://github.com/canonical/spread and is used in
production since late 2016.

Spread has a notion of backends which are responsible for allocating and
discarding test machines. For the purpose of running AppArmor regression tests,
I've combined spread with my own tool, image garden. The tool provides
off-the-shelf images, constructed on-the-fly from freely available images, and
makes them easily available to spread.

The reason for doing it this way is so that using non-free cloud systems is not
required and anyone can repeat the test process locally, on their own computer.
Vanilla spread is somewhat limited to x86-64 systems but the way I've used it
here makes it equally possible to test x86_64 *and* aarch64 systems. I've done
most of the development on an ARM single-board-computer running on my desk.

Spread requires a top-level spread.yaml file and a collection of task.yaml
files that describe individual tasks (for us, those are just tests). Tasks have
no implied dependency except that to reach a given task, spread will run all
the _prepare_ statements leading to that task, starting from the project, test
suite and then task. With proper care one can then run a specific individual
test with a one-line command, for example:

```
spread -v garden:ubuntu-cloud-24.04:tests/regression/apparmor:at_secure
```

This will prepare a fresh ubuntu-cloud-24.04 system (matching the CPU
architecture of the host), copy the project tree into the test machine, install
all the build dependencies, build all the parts of apparmor and then run one
specific variant of the regression test, namely the at_secure program.
Importantly the same test can also run on, say debian-cloud-13 (Debian Trixie),
but also, if you have a Google cloud account, on Google Compute Engine or in
one of the other backends either built into spread or available as a fork of
spread or as a helper for ad-hoc backend. Spread can also create more than one
worker per system and distribute the tests to all of the available instances.
In no way are we locking ourselves out of the ability to run our test suite on
our target of choice.

Spread has other useful switches, such as:
- `-reuse` for keeping machines around until discarded with -discard
- `-resend` for re-sending updated copy of the project (useful for -reuse)
- `-debug` for starting an interactive shell on any failure
- `-shell` for starting an interactive shell instead of the `execute` phase

This first patch contains just the spread elements, assuming that both spread
and image-garden are externally installed. A GitLab continuous integration
installing everything required and running a subset of tests will follow
shortly.

I've expanded the initial selection of systems to allow running all the tests
on several versions of Ubuntu, Debian and openSUSE, mainly as a sanity check
but also to showcase how practical spread is at covering real-world systems.

A number of tests are currently failing:

    - garden:debian-cloud-12:tests/regression/apparmor:attach_disconnected
    - garden:debian-cloud-12:tests/regression/apparmor:deleted
    - garden:debian-cloud-12:tests/regression/apparmor:unix_fd_server
    - garden:debian-cloud-12:tests/regression/apparmor:unix_socket_pathname
    - garden:debian-cloud-13:tests/regression/apparmor:attach_disconnected
    - garden:debian-cloud-13:tests/regression/apparmor:deleted
    - garden:debian-cloud-13:tests/regression/apparmor:unix_fd_server
    - garden:debian-cloud-13:tests/regression/apparmor:unix_socket_pathname
    - garden:opensuse-cloud-15.6:tests/regression/apparmor:attach_disconnected
    - garden:opensuse-cloud-15.6:tests/regression/apparmor:deleted
    - garden:opensuse-cloud-15.6:tests/regression/apparmor:e2e
    - garden:opensuse-cloud-15.6:tests/regression/apparmor:unix_fd_server
    - garden:opensuse-cloud-15.6:tests/regression/apparmor:unix_socket_pathname
    - garden:opensuse-cloud-15.6:tests/regression/apparmor:xattrs_profile

In addition, only on openSUSE, I've skipped the entire test suite of the utils
directory, as it requires python3 ttk themes, which I cannot find in packaged
form.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1432
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-12-05 14:02:31 +00:00
Zygmunt Krynicki
d27377a62f Document spread tests in README.md
Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2024-12-05 14:48:58 +01:00
Mikko Rapeli
434e34bb51 fail.py: handle missing cgitb
It's no longer in python standard library starting
at version 3.13. Fixes:

root@qemuarm64:~# aa-complain /etc/apparmor.d/*
Traceback (most recent call last):
  File "/usr/sbin/aa-complain", line 18, in <module>
    from apparmor.fail import enable_aa_exception_handler
  File "/usr/lib/python3.13/site-packages/apparmor/fail.py", line 12, in <module>
    import cgitb
ModuleNotFoundError: No module named 'cgitb'

Signed-off-by: Mikko Rapeli <mikko.rapeli@linaro.org>
2024-12-05 06:58:01 +00:00
John Johansen
e1d8bf1888 Merge Add explicit test for parser priority-based carveouts
Tests #466 but is marked as expected fail due to that bug not being resolved.

Depends on !1441 which adds the xfail infrastructure to the parser equality testing framework, and should be rebased on top of master once that MR is merged.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1443
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-12-05 06:21:59 +00:00
Zygmunt Krynicki
1df91e2c8c Third iteration of spread support
- Tests defined in utils/test are now described by a task.yaml in the same
  directory and can run concurrently across many machines.
- Tests for utils/ are now executed on openSUSE Tumbleweed since ttk themes is
  no longer a hard dependency in master.
- Tests no longer run on openSUSE Leap 15.6 due to the age of the default
  Python (3.6) and gcc/g++, and the tight integration with SWIG, which does
  not seem to support other Python versions very well. Perl hard-codes the
  old GCC for extension modules. The upcoming openSUSE Leap 16 should be
  a viable target. In the meantime we can still test everything through
  rolling-release Tumbleweed.
- Formatting of YAML files is now more uniform, at four spaces per tab.
- The run-spread.sh script is now in the root of the tree. The script allows
  running all spread tests sequentially on one system, while collecting logs
  and artifacts for convenient analysis after the fact.
- All systems are adjusted to run _four_ workers in parallel with _two_ virtual
  cores each and equipped with 1.5GB of virtual memory. This aims to best
  utilize the capacity of a typical CI worker with two to four cores and about
  8GB of available memory.
- Failing tests are marked as such, so that as a whole the entire spread suite
  can pass and be useful at catching regressions.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2024-12-05 02:17:07 +01:00
Zygmunt Krynicki
c95ac4d350 Second iteration of spread support
Compared to v1 the following improvements have been made:

- The cost of installing packages has been shifted from each startup to the image
  preparation phase, thanks to the integration of custom cloud-init profiles
  into image-garden. This has a dramatic impact on iteration time while also
  entirely removing requirement to be online to run once a prepared image is
  available.

- Support for running on Google Compute Engine has been removed since it would
  not be able to use cloud-init the same way and would currently only complicate
  the setup.

- The number of workers have been tuned for local iteration, aiming for
  comfortable work with 16GB of memory on the host. Once CI/CD pipeline
  support is introduced I will add a dedicated entry so that resources are
  utilized well both locally and when running in CI.

- The set of regression tests listed in tests/regression/apparmor/task.yaml is
  now cross-checked so introduction of a new test to the makefile there is
  automatically flagged and causes spread to fail with a clear message.

- The task tests/unit/utils has been improved to generate profiles. Thanks to
  Christian Boltz for explaining this relationship between tests.

- A number of comments have been improved and cleaned up for readability,
  accuracy and sometimes better grammar.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2024-12-05 02:17:07 +01:00
Zygmunt Krynicki
cc04181578 Allow running tests with spread
Spread is a full-system, or integration test suite runner initially developed
to test snapd. Over time it has spread to other projects where it provides a
structured way to organize, run and debug complex full-system interactions.
Spread is documented on https://github.com/canonical/spread and is used in
production since late 2016.

Spread has a notion of backends which are responsible for allocating and
discarding test machines. For the purpose of running AppArmor regression tests,
I've combined spread with my own tool, image garden. The tool provides
off-the-shelf images, constructed on-the-fly from freely available images, and
makes them easily available to spread.

The reason for doing it this way is so that using non-free cloud systems is not
required and anyone can repeat the test process locally, on their own computer.
Vanilla spread is somewhat limited to x86-64 systems but the way I've used it
here makes it equally possible to test x86_64 *and* aarch64 systems. I've done
most of the development on an ARM single-board-computer running on my desk.

Spread requires a top-level spread.yaml file and a collection of task.yaml
files that describe individual tasks (for us, those are just tests). Tasks have
no implied dependency except that to reach a given task, spread will run all
the _prepare_ statements leading to that task, starting from the project, test
suite and then task. With proper care one can then run a specific individual
test with a one-line command, for example:

```
spread -v garden:ubuntu-cloud-24.04:tests/regression/apparmor:at_secure
```

This will prepare a fresh ubuntu-cloud-24.04 system (matching the CPU
architecture of the host), copy the project tree into the test machine, install
all the build dependencies, build all the parts of apparmor and then run one
specific variant of the regression test, namely the at_secure program.
Importantly the same test can also run on, say debian-cloud-13 (Debian Trixie),
but also, if you have a Google cloud account, on Google Compute Engine or in
one of the other backends either built into spread or available as a fork of
spread or as a helper for ad-hoc backend. Spread can also create more than one
worker per system and distribute the tests to all of the available instances.
In no way are we locking ourselves out of the ability to run our test suite on
our target of choice.

Spread has other useful switches, such as:
- `-reuse` for keeping machines around until discarded with -discard
- `-resend` for re-sending updated copy of the project (useful for -reuse)
- `-debug` for starting an interactive shell on any failure
- `-shell` for starting an interactive shell instead of the `execute` phase

This first patch contains just the spread elements, assuming that both spread
and image-garden are externally installed. A GitLab continuous integration
installing everything required and running a subset of tests will follow
shortly.

I've expanded the initial selection of systems to allow running all the tests
on several versions of Ubuntu, Debian and openSUSE, mainly as a sanity check
but also to showcase how practical spread is at covering real-world systems.

A number of systems and tests are currently failing:

- garden:debian-cloud-12:tests/regression/apparmor:attach_disconnected
- garden:debian-cloud-12:tests/regression/apparmor:deleted
- garden:debian-cloud-12:tests/regression/apparmor:unix_fd_server
- garden:debian-cloud-12:tests/regression/apparmor:unix_socket_pathname
- garden:debian-cloud-13:tests/regression/apparmor:attach_disconnected
- garden:debian-cloud-13:tests/regression/apparmor:deleted
- garden:debian-cloud-13:tests/regression/apparmor:unix_fd_server
- garden:debian-cloud-13:tests/regression/apparmor:unix_socket_pathname
- garden:opensuse-cloud-15.6:tests/regression/apparmor:attach_disconnected
- garden:opensuse-cloud-15.6:tests/regression/apparmor:deleted
- garden:opensuse-cloud-15.6:tests/regression/apparmor:e2e
- garden:opensuse-cloud-15.6:tests/regression/apparmor:unix_fd_server
- garden:opensuse-cloud-15.6:tests/regression/apparmor:unix_socket_pathname
- garden:opensuse-cloud-15.6:tests/regression/apparmor:xattrs_profile
- garden:opensuse-cloud-tumbleweed:tests/regression/apparmor:attach_disconnected
- garden:opensuse-cloud-tumbleweed:tests/regression/apparmor:deleted
- garden:opensuse-cloud-tumbleweed:tests/regression/apparmor:unix_fd_server
- garden:opensuse-cloud-tumbleweed:tests/regression/apparmor:unix_socket_pathname
- garden:ubuntu-cloud-22.04:tests/regression/apparmor:attach_disconnected

In addition, only on openSUSE, I've skipped the entire test suite of the utils
directory, as it requires python3 ttk themes, which I cannot find in packaged
form.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2024-12-05 02:17:07 +01:00
Zygmunt Krynicki
9588b06e0f Allow running exactly one test in utils/test
The new check-one-test-% pattern rule allows running individual test scripts.
This allows them to be tested in parallel across many Make worker threads or
across many distinct machines with spread.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2024-12-05 02:17:07 +01:00
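An illustrative use of the new pattern rule, assuming a test script named test-example.py exists in utils/test (the script name is hypothetical):

```
# Run a single utils test instead of the whole suite; the stem after
# 'check-one-test-' selects the test script.
make -C utils/test check-one-test-test-example.py
```
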
Georgia Garcia
57bcb02629 Merge Revert "Merge Fix regression test bug involving binaries different from name"
This reverts merge request !1446 due to breakage in the aa-exec and userns regression tests.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1447
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-12-04 21:33:26 +00:00
Ryan Lee
5cfdf9867f Revert "Merge Fix regression test bug involving binaries different from name"
This reverts merge request !1446
2024-12-04 21:05:55 +00:00
Georgia Garcia
1c4322a095 Merge Fix regression test bug involving binaries different from name
When the test name and test binary differed and genprofile was used, there would be an execname warning about the original expected binary not existing. This fixes that warning.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1446
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-12-04 20:36:26 +00:00
Ryan Lee
b7669222dc Fix regression test bug involving binaries different from name
When the test name and test binary differed and genprofile was used, there would be an execname warning about the original expected binary not existing. This fixes that warning.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-04 12:19:21 -08:00
Ryan Lee
b925d8acff parser equality tests: print both profiles upon test failure
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-04 11:37:36 -08:00
Ryan Lee
7b5f4c0d6f Add explicit test for parser priority-based carveouts
These are marked as expected fail due to a bug in the parser's priority
handling.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-04 11:29:40 -08:00
John Johansen
53e322b755 Merge parser: equality tests: add the ability to have tests that are a known problem
Currently the equality tests require the tests to PASS as known equality
or inequality. Add the ability to add tests that are a known problem
and are expected to fail the equality or inequality test.

This is done by using

   verify_binary_xequality
   verify_binary_xinequality

This allows new tests to be added to document a known issue, without
having to develop the fix for the issue. The use of this facility
is expected to be temporary, so any test marked as xequality or
xinequality will be noisy but not fail the other tests until they
are fixed, at which point they will cause the tests to fail to
force them to be updated to the correct equality or inequality
test.

Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1441
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-12-04 02:55:57 +00:00
John Johansen
662a26d133 Merge profiles: update bwrap profile
Update the bwrap profile so that it will attach to application profiles
if present.

Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1435
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>
2024-12-03 21:45:49 +00:00
John Johansen
b81ea65c1c parser: equality tests: add the ability to have tests that are a known problem
Currently the equality tests require the tests to PASS as known equality
or inequality. Add the ability to add tests that are a known problem
and are expected to fail the equality or inequality test.

This is done by using

   verify_binary_xequality
   verify_binary_xinequality

This allows new tests to be added to document a known issue, without
having to develop the fix for the issue. The use of this facility
is expected to be temporary, so any test marked as xequality or
xinequality will be noisy but not fail the other tests until they
are fixed, at which point they will cause the tests to fail to
force them to be updated to the correct equality or inequality
test.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-12-03 13:44:53 -08:00
Georgia Garcia
ea7e75cde7 Merge Drop useless code in test-regex_matches.py
No need to assign a variable to itself, not even conditionally.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1442
Approved-by: Ryan Lee <rlee287@yahoo.com>
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-12-03 16:35:17 +00:00
Georgia Garcia
36ae21e3fa Merge Remove match statements in utils for older Python compatibility
Somehow the use of new match statements slipped by review despite our commitment to supporting older Python versions. Replace them with an unfortunately-needed if-elif chain.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1440
Approved-by: Christian Boltz <apparmor@cboltz.de>
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-12-03 12:20:30 +00:00
Christian Boltz
ce0cfccfd7 Drop useless code in test-regex_matches.py
No need to assign a variable to itself, not even conditionally.
2024-12-02 21:33:34 +01:00
Ryan Lee
2068ea8720 Remove match statements in utils for older Python compatibility
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-12-02 10:47:16 -08:00
Christian Boltz
93c7035148 Merge aa-remove-unknown: fix readability check [upstreaming]
I am upstreaming this patch, which has been part of the nix package of apparmor for close to a year now.
This fixes the issue at https://github.com/NixOS/nixpkgs/issues/273164 for more distros than just NixOS.
The original merge Request on the nix side patching this was https://github.com/NixOS/nixpkgs/pull/285915.
However, people had issues with gitlab, so this never hit apparmor upstream until now. This does however also mean this patch has seen production and seems to work quite well.

## Original reasoning/message of the patch author:

This check is intended for ensuring that the profiles file can actually
be opened.  The *actual* check is performed by the shell, not the read
utility, which won't even be executed if the input redirection (and
hence the test) fails.

If the test succeeds, though, using `read` here might actually
jeopardize the test result if there are no profiles loaded and the file
is empty.

This commit fixes that case by simply using `true` instead of `read`.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1438
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2024-12-01 16:11:13 +00:00
Andreas Wiese
b4aa00de51 aa-remove-unknown: fix readability check
This check is intended for ensuring that the profiles file can actually
be opened.  The *actual* check is performed by the shell, not the read
utility, which won't even be executed if the input redirection (and
hence the test) fails.

If the test succeeds, though, using `read` here might actually
jeopardize the test result if there are no profiles loaded and the file
is empty.

This commit fixes that case by simply using `true` instead of `read`.
2024-11-29 12:20:48 +01:00
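A minimal sketch of the resulting check, assuming the usual apparmorfs path (illustrative, not the verbatim aa-remove-unknown code):

```
# The input redirection is the real readability test; 'true' succeeds
# even when the profiles file is empty, unlike 'read'.
profiles="/sys/kernel/security/apparmor/profiles"
if ! true < "$profiles"; then
    echo "Cannot read $profiles" >&2
    exit 1
fi
```
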
Maxime Bélair
cf51f7aadd Update man apparmor.d to highlight pivot_root limitation
As pointed out by https://bugs.launchpad.net/apparmor/+bug/2087875 ,
profile transitions with pivot_root are currently not supported on any
kernel.

This commit makes this limitation more obvious to users.

Signed-off-by: Maxime Bélair <maxime.belair@canonical.com>
2024-11-27 17:29:42 +01:00
John Johansen
420945139c Merge Note in README which build/test steps can be meaningfully parallelized
This MR documents the lessons learned from the experiments that ultimately resulted in !1416.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1434
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-27 08:23:21 +00:00
John Johansen
74a67394ac Merge Make signal.cc:signal_map an unordered map
This also includes renaming SIGTSTP "stp" to "tstp" while preserving backwards compatibility.

Analogous to !1420.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1425
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>
2024-11-27 08:20:13 +00:00
John Johansen
c5b17d85ea Merge regression tests: Add test to check for DAC permissions to the testsuite
The regression test suite uses root with capabilities restricted in
several tests. This can cause the test suite to fail in weird and
confusing ways.

Add a test to check for DAC permissions from / to the testsuite
and abort running the tests with an error message if DAC permissions
are going to cause the test suite to fail.

Currently the test is pretty basic, but is better than nothing.

Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1411
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>
2024-11-27 08:14:11 +00:00
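An illustrative sketch of such a check, walking from the test directory up to / and warning about components that are not world-searchable (not the actual test code):

```
# Capability-restricted root needs search (x) permission on every path
# component from / down to the test tree.
dir=$(pwd)
while [ "$dir" != "/" ]; do
    perms=$(stat -c '%A' "$dir")
    case "$perms" in
        *[xt]) ;;  # others have search permission
        *) echo "WARNING: $dir is not world-searchable ($perms)" >&2 ;;
    esac
    dir=$(dirname "$dir")
done
```
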
Georgia Garcia
9d2cde168e Merge aa-notify: Adding support for merging notification.
The new flag --merge-notifications enables the merging of all
notifications from a fixed time period into a single one, thus
preventing notification flooding.
A new GUI allows users to choose either a synthetic or a comprehensive
view of the notifications.

Signed-off-by: Maxime Bélair <maxime.belair@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1324
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-11-26 18:35:37 +00:00
Maxime Bélair
f63fcdc8d2 aa-notify: Adding support for merging notification. 2024-11-26 18:35:37 +00:00
John Johansen
1979af7710 profiles: update bwrap profile
Update the bwrap profile so that it will attach to application profiles
if present.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-11-26 09:52:17 -08:00
Ryan Lee
4286d3a79f Apply 1 suggestion(s) to 1 file(s)
Co-authored-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-11-26 16:49:57 +00:00
Ryan Lee
967685352c Note in README which build/test steps can be meaningfully parallelized
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-25 17:42:41 -08:00
Christian Boltz
6f5cdb7b44 Merge Assorted fixes for test suite portability
I've been working on improved end-to-end testing of AppArmor on a number
of popular Linux distributions. My first run contains Debian, Ubuntu and openSUSE.

This branch contains three small fixes that, mainly, allow running more tests on
openSUSE Tumbleweed.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1431
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2024-11-25 21:58:08 +00:00
Zygmunt Krynicki
4caf0aff81 On openSUSE 15.6 make fails to find awk
Using this version of make:
```
GNU Make 4.2.1
Built for x86_64-suse-linux-gnu
```
I'm not entirely sure why but the alternative syntax I've used works correctly.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2024-11-25 15:05:23 +01:00
Zygmunt Krynicki
32ee85cef8 Use larger loop device in mult_mount.sh test
This fixes the test to pass on openSUSE Tumbleweed, where the small size
prevented allocation of an inode for the `lost+found` directory:

```
garden:opensuse-cloud-tumbleweed .../tests/regression/apparmor# mkfs.ext2 -F -m 0 -N 10 /tmp/sdtest.32929-21402-6x826m/image.ext3
mke2fs 1.47.0 (5-Feb-2023)
Discarding device blocks: done
Creating filesystem with 512 1k blocks and 8 inodes

Allocating group tables: done
Writing inode tables: done
ext2fs_mkdir: Could not allocate inode in ext2 filesystem while creating /lost+found
```

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2024-11-25 13:55:19 +01:00
Zygmunt Krynicki
92fcdcab9e Quote trailing backslash in test case
This fixes an error with Python 3.11:

```
test/test-parser-simple-tests.py:420:21: E502 the backslash is redundant between brackets
```

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2024-11-25 13:55:05 +01:00
Zygmunt Krynicki
4b0adc63f5 Use command -v rather than which
The which command is technically not POSIX, while command -v works everywhere. This fixes
building and running the test suite on openSUSE Tumbleweed.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2024-11-25 13:54:41 +01:00
Zygmunt Krynicki
f58fe9cd52 parser: quote BISON_MAJOR in case it is empty
On a test system without bison installed, make setup fails with:

  /bin/sh: 1: bison: not found
  /bin/sh: 1: test: -ge: unexpected operator

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
2024-11-25 13:54:25 +01:00
Christian Boltz
2b45586fa9 Merge Dovecot profile: Allow reading of /proc/sys/kernel/core_pattern
See <https://dovecot.org/bugreport.html>

(the link describes how Dovecot requires access to `core_pattern`)

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1331
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2024-11-21 20:51:13 +00:00
Georgia Garcia
a2d52fedb2 Merge tests: fix incorrect setfattr call in xattrs_profile
The file was quoted together with the following space, breaking the test.

Signed-off-by: Zygmunt Krynicki <me@zygoon.pl>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1429
Approved-by: Ryan Lee <rlee287@yahoo.com>
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-11-21 18:05:24 +00:00
Zygmunt Krynicki
8c16fb2700 tests: fix incorrect setfattr call in xattrs_profile
The file was quoted together with the following space, breaking the test.

Signed-off-by: Zygmunt Krynicki <me@zygoon.pl>
2024-11-21 18:10:14 +01:00
pyllyukko
0a5a9c465f Dovecot profile: Allow reading of /proc/sys/kernel/core_pattern
See <https://dovecot.org/bugreport.html>
2024-11-21 16:21:17 +02:00
Ryan Lee
f21a243ce4 Make signal.cc:signal_map an unordered_map
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-18 15:25:56 -08:00
Ryan Lee
5bcf05dd3d Similar name adjustments for signal.cc:signal_map
Because this is used in parsing profiles, we keep backwards compatibility by including
both names and mapping them to the same underlying signal number.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-18 15:25:27 -08:00
Ryan Lee
7a85f6282f Name adjustments to signal.cc::sig_names
Because sig_names is only used to dump parsed signals for debugging purposes,
renaming SIGTSTP "stp" to "tstp" is not a breaking change.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-18 15:21:27 -08:00
Christian Boltz
e27b0ad2b6 Merge Quote some variables in regression test suite to allow for spaces
This is not a complete fix for the spaces issue, but it is the next simple step that can be taken before the more difficult work of finding the remaining bugs in each shell script.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1424
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2024-11-18 21:30:46 +00:00
Ryan Lee
e7ec01f075 Quote some variables in regression test suite to allow for spaces
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-18 10:16:28 -08:00
John Johansen
2e77129e15 Merge parser: fix expr MatchFlag dump
Match Flags convert output to hex but don't restore it after outputting
the flag, resulting in subsequent numbers being hex encoded. This
results in dumps that can be confusing, eg.

rule: \d2  ->  \x2 priority=1001 (0x4/0)< 0x4>

rule: \d7  ->  \a priority=3e9 (0x4/0)< 0x4>

rule: \d10  ->  \n priority=3e9 (0x4/0)< 0x4>

rule: \d9  ->  \t priority=3e9 (0x4/0)< 0x4>

rule: \d14  ->  \xe priority=1001 (0x4/0)< 0x4>

where priority=3e9 is the hex encoded priority 1001.

Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1419
Approved-by: Maxime Bélair <maxime.belair@canonical.com>
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-15 17:41:13 +00:00
John Johansen
4af8acee6f Merge Initialize the backmap field in new_entry for capability addition
The old code implicitly initialized it to 0 by overwriting a
zero-initialized array terminator. Now that we construct the new entry
from scratch, we need to do this manually.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1423
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-15 02:07:44 +00:00
Ryan Lee
a513d02297 Initialize the backmap field in new_entry for capability addition
The old code implicitly initialized it to 0 by overwriting a
zero-initialized array terminator. Now that we construct the new entry
from scratch, we need to do this manually.
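
As an illustration only (the struct and field names below are hypothetical, not the parser's real capability table), the difference between the two construction styles looks roughly like this:

```
struct cap_entry {
	const char *name;
	int cap;
	int backmap;
};

// Old style: the table ended in a zero-initialized terminator entry, so
// overwriting that terminator in place left backmap implicitly 0.
static cap_entry table[16];	// static storage: every field starts out 0

// New style: an entry built from scratch must set every field explicitly.
static cap_entry make_entry(const char *name, int cap)
{
	cap_entry e;
	e.name = name;
	e.cap = cap;
	e.backmap = 0;	// the manual initialization; otherwise indeterminate
	return e;
}

int main()
{
	table[0] = make_entry("cap_example", 64);	// hypothetical capability
	return 0;
}
```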

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-14 17:14:19 -08:00
John Johansen
a422d2ea17 Merge Partial fix for regression tests if parent directory contains spaces
Most `tests/regression/apparmor/*.sh` scripts contain

    . $bin/prologue.inc

This will explode if one of the parent directories contains a space.

Minimized reproducer:

```
# cat test.sh
pwd=`dirname $0`
pwd=`cd $pwd ; /bin/pwd`
bin=$pwd
echo "pwd: $bin"
. $bin/prologue.inc
# ./test.sh
pwd: /tmp/foo bar
./test.sh: line 9: /tmp/foo: No such file or directory
```

Notice that test.sh tries to source `/tmp/foo` instead of `/tmp/foo bar/prologue.inc`.

The fix is to quote the prologue.inc path:

    . "$bin/prologue.inc"

While on it, also fix other uses of $bin - directly and indirectly - by quoting them.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1418
Approved-by: Ryan Lee <rlee287@yahoo.com>
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-15 00:46:04 +00:00
John Johansen
015b41aeb4 Merge parser: improve libapparmor_re build and dump info
Fix libapparmor_re/Makefile so it works correctly with rebuilds and improve state machine dump information, to aid with debugging of permission handling during the compile.

Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1410
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-15 00:32:30 +00:00
John Johansen
48bf4d1df9 Merge Small fixset 2 for parser code nits
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1422
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-15 00:31:57 +00:00
John Johansen
95c648c0d2 Merge Pass parallelism arg to parser_sanity test prove invocation
Unfortunately, meaningfully parallelizing parser testing is a giant task:

- Parser equality testing is a shell script based framework where adding parallelism would be a major rework.
- Parser testing using Python’s unittest framework also needs a different test runner to enable parallelism.
- Parser testing using Perl’s prove framework already supports parallelism, but adding -j to Prove does not result in speedups. Thus, I suspect most of the overhead is in spawning the processes, and that speeding this part up will require making the parser a library and testing it that way.

The commit in this MR passes a `-j` parallelism flag to Perl's prove framework, but local testing has shown that this does not create speedups, and Gitlab CI has a very modest improvement of 11 minutes 16 seconds for the parser testing stage without `-j $(nproc)` vs 10 minutes 51 seconds with `-j $(nproc)`. Instead of passing `-j $(nproc)`, pass a fixed `-j 2` to gain some speedups, as the overhead of `-j $(nproc)` on a system with more than 2 cores eats up any time gains that parallelism would have brought.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1416
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-15 00:27:31 +00:00
John Johansen
926929da16 Merge Write basic file complain-mode regression tests
The test "Complain mode profile (file exec cx permission entry)" currently will only pass on a Ubuntu Oracular system due to a kernel bugfix patch that has not yet been upstreamed or backported.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1415
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-15 00:26:40 +00:00
John Johansen
d0d6975a1f Merge Remove unused and broken dup_value_list function
This function was broken all this time: instead of duplicating each entry in the list, it would duplicate the first entry n times. Since this function is currently not used anywhere, delete it instead of fixing it.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1421
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>
2024-11-15 00:24:52 +00:00
John Johansen
b6adc6b9e5 Merge Make parser_misc keyword_table and rlimit_table unordered_maps
Besides transitioning towards C++, this also eliminates the linear scan that the functions using these arrays performed.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1420
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-15 00:24:09 +00:00
Ryan Lee
ae5a16bfd1 Use list_remove_at macro in mount.cc:extract_flags
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-14 10:36:30 -08:00
John Johansen
82e4b4ba00 regression tests: Add test to check for DAC permissions to the testsuite
The regression test suite uses root with capabilities restricted in
several tests. This can cause the test suite to fail in weird and
confusing ways.

Add a test to check for DAC permissions from / to the testsuite
and abort running the tests with an error message if DAC permissions
are going to cause the test suite to fail.

Currently the test is pretty basic, but it is better than nothing.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-11-14 10:20:37 -08:00
Ryan Lee
af5cd5002b Remove unused and broken dup_value_list function
This function was broken all this time: instead of duplicating each entry
in the list, it would duplicate the first entry n times. Since this
function is currently not used anywhere, delete it instead of fixing it.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-13 15:53:08 -08:00
Ryan Lee
88719dbb7b Fix infinite loop in chfa.cc:weld_file_to_policy
This is simple enough to fix even though weld_file_to_policy isn't used in practice,
and the compat layer that uses it is a target for deletion.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-13 15:46:38 -08:00
Ryan Lee
5b391a5a4f Remove unused list_remove and list_find_prev helper macros in parser.h
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-13 14:51:08 -08:00
Ryan Lee
d9ee417dc3 Add missing null check in mount.cc:extract_flags
There is a null check before storing invflags into inv, but not before initializing the value at inv to 0.
Assuming the null check is needed, it should be there in both places.
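
A minimal sketch of the inconsistency (hypothetical signature, not the real extract_flags from mount.cc):

```
#include <cstdio>

static void extract_flags_sketch(unsigned int *flags, unsigned int *inv,
				 unsigned int invflags)
{
	// If 'inv' may be NULL when the computed inverse flags are stored,
	// it may also be NULL at the earlier zeroing, so both writes need
	// the same guard.
	if (inv)
		*inv = 0;		// the added check: previously unguarded
	*flags = 0;
	/* ... option parsing would fill *flags and invflags here ... */
	if (inv)
		*inv = invflags;	// the check that already existed
}

int main()
{
	unsigned int flags;
	extract_flags_sketch(&flags, nullptr, 0);	// safe without an 'inv' out-param
	std::printf("flags=%u\n", flags);
	return 0;
}
```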

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-13 14:45:55 -08:00
Ryan Lee
9689e3a39f Clarify duplicate insertion logic in parser_alias.c:process_entries
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-13 14:34:55 -08:00
Ryan Lee
cb110eaf98 Write basic file complain-mode regression tests
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-12 12:08:22 -08:00
John Johansen
a31343c5f7 parser: fix expr MatchFlag dump
Match Flags convert output to hex but don't restore it after outputting
the flag, resulting in subsequent numbers being hex encoded. This
results in dumps that can be confusing, eg.

rule: \d2  ->  \x2 priority=1001 (0x4/0)< 0x4>

rule: \d7  ->  \a priority=3e9 (0x4/0)< 0x4>

rule: \d10  ->  \n priority=3e9 (0x4/0)< 0x4>

rule: \d9  ->  \t priority=3e9 (0x4/0)< 0x4>

rule: \d14  ->  \xe priority=1001 (0x4/0)< 0x4>

where priority=3e9 is the hex encoded priority 1001.
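
As an illustration (a hypothetical dump routine, not the parser's MatchFlag code), the pitfall and the fix look roughly like this:

```
#include <iostream>
#include <ios>

// Leaves the stream in hex mode, so later values such as the priority are
// silently printed as hex.
void dump_bad(std::ostream &os, int flag, int priority)
{
	os << std::hex << flag;
	os << " priority=" << priority << "\n";	// 1001 shows up as 3e9
}

// Saves and restores the stream's format flags around the hex output.
void dump_good(std::ostream &os, int flag, int priority)
{
	std::ios_base::fmtflags saved = os.flags();
	os << std::hex << flag;
	os.flags(saved);
	os << " priority=" << priority << "\n";	// 1001 stays 1001
}

int main()
{
	dump_good(std::cout, 0x4, 1001);	// flag in hex, priority still decimal
	dump_bad(std::cout, 0x4, 1001);		// priority comes out as 3e9
	std::cout << 1001 << "\n";		// still hex: the pollution persists
	std::cout << std::dec;			// manual cleanup
	return 0;
}
```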

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-11-11 14:39:40 -08:00
John Johansen
f24fc4841f Merge parser: fix mapping of AA_CONT_MATCH for policydb compat entries
The mapping of AA_CONT_MATCH was being dropped, resulting in the
tcp tests failing because they would only match up to the first conditional
match check in the layout.

Bug: https://gitlab.com/apparmor/apparmor/-/issues/462
Fixes: e29f5ce5f ("parser: if extended perms are supported by the kernel build a permstable")
Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1409
Approved-by: Ryan Lee <rlee287@yahoo.com>
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-11 20:53:08 +00:00
John Johansen
eb365b374d Merge parser: bug fix do not change auditing information when applying deny
The parser recently changed how/where deny information is applied.
commit 1fa45b7c1 ("parser: dfa minimization prepare for extended
permissions") removed the implicit filtering of explicit denies during
the minimization pass. The implicit clear allowed the explicit
information to be carried into the minimization pass and merged with
implicit denies. The end result was a minimized dfa with the explicit
deny information available to be applied post minimization, and
then dropped later at permission encoding in the accept entries.

Extended permissions, however, enable carrying explicit deny information
into the kernel to fix certain bugs, like complain mode not being
able to distinguish between implicit and explicit deny rules (ie.
deny rules get ignored in complain mode). However, keeping explicit
deny information when it is unnecessary results in a larger state
machine than necessary and slower compiles.

commit 179c1c1ba ("parser: fix minimization check for filtering_deny")
Moved the explicit apply_and_clear_deny() pass to before minimization
to restore mnimization's ability to create a minimized dfa with
explicit and implicit deny information merged but this also cleared
the explicit deny information that used to be carried through
minimization. This meant that when the deny information was applied
post minimization it resulted in the audit and quiet information
being cleared.

This resulted in the query_label tests failing, as they check
for the expected audit information in the permissions.

Fixes: 179c1c1ba ("parser: fix minimization check for filtering_deny")
Bug: https://gitlab.com/apparmor/apparmor/-/issues/461
Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1408
Approved-by: Ryan Lee <rlee287@yahoo.com>
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-11 20:52:19 +00:00
Christian Boltz
55db4af979 Quote indirect uses of $bin and ${bin}
... to avoid issues with spaces in a parent directory's name.

"Indirect uses" means usage of $bin via another variable, for example
`foo=$bin/whatever`
2024-11-10 22:10:42 +01:00
Christian Boltz
22cf88b7c7 Quote all uses of $bin and ${bin}
... to avoid issues with spaces in a parent directory's name.
2024-11-10 22:10:42 +01:00
Christian Boltz
e1972eb22f Fix sourcing prologue.inc if parent directory contains spaces
Most `tests/regression/apparmor/*.sh` scripts contain

    . $bin/prologue.inc

This will explode if one of the parent directories contains a space.

Minimized reproducer:

```
pwd=`dirname $0`
pwd=`cd $pwd ; /bin/pwd`
bin=$pwd
echo "pwd: $bin"
. $bin/prologue.inc
pwd: /tmp/foo bar
./test.sh: line 9: /tmp/foo: No such file or directory
```

Notice that test.sh tries to source `/tmp/foo` instead of `/tmp/foo bar/prologue.inc`.

The fix - as done in this commit - is to quote the prologue.inc path:

    . "$bin/prologue.inc"
2024-11-10 22:10:32 +01:00
John Johansen
4c32ad8fb7 Merge test-logprof: Increase timeout once more
Builds for risc64 are much slower than on other architectures (4-5
seconds with qemu-user or on Litchi Pi 4A).

Since the timeout is only meant as a safety net, increase it generously,
and hopefully for the last time.

Fixes: https://gitlab.com/apparmor/apparmor/-/issues/463

I propose this patch for 4.0 and master.

Closes #463
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1417
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-09 22:37:24 +00:00
Christian Boltz
508ace452d test-logprof: Increase timeout once more
Builds for risc64 are much slower than on other architectures (4-5
seconds with qemu-user or on Litchi Pi 4A).

Since the timeout is only meant as a safety net, increase it generously,
and hopefully for the last time.

Fixes: https://gitlab.com/apparmor/apparmor/-/issues/463
2024-11-09 19:57:00 +01:00
Ryan Lee
d7cf845437 Make capabilities tracker into a class
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-08 14:55:43 -08:00
Ryan Lee
f6f3279c10 Make parser_misc keyword_table and rlimit_table unordered_maps
Besides transitioning towards C++, this also eliminates the linear scan that the functions using these arrays performed.
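
A rough sketch of the change, using hypothetical keyword names and token values rather than the parser's real tables:

```
#include <cstring>
#include <string>
#include <unordered_map>

// Old style: a terminated array searched with a linear scan.
struct kw_entry { const char *name; int token; };
static const kw_entry keyword_array[] = {
	{ "capability", 1 }, { "network", 2 }, { "mount", 3 }, { nullptr, 0 },
};

int lookup_linear(const char *name)
{
	for (const kw_entry *e = keyword_array; e->name; e++)
		if (std::strcmp(name, e->name) == 0)
			return e->token;
	return -1;
}

// New style: an unordered_map with average O(1) lookups.
static const std::unordered_map<std::string, int> keyword_map = {
	{ "capability", 1 }, { "network", 2 }, { "mount", 3 },
};

int lookup_hashed(const std::string &name)
{
	auto it = keyword_map.find(name);
	return it == keyword_map.end() ? -1 : it->second;
}

int main()
{
	return lookup_linear("network") == lookup_hashed("network") ? 0 : 1;
}
```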

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-08 14:52:16 -08:00
Ryan Lee
f9a2c3c0eb Pass parallelism arg to parser_sanity test prove invocation
Because the main overhead of the parser_sanity test suite is process
spawning, parallelizing too much could end up hurting performance instead
of helping it. Thus, use a fixed value of 2 instead of $(nproc).

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-08 09:34:03 -08:00
Steve Beattie
5b98577a4d gitlab-ci: Build regression test suite in CI
Even if we can't run the regression tests in our GitLab CI environment, we can at least ensure the binaries in the regression test suite compile successfully.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1414
Approved-by: Steve Beattie <steve+gitlab@nxnw.org>
Merged-by: Steve Beattie <steve+gitlab@nxnw.org>
2024-11-07 20:06:30 +00:00
Ryan Lee
630b38238d Build regression tests in GitLab CI
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-07 11:47:55 -08:00
John Johansen
0828ab67b2 Merge regression tests: check for setfattr binary used by xattrs_profile
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1412
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-07 03:27:28 +00:00
John Johansen
aa24194981 Merge Don't use hardcoded cerr in hfa.h State::dump
After https://gitlab.com/apparmor/apparmor/-/merge_requests/1410#note_2197215172 I did a quick grep and uncovered one other potential instance of `cerr` being written to in a dump function that takes an `ostream`.
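
For illustration, a minimal version of the pattern (not the actual State class from hfa.h):

```
#include <iostream>
#include <sstream>

struct State {
	int label = 7;

	// The dump should honor the ostream the caller passes in; writing to
	// a hardcoded std::cerr would make the parameter meaningless.
	void dump(std::ostream &os) const
	{
		os << "{state " << label << "}\n";
	}
};

int main()
{
	State s;
	s.dump(std::cerr);		// the caller can still choose cerr ...

	std::ostringstream capture;	// ... or capture the dump in a string
	s.dump(capture);
	std::cout << capture.str();
	return 0;
}
```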

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1413
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-07 03:25:42 +00:00
Ryan Lee
d1cecd5f46 Don't use hardcoded cerr in hfa.h State::dump
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-06 18:00:16 -08:00
Ryan Lee
b39a535cb9 regression tests: check for setfattr binary used by xattrs_profile
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-06 17:36:37 -08:00
John Johansen
15a02e0948 parser: fix mapping of AA_CONT_MATCH for policydb compat entries
The mapping of AA_CONT_MATCH was being dropped, resulting in the
tcp tests failing because they would only match up to the first conditional
match check in the layout.

Bug: https://gitlab.com/apparmor/apparmor/-/issues/462
Fixes: e29f5ce5f ("parser: if extended perms are supported by the kernel build a permstable")
Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-11-06 12:33:36 -08:00
John Johansen
45964d34e7 parser: add the ability to dump the permissions table
Instead of encoding permissions in the accept and accept2 tables
extended perms uses a permissions table and accept becomes an index
into the table.

Add the ability to dump the permissions table so that it can be
compared and debugged.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-11-06 11:44:30 -08:00
John Johansen
00dedf10ad parser: add the accept2 table entry to the chfa dump
The chfa dump is missing information about the accept2 entry. The
accept2 information is necessary to help with debugging state machine
builds as accept2 is used to store quiet and audit information in the
old format or conditional information in the extended perms format.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-11-06 09:21:03 -08:00
John Johansen
7cc7f47424 parser: fix and cleanup libapparmor_re/Makefile
The Makefile is missing some of its .h dependencies, causing compiles
to either fail or, worse, result in bad builds when rebuilding in an
already built tree.

Move the header dependencies into a variable and use it for each
target. While some targets don't need every include as a dependency
and this will result in unnecessary rebuilds in some cases, it makes
the Makefile cleaner, easier to maintain and makes sure a dependency
isn't accidentally missed.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-11-06 09:20:51 -08:00
John Johansen
9ea0d4f89b parser: bug fix do not change auditing information when applying deny
The parser recently changed how/where deny information is applied.
commit 1fa45b7c1 ("parser: dfa minimization prepare for extended
permissions") removed the implicit filtering of explicit denies during
the minimization pass. The implicit clear allowed the explicit
information to be carried into the minimization pass and merged with
implicit denies. The end result was a minimized dfa with the explicit
deny information available to be applied post minimization, and
then dropped later at permission encoding in the accept entries.

Extended permissions, however, enable carrying explicit deny information
into the kernel to fix certain bugs, like complain mode not being
able to distinguish between implicit and explicit deny rules (ie.
deny rules get ignored in complain mode). However, keeping explicit
deny information when it is unnecessary results in a larger state
machine than necessary and slower compiles.

commit 179c1c1ba ("parser: fix minimization check for filtering_deny")
Moved the explicit apply_and_clear_deny() pass to before minimization
to restore mnimization's ability to create a minimized dfa with
explicit and implicit deny information merged but this also cleared
the explicit deny information that used to be carried through
minimization. This meant that when the deny information was applied
post minimization it resulted in the audit and quiet information
being cleared.

This resulted in the query_label tests failing, as they check
for the expected audit information in the permissions.

Fixes: 179c1c1ba ("parser: fix minimization check for filtering_deny")
Bug: https://gitlab.com/apparmor/apparmor/-/issues/461
Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-11-06 09:11:43 -08:00
Ryan Lee
a2df3143d1 Remove aa_query_file_{path,link}_len wrappers
The prefix handling can be done in higher-level languages via slicing, and having an explicit length exposes an out-of-bounds memory read footgun to those higher-level languages

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-06 09:10:56 -08:00
Ryan Lee
53e3116350 Write test for aa_gettaskcon SWIG wrapper
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-06 09:10:56 -08:00
Ryan Lee
d199c2ae33 Write custom SWIG typemap for pid_t
Surprisingly, SWIG did not pick up the "typedef int pid_t" from the C headers.
As such, we need to provide our own wrapper. We don't just replicate the typdef
because we still support systems that have 16-bit PIDs.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-06 09:10:45 -08:00
John Johansen
8df57ba89c Merge aa.py is_known_rule(): remove obsolete sanity check
We use ProfileStorage everywhere, which makes checking if a specific
rule_type exists obsolete.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1405
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-06 03:13:36 +00:00
John Johansen
e36ef0ed43 Merge aa.py: drop unused function profile_exists()
I don't know when (or even: if) this function was in use. A quick look
at the git history of aa.py shows that the function was (blindly?)
updated a few times. However, I didn't find a commit that uses or stops
using profile_exists(), so maybe it was never used at all.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1404
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-06 03:12:38 +00:00
John Johansen
5ebbe788ea Merge aa-mergeprof: prevent backtrace if file not found
If a user specifies a non-existing file to merge into the profiles
(`aa-mergeprof /file/not/found`), this results in a backtrace showing an
AppArmorBug because that file unsurprisingly doesn't end up in the
active_profiles filelist.

Handle this more gracefully by adding a read_error_fatal parameter to
read_profile() that, if set, forwards the exception. With that,
aa-mergeprof doesn't try to list the profiles in this non-existing file.

Note that all other callers of read_profile() continue to ignore read
errors, because aborting just because a single file in /etc/apparmor.d/
(for example a broken symlink) isn't readable would be a bad idea.

This bug was introduced in 4e09f315c3, therefore I propose this patch for 3.0..master

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1403
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-06 03:11:27 +00:00
John Johansen
3c40aab1a0 Merge profiles: update dconf abstraction to use @{etc_ro}
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1402
Approved-by: Ryan Lee <rlee287@yahoo.com>
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-06 03:06:57 +00:00
John Johansen
bfa9147182 Merge php-fpm: widen allowed socket paths
It is common for packaged PHP applications to ship a PHP-FPM
configuration using a scheme of "$app.sock" or "$app.socket" instead
of using a generic FPM socket.

Signed-off-by: Georg Pfuetzenreuter <mail@georg-pfuetzenreuter.net>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1406
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-06 03:05:29 +00:00
John Johansen
14f54f3df2 Merge Misc small fixes to resolve some compiler warnings in regression test suite
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1407
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-11-06 03:04:18 +00:00
Ryan Lee
2ce217b873 Test SWIG Python bindings for aa_query_file_path
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-05 18:38:53 -08:00
Ryan Lee
edb4a72c8c SWIG aa_query helper bitmask constants and stdint header
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-05 18:38:53 -08:00
Ryan Lee
5db4908fd7 SWIG Python test for change_hat type signatures
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-05 18:38:53 -08:00
Ryan Lee
930fca1e39 SWIG Python test refactoring of AppArmor enabled checks
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-05 18:38:53 -08:00
Ryan Lee
369c9e73de Test aa_getcon SWIG bindings and leave some comments for untested ones
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-05 18:38:53 -08:00
Ryan Lee
48901f2118 Write a test for aa_splitcon's SWIG bindings
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-05 18:38:53 -08:00
Ryan Lee
c471acbe44 Typemaps for allowed, audited outputs of query functions
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-05 18:38:53 -08:00
Ryan Lee
cdb3e4a14e Add typemap for Python SWIG aa_change_hatv so it can take a string list
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-05 18:38:35 -08:00
Ryan Lee
ea2c957f14 Write basic test for Python aa_find_mountpoint
Also exercises aa_is_enabled

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-05 18:37:51 -08:00
Ryan Lee
04da4c86b0 Write custom typemap for aa_splitcon
Can't use %cstring_mutable because aa_splitcon also returns a ptr

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-05 18:37:51 -08:00
Ryan Lee
f05112b5e9 aa_is_enabled now returns a boolean in Python
Because booleans are a subclass of ints in Python, this isn't a breaking change

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-05 18:37:51 -08:00
Ryan Lee
a15768b0bf Write an output typemap for errno-based functions
In Python, return status is signalled by exceptions (or lack thereof)
instead of int. Keep the typemap portable for any other languages we may
add in the future.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-05 18:37:23 -08:00
Ryan Lee
50d26beb00 Include cstring.i and some cstring output typemaps for libapparmor SWIG
This includes a custom typemap to handle (char **label, char **mode)
pairs and a cstring_output_allocate declaration for char **mnt.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-05 16:40:48 -08:00
Ryan Lee
d273055ebf Use fn arg in pivot_root _clone instead of hardcoding everywhere
The only use of this _clone function passes in the same function that was
hardcoded, so this doesn't change any functionality.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-05 12:34:44 -08:00
Ryan Lee
823d14df80 Reserve enough space for full possible fd length
Even if file descriptor values would not exercise the full range provided
by int, it doesn't hurt to allocate enough space for all ints.
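
A small sketch of the sizing rule (illustration only, not the test suite's actual buffer):

```
#include <climits>
#include <cstdio>

int main()
{
	// Three decimal digits per byte is always enough (255 fits in three
	// digits), plus one byte for a possible '-' and one for the NUL
	// terminator.  For a 32-bit int that is 14 bytes, which comfortably
	// holds "-2147483648".
	char buf[sizeof(int) * 3 + 2];

	int n = std::snprintf(buf, sizeof(buf), "%d", INT_MIN);
	std::printf("%s (%d chars in a %zu byte buffer)\n", buf, n, sizeof(buf));
	return 0;
}
```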

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-11-05 12:34:12 -08:00
Georg Pfuetzenreuter
f575817b68 php-fpm: widen allowed socket paths
It is common for packaged PHP applications to ship a PHP-FPM
configuration using a scheme of "$app.sock" or "$app.socket" instead
of using a generic FPM socket.

Signed-off-by: Georg Pfuetzenreuter <mail@georg-pfuetzenreuter.net>
2024-11-05 20:03:11 +01:00
Christian Boltz
db1ca4f5e8 aa.py is_known_rule(): remove obsolete sanity check
We use ProfileStorage everywhere, which makes checking if a specific
rule_type exists obsolete.
2024-11-03 21:21:39 +01:00
Christian Boltz
c10b39f3fe aa.py: drop unused function profile_exists()
I don't know when (or even: if) this function was in use. A quick look
at the git history of aa.py shows that the function was (blindly?)
updated a few times. However, I didn't find a commit that uses or stops
using profile_exists(), so maybe it was never used at all.
2024-11-03 21:06:27 +01:00
Christian Boltz
e5479bd7ef aa-mergeprof: prevent backtrace if file not found
If a user specifies a non-existing file to merge into the profiles
(`aa-mergeprof /file/not/found`), this results in a backtrace showing an
AppArmorBug because that file unsurprisingly doesn't end up in the
active_profiles filelist.

Handle this more gracefully by adding a read_error_fatal parameter to
read_profile() that, if set, forwards the exception. With that,
aa-mergeprof doesn't try to list the profiles in this non-existing file.

Note that all other callers of read_profile() continue to ignore read
errors, because aborting just because a single file in /etc/apparmor.d/
(for example a broken symlink) isn't readable would be a bad idea.
2024-11-01 22:39:32 +01:00
Georgia Garcia
cbe8d295a5 profiles: update dconf abstraction to use @{etc_ro}
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-10-31 09:52:07 +01:00
Georgia Garcia
f7b5d0e783 Merge Improvements to Postfix profiles
* Support /usr/libexec/postfix/ path
* Added abstractions/{nameservice,postfix-common} to postfix-postscreen
* Added postfix-tlsproxy, postscreen & spawn to postfix-master
    * Added missing postfix-tlsproxy profile
* Added postscreen cache map (see <https://www.postfix.org/postconf.5.html#postscreen_cache_map>)
* Added /{var/spool/postfix/,}pid/pass.smtpd to postfix-smtpd

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1330
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-10-30 10:43:47 +00:00
John Johansen
3d1a3493af Merge profiles: add support for ArchLinux php-legacy package to php-fpm
ArchLinux ships a secondary PHP package called php-legacy with different
paths. As of now, the php-fpm profile will cover this binary but
inadequately restrict it.

Fixes: #454

Closes #454
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1401
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>
2024-10-30 09:37:08 +00:00
Christian Pfeiffer
6a5432b2b0 profiles: add support for ArchLinux php-legacy package to php-fpm
ArchLinux ships a secondary PHP package called php-legacy with different
paths. As of now, the php-fpm profile will cover this binary but
inadequately restrict it.

Fixes: #454
2024-10-30 09:39:37 +01:00
pyllyukko
4ccf567d31 Improvements to Postfix profiles
* Support /usr/libexec/postfix/ path
* Added abstractions/{nameservice,postfix-common} to postfix-postscreen
* Added postfix-tlsproxy, postscreen & spawn to postfix-master
    * Added missing postfix-tlsproxy profile
* Added postscreen cache map (see <https://www.postfix.org/postconf.5.html#postscreen_cache_map>)
* Added /{var/spool/postfix/,}pid/pass.smtpd to postfix-smtpd
2024-10-29 20:35:28 +02:00
John Johansen
4fe3e30abc Merge abstractions/nameservice: include nameservice-strict
... and drop all rules it contains from abstractions/nameservice.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1373
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-10-29 15:26:43 +00:00
John Johansen
82a4e70248 Merge zgrep: deny passwd access
Bash will try to read the passwd database to find the shell of a user if
$SHELL is not set. This causes zgrep to trigger

```
apparmor="DENIED" operation="open" class="file" profile="zgrep" name="/etc/nsswitch.conf" comm="zgrep" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
apparmor="DENIED" operation="open" class="file" profile="zgrep" name="/etc/passwd" comm="zgrep" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
```

if called in a sanitized environment. As the functionality of zgrep is
not impacted by a limited Bash environment, add deny rules to avoid the
potentially misleading AVC messages.

Signed-off-by: Georg Pfuetzenreuter <mail@georg-pfuetzenreuter.net>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1361
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-10-29 13:50:06 +00:00
John Johansen
d5777c0403 Merge ProfileStorage: store correct name
Instead of always storing the name of the main profile, store the child
profile/hat name if we are in a child profile or hat.

As a result, we always get the correct "profile xy" header even for
child profiles when dumping the ProfileStorage object.

Also extend the tests to check that the name gets stored correctly.

.

Add aa-complain tests for profile with hats and subprofiles

So far, change_profile_flags() in aa.py is the only user of
ProfileStorage's 'name'.

Rewrite minitools test_cleanprof() so that most of its code can be
reused, and add a test that runs 'aa-complain
/usr/bin/a/simple/cleanprof/test/profile' on cleanprof.in to ensure
aa-complain still works as expected on subprofiles and hats.

Note: aa-complain $profilename will change the flags of hats, but not
child profiles. This is a known issue, and doesn't change with this MR.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1359
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-10-29 13:48:30 +00:00
John Johansen
e48ab421b5 Merge Check if all profiles and abstractions contain abi/4.0
... and add abi/4.0 where it was missing

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1358
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-10-29 12:48:41 +00:00
John Johansen
ab16377838 Merge zgrep: allow reading /etc/nsswitch.conf and /etc/passwd
Seen on various VMs, my guess is that bash wants to translate a uid to a
username.

Log events (slightly shortened)

apparmor="DENIED" operation="open" class="file" profile="zgrep" name="/etc/nsswitch.conf" comm="zgrep" requested_mask="r" denied_mask="r" fsuid=0 ouid=0

apparmor="DENIED" operation="open" class="file" profile="zgrep" name="/etc/passwd" comm="zgrep" requested_mask="r" denied_mask="r" fsuid=0 ouid=0

I propose this patch for 3.0..master

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1357
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-10-29 12:45:48 +00:00
John Johansen
ac704a5ba6 Merge Small fixset 1 for parser code nits
Numbered as 1 because I expect to find and fix more things like this as I continue to dig into the parser code.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1400
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-10-29 12:33:02 +00:00
Georgia Garcia
a8b6c90d29 Merge common_test setup_aa(): drop try/except
... which only existed for historical reasons

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1389
Approved-by: Ryan Lee <rlee287@yahoo.com>
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-10-29 11:16:30 +00:00
Georgia Garcia
45a3bbb2c9 Merge Several test-libapparmor-test_multi.py fixes
Several fixes for test-libapparmor-test_multi.py and the expected profiles. The most important fix is that testing exec events/rules now works.

Please check the individual commits for details and readable diffs.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1390
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-10-29 11:15:09 +00:00
John Johansen
99261bad11 Merge Fix memory leak in aare_rules UniquePermsCache
When the find fails but the insertion also fails, we leak the new node
that we generated. Delete the new node in this case to avoid leaking
memory.

The question remains, however, as to whether we should implement `operator==` in addition to `operator<` so that they are consistent with each other and `find` works correctly.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1399
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>
2024-10-29 11:12:05 +00:00
Georgia Garcia
c6edb65fc1 Merge Add a test for aa-autodep
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1398
Approved-by: Maxime Bélair <maxime.belair@canonical.com>
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-10-29 11:07:14 +00:00
Ryan Lee
6a1e9f916b Replace BOOL,TRUE,FALSE macros with actual C++ boolean type
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-28 12:35:57 +01:00
Ryan Lee
b43f1c4073 Make parser_include push_include_stack take const char because it doesn't actually modify it
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-28 12:35:26 +01:00
Ryan Lee
81950dae4e Fix memory leak in aare_rules UniquePermsCache
When the find fails but the insertion also fails, we leak the new node
that we generated. Delete the new node in this case to avoid leaking
memory.
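
A minimal sketch of the pattern with hypothetical Node/NodeCmp types (the real UniquePermsCache types differ):

```
#include <set>

struct Node { int perms; };

struct NodeCmp {
	bool operator()(const Node *a, const Node *b) const
	{
		return a->perms < b->perms;	// ordering used by find and insert
	}
};

// Returns a cached node for 'perms', creating one only when needed.
Node *intern(std::set<Node *, NodeCmp> &cache, int perms)
{
	Node probe{perms};
	auto it = cache.find(&probe);
	if (it != cache.end())
		return *it;			// reuse the existing node

	Node *n = new Node{perms};
	auto res = cache.insert(n);
	if (!res.second) {			// an equivalent node was already present
		delete n;			// the fix: free the node we just made
		return *res.first;
	}
	return n;
}

int main()
{
	std::set<Node *, NodeCmp> cache;
	Node *a = intern(cache, 4);
	Node *b = intern(cache, 4);		// second call reuses the first node
	return a == b ? 0 : 1;
}
```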

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-28 12:23:17 +01:00
John Johansen
e9d6e0ba14 Merge parser: fix minimization check for filtering deny
commit 1fa45b7c1 ("parser: dfa minimization prepare for extended
permissions") removed implicit filtering of explicit denies in the
minimization pass (the information was ignored in building the set of
final accept states).

The filtering of explicit denies reduces the size of the produced
dfa. Since we need to be smarter about when explicit denies are
kept (eg. during complain mode), and most dfas are limited to 65k
states, we currently need to filter explicit deny perms by default.

To compensate, commit 2737cb2c2 ("parser: minimization - remove
unnecessary second minimization pass") moved
apply_and_clear_deny() to before minimization. However, its check to
apply deny removal before minimization is broken. Stop having minimization
trigger apply_and_clear_deny() and just set the FILTER_DENY flag
by default, until we have better selection of rules/conditions where
explicit deny information should be carried through to the backend.

Fixes: 2737cb2c2 ("parser: minimization - remove unnecessary second minimization pass")
Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1397
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>
2024-10-28 11:22:57 +00:00
John Johansen
a5da9d5b5d Merge parser: fix integer overflow bug in rule priority comparisons
There is an integer overflow when comparing priorities when cmp is
used because it uses subtraction to find less than, equal, and greater
than in one operation.

But INT_MAX and INT_MIN are being used by priorities and this results
in INT_MAX - INT_MIN and INT_MIN - INT_MAX which are both overflows
causing an incorrect comparison result and selection of the wrong
rule permission.

Closes: https://gitlab.com/apparmor/apparmor/-/issues/452
Fixes: e3fca60d1 ("parser: add the ability to specify a priority prefix to rules")
Signed-off-by: John Johansen <john.johansen@canonical.com>

Closes #452
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1396
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>
2024-10-28 11:21:31 +00:00
John Johansen
e2d55844a2 parser: fix integer overflow bug in rule priority comparisons
There is an integer overflow when comparing priorities when cmp is
used because it uses subtraction to find less than, equal, and greater
than in one operation.

But INT_MAX and INT_MIN are being used by priorities and this results
in INT_MAX - INT_MIN and INT_MIN - INT_MAX which are both overflows
causing an incorrect comparison result and selection of the wrong
rule permission.
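
As an illustration (not the parser's actual comparison code), a minimal sketch of why the subtraction idiom breaks and an overflow-free alternative:

```
#include <climits>
#include <iostream>

// Subtraction-based three-way compare: a - b overflows (undefined
// behavior) when a and b are as far apart as INT_MAX and INT_MIN.
static int cmp_subtract(int a, int b)
{
	return a - b;
}

// Overflow-free three-way compare: always yields -1, 0 or 1.
static int cmp_safe(int a, int b)
{
	return (a > b) - (a < b);
}

int main()
{
	std::cout << cmp_subtract(2, 1) << "\n";	// fine for small values
	// cmp_subtract(INT_MAX, INT_MIN) would overflow; the safe version
	// handles the extreme priorities correctly:
	std::cout << cmp_safe(INT_MAX, INT_MIN) << "\n";	// 1
	std::cout << cmp_safe(INT_MIN, INT_MAX) << "\n";	// -1
	return 0;
}
```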

Closes: https://gitlab.com/apparmor/apparmor/-/issues/452
Fixes: e3fca60d1 ("parser: add the ability to specify a priority prefix to rules")
Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-10-28 04:03:53 -07:00
Christian Boltz
1f8227e671 Add a test for aa-autodep 2024-10-27 21:55:35 +01:00
John Johansen
179c1c1ba7 parser: fix minimization check for filtering_deny
commit 1fa45b7c1 ("parser: dfa minimization prepare for extended
permissions") removed implicit filtering of explicit denies in the
minimization pass (the information was ignored in building the set of
final accept states).

The filtering of explicit denies reduces the size of the produced
dfa. Since we need to be smarter about when explicit denies are
kept (eg. during complain mode), and most dfas are limited to 65k
states, we currently need to filter explicit deny perms by default.

To compensate, commit 2737cb2c2 ("parser: minimization - remove
unnecessary second minimization pass") moved
apply_and_clear_deny() to before minimization. However, its check to
apply deny removal before minimization is broken. Stop having minimization
trigger apply_and_clear_deny() and just set the FILTER_DENY flag
by default, until we have better selection of rules/conditions where
explicit deny information should be carried through to the backend.

Fixes: 2737cb2c2 ("parser: minimization - remove unnecessary second minimization pass")
Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-10-25 01:14:18 -07:00
John Johansen
8d6270e1fe Merge Use parallelism and make --touch when building in GitLab CI for faster CI times
As per https://docs.gitlab.com/ee/ci/pipelines/compute_minutes.html#gitlab-hosted-runner-cost-factors, GitLab CI computes minutes as wall clock time per stage * a constant cost factor derived from the runner type, so using parallelism in `make -j $(nproc)` will reduce the time it takes for GitLab CI to complete without increasing usage of GitLab CI minutes.

When investigating this, I also found out that the test stages needlessly rebuilt large parts of the C code base due to mtimes not being preserved when artifacts are restored from the build stage. Adding `make --touch` updates the mtimes so that the subsequent tests do not need to rebuild binaries needlessly.

The combined changes in this MR reduce the CI time from 13 minutes and 57 seconds (cb0f84e101 of `master`, https://gitlab.com/rlee287/apparmor/-/pipelines/1501017669 on my own fork without Coverity) to 12 minutes and 49 seconds (https://gitlab.com/rlee287/apparmor/-/pipelines/1502723883). This comparison omits the `make -j $(nproc)` addition to cov-build since I do not have a way of testing its effectiveness.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1387
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-10-24 12:30:12 +00:00
Christian Boltz
8eff7cd98f test-multi: no longer expect testcase01, 12 and 13 to fail
'testcase01', 'testcase12' and 'testcase13' contain a strange mix of
exec and network events.

Nevertheless, there's enough information to parse them as good-enough
exec events. While this is not perfectly correct, it's better than
skipping these logs in this test.

Stop expecting that these profiles have a wrong content, and adjust them
so that they contain the (somewhat) expected exec rule.
2024-10-23 19:25:35 +02:00
Christian Boltz
02e2ce0ad9 test exec events/rules in test-libapparmor-test_multi.py
So far, exec events were accidentally skipped in
test-libapparmor-test_multi.py because aa[profile][hat] was not
initialized, and ask_exec() exited early because of this.

Initialize aa[profile][hat] in the test to fix this.

To avoid that someone needs to select "inherit" each time the tests run,
add an optional default_ans parameter to ask_exec(), and let the test
call it with 'CMD_ix'.

(In case you wonder - defaulting to CMD_cx would ask to sanitize the
environment. CMD_ix avoids this.)

Also, we have to copy over aa[profile][hat] to log_dict in the test
because ask_exec() modifies aa[...], but the test only checks its local
log_dict.

Finally, add the expected exec rules to the *.profile files
2024-10-23 19:25:35 +02:00
Christian Boltz
5d0fd65a69 test-libapparmor-test_multi.py: use reset_aa()
... instead of resetting various apparmor.aa variables manually.
2024-10-23 19:25:35 +02:00
Christian Boltz
4276e80ed5 aa.py: Add load_sev_db()
... to de-duplicate code loading the severity db.
2024-10-23 19:25:35 +02:00
Christian Boltz
183d00e087 test_multi: fix testcase_dbus_09.profile
peer name=... is invalid in dbus message rules.

Note that this testcase is currently disabled in the utils tests because
it's based on a multiline log.
2024-10-23 19:25:34 +02:00
Christian Boltz
209dd851b3 test_multi: no longer skip testcase31
It is handled correctly in the current codebase.

It would be even better if it would generate a link rule that includes
the source, but let's leave that for a later fix.
2024-10-23 19:25:32 +02:00
Christian Boltz
19b4aeb338 Merge aa.py: drop unused confirm_and_abort() and delete_profile()
confirm_and_abort() is unused (note that a function with the same name
exists in ui.py and is used there)

Also delete the now-unused delete_profile() - luckily it was never used,
because it would also have deleted profiles that were "just" modified.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1388
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2024-10-21 14:20:48 +00:00
Christian Boltz
8791c7c48d common_test setup_aa(): drop try/except
... which only existed for historical reasons
2024-10-20 20:49:03 +02:00
Christian Boltz
4dd3ce0097 aa.py: drop unused confirm_and_abort() and delete_profile()
confirm_and_abort() is unused (note that a function with the same name
exists in ui.py and is used there)

Also delete the now-unused delete_profile() - luckily it was never used,
because it would also have deleted profiles that were "just" modified.
2024-10-20 16:16:30 +02:00
Steve Beattie
4d3b094d9e profiles: transmission-daemon needs attach_disconnected
Systemd's PrivateTmp= in the transmission service is causing mount namespaces to be used, leading to disconnected paths

[395201.414562] audit: type=1400 audit(1727277774.392:573): apparmor="ALLOWED" operation="sendmsg" class="file" info="Failed name lookup - disconnected path" error=-13 profile="transmission-daemon" name="run/systemd/notify" pid=193060 comm="transmission-da" requested_mask="w" denied_mask="w" fsuid=114 ouid=0

Fixes: https://bugs.launchpad.net/bugs/2083548
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1355
Approved-by: Ryan Lee <rlee287@yahoo.com>
Merged-by: Steve Beattie <steve+gitlab@nxnw.org>
2024-10-18 21:47:09 +00:00
Steve Beattie
3478558904 .gitignore: add mod_apparmor and pam_apparmor files
... that are generated during `make`

I propose this patch for 3.x..master.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1374
Approved-by: Ryan Lee <rlee287@yahoo.com>
Approved-by: Steve Beattie <steve+gitlab@nxnw.org>
Merged-by: Steve Beattie <steve+gitlab@nxnw.org>
2024-10-18 21:45:52 +00:00
Ryan Lee
01435aaaa3 Pass -j flag for cov-build as well
This is separated out because I have no way of testing this

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-18 13:15:18 -07:00
Ryan Lee
030f991320 GitLab CI: touch built files in test stages before running tests
The artifact restoration step does not preserve mtime, resulting in source files newer than built files, resulting in a needless rebuild of everything before actually running the tests.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-18 11:46:46 -07:00
Ryan Lee
c47943f1af Invoke tst_binaries target with parallelism in GitLab CI
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-18 11:46:40 -07:00
Ryan Lee
2e841655cf Add a tst_binaries target to the parser to build tst binaries
This allows building the tst_* binaries in parallel independently of running the parser test suite

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-18 11:46:40 -07:00
Ryan Lee
88287d4eec Update .gitlab-ci.yml file with -j $(nproc) lines for faster building
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-18 11:45:38 -07:00
John Johansen
cb0f84e101 Merge utils: catch TypeError exception for binary logs
When a log like system.journal is passed on to aa-genprof, for
example, the user receives a TypeError exception: in method
'parse_record', argument 1 of type 'char *'

This patch catches that exception and displays a more meaningful
message.

Fixes: https://gitlab.com/apparmor/apparmor/-/issues/436
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>

Closes #436
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1354
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: John Johansen <john@jjmx.net>
2024-10-17 18:56:25 +00:00
Ryan Lee
099d7aca5d Merge Fix parser compiler warnings about format specifiers with DEBUG set
This fixes format string specification warnings that are emitted when DEBUG=1 is set. As for %s when the pointer is null: even if gcc prints (null), this is still undefined behavior, so we should handle the null case explicitly.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1382
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Ryan Lee <rlee287@yahoo.com>
2024-10-17 18:55:28 +00:00
Georgia Garcia
bb5e69f8db utils: catch TypeError exception for binary logs
When a log like system.journal is passed on to aa-genprof, for
example, the user receives a TypeError exception: in method
'parse_record', argument 1 of type 'char *'

This patch catches that exception and displays a more meaningful
message.

Fixes: https://gitlab.com/apparmor/apparmor/-/issues/436
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-10-16 20:35:40 -03:00
Ryan Lee
c1480d761f Merge Future-proof the Python abstraction for beyond Python 3.19
See https://gitlab.com/apparmor/apparmor/-/merge_requests/1376#note_2161284748

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1381
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Ryan Lee <rlee287@yahoo.com>
2024-10-16 22:27:21 +00:00
Ryan Lee
776c56bd7e Fix compiler warnings about format specifiers with DEBUG set
As for %s when the pointer is null: even if gcc prints (null), this is still undefined behavior, so we should handle the null case explicitly

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-16 12:26:54 -07:00
John Johansen
e23633ff0e Merge abstractions/nameservice: tighten libnss_libvirt file access
Limit access to \*.status files located in /var/lib/libvirt/dnsmasq/ as opposed to every file in the same directory.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1379
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: John Johansen <john@jjmx.net>
2024-10-16 18:24:04 +00:00
Ryan Lee
8eb7e7f63b Future-proof the Python abstraction for beyond Python 3.19
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-16 09:36:32 -07:00
Ryan Lee
6d20a10606 Merge Small fixes to libapparmor man pages, including type signature fix
These are small changes to the man pages, with the most important one being updating some function signatures to be consistent with apparmor.h.

We should put together a man page for aalogparse functions too, but I'm submitting this MR first to get the smaller changes in faster.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1378
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Ryan Lee <rlee287@yahoo.com>
2024-10-16 16:25:12 +00:00
Christian Boltz
04c9ffbb19 Merge Replace sigalarm-based subprocess timeout with the built-in one
The timeout parameter for subprocess.Popen.communicate has been available since Python 3.3. Given the fragility of SIGALRM-based mechanisms, there's no reason to reimplement our own timeout instead of using the built-in one.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1377
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2024-10-16 12:25:33 +00:00
Giampaolo Fresi Roglia
5be4295b5a abstractions/nameservice: tighten libnss_libvirt file access 2024-10-16 10:48:03 +02:00
Ryan Lee
ffc46247ad Proofreading of libapparmor manpages to fix a few nits
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-15 17:07:02 -07:00
Ryan Lee
6c8a5bedff Add comment to explain podchecker warning for aa_find_mountpoint.pod
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-15 17:05:42 -07:00
Ryan Lee
38e06cf09a Make libapparmor man page type signatures consistent with apparmor.h
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-15 17:05:42 -07:00
Ryan Lee
ed0b539c94 Replace sigalarm-based subprocess timeout with the built-in one
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-15 16:51:17 -07:00
Christian Boltz
2ea82b866d .gitignore: add mod_apparmor and pam_apparmor files
... that are generated during `make`
2024-10-11 22:07:54 +02:00
Christian Boltz
372ad8f6b9 abstractions/nameservice: include nameservice-strict
... and drop all rules it contains from abstractions/nameservice.
2024-10-11 20:53:52 +02:00
Georgia Garcia
68376e7fee Merge abstraction: add nameservice-strict.
I have read multiple MRs mentioning `nameservice-strict`, so I thought it would make sense to directly import it here.

To give some context, this abstraction is probably the most commonly included abstraction (after `base`). In `apparmor.d`, it is used by over 700 profiles (only counting direct imports). Therefore, adding new rules can have a significant impact on a lot of profiles.

Note: the abstraction is a direct import from https://gitlab.com/roddhjav/apparmor.d. The license is the same, and I obviously kept Morfikov's copyright line. However, I am not sure whether or not the SPDX identifier can be used here.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1368
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Approved-by: Christian Boltz <apparmor@cboltz.de>
Approved-by: Ryan Lee <rlee287@yahoo.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-10-11 12:36:42 +00:00
Alexandre Pujol
1966c36c1d abstraction: add missing abi in nameservice-strict. 2024-10-09 21:42:48 +01:00
Alexandre Pujol
62e7220271 abstraction: include nss-systemd in nameservice-strict. 2024-10-09 21:25:57 +01:00
Alexandre Pujol
1a8d8f3695 abstraction: add nameservice-strict.
Imported from gitlab.com/roddhjav/apparmor.d
2024-10-09 19:57:16 +01:00
Georgia Garcia
a522e11129 Merge Support name resolution via libnss-libvirt
Add support for hostname resolution via libnss-libvirt. This change has been tested against the latest oracular version 10.6.0-1ubuntu3.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1362
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-10-09 15:12:05 +00:00
Giampaolo Fresi Roglia
e53f300821 nameservice: add support for libnss-libvirt 2024-10-09 10:12:26 +02:00
Georg Pfuetzenreuter
48483f2ff8 zgrep: deny passwd access
Bash will try to read the passwd database to find the shell of a user if
$SHELL is not set. This causes zgrep to trigger

```
apparmor="DENIED" operation="open" class="file" profile="zgrep" name="/etc/nsswitch.conf" comm="zgrep" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
apparmor="DENIED" operation="open" class="file" profile="zgrep" name="/etc/passwd" comm="zgrep" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
```

if called in a sanitized environment. As the functionality of zgrep is
not impacted by a limited Bash environment, add deny rules to avoid the
potentially misleading AVC messages.

Signed-off-by: Georg Pfuetzenreuter <mail@georg-pfuetzenreuter.net>
2024-10-06 23:18:52 +02:00
Christian Boltz
d8e86fee75 Add aa-complain tests for profile with hats and subprofiles
So far, change_profile_flags() in aa.py is the only user of
ProfileStorage's 'name'.

Rewrite minitools test_cleanprof() so that most of its code can be
reused, and add a test that runs 'aa-complain
/usr/bin/a/simple/cleanprof/test/profile' on cleanprof.in to ensure
aa-complain still works as expected on subprofiles and hats.

Note: aa-complain $profilename will change the flags of hats, but not
child profiles. This is a known issue, and doesn't change with this MR.
2024-10-06 16:51:12 +02:00
Christian Boltz
cb943e4efc ProfileStorage: store correct name
Instead of always storing the name of the main profile, store the child
profile/hat name if we are in a child profile or hat.

As a result, we always get the correct "profile xy" header even for
child profiles when dumping the ProfileStorage object.

Also extend the tests to check that the name gets stored correctly.
2024-10-06 14:44:12 +02:00
Christian Boltz
8c80c56252 Check if all profiles and abstractions contain abi/4.0
... and add abi/4.0 where it was missing
2024-10-06 12:07:58 +02:00
Christian Boltz
68d42c3e37 zgrep: allow reading /etc/nsswitch.conf and /etc/passwd
Seen on various VMs, my guess is that bash wants to translate a uid to a
username.

Log events (slightly shortened)

apparmor="DENIED" operation="open" class="file" profile="zgrep" name="/etc/nsswitch.conf" comm="zgrep" requested_mask="r" denied_mask="r" fsuid=0 ouid=0

apparmor="DENIED" operation="open" class="file" profile="zgrep" name="/etc/passwd" comm="zgrep" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
2024-10-06 11:05:52 +02:00
John Johansen
bb460ba467 Merge .gitlab-ci.yml: run pipeline in merge requests too
Hopefully this will allow us to run pipelines in regular branches but
also run it on merge requests on the parent project. This is needed
for users that are not verified by Gitlab.
https://docs.gitlab.com/ee/ci/pipelines/merge_request_pipelines.html#run-pipelines-in-the-parent-project

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1346
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-10-04 21:31:51 +00:00
John Johansen
254b324a83 Merge Rename aa_log_record struct fields (C only) to allow inclusion in C++
Do an identifier rename combined with preprocessor directives and SWIG directives to allow the header to be included in C++ while keeping backwards compatibility to the extent possible.

Closes: #439 

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

Closes #439
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1342
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-10-04 21:28:14 +00:00
Georgia Garcia
645320ae84 profiles: transmission-daemon needs attach_disconnected
Systemd's PrivateTmp= in the transmission service is causing mount namespaces to be used, leading to disconnected paths.

[395201.414562] audit: type=1400 audit(1727277774.392:573): apparmor="ALLOWED" operation="sendmsg" class="file" info="Failed name lookup - disconnected path" error=-13 profile="transmission-daemon" name="run/systemd/notify" pid=193060 comm="transmission-da" requested_mask="w" denied_mask="w" fsuid=114 ouid=0

Fixes: https://bugs.launchpad.net/bugs/2083548
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-10-04 10:54:48 -03:00
Ryan Lee
2d7440350f Basic test that uses aa_log_record struct fields via old, C++-incompatible names
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-03 12:46:00 -07:00
Ryan Lee
645b1406d1 Basic test that invokes aalogparse functions from C++ code
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-03 12:45:55 -07:00
Ryan Lee
3cb61b6b41 Add extern "C" decls to aalogparse.h for C++ usage of aalogparse
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-03 12:44:25 -07:00
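
A sketch of the guard pattern this change adds (the prototypes are aalogparse functions, but the snippet is illustrative rather than a verbatim copy of the header):

```c
/* aalogparse.h-style guards: let C++ translation units link against the
 * C symbols without name mangling. */
#ifdef __cplusplus
extern "C" {
#endif

typedef struct aa_log_record aa_log_record;

aa_log_record *parse_record(const char *str);
void free_record(aa_log_record *record);

#ifdef __cplusplus
}
#endif
```
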
Ryan Lee
e2c407c614 Add SWIG renames for fields to preserve backcompat
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-03 12:44:18 -07:00
Ryan Lee
3f5180527d Rename aa_log_record struct fields (C only) to allow inclusion in C++
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-03 12:43:33 -07:00
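
The incompatibility being fixed comes from struct members whose names collide with C++ keywords. A hedged sketch of the general approach (the type, field names, and the exact macro are illustrative, not the verbatim aalogparse change):

```c
#include <stdio.h>

/* Illustrative only: the renamed field keeps C++ happy, and a macro keeps
 * existing C callers compiling ("namespace" is a valid identifier in C). */
typedef struct {
	char *aa_namespace;
	char *profile;
} example_record;

#ifndef __cplusplus
#define namespace aa_namespace
#endif

int main(void)
{
	example_record r = { "root", "zgrep" };
	printf("%s//%s\n", r.namespace, r.profile);  /* old field name still works in C */
	return 0;
}
```
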
John Johansen
b52c9bb860 Merge Convert original_aa from hasher to dict
This requires adding some `.get()` guards at one place, but should
otherwise be a boring change.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1347
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-10-03 18:36:29 +00:00
Ryan Lee
5b141dd580 Merge Remove aa_query_label from SWIG bindings
This is one of those functions that never worked anyways, because it
modified the passed-in label in place. Moreover, it is a low-level
interface that requires its callers to manually construct a binary query.
As such, it would be better not to expose it and to add wrappers like
aa_query_file_path for the other query classes if that functionality is
needed later.

The removal of this function from the bindings was dropped from !1337 because it exposed functionality that was not present in wrappers around aa_query_label. However, upon further discussion, we decided that it'd be better to remove it now and add other wrappers to libapparmor itself if the functionality provided by the existing wrappers became insufficient.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1352
Approved-by: John Johansen <john@jjmx.net>
Merged-by: Ryan Lee <rlee287@yahoo.com>
2024-10-03 18:25:52 +00:00
Ryan Lee
d3603a1f20 Remove aa_query_label from SWIG bindings
This is one of those functions that never worked anyways, because it
modified the passed-in label in place. Moreover, it is a low-level
interface that requires its callers to manually construct a binary query.
As such, it would be better not to expose it and to add wrappers like
aa_query_file_path for the other query classes if that functionality is
needed later.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-03 10:59:25 -07:00
Georgia Garcia
7867a46e2e Merge gitlab-ci.yml: only run coverity in the upstream project
This pipeline only makes sense to run in the upstream project where
the coverity variables are defined, so they currently fail in forks.

Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1351
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-10-03 17:46:54 +00:00
Georgia Garcia
c382efe119 gitlab-ci.yml: only run coverity in the upstream project
This pipeline only makes sense to run in the upstream project where
the coverity variables are defined, so they currently fail in forks.

Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-10-03 11:12:36 -03:00
Christian Boltz
a56516851e Convert original_aa from hasher to dict
This requires adding some `.get()` guards at one place, but should
otherwise be a boring change.
2024-10-02 22:44:04 +02:00
Georgia Garcia
248e5673ef .gitlab-ci.yml: run pipeline in merge requests too
Hopefully this will allow us to run pipelines in regular branches but
also run it on merge requests on the parent project. This is needed
for users that are not verified by Gitlab.
https://docs.gitlab.com/ee/ci/pipelines/merge_request_pipelines.html#run-pipelines-in-the-parent-project
2024-10-02 17:37:27 -03:00
John Johansen
07fe0e9a1b Merge Fix ABI break for aa_log_record
Commit 3c825eb001 adds a field called `execpath` to the `aa_log_record` struct. This field was added in the middle of the struct instead of the end, causing an ABI break in libapparmor without a corresponding major version number bump.
Bug report: https://bugs.launchpad.net/apparmor/+bug/2083435
This is fixed by simply moving execpath at the end of the struct.

Signed-off-by: Maxime Bélair <maxime.belair@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1345
Approved-by: Ryan Lee <rlee287@yahoo.com>
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-10-01 22:06:45 +00:00
Maxime Bélair
c86c87e886 Fix ABI break for aa_log_record 2024-10-01 22:06:45 +00:00
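
To illustrate the problem being fixed: with a hypothetical struct (not the real aa_log_record layout), inserting a member in the middle shifts the offsets of every later member, while appending it at the end keeps existing offsets, and therefore the ABI, intact:

```c
#include <stddef.h>
#include <stdio.h>

/* Hypothetical layouts, for illustration only. */
struct rec_v1     { char *name; long epoch; };
struct rec_middle { char *name; char *execpath; long epoch; }; /* epoch offset moves */
struct rec_append { char *name; long epoch; char *execpath; }; /* epoch offset stays */

int main(void)
{
	printf("epoch offset: v1=%zu middle=%zu append=%zu\n",
	       offsetof(struct rec_v1, epoch),
	       offsetof(struct rec_middle, epoch),
	       offsetof(struct rec_append, epoch));
	return 0;
}
```
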
Ryan Lee
d35a6939be Merge Remove broken SWIG functions that we don't actually want to expose
It doesn't make sense to expose the *_raw functions or the varg version
of aa_change_hatv to higher-level languages. While technically a breaking
change, the generated bindings for these functions never actually worked
anyways:

 - aa_change_hat_vargs uses C varargs, which SWIG passes in NULL for by
   default. It does not attempt to process the passed-in arguments at all
   (and in fact caused an unused-argument compiler warning when compiling
   the generated bindings).
 - aa_getprocattr_raw and aa_getpeercon_raw both place output into a ``char
   **mode`` pointer. SWIG by default generates these as opaque pointer
   object arguments, rendering them unusable for getting output. Future
   patches would be needed to fix ``char**`` arguments for the other functions
   that use them. Moreover, these functions expect their caller to handle
   memory allocation, which is also not possible from a higher-level
   language point of view.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1337
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Ryan Lee <rlee287@yahoo.com>
2024-10-01 19:16:38 +00:00
Ryan Lee
bdc8889cc0 Remove private _aa_is_blacklisted from SWIG bindings
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-01 11:58:07 -07:00
Ryan Lee
2bd1884654 Remove SWIG aa_change_hat_vargs, aa_get_procattr_raw, aa_get_peercon_raw
It doesn't make sense to expose the *_raw functions or the varg version
of aa_change_hatv to higher-level languages. While technically a breaking
change, the generated bindings for these functions never actually worked
anyways:

 - aa_change_hat_vargs uses C varargs, which SWIG passes in NULL for by
   default. It does not attempt to process the passed-in arguments at all
   (and in fact caused an unused-argument compiler warning when compiling
   the generated bindings).
 - aa_getprocattr_raw and aa_getpeercon_raw both place output into a char
   **mode pointer. SWIG by default generates these as opaque pointer
   object arguments, rendering them unusable for getting output. Future
   patches would be needed to fix char** arguments for the other functions
   that use them. Moreover, these functions expect their caller to handle
   memory allocation, which is also not possible from a higher-level
   language point of view.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-10-01 11:58:07 -07:00
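
A sketch of the awkward calling convention described above (the function here is a stand-in, not the real libapparmor API): the caller owns the buffer, and the mode is returned as a pointer into that buffer through a `char **` argument, a shape SWIG cannot turn into a usable high-level binding by default.

```c
#include <stdio.h>
#include <string.h>

/* Illustrative stand-in for an aa_getprocattr_raw-style interface. */
static int get_attr_raw(char *buf, int len, char **mode)
{
	const char *data = "/usr/bin/zgrep (enforce)";
	if (len <= (int)strlen(data))
		return -1;
	strcpy(buf, data);
	*mode = strrchr(buf, '(');  /* points into the caller-supplied buffer */
	return (int)strlen(buf);
}

int main(void)
{
	char buf[64];
	char *mode = NULL;
	if (get_attr_raw(buf, sizeof(buf), &mode) > 0)
		printf("raw=%s mode=%s\n", buf, mode);
	return 0;
}
```
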
Ryan Lee
bcab725670 Merge Improvements to the SWIG binding handling of aa_log_record and %exception memory management
This patchset adds annotations so that SWIG can automatically manage the memory lifetimes of aa_log_record objects, and ensures proper cleanup is done in the %exception handler.

This is the first of a sequence of MRs to overhaul the SWIG bindings and fix pieces that never actually worked in the first place. As fixing those other pieces will require breaking changes, I am separating out the non-breaking changes into separate MRs.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1334
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Ryan Lee <rlee287@yahoo.com>
2024-10-01 18:55:24 +00:00
Ryan Lee
61b1501f48 Apply 1 suggestion(s) to 1 file(s)
Co-authored-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-10-01 18:38:04 +00:00
Christian Boltz
4b6df10fe3 Merge ping: allow reading /proc/sys/net/ipv6/conf/all/disable_ipv6
Fixes: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1082190

I propose this patch for 3.0..master.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1340
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2024-09-30 21:42:56 +00:00
Christian Boltz
62ff290c02 Merge abstractions/mesa: allow ~/.cache/mesa_shader_cache_db/
... which is used by Mesa 24.2.2

Reported by darix.

Fixes: https://bugs.launchpad.net/bugs/2081692

I propose this addition for 3.x, 4.0 and master

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1333
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2024-09-30 21:40:03 +00:00
Christian Boltz
247bdd5deb Merge apparmor.vim: Add missing units for rlimit cpu and rttime
... and allow whitespace between the number and the unit.

I propose this patch for 3.x, 4.0 and master.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1336
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2024-09-30 21:35:40 +00:00
Christian Boltz
df4d7cb8da ping: allow reading /proc/sys/net/ipv6/conf/all/disable_ipv6
Fixes: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1082190
2024-09-27 12:05:29 +02:00
Ryan Lee
398f0790de Add DeprecationWarning emission to Python free_record wrapper
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-09-26 11:57:04 -07:00
Ryan Lee
4a7a8fa213 Make Python-side free_record a no-op to prevent double-free
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-09-26 11:57:04 -07:00
Ryan Lee
e5fd0fc636 Annotate SWIG aa_log_record alloc+dealloc
Swig generates a "thisown" attribute, which is an escape hatch in case
higher-level code does something weird and needs to tell SWIG whether to
free the C object when Python garbage collects it. Adding this attribute
is not a breaking change w.r.t access to the other attributes of the parsed
record.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-09-26 11:57:04 -07:00
Ryan Lee
436ebda9b5 Use SWIG_fail in %exception upon throwing OSError for errno
Unfortunately SWIG_exception does not support throwing OSError, so this
still requires Python-specific code.

Unlike just returning NULL, this will clean up intermediate allocations.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-09-26 11:57:04 -07:00
Steve Beattie
65e6620014 libapparmor: merge Rename aa_query_label allow and audit params in headers
This change matches the names in the .c source and the man page for aa_query_label,
and also simplifies the typemap annotations needed to make the SWIG versions usable.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1339
Merged-by: Steve Beattie <steve+gitlab@nxnw.org>
2024-09-26 18:52:19 +00:00
Ryan Lee
0c4cda2f1c Rename aa_query_label allow and audit params in headers
This change matches the names in the .c source and the man page for aa_query_label,
and also simplifies the typemap annotations needed to make the SWIG versions usable.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-09-26 10:58:00 -07:00
Christian Boltz
49b9a20997 abstractions/mesa: allow ~/.cache/mesa_shader_cache_db/
... which is used by Mesa 24.2.2

Reported by darix.

Fixes: https://bugs.launchpad.net/bugs/2081692
2024-09-24 16:39:52 +02:00
Steve Beattie
c4bb6969f5 libapparmor: remove remnants of SWIG java files
Merge branch https://gitlab.com/rlee287/apparmor/-/tree/remove_swig_java

The autoconf infrastructure for building this doesn't even show up in the Git history, so there should be no issue with removing the ghosts of Java from the codebase.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1335
Approved-by: Steve Beattie <steve+gitlab@nxnw.org>
Merged-by: Steve Beattie <steve+gitlab@nxnw.org>
2024-09-24 05:37:16 +00:00
Christian Boltz
9cb591c5a7 apparmor.vim: Add missing units for rlimit cpu and rttime
... and allow whitespace between the number and the unit.
2024-09-22 16:11:24 +02:00
Ryan Lee
225ea202cf Remove remnants of SWIG java files
The autoconf infrastructure for building this doesn't even show up in the Git history, so there should be no issue with removing the ghosts of Java from the codebase

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-09-20 16:33:58 -07:00
Georgia Garcia
7dc167ea48 Merge Change swig prototype of aa_getprocattr to match argname
This will matter later on for adding SWIG annotations

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1329
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-09-18 18:05:31 +00:00
Ryan Lee
80bdd22ed7 Change swig prototype of aa_getprocattr to match argname
This will matter later on for adding SWIG annotations

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-09-17 10:49:41 -07:00
Christian Boltz
7a081cfc67 Merge test-file.py: don't expect translated file permissions
Translating things like 'rw' is pointless and will/should never happen.
Therefore the tests should also expect non-translated file permissions.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1328
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Christian Boltz <apparmor@cboltz.de>
2024-09-17 15:12:32 +00:00
Christian Boltz
b8de035542 test-file.py: don't expect translated file permissions
Translating things like 'rw' is pointless and will/should never happen.
Therefore the tests should also expect non-translated file permissions.
2024-09-17 14:44:20 +02:00
John Johansen
1940b1b7cd Merge utils: fixes when handling owner file rules
Fixes: https://gitlab.com/apparmor/apparmor/-/issues/429
Fixes: https://gitlab.com/apparmor/apparmor/-/issues/430

Closes #429 and #430
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1320
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-09-17 09:23:30 +00:00
John Johansen
9fe5a6d853 Merge update translations pot file for C code
The parser and binutils pot files have not been refreshed recently. Update them to match the current code and add the missing pot files for aa_load and aa_status. Also give aa_status basic support for translations to populate its pot file.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1318
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-09-17 09:18:10 +00:00
John Johansen
b5af7d5492 Merge aa-notify: Simplify user interfaces and update man page
aa-notify: Simplify user interfaces and update man page

In notifications, clicking on "allow" now directly adds the rule without an
intermediate window, leading to a smoother UX.
Also aligns the man page with notify.conf.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1313
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>
2024-09-17 09:17:24 +00:00
Maxime Bélair
2b32130280 aa-notify: Simplify user interfaces and update man page 2024-09-17 09:17:23 +00:00
John Johansen
37cac653d1 Merge Make libaalogparse fully reentrant by removing its globals
Tested by using Valgrind's Helgrind and DRD against the reentrancy test that I wrote: they both report no errors with the changes while reporting many errors with the old versions.

Commits "Inline _parse_yacc in libaalogparse" and "Make parse_record take a const char pointer since it never modified str anyways" have a tiny potential to be backwards-incompatible changes: I have justified why they shouldn't be in the commit messages, but it's worth looking over in case I was mistaken and we need to back those out.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1322
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-09-11 07:54:39 +00:00
John Johansen
4628f8e880 Merge Add parser_sanity-no-gen target to run simple_tests
... without all the profiles generated by the gen-*.py scripts.

This target is meant for local manual testing, especially when working
on additional simple_tests profiles.

It makes local testing much faster (15 seconds for ~2k profiles vs.
several minutes for the additional ~70k profiles generated by gen-*.py)

Needless to say that the CI should continue to use the parser_sanity
target that includes all the generated profiles.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1325
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-09-11 07:50:01 +00:00
John Johansen
43b30b694c Merge Add more tests for network port range
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1326
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-09-11 07:48:01 +00:00
Christian Boltz
762df7e753 Add more tests for network port range 2024-09-10 23:10:32 +02:00
Christian Boltz
0d5ae170d4 Add parser_sanity-no-gen target to run simple_tests
... without all the profiles generated by the gen-*.py scripts.

This target is meant for local manual testing, especially when working
on additional simple_tests profiles.

It makes local testing much faster (15 seconds for ~2k profiles vs.
several minutes for the additional ~70k profiles generated by gen-*.py)

Needless to say that the CI should continue to use the parser_sanity
target that includes all the generated profiles.
2024-09-10 22:31:11 +02:00
Ryan Lee
79670745d6 Remove remnants of comments regarding old apparmor log format
The entry AA_RECORD_SYNTAX_V1 is only there for API compatibility reasons.
If we wanted to remove it, we could just renumber the other two entries
to preserve ABI compatibility. However, it seems easier to just delete the
entry if we ever break backcompat with a libapparmor2.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-09-10 11:33:24 -07:00
Ryan Lee
78f138c37f Make parse_record take a const char pointer since it never modified str anyways
This shouldn't be a breaking change because it's fine to pass a
non-const pointer to a function taking a const pointer, but not the other way round

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-09-10 11:33:24 -07:00
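
The compatibility claim can be checked with a trivial sketch (names are illustrative): callers holding a non-const `char *` can still pass it to the now-const-taking function.

```c
#include <stdio.h>
#include <string.h>

/* New-style signature: promises not to modify the input. */
static size_t record_length(const char *str)
{
	return strlen(str);
}

int main(void)
{
	char line[] = "apparmor=\"DENIED\" operation=\"open\"";  /* mutable buffer */
	printf("%zu\n", record_length(line));  /* non-const arg to const param: fine */
	return 0;
}
```
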
Ryan Lee
66e1439293 Add an aalogparse reentrancy test for simultaneous log parsing from different threads
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-09-10 11:33:24 -07:00
Ryan Lee
6a55fb5613 Inline _parse_yacc in libaalogparse
This function was only ever called once inside libaalogparse.c, and it looks
simple enough to not need to be split out into its own helper function.

As this function was never exposed publicly in installed header files, removing it
is not a breaking API change.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-09-10 11:33:24 -07:00
Ryan Lee
7ff045583d Remove manual YYDEBUG define in grammar.y
The generated grammar.h already sets the correct YYDEBUG value regardless
of whether parse.trace is defined

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-09-10 11:33:24 -07:00
Ryan Lee
dba7669443 Also make the bison parser of libaalogparse fully reentrant
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-09-10 11:33:20 -07:00
John Johansen
ebeb89cbce Merge parser: add port range support on network policy
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1321
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-09-09 21:01:21 +00:00
Georgia Garcia
93f3a0fa99 parser: add equality tests for network port range
To run the network port range equality tests, we need to check if the
kernel supports the network_v8/af_inet feature. Also, a new file
features.af_inet is needed containing the af_inet feature.

Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-09-06 09:49:59 -03:00
Georgia Garcia
f9621054d7 parser: add port range support on network policy
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-09-05 17:01:46 -03:00
Georgia Garcia
2097e82d4a utils: 'owner' should come before 'file'
When both the owner and file keywords were used, the clean rule
generated would have owner after file which is not accepted by the
parser.

Fixes: https://gitlab.com/apparmor/apparmor/-/issues/430
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-09-05 14:55:41 -03:00
Georgia Garcia
39f84c3767 utils: fix file handling of old perms with owner
When the profile already contains a "file" rule containing the owner
prefix and the tool is trying to handle a new file entry, it tries to
show it in the logprof header as "old mode".

The issue is that when the owner rule is an implicit all files
permission, then the object "FileRule" is used instead of the set of
permissions. When subtracting FileRule from set() a TypeError
exception is thrown.

Fix this by "translating" FileRule.ALL perms to "mrwlkix".

Fixes: https://gitlab.com/apparmor/apparmor/-/issues/429
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-09-05 14:54:59 -03:00
Ryan Lee
c5c7565357 Silence -Wyacc because we rely on GNU bison extensions to yacc
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-09-04 14:54:02 -07:00
Ryan Lee
e0504e697a Make libaalogparse lexer fully reentrant by removing its globals
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-09-04 12:00:13 -07:00
John Johansen
729e28e8b2 Merge Update apparmor-utils.pot
Followup to !1315 that updates apparmor-utils.pot. The other ones should also be updated at some point, so I'm marking this as a draft until we have a better idea of when/how we want to do that.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1316
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-09-03 17:07:04 +00:00
Ryan Lee
c6181c2dbe Update po README with correct directories and pygettext3 binary
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-09-03 09:35:05 -07:00
Ryan Lee
0f42187672 Run pygettext3 to update apparmor-utils.pot
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-09-03 09:33:33 -07:00
John Johansen
bdedaf61c8 binutils: add translation support to aa_status and initial pot file
Unfortunately aa_status did not support translations. Add basic support
and the initial pot file. There are no translations done at this time.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-09-03 03:39:16 -07:00
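
The base support added here follows the standard gettext pattern; a minimal sketch (the text domain and locale directory are illustrative):

```c
#include <libintl.h>
#include <locale.h>
#include <stdio.h>

#define _(s) gettext(s)

int main(void)
{
	setlocale(LC_ALL, "");
	bindtextdomain("aa-status", "/usr/share/locale");  /* illustrative domain/dir */
	textdomain("aa-status");
	printf(_("apparmor module is loaded.\n"));
	return 0;
}
```
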
John Johansen
3ac53e75d0 binutils: add pot file for aa_load
aa_load was missing a pot file for translations. Add a pot file for
aa_load and sync it to the code.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-09-03 03:39:16 -07:00
John Johansen
0f09501a84 binutils: update pot files for aa_enabled, aa_exec, aa_features_abi
Update the pot files for message changes in aa_enabled.c, aa_exec.c
and aa_features_abi.c

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-09-03 03:39:16 -07:00
John Johansen
ef95c96f45 parser: update translations pot file to current code
The parser pot file should have been updated before beta. Make
sure it is up to date with the current code.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-09-03 03:39:16 -07:00
John Johansen
ab5f180b08 Merge utils: ignore peer when parsing logs for non-peer access modes
utils: ignore peer when parsing logs for non-peer access modes

Some access modes (create, setopt, getopt, bind, shutdown, listen,
getattr, setattr) cannot be used with a peer in network rules.

Due to how auditing is implemented in the kernel, the peer information
might be available in the log (as faddr= but not daddr=), which causes
a failure in log parsing.

When parsing the log, check if that's the case and ignore the peer,
avoiding the exception on the NetworkRule constructor.

Fixes: https://gitlab.com/apparmor/apparmor/-/issues/427

Reported-by: Evan Caville <evan.caville@canonical.com>

Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>

Closes #427
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1314
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: John Johansen <john@jjmx.net>
2024-08-30 23:17:08 +00:00
Ryan Lee
8cb2e4ca9f Merge Replace 'scrub the environment' and similar wordings
The wording of "scrub the environment" with respect to execution modes is misleading, because a quick read of it could imply that it removes all environment variables. However, it actually enables ld.so's secure-execution mode, which removes a very limited subset of them. This MR rewords the relevant documentation and prompts. If proper environment variable filtering is added later, the documentation can be updated again then.

Synchronizes-with:
- Wiki page update, which I can do after this MR is approved
- Kernel patch to update wording of debug logs (patch submitted to the Apparmor mailing list [here](https://lists.ubuntu.com/archives/apparmor/2024-August/013339.html))

Things that may need updating first:

- Translations: attempting to update `utils/po/apparmor-utils.pot` resulted in a bunch of unrelated changes, so I'd like to ask about translation statuses before making a commit that updates that file properly.
- Adding info on which libc's actually behave differently based on AT_SECURE: glibc and musl libc both do, but they may do subtly different things. I don't know about other libc's.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1315
Approved-by: John Johansen <john@jjmx.net>
Merged-by: Ryan Lee <rlee287@yahoo.com>
2024-08-30 16:14:13 +00:00
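
As a hedged aside on the AT_SECURE point above: a process can check at runtime whether ld.so put it in secure-execution mode by reading the auxiliary vector (glibc sketch):

```c
#include <stdio.h>
#include <sys/auxv.h>

int main(void)
{
	/* Nonzero when the dynamic loader runs in secure-execution mode and
	 * ignores a limited set of environment variables (e.g. LD_PRELOAD),
	 * rather than scrubbing the whole environment. */
	printf("AT_SECURE=%lu\n", getauxval(AT_SECURE));
	return 0;
}
```
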
Georgia Garcia
1f7d7cd0e0 test_multi: add example of getattr perm with peer in the logs
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-08-29 17:12:54 -03:00
Georgia Garcia
7562c48c74 utils: ignore peer when parsing logs for non-peer access modes
Some access modes (create, setopt, getopt, bind, shutdown, listen,
getattr, setattr) cannot be used with a peer in network rules.

Due to how auditing is implemented in the kernel, the peer information
might be available in the log (as faddr= but not daddr=), which causes
a failure in log parsing.

When parsing the log, check if that's the case and ignore the peer,
avoiding the exception on the NetworkRule constructor.

Fixes: https://gitlab.com/apparmor/apparmor/-/issues/427
Reported-by: Evan Caville <evan.caville@canonical.com>
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-08-29 17:11:41 -03:00
John Johansen
74f254212a Merge profiles: enable php-fpm in /usr/bin and /usr/sbin
To enable the profile in distros that merge sbin into bin.

Fixes: https://gitlab.com/apparmor/apparmor/-/issues/421
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>

Closes #421
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1301
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-08-29 18:46:47 +00:00
John Johansen
cf3428f774 Merge profiles: slirp4netns: allow pivot_root
`pivot_root` is required for running `slirp4netns --enable-sandbox` inside LXD.
- https://github.com/rootless-containers/slirp4netns/issues/348
- https://github.com/rootless-containers/slirp4netns/blob/v1.3.1/sandbox.c#L101-L234

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1298
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-08-29 18:44:15 +00:00
John Johansen
b6e9df3495 Merge parser: fix rule priority destroying rule permissions for some classes
io_uring and userns mediation are encoding permissions on the class
byte. This is a mistake that should never have been allowed.

With the addition of rule priorities, the class byte mediates rule,
which ensures the kernel can determine that a class is being mediated,
is given the highest priority possible so that class mediation can not
be removed by a deny rule. See
  61b7568e1 ("parser: bug fix mediates_X stub rules.")
for details.

Unfortunately this breaks rule classes that encode permissions on the
class byte, because those rules will always have a lower priority and
the class mediates rule will always be selected over them resulting in
only the class mediates permission being on the rule class state.

Fix this by adding the mediates class rules for these rule classes
with the lowest priority possible. This means that any rule mediating
the class will wipe out the mediates class rule. So add a new mediates
class rule at the same priority as the rule being added.

This is a naive implementation and does result in more mediates rules
being added than necessary. The rule class could keep track of the
highest priority rule that had been added, and use that to reduce the
number of mediates rules it adds for the class.

Technically we could also get away with not adding the rules for allow
rules, as the kernel doesn't actually check the encoded permission but
whether the class state is not the trap state. But it is required with
deny rules to ensure the deny rule doesn't result in permissions being
removed from the class, resulting in the kernel thinking it is
unmediated. We also want to ensure that mediation is encoded for other
rule types like prompt, and in the future the kernel could check the
permission so we do want to guarantee that the class state has the
MAY_READ permission on it.

Note: there is another set of classes (file, mqueue, dbus, ...) which
encodes a default rule permission as

  class .* <perm>

this encoding is unfortunate in that it will also add the permission
to the class byte, but also sets up following states with the permission.
Thankfully, while this accepts anything, including nothing, matching
nothing generally isn't valid (eg. a file without any absolute name). For
this set of classes, the high priority mediates rule just ensures
that the null match case does not have permission.

Fixes: 61b7568e1 parser: bug fix mediates_X stub rules.
Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1307
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>
2024-08-29 18:41:51 +00:00
John Johansen
159c179193 Merge apparmor.d.pod: Fix writing of aa_change_profile
Signed-off-by: Jörg Sommer <joerg@jo-so.de>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1308
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-08-29 18:24:44 +00:00
Ryan Lee
8865ff69d8 Replace 'sanitise the environment' wording in aa.py ask_rule_questions
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-08-28 13:39:52 -07:00
Ryan Lee
65c84071bb Replace 'scrub the environment' wording in man pages with something more accurate
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2024-08-28 11:22:08 -07:00
Georgia Garcia
0ec0e2b035 Merge libapparmor: make af_protos.h consistent in different archs
af_protos.h is a generated table of the protocols created by looking
for definitions of IPPROTO_* in netinet/in.h. Depending on the
architecture, the order of the table may change when using -dM in the
compiler during the extraction of the defines.

This causes an issue because there is more than one IPPROTO defined
by the value 0: IPPROTO_IP and IPPROTO_HOPOPTS which is a header
extension used by IPv6. So if IPPROTO_HOPOPTS was first in the table,
then protocol=0 in the audit logs would be translated to hopopts.

This caused a failure in arm 32bit:

Output doesn't match expected data:
--- ./test_multi/testcase_unix_01.out	2024-08-15 01:47:53.000000000 +0000
+++ ./test_multi/out/testcase_unix_01.out	2024-08-15 23:42:10.187416392 +0000
@@ -12,7 +12,7 @@
 Peer Addr: @test_abstract_socket
 Network family: unix
 Socket type: stream
-Protocol: ip
+Protocol: hopopts
 Class: net
 Epoch: 1711454639
 Audit subid: 322

By the time protocol is resolved in grammar.y, we don't have
access to the net family to check if it's inet6. Instead of making
protocol dependent on the net family, make the order of the
af_protos.h table consistent between architectures using -dD.

Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1309
Approved-by: John Johansen <john@jjmx.net>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-08-26 12:39:26 +00:00
John Johansen
90c1358e49 Merge Revert "utils/emacs: add apparmor-mode.el"
This reverts commit 65b0f83aea.

This has since been moved into its own repo under the apparmor gitlab project at https://gitlab.com/apparmor/apparmor-mode

Signed-off-by: Alex Murray <alex.murray@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1312
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-08-26 05:21:59 +00:00
Alex Murray
2b1ddef16e Revert "utils/emacs: add apparmor-mode.el"
This reverts commit 65b0f83aea.

Signed-off-by: Alex Murray <alex.murray@canonical.com>
2024-08-26 13:54:28 +09:30
Georgia Garcia
95c419dc45 libapparmor: make af_protos.h consistent in different archs
af_protos.h is a generated table of the protocols created by looking
for definitions of IPPROTO_* in netinet/in.h. Depending on the
architecture, the order of the table may change when using -dM in the
compiler during the extraction of the defines.

This causes an issue because there is more than one IPPROTO defined
by the value 0: IPPROTO_IP and IPPROTO_HOPOPTS which is a header
extension used by IPv6. So if IPPROTO_HOPOPTS was first in the table,
then protocol=0 in the audit logs would be translated to hopopts.

This caused a failure in arm 32bit:

Output doesn't match expected data:
--- ./test_multi/testcase_unix_01.out	2024-08-15 01:47:53.000000000 +0000
+++ ./test_multi/out/testcase_unix_01.out	2024-08-15 23:42:10.187416392 +0000
@@ -12,7 +12,7 @@
 Peer Addr: @test_abstract_socket
 Network family: unix
 Socket type: stream
-Protocol: ip
+Protocol: hopopts
 Class: net
 Epoch: 1711454639
 Audit subid: 322

By the time protocol is resolved in grammar.y, we don't have
access to the net family to check if it's inet6. Instead of making
protocol dependent on the net family, make the order of the
af_protos.h table consistent between architectures using -dD.

Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-08-19 18:29:56 -03:00
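
The value collision at the root of this can be seen directly from the system headers; which name a generated value-to-name table reports for protocol 0 then depends only on which definition it sees first (minimal sketch):

```c
#include <netinet/in.h>
#include <stdio.h>

int main(void)
{
	/* Both macros expand to 0, so protocol=0 can map to either name
	 * depending on which definition the table generator emits first. */
	printf("IPPROTO_IP=%d IPPROTO_HOPOPTS=%d\n", IPPROTO_IP, IPPROTO_HOPOPTS);
	return 0;
}
```
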
Jörg Sommer
8195500a1e apparmor.d.pod: Fix writing of aa_change_profile
Signed-off-by: Jörg Sommer <joerg@jo-so.de>
2024-08-17 14:13:08 +02:00
John Johansen
204c0c5a3a parser: fix rule priority destroying rule permissions for some classes
io_uring and userns mediation are encoding permissions on the class
byte. This is a mistake that should never have been allowed.

With the addition of rule priorities, the class byte mediates rule,
which ensures the kernel can determine that a class is being mediated,
is given the highest priority possible so that class mediation can not
be removed by a deny rule. See
  61b7568e1 ("parser: bug fix mediates_X stub rules.")
for details.

Unfortunately this breaks rule classes that encode permissions on the
class byte, because those rules will always have a lower priority and
the class mediates rule will always be selected over them resulting in
only the class mediates permission being on the rule class state.

Fix this by adding the mediates class rules for these rule classes
with the lowest priority possible. This means that any rule mediating
the class will wipe out the mediates class rule. So add a new mediates
class rule at the same priority as the rule being added.

This is a naive implementation and does result in more mediates rules
being added than necessary. The rule class could keep track of the
highest priority rule that had been added, and use that to reduce the
number of mediates rules it adds for the class.

Technically we could also get away with not adding the rules for allow
rules, as the kernel doesn't actually check the encoded permission but
whether the class state is not the trap state. But it is required with
deny rules to ensure the deny rule doesn't result in permissions being
removed from the class, resulting in the kernel thinking it is
unmediated. We also want to ensure that mediation is encoded for other
rule types like prompt, and in the future the kernel could check the
permission so we do want to guarantee that the class state has the
MAY_READ permission on it.

Note: there is another set of classes (file, mqueue, dbus, ...) which
encodes a default rule permission as

  class .* <perm>

this encoding is unfortunate in that it will also add the permission
to the class byte, but also sets up following states with the permission.
Thankfully, while this accepts anything, including nothing, matching
nothing generally isn't valid (eg. a file without any absolute name). For
this set of classes, the high priority mediates rule just ensures
that the null match case does not have permission.

Fixes: 61b7568e1 parser: bug fix mediates_X stub rules.
Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-08-15 03:51:20 -07:00
John Johansen
4c8a27457e Merge utils: change os.mkdir to self.mkpath to create intermediary dirs
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1306
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
2024-08-15 04:45:12 +00:00
Georgia Garcia
a3eca67f38 utils: change os.mkdir to self.mkpath to create intermediary dirs
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-08-15 00:44:55 -03:00
Georgia Garcia
2083994513 profiles: enable php-fpm in /usr/bin and /usr/sbin
To enable the profile in distros that merge sbin into bin.

Fixes: https://gitlab.com/apparmor/apparmor/-/issues/421
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-08-14 10:52:53 -03:00
Akihiro Suda
bf5db67284 profiles: slirp4netns: allow pivot_root
`pivot_root` is required for running `slirp4netns --enable-sandbox` inside LXD.
- https://github.com/rootless-containers/slirp4netns/issues/348
- https://github.com/rootless-containers/slirp4netns/blob/v1.3.1/sandbox.c#L101-L234

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2024-08-14 17:29:13 +09:00
112 changed files with 2569 additions and 2410 deletions

3
.gitignore vendored
View File

@@ -1,4 +1,4 @@
apparmor-
apparmor-*
cscope.*
binutils/aa-enabled
binutils/aa-enabled.1
@@ -203,7 +203,6 @@ utils/apparmor/*.pyc
utils/apparmor/rule/*.pyc
utils/apparmor.egg-info/
utils/build/
!utils/emacs/apparmor-mode.el
utils/htmlcov/
utils/test/common_test.pyc
utils/test/.coverage

View File

@@ -13,7 +13,6 @@ workflow:
stages:
- build
- test
- spread
.ubuntu-common:
before_script:
@@ -127,6 +126,19 @@ test-profiles:
- make -C profiles check-abstractions.d
- make -C profiles check-local
# Build the regression tests (don't run them because that needs kernel access)
test-build-regression:
stage: test
needs: ["build-all"]
extends:
- .ubuntu-common
script:
# Additional dependencies required by regression tests
- printf '\e[0K%s:%s:%s[collapsed=true]\r\e[0K%s\n' section_start "$(date +%s)" install_extra_deps "Installing additional dependencies..."
- apt-get install --no-install-recommends -y attr fuse-overlayfs libdbus-1-dev liburing-dev
- printf '\e[0K%s:%s:%s\r\e[0K\n' section_end "$(date +%s)" install_extra_deps
- make -C tests/regression/apparmor -j $(nproc)
shellcheck:
stage: test
needs: []
@@ -184,123 +196,3 @@ coverity:
- "apparmor-*.tar.gz"
rules:
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH && $CI_PROJECT_PATH == "apparmor/apparmor"
.image-garden-x86_64:
stage: spread
# TODO: use tagged release once container tagging is improved upstream.
image: registry.gitlab.com/zygoon/image-garden:latest
tags:
- linux
- x86_64
- kvm
variables:
ARCH: x86_64
GARDEN_DL_DIR: dl
CACHE_POLICY: pull-push
CACHE_COMPRESSION_LEVEL: fastest
before_script:
# Prepare the image in dry-run mode. This helps in debugging cache misses
# when files are not cached correctly by the runner, causing the build section
# below to always do heavy-duty work.
- printf '\e[0K%s:%s:%s[collapsed=true]\r\e[0K%s\n' section_start "$(date +%s)" prepare_image_dry_run "Prepare image (dry run)"
- image-garden make --dry-run --debug "$GARDEN_SYSTEM.$ARCH.run" "$GARDEN_SYSTEM.$ARCH.qcow2" "$GARDEN_SYSTEM.seed.iso" "$GARDEN_SYSTEM.user-data" "$GARDEN_SYSTEM.meta-data"
- printf '\e[0K%s:%s:%s\r\e[0K\n' section_end "$(date +%s)" prepare_image_dry_run
script:
# Prepare the image, for real.
- printf '\e[0K%s:%s:%s[collapsed=true]\r\e[0K%s\n' section_start "$(date +%s)" prepare_image "Prepare image"
- image-garden make "$GARDEN_SYSTEM.$ARCH.run" "$GARDEN_SYSTEM.$ARCH.qcow2" "$GARDEN_SYSTEM.seed.iso" "$GARDEN_SYSTEM.user-data" "$GARDEN_SYSTEM.meta-data"
- printf '\e[0K%s:%s:%s\r\e[0K\n' section_end "$(date +%s)" prepare_image
cache:
# Cache the base image (pre-customization).
- key: image-garden-base-${GARDEN_SYSTEM}.${ARCH}
policy: $CACHE_POLICY
when: always
paths:
- $GARDEN_DL_DIR
# Those are never mutated so they are safe to share.
- efi-code.*.img
- efi-vars.*.img
# Cache the customized system. This cache depends on .image-garden.mk file
# so that any customization updates are immediately acted upon.
- key:
prefix: image-garden-custom-${GARDEN_SYSTEM}.${ARCH}-
files:
- .image-garden.mk
policy: $CACHE_POLICY
when: always
paths:
- $GARDEN_SYSTEM.*
- $GARDEN_SYSTEM.seed.iso
- $GARDEN_SYSTEM.meta-data
- $GARDEN_SYSTEM.user-data
# This job builds and caches the image that the job below looks at.
image-ubuntu-cloud-24.04-x86_64:
extends: .image-garden-x86_64
variables:
GARDEN_SYSTEM: ubuntu-cloud-24.04
needs: []
dependencies: []
rules:
- if: $CI_COMMIT_TAG
- if: $CI_PIPELINE_SOURCE == "merge_request_event" || $CI_COMMIT_BRANCH
changes:
paths:
- .image-garden.mk
- .gitlab-ci.yml
compare_to: "refs/heads/master"
.spread-x86_64:
extends: .image-garden-x86_64
variables:
# GitLab project identifier of zygoon/spread-dist can be seen on
# https://gitlab.com/zygoon/spread-dist, under the three-dot menu on
# top-right.
SPREAD_GITLAB_PROJECT_ID: "65375371"
# Git revision of spread to install.
# This must have been built via spread-dist.
# TODO: switch to upstream 1.0 release when available.
SPREAD_REV: 413817eda7bec07a3885e0717c178b965f8924e1
# Run all the tasks for a given system.
SPREAD_ARGS: "garden:$GARDEN_SYSTEM:"
SPREAD_GOARCH: amd64
before_script:
# Prepare the image in dry-run mode. This helps in debugging cache misses
# when files are not cached correctly by the runner, causing the build section
# below to always do heavy-duty work.
- printf '\e[0K%s:%s:%s[collapsed=true]\r\e[0K%s\n' section_start "$(date +%s)" prepare_image_dry_run "Prepare image (dry run)"
- image-garden make --dry-run --debug "$GARDEN_SYSTEM.$ARCH.run" "$GARDEN_SYSTEM.$ARCH.qcow2" "$GARDEN_SYSTEM.seed.iso" "$GARDEN_SYSTEM.user-data" "$GARDEN_SYSTEM.meta-data"
- stat .image-garden.mk "$GARDEN_SYSTEM".* || true
- printf '\e[0K%s:%s:%s\r\e[0K\n' section_end "$(date +%s)" prepare_image_dry_run
# Install the selected revision of spread.
- printf '\e[0K%s:%s:%s[collapsed=true]\r\e[0K%s\n' section_start "$(date +%s)" install_spread "Installing spread..."
# Install pre-built spread from https://gitlab.com/zygoon/spread-dist generic package repository.
- |
curl --header "JOB-TOKEN: ${CI_JOB_TOKEN}" --location --output spread "${CI_API_V4_URL}/projects/${SPREAD_GITLAB_PROJECT_ID}/packages/generic/spread/${SPREAD_REV}/spread.${SPREAD_GOARCH}"
- chmod +x spread
- printf '\e[0K%s:%s:%s\r\e[0K\n' section_end "$(date +%s)" install_spread
script:
- printf '\e[0K%s:%s:%s\r\e[0K%s\n' section_start "$(date +%s)" run_spread "Running spread for $GARDEN_SYSTEM..."
# TODO: transform to inject ^...$ to properly select jobs to run.
- mkdir -p spread-logs spread-artifacts
- ./spread -list $SPREAD_ARGS |
split --number=l/"${CI_NODE_INDEX:-1}"/"${CI_NODE_TOTAL:-1}" |
xargs --verbose ./spread -v -artifacts ./spread-artifacts -v | tee spread-logs/"$GARDEN_SYSTEM".log
- printf '\e[0K%s:%s:%s\r\e[0K\n' section_end "$(date +%s)" run_spread
artifacts:
paths:
- spread-logs
- spread-artifacts
when: always
spread-ubuntu-cloud-24.04-x86_64:
extends: .spread-x86_64
variables:
GARDEN_SYSTEM: ubuntu-cloud-24.04
SPREAD_ARGS: garden:$GARDEN_SYSTEM:tests/regression/ garden:$GARDEN_SYSTEM:tests/profiles/
CACHE_POLICY: pull
dependencies: []
needs:
- job: image-ubuntu-cloud-24.04-x86_64
optional: true
parallel: 4

View File

@@ -111,13 +111,21 @@ $ export PYTHON_VERSION=3
$ export PYTHON_VERSIONS=python3
```
Note that, in general, the build steps can be run in parallel, while the test
steps do not gain much speedup from being run in parallel. This is because the
test steps spawn a handful of long-lived test runner processes that mostly
run their tests sequentially and do not use `make`'s jobserver. Moreover,
process spawning overhead constitutes a significant part of test runtime, so
reworking the test harnesses to add parallelism (which would be a major undertaking
for the harnesses that do not have it already) would not produce much of a speedup.
### libapparmor:
```
$ cd ./libraries/libapparmor
$ sh ./autogen.sh
$ sh ./configure --prefix=/usr --with-perl --with-python # see below
$ make
$ make -j $(nproc)
$ make check
$ make install
```
@@ -130,7 +138,7 @@ generate Ruby bindings to libapparmor.]
```
$ cd binutils
$ make
$ make -j $(nproc)
$ make check
$ make install
```
@@ -139,7 +147,8 @@ $ make install
```
$ cd parser
$ make # depends on libapparmor having been built first
$ make -j $(nproc) # depends on libapparmor having been built first
$ make -j $(nproc) tst_binaries # a build step of make check that can be parallelized
$ make check
$ make install
```
@@ -149,7 +158,7 @@ $ make install
```
$ cd utils
$ make
$ make -j $(nproc)
$ make check PYFLAKES=/usr/bin/pyflakes3
$ make install
```
@@ -158,7 +167,7 @@ $ make install
```
$ cd changehat/mod_apparmor
$ make # depends on libapparmor having been built first
$ make -j $(nproc) # depends on libapparmor having been built first
$ make install
```
@@ -167,7 +176,7 @@ $ make install
```
$ cd changehat/pam_apparmor
$ make # depends on libapparmor having been built first
$ make -j $(nproc) # depends on libapparmor having been built first
$ make install
```
@@ -234,7 +243,7 @@ To run:
### Regression tests - using apparmor userspace installed on host
```
$ cd tests/regression/apparmor (requires root)
$ make USE_SYSTEM=1
$ make -j $(nproc) USE_SYSTEM=1
$ sudo make tests USE_SYSTEM=1
$ sudo bash open.sh -r # runs and saves the last testcase from open.sh
```
@@ -247,7 +256,7 @@ $ sudo bash open.sh -r # runs and saves the last testcase from open.sh
```
$ cd tests/regression/apparmor (requires root)
$ make
$ make -j $(nproc)
$ sudo make tests
$ sudo bash open.sh -r # runs and saves the last testcase from open.sh
```

View File

@@ -20,6 +20,8 @@
#include <ctype.h>
#include <dirent.h>
#include <regex.h>
#include <libintl.h>
#define _(s) gettext(s)
#include <sys/apparmor.h>
#include <sys/apparmor_private.h>
@@ -131,7 +133,7 @@ const char *process_statuses[] = {"enforce", "complain", "prompt", "kill", "unco
#define eprintf(...) \
do { \
if (!quiet) \
fprintf(stderr, __VA_ARGS__); \
fprintf(stderr, __VA_ARGS__); \
} while (0)
#define dprintf(...) \
@@ -156,14 +158,14 @@ static int open_profiles(FILE **fp)
ret = stat("/sys/module/apparmor", &st);
if (ret != 0) {
eprintf("apparmor not present.\n");
eprintf(_("apparmor not present.\n"));
return AA_EXIT_DISABLED;
}
dprintf("apparmor module is loaded.\n");
dprintf(_("apparmor module is loaded.\n"));
ret = aa_find_mountpoint(&apparmorfs);
if (ret == -1) {
eprintf("apparmor filesystem is not mounted.\n");
eprintf(_("apparmor filesystem is not mounted.\n"));
return AA_EXIT_NO_CONTROL;
}
@@ -176,9 +178,9 @@ static int open_profiles(FILE **fp)
*fp = fopen(apparmor_profiles, "r");
if (*fp == NULL) {
if (errno == EACCES) {
eprintf("You do not have enough privilege to read the profile set.\n");
eprintf(_("You do not have enough privilege to read the profile set.\n"));
} else {
eprintf("Could not open %s: %s", apparmor_profiles, strerror(errno));
eprintf(_("Could not open %s: %s"), apparmor_profiles, strerror(errno));
}
return AA_EXIT_NO_PERM;
}
@@ -351,7 +353,7 @@ static int get_processes(struct profile *profiles,
continue;
} else if (rc == -1 ||
asprintf(&exe, "/proc/%s/exe", entry->d_name) == -1) {
eprintf("ERROR: Failed to allocate memory\n");
eprintf(_("ERROR: Failed to allocate memory\n"));
ret = AA_EXIT_INTERNAL_ERROR;
goto exit;
} else if (mode) {
@@ -374,7 +376,7 @@ static int get_processes(struct profile *profiles,
// ensure enough space for NUL terminator
real_exe = calloc(PATH_MAX + 1, sizeof(char));
if (real_exe == NULL) {
eprintf("ERROR: Failed to allocate memory\n");
eprintf(_("ERROR: Failed to allocate memory\n"));
ret = AA_EXIT_INTERNAL_ERROR;
goto exit;
}
@@ -598,7 +600,7 @@ static int detailed_profiles(FILE *outf, filters_t *filters, bool json,
*/
subfilters.mode = &mode_filter;
if (regcomp(&mode_filter, profile_statuses[i], REG_NOSUB) != 0) {
eprintf("Error: failed to compile sub filter '%s'\n",
eprintf(_("Error: failed to compile sub filter '%s'\n"),
profile_statuses[i]);
return AA_EXIT_INTERNAL_ERROR;
}
@@ -664,7 +666,7 @@ static int detailed_processes(FILE *outf, filters_t *filters, bool json,
*/
subfilters.mode = &mode_filter;
if (regcomp(&mode_filter, process_statuses[i], REG_NOSUB) != 0) {
eprintf("Error: failed to compile sub filter '%s'\n",
eprintf(_("Error: failed to compile sub filter '%s'\n"),
profile_statuses[i]);
return AA_EXIT_INTERNAL_ERROR;
}
@@ -726,7 +728,7 @@ exit:
static int print_legacy(const char *command)
{
printf("Usage: %s [OPTIONS]\n"
printf(_("Usage: %s [OPTIONS]\n"
"Legacy options and their equivalent command\n"
" --profiled --count --profiles\n"
" --enforced --count --profiles --mode=enforced\n"
@@ -734,8 +736,8 @@ static int print_legacy(const char *command)
" --kill --count --profiles --mode=kill\n"
" --prompt --count --profiles --mode=prompt\n"
" --special-unconfined --count --profiles --mode=unconfined\n"
" --process-mixed --count --ps --mode=mixed\n",
command);
" --process-mixed --count --ps --mode=mixed\n"),
command);
exit(0);
return 0;
@@ -745,7 +747,7 @@ static int usage_filters(void)
{
long unsigned int i;
printf("Usage of filters\n"
printf(_("Usage of filters\n"
"Filters are used to reduce the output of information to only\n"
"those entries that will match the filter. Filters use posix\n"
"regular expression syntax. The possible values for exes that\n"
@@ -755,7 +757,7 @@ static int usage_filters(void)
" --filter.profiles: regular expression to match displayed profile names\n"
" --filter.pid: regular expression to match displayed processes pids\n"
" --filter.exe: regular expression to match executable\n"
);
));
for (i = 0; i < ARRAY_SIZE(process_statuses); i++) {
printf("%s%s", i ? ", " : "", process_statuses[i]);
}
@@ -773,7 +775,7 @@ static int print_usage(const char *command, bool error)
status = EXIT_FAILURE;
}
printf("Usage: %s [OPTIONS]\n"
printf(_("Usage: %s [OPTIONS]\n"
"Displays various information about the currently loaded AppArmor policy.\n"
"Default if no options given\n"
" --show=all\n\n"
@@ -790,8 +792,8 @@ static int print_usage(const char *command, bool error)
" --verbose (default) displays data points about loaded policy set\n"
" --quiet don't output error messages\n"
" -h[(legacy|filters)] this message, or info on the specified option\n"
" --help[=(legacy|filters)] this message, or info on the specified option\n",
command);
" --help[=(legacy|filters)] this message, or info on the specified option\n"),
command);
exit(status);
@@ -867,7 +869,7 @@ static int parse_args(int argc, char **argv)
} else if (strcmp(optarg, "filters") == 0) {
usage_filters();
} else {
eprintf("Error: Invalid --help option '%s'.\n", optarg);
eprintf(_("Error: Invalid --help option '%s'.\n"), optarg);
print_usage(argv[0], true);
break;
}
@@ -935,7 +937,7 @@ static int parse_args(int argc, char **argv)
} else if (strcmp(optarg, "processes") == 0) {
opt_show = SHOW_PROCESSES;
} else {
eprintf("Error: Invalid --show option '%s'.\n", optarg);
eprintf(_("Error: Invalid --show option '%s'.\n"), optarg);
print_usage(argv[0], true);
break;
}
@@ -957,7 +959,7 @@ static int parse_args(int argc, char **argv)
break;
default:
eprintf("Error: Invalid command.\n");
eprintf(_("Error: Invalid command.\n"));
print_usage(argv[0], true);
break;
}
@@ -982,7 +984,7 @@ int main(int argc, char **argv)
if (argc > 1) {
int pos = parse_args(argc, argv);
if (pos < argc) {
eprintf("Error: Unknown options.\n");
eprintf(_("Error: Unknown options.\n"));
print_usage(progname, true);
}
} else {
@@ -994,24 +996,24 @@ int main(int argc, char **argv)
init_filters(&filters, &filter_set);
if (regcomp(filters.mode, opt_mode, REG_NOSUB) != 0) {
eprintf("Error: failed to compile mode filter '%s'\n",
eprintf(_("Error: failed to compile mode filter '%s'\n"),
opt_mode);
return AA_EXIT_INTERNAL_ERROR;
}
if (regcomp(filters.profile, opt_profiles, REG_NOSUB) != 0) {
eprintf("Error: failed to compile profiles filter '%s'\n",
eprintf(_("Error: failed to compile profiles filter '%s'\n"),
opt_profiles);
ret = AA_EXIT_INTERNAL_ERROR;
goto out;
}
if (regcomp(filters.pid, opt_pid, REG_NOSUB) != 0) {
eprintf("Error: failed to compile ps filter '%s'\n",
eprintf(_("Error: failed to compile ps filter '%s'\n"),
opt_pid);
ret = AA_EXIT_INTERNAL_ERROR;
goto out;
}
if (regcomp(filters.exe, opt_exe, REG_NOSUB) != 0) {
eprintf("Error: failed to compile exe filter '%s'\n",
eprintf(_("Error: failed to compile exe filter '%s'\n"),
opt_exe);
ret = AA_EXIT_INTERNAL_ERROR;
goto out;
@@ -1026,7 +1028,7 @@ int main(int argc, char **argv)
outf_save = outf;
outf = open_memstream(&buffer, &buffer_size);
if (!outf) {
eprintf("Failed to open memstream: %m\n");
eprintf(_("Failed to open memstream: %m\n"));
return AA_EXIT_INTERNAL_ERROR;
}
}
@@ -1037,7 +1039,7 @@ int main(int argc, char **argv)
*/
ret = get_profiles(fp, &profiles, &nprofiles);
if (ret != 0) {
eprintf("Failed to get profiles: %d....\n", ret);
eprintf(_("Failed to get profiles: %d....\n"), ret);
goto out;
}
@@ -1066,7 +1068,7 @@ int main(int argc, char **argv)
ret = get_processes(profiles, nprofiles, &processes, &nprocesses);
if (ret != 0) {
eprintf("Failed to get processes: %d....\n", ret);
eprintf(_("Failed to get processes: %d....\n"), ret);
} else if (opt_count) {
ret = simple_filtered_process_count(outf, &filters, opt_json,
processes, nprocesses);
@@ -1092,14 +1094,14 @@ int main(int argc, char **argv)
outf = outf_save;
json = cJSON_Parse(buffer);
if (!json) {
eprintf("Failed to parse json output");
eprintf(_("Failed to parse json output"));
ret = AA_EXIT_INTERNAL_ERROR;
goto out;
}
pretty = cJSON_Print(json);
if (!pretty) {
eprintf("Failed to print pretty json");
eprintf(_("Failed to print pretty json"));
ret = AA_EXIT_INTERNAL_ERROR;
goto out;
}
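For readers following the --pretty-json flow in the hunks above (capture the compact JSON output in a memstream, then re-serialize it with cJSON), here is a minimal standalone sketch. The header path and the sample JSON string are assumptions for illustration only, not taken from aa_status.c.

#include <stdio.h>
#include <stdlib.h>
#include <cjson/cJSON.h>   /* header location may differ per distro */

int main(void)
{
	char *buffer = NULL;
	size_t buffer_size = 0;

	/* Capture compact JSON in memory instead of writing it straight to
	 * stdout, the same trick aa-status uses for --pretty-json. */
	FILE *outf = open_memstream(&buffer, &buffer_size);
	if (!outf) {
		perror("open_memstream");
		return 1;
	}
	fprintf(outf, "{\"profiles\":{\"count\":2}}");
	fclose(outf);	/* finalizes buffer and buffer_size */

	cJSON *json = cJSON_Parse(buffer);
	if (!json) {
		fprintf(stderr, "Failed to parse json output\n");
		free(buffer);
		return 1;
	}
	char *pretty = cJSON_Print(json);	/* re-serialize with indentation */
	if (pretty)
		puts(pretty);

	cJSON_free(pretty);
	cJSON_Delete(json);
	free(buffer);
	return 0;
}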

View File

@@ -1,14 +1,14 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) YEAR Canonical Ltd
# This file is distributed under the same license as the PACKAGE package.
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
# Translations for aa_enabled
# Copyright (C) 2024 Canonical Ltd
# This file is distributed under the same license as the AppArmor package.
# John Johansen <john.johansen@canonical.com>, 2020.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: PACKAGE VERSION\n"
"Report-Msgid-Bugs-To: apparmor@lists.ubuntu.com\n"
"POT-Creation-Date: 2020-10-14 03:52-0700\n"
"POT-Creation-Date: 2024-08-31 15:59-0700\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"

View File

@@ -1,14 +1,14 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) YEAR Canonical Ltd
# This file is distributed under the same license as the PACKAGE package.
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
# Translations for aa_exec
# Copyright (C) 2024 Canonical Ltd
# This file is distributed under the same license as the AppArmor package.
# John Johansen <john.johansen@canonical.com>, 2020.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: PACKAGE VERSION\n"
"Report-Msgid-Bugs-To: apparmor@lists.ubuntu.com\n"
"POT-Creation-Date: 2020-10-14 03:52-0700\n"
"POT-Creation-Date: 2024-08-31 15:59-0700\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"

View File

@@ -1,14 +1,14 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) YEAR Canonical Ltd
# This file is distributed under the same license as the PACKAGE package.
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
# Translations for aa_features_abi
# Copyright (C) 2024 Canonical Ltd
# This file is distributed under the same license as the AppArmor package.
# John Johansen <john.johansen@canonical.com>, 2011.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: PACKAGE VERSION\n"
"Report-Msgid-Bugs-To: apparmor@lists.ubuntu.com\n"
"POT-Creation-Date: 2020-10-14 03:52-0700\n"
"POT-Creation-Date: 2024-08-31 15:59-0700\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"

View File

@@ -1,14 +1,14 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) YEAR Canonical Ltd
# This file is distributed under the same license as the PACKAGE package.
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
# Translations for aa_load
# Copyright (C) 2024 Canonical Ltd
# This file is distributed under the same license as the AppArmor package.
# John Johansen <john.johansen@canonical.com>, 2020.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: PACKAGE VERSION\n"
"Report-Msgid-Bugs-To: apparmor@lists.ubuntu.com\n"
"POT-Creation-Date: 2025-02-18 07:37-0800\n"
"POT-Creation-Date: 2024-08-31 15:59-0700\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"


binutils/po/aa_status.pot Normal file
View File

@@ -0,0 +1,165 @@
# Translations for aa_status
# Copyright (C) 2024 Canonical Ltd
# This file is distributed under the same license as the AppArmor package.
# John Johansen <john.johansen@canonical.com>, 2024.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: PACKAGE VERSION\n"
"Report-Msgid-Bugs-To: apparmor@lists.ubuntu.com\n"
"POT-Creation-Date: 2024-08-31 17:49-0700\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"Language: \n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=CHARSET\n"
"Content-Transfer-Encoding: 8bit\n"
#: ../aa_status.c:161
msgid "apparmor not present.\n"
msgstr ""
#: ../aa_status.c:164
msgid "apparmor module is loaded.\n"
msgstr ""
#: ../aa_status.c:168
msgid "apparmor filesystem is not mounted.\n"
msgstr ""
#: ../aa_status.c:181
msgid "You do not have enough privilege to read the profile set.\n"
msgstr ""
#: ../aa_status.c:183
#, c-format
msgid "Could not open %s: %s"
msgstr ""
#: ../aa_status.c:356 ../aa_status.c:379
msgid "ERROR: Failed to allocate memory\n"
msgstr ""
#: ../aa_status.c:587 ../aa_status.c:653
#, c-format
msgid "Error: failed to compile sub filter '%s'\n"
msgstr ""
#: ../aa_status.c:715
#, c-format
msgid ""
"Usage: %s [OPTIONS]\n"
"Legacy options and their equivalent command\n"
" --profiled --count --profiles\n"
" --enforced --count --profiles --mode=enforced\n"
" --complaining --count --profiles --mode=complain\n"
" --kill --count --profiles --mode=kill\n"
" --prompt --count --profiles --mode=prompt\n"
" --special-unconfined --count --profiles --mode=unconfined\n"
" --process-mixed --count --ps --mode=mixed\n"
msgstr ""
#: ../aa_status.c:734
#, c-format
msgid ""
"Usage of filters\n"
"Filters are used to reduce the output of information to only\n"
"those entries that will match the filter. Filters use posix\n"
"regular expression syntax. The possible values for exes that\n"
"support filters are below\n"
"\n"
" --filter.mode: regular expression to match the profile "
"mode modes: enforce, complain, kill, unconfined, mixed\n"
" --filter.profiles: regular expression to match displayed profile names\n"
" --filter.pid: regular expression to match displayed processes pids\n"
" --filter.exe: regular expression to match executable\n"
msgstr ""
#: ../aa_status.c:762
#, c-format
msgid ""
"Usage: %s [OPTIONS]\n"
"Displays various information about the currently loaded AppArmor policy.\n"
"Default if no options given\n"
" --show=all\n"
"\n"
"OPTIONS (one only):\n"
" --enabled returns error code if AppArmor not enabled\n"
" --show=X What information to show. {profiles,processes,all}\n"
" --count print the number of entries. Implies --quiet\n"
" --filter.mode=filter see filters\n"
" --filter.profiles=filter see filters\n"
" --filter.pid=filter see filters\n"
" --filter.exe=filter see filters\n"
" --json displays multiple data points in machine-readable JSON "
"format\n"
" --pretty-json same data as --json, formatted for human consumption as "
"well\n"
" --verbose (default) displays data points about loaded policy set\n"
" --quiet don't output error messages\n"
" -h[(legacy|filters)] this message, or info on the specified option\n"
" --help[=(legacy|filters)] this message, or info on the specified option\n"
msgstr ""
#: ../aa_status.c:856
#, c-format
msgid "Error: Invalid --help option '%s'.\n"
msgstr ""
#: ../aa_status.c:924
#, c-format
msgid "Error: Invalid --show option '%s'.\n"
msgstr ""
#: ../aa_status.c:946
msgid "Error: Invalid command.\n"
msgstr ""
#: ../aa_status.c:971
msgid "Error: Unknown options.\n"
msgstr ""
#: ../aa_status.c:983
#, c-format
msgid "Error: failed to compile mode filter '%s'\n"
msgstr ""
#: ../aa_status.c:988
#, c-format
msgid "Error: failed to compile profiles filter '%s'\n"
msgstr ""
#: ../aa_status.c:994
#, c-format
msgid "Error: failed to compile ps filter '%s'\n"
msgstr ""
#: ../aa_status.c:1000
#, c-format
msgid "Error: failed to compile exe filter '%s'\n"
msgstr ""
#: ../aa_status.c:1015
#, c-format
msgid "Failed to open memstream: %m\n"
msgstr ""
#: ../aa_status.c:1026
#, c-format
msgid "Failed to get profiles: %d....\n"
msgstr ""
#: ../aa_status.c:1050
#, c-format
msgid "Failed to get processes: %d....\n"
msgstr ""
#: ../aa_status.c:1076
msgid "Failed to parse json output"
msgstr ""
#: ../aa_status.c:1083
msgid "Failed to print pretty json"
msgstr ""

View File

@@ -1 +1 @@
4.1.0~beta5
4.1.0~beta1

View File

@@ -22,15 +22,15 @@
=head1 NAME
aa_change_hat - change to or from a "hat" within a AppArmor profile
aa_change_hat - change to or from a "hat" within a AppArmor profile
=head1 SYNOPSIS
B<#include E<lt>sys/apparmor.hE<gt>>
B<int aa_change_hat (char *subprofile, unsigned long magic_token);>
B<int aa_change_hat (const char *subprofile, unsigned long magic_token);>
B<int aa_change_hatv (char *subprofiles[], unsigned long magic_token);>
B<int aa_change_hatv (const char *subprofiles[], unsigned long magic_token);>
B<int aa_change_hat_vargs (unsigned long magic_token, ...);>

View File

@@ -22,7 +22,7 @@
=head1 NAME
aa_change_profile, aa_change_onexec - change a tasks profile
aa_change_profile, aa_change_onexec - change a task's profile
=head1 SYNOPSIS
@@ -58,8 +58,8 @@ The aa_change_onexec() function is like the aa_change_profile() function
except it specifies that the profile transition should take place on the
next exec instead of immediately. The delayed profile change takes
precedence over any exec transition rules within the confining profile.
Delaying the profile boundary has a couple of advantages, it removes the
need for stub transition profiles and the exec boundary is a natural security
Delaying the profile boundary has a couple of advantages: it removes the
need for stub transition profiles, and the exec boundary is a natural security
layer where potentially sensitive memory is unmapped.
=head1 RETURN VALUE

View File

@@ -54,7 +54,7 @@ B<typedef struct aa_features aa_features;>
B<int aa_features_new(aa_features **features, int dirfd, const char *path);>
B<int aa_features_new_from_file(aa_features **features, int fd);>
B<int aa_features_new_from_file(aa_features **features, int file);>
B<int aa_features_new_from_string(aa_features **features, const char *string, size_t size);>

View File

@@ -58,6 +58,9 @@ appropriately.
=head1 ERRORS
# podchecker warns about duplicate link targets for EACCES, EBUSY, ENOENT,
# and ENOMEM, but this is a warning that is safe to ignore.
B<aa_is_enabled>
=over 4

View File

@@ -41,7 +41,7 @@ result is an intersection of all profiles which are stacked. Stacking profiles
together is desirable when wanting to ensure that confinement will never become
more permissive. When changing between two profiles, as performed with
aa_change_profile(2), there is always the possibility that the new profile is
more permissive than the old profile but that possibility is eliminated when
more permissive than the old profile, but that possibility is eliminated when
using aa_stack_profile().
To stack a profile with the current confinement context, a task can use the
@@ -68,7 +68,7 @@ The aa_stack_onexec() function is like the aa_stack_profile() function
except it specifies that the stacking should take place on the next exec
instead of immediately. The delayed profile change takes precedence over any
exec transition rules within the confining profile. Delaying the stacking
boundary has a couple of advantages, it removes the need for stub transition
boundary has a couple of advantages: it removes the need for stub transition
profiles and the exec boundary is a natural security layer where potentially
sensitive memory is unmapped.
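As context for the stacking semantics described in this man page, a minimal usage sketch of requesting a profile stack on the next exec; the profile name "sandbox_profile" is hypothetical and the program must be linked with -lapparmor:

#include <stdio.h>
#include <unistd.h>
#include <sys/apparmor.h>   /* aa_stack_onexec(2) from libapparmor */

int main(void)
{
	/* Stack the (hypothetical) profile "sandbox_profile" on top of the
	 * current confinement. The change takes effect at the next exec, so
	 * the resulting confinement is the intersection of both profiles and
	 * can only become tighter, never more permissive. */
	if (aa_stack_onexec("sandbox_profile") == -1) {
		perror("aa_stack_onexec");
		return 1;
	}
	execlp("true", "true", (char *)NULL);
	perror("execlp");
	return 1;
}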

View File

@@ -32,10 +32,10 @@ INCLUDES = $(all_includes)
#
# After changing the AA_LIB_* variables, also update EXPECTED_SO_NAME.
AA_LIB_CURRENT = 25
AA_LIB_REVISION = 1
AA_LIB_AGE = 24
EXPECTED_SO_NAME = libapparmor.so.1.24.1
AA_LIB_CURRENT = 20
AA_LIB_REVISION = 0
AA_LIB_AGE = 19
EXPECTED_SO_NAME = libapparmor.so.1.19.0
SUFFIXES = .pc.in .pc

View File

@@ -1,3 +1,3 @@
SUBDIRS = perl python ruby
EXTRA_DIST = SWIG/*.i java/Makefile.am
EXTRA_DIST = SWIG/*.i

View File

@@ -258,13 +258,7 @@ extern int aa_is_enabled(void);
* allocation uninitialized (0) != SWIG_NEWOBJ
*/
%#if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L
/*
* Some older versions of SWIG place this right after a goto label
* This would then be a label followed by a declaration, a C23 extension (!)
* To ensure this works for older SWIG versions and older compilers,
* make this a block element with curly braces.
*/
{static_assert(SWIG_NEWOBJ != 0, "SWIG_NEWOBJ is 0");}
static_assert(SWIG_NEWOBJ != 0);
%#endif
if ($1 != NULL && alloc_tracking$argnum != NULL) {
for (Py_ssize_t i=0; i<seq_len$argnum; i++) {
@@ -321,17 +315,10 @@ extern int aa_stack_onexec(const char *profile);
* We can't use "typedef int pid_t" because we still support systems
* with 16-bit PIDs and SWIG can't find sys/types.h
*
* Capture the passed-in value as a long because pid_t is guaranteed
* to be a signed integer and because the aalogparse struct uses
* (unsigned) longs to store pid values. While intmax_t would be more
* technically correct, if sizeof(pid_t) > sizeof(long) then aalogparse
* itself would also need fixing.
* Capture the passed-in value as an intmax_t because pid_t is guaranteed
* to be a signed integer
*/
%typemap(in,noblock=1,fragment="SWIG_AsVal_long") pid_t (int conv_pid, long pid_large) {
%#if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L
static_assert(sizeof(pid_t) <= sizeof(long),
"pid_t type is too large to be stored in a long");
%#endif
%typemap(in,noblock=1,fragment="SWIG_AsVal_long") pid_t (int conv_pid, intmax_t pid_large) {
conv_pid = SWIG_AsVal_long($input, &pid_large);
if (!SWIG_IsOK(conv_pid)) {
%argument_fail(conv_pid, "pid_t", $symname, $argnum);
@@ -341,7 +328,7 @@ extern int aa_stack_onexec(const char *profile);
* Technically this is implementation-defined behaviour but we should be fine
*/
$1 = (pid_t) pid_large;
if ((long) $1 != pid_large) {
if ((intmax_t) $1 != pid_large) {
SWIG_exception_fail(SWIG_OverflowError, "pid_t is too large");
}
}
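The round-trip check in the typemap above (cast to pid_t, then compare against the wider value) is a general pattern for detecting overflow when narrowing an integer. A standalone sketch of that pattern, independent of SWIG and using a hypothetical helper name:

#include <cstdio>
#include <sys/types.h>

// Narrow a wider integer to pid_t, failing if the value does not survive the
// round trip (i.e. it overflowed the narrower type).
static bool narrow_to_pid(long wide, pid_t *out)
{
	pid_t narrowed = (pid_t) wide;
	if ((long) narrowed != wide)
		return false;	/* value too large for pid_t */
	*out = narrowed;
	return true;
}

int main(void)
{
	pid_t p;
	printf("%d\n", narrow_to_pid(12345L, &p) ? (int) p : -1);
	return 0;
}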

View File

@@ -1,21 +0,0 @@
WRAPPERFILES = apparmorlogparse_wrap.c
BUILT_SOURCES = apparmorlogparse_wrap.c
all-local: apparmorlogparse_wrap.o
$(CC) -module apparmorlogparse_wrap.o -o libaalogparse.so
apparmorlogparse_wrap.o: apparmorlogparse_wrap.c
$(CC) -c apparmorlogparse_wrap.c $(CFLAGS) -I../../src -I/usr/include/classpath -fno-strict-aliasing -o apparmorlogparse_wrap.o
clean-local:
rm -rf org
apparmorlogparse_wrap.c: org/aalogparse ../SWIG/*.i
$(SWIG) -java -I../SWIG -I../../src -outdir org/aalogparse \
-package org.aalogparse -o apparmorlogparse_wrap.c libaalogparse.i
org/aalogparse:
mkdir -p org/aalogparse
EXTRA_DIST = $(BUILT_SOURCES)

View File

@@ -1,2 +1,4 @@
/home/cb/bin/hello.sh {
/usr/bin/rm mrix,
}

View File

@@ -1,2 +1,4 @@
/usr/bin/wireshark {
/usr/lib64/wireshark/extcap/androiddump mrix,
}

View File

@@ -1,4 +1,4 @@
/bin/ping {
ping2 ix,
/bin/ping mrix,
}

View File

@@ -1,4 +1,4 @@
/bin/ping {
/bin/ping ix,
/bin/ping mrix,
}

View File

@@ -1,4 +1,4 @@
/bin/ping {
/bin/ping ix,
/bin/ping mrix,
}

View File

@@ -0,0 +1,4 @@
/home/steve/aa-regression-tests/link {
/tmp/sdtest.8236-29816-IN8243/target l,
}

View File

@@ -1,3 +1,4 @@
/tmp/apparmor-2.8.0/tests/regression/apparmor/dbus_service {
dbus send bus=system path=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=LookupDynamicUserByName peer=( name=org.freedesktop.systemd1, label=unconfined),
dbus send bus=system path=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=LookupDynamicUserByName peer=(label=unconfined),
}

View File

@@ -388,7 +388,7 @@ aa_change_hat(2) can take advantage of subprofiles to run under different
confinements, dependent on program logic. Several aa_change_hat(2)-aware
applications exist, including an Apache module, mod_apparmor(5); a PAM
module, pam_apparmor; and a Tomcat valve, tomcat_apparmor. Applications
written or modified to use change_profile(2) transition permanently to the
written or modified to use aa_change_profile(2) transition permanently to the
specified profile. libvirt is one such application.
=head2 Profile Head
@@ -604,7 +604,7 @@ modes:
=item B<Ux>
- unconfined execute -- scrub the environment
- unconfined execute -- use ld.so(8) secure-execution mode
=item B<px>
@@ -612,7 +612,7 @@ modes:
=item B<Px>
- discrete profile execute -- scrub the environment
- discrete profile execute -- use ld.so(8) secure-execution mode
=item B<cx>
@@ -620,7 +620,7 @@ modes:
=item B<Cx>
- transition to subprofile on execute -- scrub the environment
- transition to subprofile on execute -- use ld.so(8) secure-execution mode
=item B<ix>
@@ -632,7 +632,7 @@ modes:
=item B<Pix>
- discrete profile execute with inherit fallback -- scrub the environment
- discrete profile execute with inherit fallback -- use ld.so(8) secure-execution mode
=item B<cix>
@@ -640,7 +640,7 @@ modes:
=item B<Cix>
- transition to subprofile on execute with inherit fallback -- scrub the environment
- transition to subprofile on execute with inherit fallback -- use ld.so(8) secure-execution mode
=item B<pux>
@@ -648,7 +648,7 @@ modes:
=item B<PUx>
- discrete profile execute with fallback to unconfined -- scrub the environment
- discrete profile execute with fallback to unconfined -- use ld.so(8) secure-execution mode
=item B<cux>
@@ -656,7 +656,7 @@ modes:
=item B<CUx>
- transition to subprofile on execute with fallback to unconfined -- scrub the environment
- transition to subprofile on execute with fallback to unconfined -- use ld.so(8) secure-execution mode
=item B<deny x>
@@ -715,20 +715,20 @@ constrained, see the apparmor(7) man page.
B<WARNING> 'ux' should only be used in very special cases. It enables the
designated child processes to be run without any AppArmor protection.
'ux' does not scrub the environment of variables such as LD_PRELOAD;
as a result, the calling domain may have an undue amount of influence
over the callee. Use this mode only if the child absolutely must be
'ux' does not use ld.so(8) secure-execution mode to clear variables such as
LD_PRELOAD; as a result, the calling domain may have an undue amount of
influence over the callee. Use this mode only if the child absolutely must be
run unconfined and LD_PRELOAD must be used. Any profile using this mode
provides negligible security. Use at your own risk.
Incompatible with other exec transition modes and the deny qualifier.
=item B<Ux - unconfined execute -- scrub the environment>
=item B<Ux - unconfined execute -- use ld.so(8) secure-execution mode>
'Ux' allows the named program to run in 'ux' mode, but AppArmor
will invoke the Linux Kernel's B<unsafe_exec> routines to scrub
the environment, similar to setuid programs. (See ld.so(8) for some
information on setuid/setgid environment scrubbing.)
will invoke the Linux Kernel's B<unsafe_exec> routines to set ld.so(8)
secure-execution mode and clear environment variables such as LD_PRELOAD,
similar to setuid programs. (See ld.so(8) for more information.)
B<WARNING> 'Ux' should only be used in very special cases. It enables the
designated child processes to be run without any AppArmor protection.
@@ -743,18 +743,18 @@ This mode requires that a discrete security profile is defined for a
program executed and forces an AppArmor domain transition. If there is
no profile defined then the access will be denied.
B<WARNING> 'px' does not scrub the environment of variables such as
LD_PRELOAD; as a result, the calling domain may have an undue amount of
B<WARNING> 'px' does not use ld.so(8) secure-execution mode to clear variables
such as LD_PRELOAD; as a result, the calling domain may have an undue amount of
influence over the callee.
Incompatible with other exec transition modes and the deny qualifier.
=item B<Px - Discrete Profile execute mode -- scrub the environment>
=item B<Px - Discrete Profile execute mode -- use ld.so(8) secure-execution mode>
'Px' allows the named program to run in 'px' mode, but AppArmor
will invoke the Linux Kernel's B<unsafe_exec> routines to scrub
the environment, similar to setuid programs. (See ld.so(8) for some
information on setuid/setgid environment scrubbing.)
will invoke the Linux Kernel's B<unsafe_exec> routines to set ld.so(8)
secure-execution mode and clear environment variables such as LD_PRELOAD,
similar to setuid programs. (See ld.so(8) for more information.)
Incompatible with other exec transition modes and the deny qualifier.
@@ -764,18 +764,18 @@ This mode requires that a local security profile is defined and forces an
AppArmor domain transition to the named profile. If there is no profile
defined then the access will be denied.
B<WARNING> 'cx' does not scrub the environment of variables such as
LD_PRELOAD; as a result, the calling domain may have an undue amount of
B<WARNING> 'cx' does not use ld.so(8) secure-execution mode to clear variables
such as LD_PRELOAD; as a result, the calling domain may have an undue amount of
influence over the callee.
Incompatible with other exec transition modes and the deny qualifier.
=item B<Cx - Transition to Subprofile execute mode -- scrub the environment>
=item B<Cx - Transition to Subprofile execute mode -- use ld.so(8) secure-execution mode>
'Cx' allows the named program to run in 'cx' mode, but AppArmor
will invoke the Linux Kernel's B<unsafe_exec> routines to scrub
the environment, similar to setuid programs. (See ld.so(8) for some
information on setuid/setgid environment scrubbing.)
will invoke the Linux Kernel's B<unsafe_exec> routines to set ld.so(8)
secure-execution mode and clear environment variables such as LD_PRELOAD,
similar to setuid programs. (See ld.so(8) for more information.)
Incompatible with other exec transition modes and the deny qualifier.
@@ -788,7 +788,7 @@ will inherit the current profile.
This mode is useful when a confined program needs to call another
confined program without gaining the permissions of the target's
profile, or losing the permissions of the current profile. There is no
version to scrub the environment because 'ix' executions don't change
version to set secure-execution mode because 'ix' executions don't change
privileges.
Incompatible with other exec transition modes and the deny qualifier.
@@ -1690,11 +1690,11 @@ rule set. Eg.
change_profile /bin/bash -> {new_profile1,new_profile2,new_profile3},
The exec mode dictates whether or not the Linux Kernel's B<unsafe_exec>
routines should be used to scrub the environment, similar to setuid programs.
(See ld.so(8) for some information on setuid/setgid environment scrubbing.) The
B<safe> mode sets up environment scrubbing to occur when the new application is
executed and B<unsafe> mode disables AppArmor's requirement for environment
scrubbing (the kernel and/or libc may still require environment scrubbing). An
routines should be used to set ld.so(8) secure-execution mode and clear
environment variables such as LD_PRELOAD, similar to setuid programs.
(See ld.so(8) for more information.) The B<safe> mode sets up secure-execution
mode for the new application, and B<unsafe> mode disables AppArmor's
requirement for it (the kernel and/or libc may still turn it on). An
exec mode can only be specified when an exec condition is present.
change_profile safe /bin/bash -> new_profile,
@@ -1796,61 +1796,6 @@ F</etc/apparmor.d/tunables/xdg-user-dirs.d> for B<@{XDG_*}>.
The special B<@{profile_name}> variable is set to the profile name and may be
used in all policy.
=head3 Notes on variable expansion and the / character
It is important to note that how AppArmor performs variable expansion
depends on the context where a variable is used. When a variable is
expanded it can result in a string with multiple path characters
next to each other, in a way that is not evident when looking at
policy.
Eg.
=over 4
Given the following variable definition and rule
@{HOME}=/home/*/
file rw @{HOME}/*,
The variable expansion results in a rule of
file rw /home/*//*.
=back
When this occurs in a context where a path is expected, AppArmor will
canonicalize the path by collapsing consecutive / characters into
a single character. For the above example, this would be
file rw /home/*/*,
There is one exception to this rule, when the consecutive / characters
are at the beginning of a path, this indicates a posix namespace
and the characters will not be collapsed.
Eg.
=over 4
@{HOME}=/home/*/
file rw /@{HOME}/*,
will result in an expansion of
file rw //home/*//*,
which is collapsed to
file rw //home/*/*,
Note: that the leading // in the above example is not collapsed to a
single /. However the second // (that was also seen in the first
example) is collapsed.
=back
=head2 Alias rules
AppArmor also provides alias rules for remapping paths for site-specific
@@ -2152,7 +2097,7 @@ An example AppArmor profile:
/usr/lib/** r,
/tmp/foo.pid wr,
/tmp/foo.* lrw,
@{HOME}/.foo_file rw,
/@{HOME}/.foo_file rw,
/usr/bin/baz Cx -> baz,
# a comment about foo's hat (subprofile), bar.
@@ -2214,7 +2159,7 @@ negative values match when specifying one or the other. Eg, 'rw' matches when
=head1 SEE ALSO
apparmor(7), apparmor_parser(8), apparmor_xattrs(7), aa-complain(1),
aa-enforce(1), aa_change_hat(2), mod_apparmor(5), and
aa-enforce(1), aa_change_hat(2), aa_change_profile(2), mod_apparmor(5), and
L<https://wiki.apparmor.net>.
=cut

View File

@@ -206,8 +206,8 @@ which can help debugging profiles.
=head2 Enable debug mode
When debug mode is enabled, AppArmor will log a few extra messages to
dmesg (not via the audit subsystem). For example, the logs will tell
whether environment scrubbing has been applied.
dmesg (not via the audit subsystem). For example, the logs will state when
ld.so(8) secure-execution mode has been applied in a profile transition.
To enable debug mode, run:

View File

@@ -63,7 +63,6 @@ typedef enum capability_flags {
} capability_flags;
int name_to_capability(const char *keyword);
void capabilities_init(void);
void __debug_capabilities(uint64_t capset, const char *name);
bool add_cap_feature_mask(struct aa_features *features, capability_flags flags);
void clear_cap_flag(capability_flags flags);

View File

aare_rules.{h,cc} - code that binds parse -> expr-tree -> hfa generation
-> chfa generation into a basic interface for converting
rules to a runtime ready state machine.
Notes on the compiler pipeline order
============================================
Front End: Program driver logic and policy text parsing into an
abstract syntax tree.
Middle Layer: Transforms and operations on the abstract syntax tree.
Converts syntax tree into expression tree for back end.
Back End: transforms of syntax tree, and creation of policy HFA from
expression trees and HFAs.
Basic order of the back end of the compiler pipeline and where the
dump information occurs in the pipeline.
===== Front End (parse -> AST) ================
|
v
yyparse
|
+--->--+-->-+
| |
| +-->---- +---------------------------<-----------------------+
| | | |
| | v |
| | yylex |
| | | |
| ^ token match |
| | | |
| | +----------------------------+ |
| | | | ^
| | v v |
| +-<- rule match? preprocess |
| | | |
| early var expansion +----------+-----------+ |
| | | | | |
^ v v v v |
| new rule() / new ent include variable conditional |
| | | | | |
| v +---->-----+----->-----+----->----+
| new rule semantic check
| |
+-----<-----+
|
----------- | ------ End of Parse --------------------
|
v
post_parse_profile semantic check
|
v
post_process
|
v
add implied rules()
|
v
process_profile_variables()
|
v
rule->expand_variables()
|
+--------+
|
v
replace aliases (to be moved to backend rewrite)
|
v
merge rules
|
v
profile->merge_rules()
|
v
+-->--rule->is_mergeable()
| |
^ v
| add to table
| |
+-------+--------+
|
v
sort->cmp()/oper<()
|
rule->merge()
|
+------------+
|
v
process_profile_rules
|
v
rule->gen_policy_re()
|
v
===== Mid layer (AST -> expr tree) =================
|
+-> add_rule() (aare_rules.{h,cc})
| |
| v
| rule parse (parse.y)
| | |
| | v
| | expr tree (expr-tree.{h,cc})
| | |
| v |
| unique perms | (aare_rules.{h,cc})
| | |
| +------ +
| |
| v
| add to rules expr tree (aare_rules.{h,c})
| |
+------+
|
+------------------+
|
v
create_dfablob()
|
v
expr tree
|
v
create_chfa() (aare_rules.cc)
|
v
expr normalization (expr-tree.{h,cc})
|
v
expr simplification (expr-tree.{h,c})
|
+- D expr-tree
|
+- D expr-simplified
|
==== Back End - Create cHFA out of expr tree and other HFAs ====
v
hfa creation (hfa.{h,cc})
|
+- D dfa-node-map
|
+- D dfa-uniq-perms
|
+- D dfa-states-initial
|
v
hfa rewrite (not yet implemented)
|
v
filter deny (hfa.{h,cc})
|
+- D dfa-states-post-filter
|
v
minimization (hfa.{h,cc})
|
+- D dfa-minimize-partitions
|
+- D dfa-minimize-uniq-perms
|
+- D dfa-states-post-minimize
|
v
unreachable state removal (hfa.{h,cc})
|
+- D dfa-states-post-unreachable
|
+- D dfa-states constructed hfa
|
+- D dfa-graph
|
v
equivalence class construction
|
+- D equiv
|
diff encode (hfa.{h,cc})
|
+- D diff-encode
|
compute perms table
|
+- D compressed-dfa == perm table dump
|
compressed hfa (chfa.{h,cc})
|
+- D compressed-dfa == transition tables
|
+- D dfa-compressed-states - compress HFA in state form
|
v
Return to Mid Layer
Notes on the compress hfa file format (chfa)
==============================================

View File

@@ -25,6 +25,8 @@
#include <iostream>
#include <fstream>
#include <limits>
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
@@ -592,10 +594,11 @@ void CHFA::weld_file_to_policy(CHFA &file_chfa, size_t &new_start,
// to repeat
assert(accept.size() == old_base_size);
accept.resize(accept.size() + file_chfa.accept.size());
size_t size = policy_perms.size();
assert(policy_perms.size() < std::numeric_limits<ssize_t>::max());
ssize_t size = (ssize_t) policy_perms.size();
policy_perms.resize(size*2 + file_perms.size());
// shift and double the policy perms
for (size_t i = size - 1; size >= 0; i--) {
for (ssize_t i = size - 1; i >= 0; i--) {
policy_perms[i*2] = policy_perms[i];
policy_perms[i*2 + 1] = policy_perms[i];
}
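The hunk above fixes the reverse loop that doubles each permission entry in place: with an unsigned index there is no safe way to write an "i >= 0" termination test (the old code tested "size >= 0", which is always true), so the loop index is made signed. A minimal standalone sketch of the corrected pattern, using a plain int vector rather than the real policy_perms type:

#include <cassert>
#include <cstdio>
#include <limits>
#include <sys/types.h>
#include <vector>

// Double each element of v in place: element i of the original vector ends up
// at positions 2*i and 2*i + 1. Iterate backwards with a signed index so the
// i >= 0 termination test actually terminates.
static void shift_and_double(std::vector<int> &v)
{
	assert(v.size() < (size_t) std::numeric_limits<ssize_t>::max());
	ssize_t size = (ssize_t) v.size();
	v.resize(v.size() * 2);
	for (ssize_t i = size - 1; i >= 0; i--) {
		v[i * 2] = v[i];
		v[i * 2 + 1] = v[i];
	}
}

int main(void)
{
	std::vector<int> v = {1, 2, 3};
	shift_and_double(v);
	for (int x : v)
		printf("%d ", x);	/* prints: 1 1 2 2 3 3 */
	printf("\n");
	return 0;
}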

View File

@@ -558,6 +558,14 @@ void DFA::dump_uniq_perms(const char *s)
//TODO: add prompt
}
// make sure work_queue and reachable insertion are always done together
static void push_reachable(set<State *> &reachable, list<State *> &work_queue,
State *state)
{
work_queue.push_back(state);
reachable.insert(state);
}
/* Remove dead or unreachable states */
void DFA::remove_unreachable(optflags const &opts)
{
@@ -565,19 +573,18 @@ void DFA::remove_unreachable(optflags const &opts)
/* find the set of reachable states */
reachable.insert(nonmatching);
work_queue.push_back(start);
push_reachable(reachable, work_queue, start);
while (!work_queue.empty()) {
State *from = work_queue.front();
work_queue.pop_front();
reachable.insert(from);
if (from->otherwise != nonmatching &&
reachable.find(from->otherwise) == reachable.end())
work_queue.push_back(from->otherwise);
push_reachable(reachable, work_queue, from->otherwise);
for (StateTrans::iterator j = from->trans.begin(); j != from->trans.end(); j++) {
if (reachable.find(j->second) == reachable.end())
work_queue.push_back(j->second);
push_reachable(reachable, work_queue, j->second);
}
}
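The change above ties the reachable-set insertion to the work-queue push so a state reachable through several transitions is only queued once. A small generic sketch of the same worklist pattern on an adjacency-list graph (not the DFA/State types used in hfa.cc); as a slight variation it folds the membership test into the insert itself:

#include <cstdio>
#include <list>
#include <map>
#include <set>
#include <vector>

// Insert into the visited set at the same time the node is queued.
static void push_reachable(std::set<int> &reachable, std::list<int> &work, int n)
{
	if (reachable.insert(n).second)
		work.push_back(n);
}

static std::set<int> reachable_from(const std::map<int, std::vector<int>> &edges, int start)
{
	std::set<int> reachable;
	std::list<int> work;
	push_reachable(reachable, work, start);
	while (!work.empty()) {
		int from = work.front();
		work.pop_front();
		auto it = edges.find(from);
		if (it == edges.end())
			continue;
		for (int to : it->second)
			push_reachable(reachable, work, to);
	}
	return reachable;
}

int main(void)
{
	std::map<int, std::vector<int>> edges = {{0, {1, 2}}, {1, {2}}, {3, {0}}};
	for (int n : reachable_from(edges, 0))
		printf("%d ", n);	/* prints: 0 1 2 */
	printf("\n");
	return 0;
}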
@@ -1532,7 +1539,9 @@ int accept_perms(optflags const &opts, NodeVec *state, perms_t &perms,
{
int error = 0;
perms_t exact;
std::vector<int> priority(sizeof(perm32_t)*8, MIN_INTERNAL_PRIORITY); // 32 but wan't tied to perm32_t
// size of vector needs to be number of bits in the data type
// being used for the permission set.
std::vector<int> priority(sizeof(perm32_t)*8, MIN_INTERNAL_PRIORITY);
perms.clear();
if (!state)

View File

@@ -275,7 +275,7 @@ public:
ostream &dump(ostream &os)
{
cerr << *this << "\n";
os << *this << "\n";
for (StateTrans::iterator i = trans.begin(); i != trans.end(); i++) {
os << " " << i->first.c << " -> " << *i->second << "\n";
}

View File

@@ -355,7 +355,8 @@ int is_valid_mnt_cond(const char *name, int src)
static unsigned int extract_flags(struct value_list **list, unsigned int *inv)
{
unsigned int flags = 0, invflags = 0;
*inv = 0;
if (inv)
*inv = 0;
struct value_list *entry, *tmp, *prev = NULL;
list_for_each_safe(*list, entry, tmp) {
@@ -368,11 +369,7 @@ static unsigned int extract_flags(struct value_list **list, unsigned int *inv)
" => req: 0x%x inv: 0x%x\n",
entry->value, mnt_opts_table[i].set,
mnt_opts_table[i].clear, flags, invflags);
if (prev)
prev->next = tmp;
if (entry == *list)
*list = tmp;
entry->next = NULL;
list_remove_at(*list, prev, entry);
free_value_list(entry);
} else
prev = entry;
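The hunk above replaces open-coded list surgery with the list_remove_at() helper. A standalone sketch of the remove-given-predecessor idiom it relies on, using a toy struct and a hypothetical function in place of the parser's macro:

#include <cstdio>

struct value_list { int value; value_list *next; };

// Unlink entry from a singly linked list given its predecessor (prev == NULL
// means entry is the head). Mirrors the list_remove_at() idiom used above;
// the real macro operates on the parser's value_list type.
static void list_remove_at(value_list *&head, value_list *prev, value_list *entry)
{
	if (prev)
		prev->next = entry->next;
	if (entry == head)
		head = entry->next;
	entry->next = nullptr;
}

int main(void)
{
	value_list c = {3, nullptr}, b = {2, &c}, a = {1, &b};
	value_list *head = &a;
	list_remove_at(head, &a, &b);	/* remove the middle node */
	for (value_list *p = head; p; p = p->next)
		printf("%d ", p->value);	/* prints: 1 3 */
	printf("\n");
	return 0;
}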
@@ -683,7 +680,7 @@ int mnt_rule::cmp(rule_t const &rhs) const {
return cmp_vec_int(opt_flagsv, rhs_mnt.opt_flagsv);
}
static int build_mnt_flags(char *buffer, int size, unsigned int flags,
static bool build_mnt_flags(char *buffer, int size, unsigned int flags,
unsigned int opt_flags)
{
char *p = buffer;
@@ -693,8 +690,8 @@ static int build_mnt_flags(char *buffer, int size, unsigned int flags,
/* all flags are optional */
len = snprintf(p, size, "%s", default_match_pattern);
if (len < 0 || len >= size)
return FALSE;
return TRUE;
return false;
return true;
}
for (i = 0; i <= 31; ++i) {
if ((opt_flags) & (1 << i))
@@ -705,7 +702,7 @@ static int build_mnt_flags(char *buffer, int size, unsigned int flags,
continue;
if (len < 0 || len >= size)
return FALSE;
return false;
p += len;
size -= len;
}
@@ -716,15 +713,15 @@ static int build_mnt_flags(char *buffer, int size, unsigned int flags,
* like the empty string
*/
if (size < 9)
return FALSE;
return false;
strcpy(p, "(\\xfe|)");
}
return TRUE;
return true;
}
static int build_mnt_opts(std::string& buffer, struct value_list *opts)
static bool build_mnt_opts(std::string& buffer, struct value_list *opts)
{
struct value_list *ent;
pattern_t ptype;
@@ -732,19 +729,19 @@ static int build_mnt_opts(std::string& buffer, struct value_list *opts)
if (!opts) {
buffer.append(default_match_pattern);
return TRUE;
return true;
}
list_for_each(opts, ent) {
ptype = convert_aaregex_to_pcre(ent->value, 0, glob_default, buffer, &pos);
if (ptype == ePatternInvalid)
return FALSE;
return false;
if (ent->next)
buffer.append(",");
}
return TRUE;
return true;
}
void mnt_rule::warn_once(const char *name)

View File

@@ -179,8 +179,6 @@ struct var_string {
#define OPTION_STDOUT 4
#define OPTION_OFILE 5
#define BOOL int
extern int preprocess_only;
#define PATH_CHROOT_REL 0x1
@@ -213,13 +211,6 @@ do { \
errno = perror_error; \
} while (0)
#ifndef TRUE
#define TRUE (1)
#endif
#ifndef FALSE
#define FALSE (0)
#endif
#define MIN_PORT 0
#define MAX_PORT 65535
@@ -251,17 +242,6 @@ do { \
len; \
})
#define list_find_prev(LIST, ENTRY) \
({ \
typeof(ENTRY) tmp, prev = NULL; \
list_for_each((LIST), tmp) { \
if (tmp == (ENTRY)) \
break; \
prev = tmp; \
} \
prev; \
})
#define list_pop(LIST) \
({ \
typeof(LIST) _entry = (LIST); \
@@ -279,12 +259,6 @@ do { \
(LIST) = (ENTRY)->next; \
(ENTRY)->next = NULL; \
#define list_remove(LIST, ENTRY) \
do { \
typeof(ENTRY) prev = list_find_prev((LIST), (ENTRY)); \
list_remove_at((LIST), prev, (ENTRY)); \
} while (0)
#define DUP_STRING(orig, new, field, fail_target) \
do { \
@@ -423,10 +397,10 @@ extern const char *basedir;
#define glob_null 1
extern pattern_t convert_aaregex_to_pcre(const char *aare, int anchor, int glob,
std::string& pcre, int *first_re_pos);
extern int build_list_val_expr(std::string& buffer, struct value_list *list);
extern int convert_entry(std::string& buffer, char *entry);
extern bool build_list_val_expr(std::string& buffer, struct value_list *list);
extern bool convert_entry(std::string& buffer, char *entry);
extern int clear_and_convert_entry(std::string& buffer, char *entry);
extern int convert_range(std::string& buffer, bignum start, bignum end);
extern bool convert_range(std::string& buffer, bignum start, bignum end);
extern int process_regex(Profile *prof);
extern int post_process_entry(struct cod_entry *entry);
@@ -445,7 +419,6 @@ extern void free_var_string(struct var_string *var);
extern void warn_uppercase(void);
extern int is_blacklisted(const char *name, const char *path);
extern struct value_list *new_value_list(char *value);
extern struct value_list *dup_value_list(struct value_list *list);
extern void free_value_list(struct value_list *list);
extern void print_value_list(struct value_list *list);
extern struct cond_entry *new_cond_entry(char *name, int eq, struct value_list *list);

View File

@@ -142,8 +142,10 @@ static void process_entries(const void *nodep, VISIT value, int level unused)
}
if (dup) {
dup->alias_ignore = true;
/* adds to the front of the list, list iteratition
* will skip it
/* The original entry->next is in dup->next, so we don't lose
* any of the original elements of the linked list. Also, by
* setting dup->alias_ignore, we trigger the check at the start
* of the loop, skipping the new entry we just inserted.
*/
entry->next = dup;

View File

@@ -202,7 +202,7 @@ static void start_include_position(const char *filename)
current_lineno = 1;
}
void push_include_stack(char *filename)
void push_include_stack(const char *filename)
{
struct include_stack_t *include = NULL;

View File

@@ -29,7 +29,7 @@ extern void parse_default_paths(void);
extern int do_include_preprocessing(char *profilename);
FILE *search_path(char *filename, char **fullpath, bool *skip);
extern void push_include_stack(char *filename);
extern void push_include_stack(const char *filename);
extern void pop_include_stack(void);
extern void reset_include_stack(const char *filename);

View File

@@ -245,7 +245,7 @@ static inline void sd_write_uint64(std::ostringstream &buf, u64 b)
static inline void sd_write_name(std::ostringstream &buf, const char *name)
{
PDEBUG("Writing name '%s'\n", name);
PDEBUG("Writing name '%s'\n", name ? name : "(null)");
if (name) {
sd_write8(buf, SD_NAME);
sd_write16(buf, strlen(name) + 1);

View File

@@ -1620,7 +1620,6 @@ int main(int argc, char *argv[])
progname = argv[0];
init_base_dir();
capabilities_init();
process_early_args(argc, argv);
process_config_file(config_file);

View File

@@ -35,6 +35,7 @@
#include <sys/apparmor_private.h>
#include <algorithm>
#include <unordered_map>
#include "capability.h"
#include "lib.h"
@@ -61,6 +62,10 @@ void *reallocarray(void *ptr, size_t nmemb, size_t size)
}
#endif
#ifndef NULL
#define NULL nullptr
#endif
int is_blacklisted(const char *name, const char *path)
{
int retval = _aa_is_blacklisted(name);
@@ -71,12 +76,7 @@ int is_blacklisted(const char *name, const char *path)
return !retval ? 0 : 1;
}
struct keyword_table {
const char *keyword;
unsigned int token;
};
static struct keyword_table keyword_table[] = {
static const unordered_map<string, int> keyword_table = {
/* network */
{"network", TOK_NETWORK},
{"unix", TOK_UNIX},
@@ -132,11 +132,9 @@ static struct keyword_table keyword_table[] = {
{"sqpoll", TOK_SQPOLL},
{"all", TOK_ALL},
{"priority", TOK_PRIORITY},
/* terminate */
{NULL, 0}
};
static struct keyword_table rlimit_table[] = {
static const unordered_map<string, int> rlimit_table = {
{"cpu", RLIMIT_CPU},
{"fsize", RLIMIT_FSIZE},
{"data", RLIMIT_DATA},
@@ -162,37 +160,33 @@ static struct keyword_table rlimit_table[] = {
#ifdef RLIMIT_RTTIME
{"rttime", RLIMIT_RTTIME},
#endif
/* terminate */
{NULL, 0}
};
/* for alpha matches, check for keywords */
static int get_table_token(const char *name unused, struct keyword_table *table,
const char *keyword)
static int get_table_token(const char *name unused, const unordered_map<string, int> &table,
const string &keyword)
{
int i;
for (i = 0; table[i].keyword; i++) {
PDEBUG("Checking %s %s\n", name, table[i].keyword);
if (strcmp(keyword, table[i].keyword) == 0) {
PDEBUG("Found %s %s\n", name, table[i].keyword);
return table[i].token;
}
auto token_entry = table.find(keyword);
if (token_entry == table.end()) {
PDEBUG("Unable to find %s %s\n", name, keyword.c_str());
return -1;
} else {
PDEBUG("Found %s %s\n", name, keyword.c_str());
return token_entry->second;
}
PDEBUG("Unable to find %s %s\n", name, keyword);
return -1;
}
/* for alpha matches, check for keywords */
int get_keyword_token(const char *keyword)
{
return get_table_token("keyword", keyword_table, keyword);
// Can't use string_view because that requires C++17
return get_table_token("keyword", keyword_table, string(keyword));
}
int get_rlimit(const char *name)
{
return get_table_token("rlimit", rlimit_table, name);
// Can't use string_view because that requires C++17
return get_table_token("rlimit", rlimit_table, string(name));
}
@@ -208,55 +202,164 @@ struct capability_table {
capability_flags flags;
};
/*
* Enum for the results of adding a capability, with values assigned to match
* the int values returned by the old capable_add_cap function:
*
* -1: error
* 0: no change - capability already in table
* 1: added flag to capability in table
* 2: added new capability
*/
enum add_cap_result {
ERROR = -1, // Was only used for OOM conditions
ALREADY_EXISTS = 0,
FLAG_ADDED = 1,
CAP_ADDED = 2
};
static struct capability_table base_capability_table[] = {
/* capabilities */
#include "cap_names.h"
};
static const size_t BASE_CAP_TABLE_SIZE = sizeof(base_capability_table)/sizeof(struct capability_table);
/* terminate */
{NULL, 0, 0, CAPFLAGS_CLEAR}
class capability_lookup {
vector<capability_table> cap_table;
// Use unordered_map to avoid pulling in two map implementations
// We may want to switch to boost::multiindex to avoid duplication
unordered_map<string, capability_table&> name_cap_map;
unordered_map<unsigned int, capability_table&> int_cap_map;
private:
void add_capability_table_entry_raw(capability_table entry) {
cap_table.push_back(entry);
capability_table &entry_ref = cap_table.back();
name_cap_map.emplace(string(entry_ref.name), entry_ref);
int_cap_map.emplace(entry_ref.cap, entry_ref);
}
public:
capability_lookup() :
cap_table(vector<capability_table>()),
name_cap_map(unordered_map<string, capability_table&>(BASE_CAP_TABLE_SIZE)),
int_cap_map(unordered_map<unsigned int, capability_table&>(BASE_CAP_TABLE_SIZE)) {
cap_table.reserve(BASE_CAP_TABLE_SIZE);
for (size_t i=0; i<BASE_CAP_TABLE_SIZE; i++) {
add_capability_table_entry_raw(base_capability_table[i]);
}
}
capability_table* find_cap_entry_by_name(string const & name) const {
auto map_entry = this->name_cap_map.find(name);
if (map_entry == this->name_cap_map.end()) {
return NULL;
} else {
PDEBUG("Found %s %s\n", name.c_str(), map_entry->second.name);
return &map_entry->second;
}
}
capability_table* find_cap_entry_by_num(unsigned int cap) const {
auto map_entry = this->int_cap_map.find(cap);
if (map_entry == this->int_cap_map.end()) {
return NULL;
} else {
PDEBUG("Found %d %d\n", cap, map_entry->second.cap);
return &map_entry->second;
}
}
int name_to_capability(string const &cap) const {
auto map_entry = this->name_cap_map.find(cap);
if (map_entry == this->name_cap_map.end()) {
PDEBUG("Unable to find %s %s\n", "capability", cap.c_str());
return -1;
} else {
return map_entry->second.cap;
}
}
const char *capability_to_name(unsigned int cap) const {
auto map_entry = this->int_cap_map.find(cap);
if (map_entry == this->int_cap_map.end()) {
return "invalid-capability";
} else {
return map_entry->second.name;
}
}
int capability_backmap(unsigned int cap) const {
auto map_entry = this->int_cap_map.find(cap);
if (map_entry == this->int_cap_map.end()) {
return NO_BACKMAP_CAP;
} else {
return map_entry->second.backmap;
}
}
bool capability_in_kernel(unsigned int cap) const {
auto map_entry = this->int_cap_map.find(cap);
if (map_entry == this->int_cap_map.end()) {
return false;
} else {
return map_entry->second.flags & CAPFLAG_KERNEL_FEATURE;
}
}
void __debug_capabilities(uint64_t capset, const char *name) const {
printf("%s:", name);
for (auto it = this->cap_table.cbegin(); it != this->cap_table.cend(); it++) {
if ((1ull << it->cap) & capset)
printf (" %s", it->name);
}
printf("\n");
}
add_cap_result capable_add_cap(string const & str, unsigned int cap,
capability_flags flag) {
struct capability_table *ent = this->find_cap_entry_by_name(str);
if (ent) {
if (ent->cap != cap) {
pwarn(WARN_UNEXPECTED, "feature capability '%s:%d' does not equal expected %d. Ignoring ...\n", str.c_str(), cap, ent->cap);
/* TODO: make warn to error config */
return add_cap_result::ALREADY_EXISTS;
}
if (ent->flags & flag)
return add_cap_result::ALREADY_EXISTS;
ent->flags = (capability_flags) (ent->flags | flag);
return add_cap_result::FLAG_ADDED;
} else {
struct capability_table new_entry;
new_entry.name = strdup(str.c_str());
if (!new_entry.name) {
yyerror(_("Out of memory"));
return add_cap_result::ERROR;
}
new_entry.cap = cap;
new_entry.backmap = 0;
new_entry.flags = flag;
try {
this->add_capability_table_entry_raw(new_entry);
} catch (const std::bad_alloc &_e) {
yyerror(_("Out of memory"));
return add_cap_result::ERROR;
}
// TODO: exception catching for causes other than OOM
return add_cap_result::CAP_ADDED;
}
}
void clear_cap_flag(capability_flags flags)
{
for (auto it = this->cap_table.begin(); it != this->cap_table.end(); it++) {
PDEBUG("Clearing capability flag for capability \"%s\"\n", it->name);
it->flags = (capability_flags) (it->flags & ~flags);
}
}
};
static struct capability_table *cap_table;
static int cap_table_size;
void capabilities_init(void)
{
cap_table = (struct capability_table *) malloc(sizeof(base_capability_table));
if (!cap_table)
yyerror(_("Memory allocation error."));
memcpy(cap_table, base_capability_table, sizeof(base_capability_table));
cap_table_size = sizeof(base_capability_table)/sizeof(struct capability_table);
}
struct capability_table *find_cap_entry_by_name(const char *name)
{
int i;
for (i = 0; cap_table[i].name; i++) {
PDEBUG("Checking %s %s\n", name, cap_table[i].name);
if (strcmp(name, cap_table[i].name) == 0) {
PDEBUG("Found %s %s\n", name, cap_table[i].name);
return &cap_table[i];
}
}
return NULL;
}
struct capability_table *find_cap_entry_by_num(unsigned int cap)
{
int i;
for (i = 0; cap_table[i].name; i++) {
PDEBUG("Checking %d %d\n", cap, cap_table[i].cap);
if (cap == cap_table[i].cap) {
PDEBUG("Found %d %d\n", cap, cap_table[i].cap);
return &cap_table[i];
}
}
return NULL;
}
static capability_lookup cap_table;
/* don't mark up str with \0 */
static const char *strn_token(const char *str, size_t &len)
@@ -294,59 +397,6 @@ bool strcomp (const char *lhs, const char *rhs)
return null_strcmp(lhs, rhs) < 0;
}
/*
* Returns: -1: error
* 0: no change - capability already in table
* 1: added flag to capability in table
* 2: added new capability
*/
static int capable_add_cap(const char *str, int len, unsigned int cap,
capability_flags flag)
{
/* extract name from str so we can treat as a string */
autofree char *name = strndup(str, len);
if (!name) {
yyerror(_("Out of memory"));
return -1;
}
struct capability_table *ent = find_cap_entry_by_name(name);
if (ent) {
if (ent->cap != cap) {
pwarn(WARN_UNEXPECTED, "feature capability '%s:%d' does not equal expected %d. Ignoring ...\n", name, cap, ent->cap);
/* TODO: make warn to error config */
return 0;
}
if (ent->flags & flag)
return 0; /* no change */
ent->flags = (capability_flags) (ent->flags | flag);
return 1; /* modified */
} else {
struct capability_table *tmp;
tmp = (struct capability_table *) reallocarray(cap_table, sizeof(struct capability_table), cap_table_size+1);
if (!tmp) {
yyerror(_("Out of memory"));
/* TODO: change away from yyerror */
return -1;
}
cap_table = tmp;
ent = &cap_table[cap_table_size - 1]; /* overwrite null */
ent->name = strndup(name, len);
if (!ent->name) {
/* TODO: change away from yyerror */
yyerror(_("Out of memory"));
return -1;
}
ent->cap = cap;
ent->flags = flag;
cap_table[cap_table_size].name = NULL; /* new null */
cap_table_size++;
}
return 2; /* added */
}
bool add_cap_feature_mask(struct aa_features *features, capability_flags flags)
{
autofree char *value = NULL;
@@ -363,7 +413,8 @@ bool add_cap_feature_mask(struct aa_features *features, capability_flags flags)
for (capstr = strn_token(value, len);
capstr;
capstr = strn_token(capstr + len, len)) {
if (capable_add_cap(capstr, len, n, flags) < 0)
string capstr_as_str = string(capstr, len);
if (cap_table.capable_add_cap(capstr_as_str, n, flags) < 0)
return false;
n++;
if (len > valuelen) {
@@ -379,70 +430,32 @@ bool add_cap_feature_mask(struct aa_features *features, capability_flags flags)
void clear_cap_flag(capability_flags flags)
{
int i;
for (i = 0; cap_table[i].name; i++) {
PDEBUG("Clearing capability flag for capability \"%s\"\n", cap_table[i].name);
cap_table[i].flags = (capability_flags) (cap_table[i].flags & ~flags);
}
cap_table.clear_cap_flag(flags);
}
int name_to_capability(const char *cap)
{
struct capability_table *ent;
ent = find_cap_entry_by_name(cap);
if (ent)
return ent->cap;
PDEBUG("Unable to find %s %s\n", "capability", cap);
return -1;
return cap_table.name_to_capability(string(cap));
}
const char *capability_to_name(unsigned int cap)
{
struct capability_table *ent;
ent = find_cap_entry_by_num(cap);
if (ent)
return ent->name;
return "invalid-capability";
return cap_table.capability_to_name(cap);
}
int capability_backmap(unsigned int cap)
{
struct capability_table *ent;
ent = find_cap_entry_by_num(cap);
if (ent)
return ent->backmap;
return NO_BACKMAP_CAP;
return cap_table.capability_backmap(cap);
}
bool capability_in_kernel(unsigned int cap)
{
struct capability_table *ent;
ent = find_cap_entry_by_num(cap);
if (ent)
return ent->flags & CAPFLAG_KERNEL_FEATURE;
return false;
return cap_table.capability_in_kernel(cap);
}
void __debug_capabilities(uint64_t capset, const char *name)
{
unsigned int i;
printf("%s:", name);
for (i = 0; cap_table[i].name; i++) {
if ((1ull << cap_table[i].cap) & capset)
printf (" %s", cap_table[i].name);
}
printf("\n");
cap_table.__debug_capabilities(capset, name);
}
char *processunquoted(const char *string, int len)
@@ -1200,37 +1213,6 @@ void free_value_list(struct value_list *list)
}
}
struct value_list *dup_value_list(struct value_list *list)
{
struct value_list *entry, *dup, *head = NULL;
char *value;
list_for_each(list, entry) {
value = NULL;
if (list->value) {
value = strdup(list->value);
if (!value)
goto fail2;
}
dup = new_value_list(value);
if (!dup)
goto fail;
if (head)
list_append(head, dup);
else
head = dup;
}
return head;
fail:
free(value);
fail2:
free_value_list(head);
return NULL;
}
void print_value_list(struct value_list *list)
{
struct value_list *entry;

View File

@@ -50,7 +50,7 @@ enum error_type {
void filter_slashes(char *path)
{
char *sptr, *dptr;
BOOL seen_slash = 0;
bool seen_slash = false;
if (!path || (strlen(path) < 2))
return;
@@ -69,7 +69,7 @@ void filter_slashes(char *path)
++sptr;
} else {
*dptr++ = *sptr++;
seen_slash = TRUE;
seen_slash = true;
}
} else {
seen_slash = 0;
@@ -111,14 +111,14 @@ pattern_t convert_aaregex_to_pcre(const char *aare, int anchor, int glob,
#define MAX_ALT_DEPTH 50
*first_re_pos = 0;
int ret = TRUE;
int ret = 1;
/* flag to indicate input error */
enum error_type error;
const char *sptr;
pattern_t ptype;
BOOL bEscape = 0; /* flag to indicate escape */
bool bEscape = false; /* flag to indicate escape */
int ingrouping = 0; /* flag to indicate {} context */
int incharclass = 0; /* flag to indicate [ ] context */
int grouping_count[MAX_ALT_DEPTH] = {0};
@@ -150,7 +150,7 @@ pattern_t convert_aaregex_to_pcre(const char *aare, int anchor, int glob,
if (bEscape) {
pcre.append("\\\\");
} else {
bEscape = TRUE;
bEscape = true;
++sptr;
continue; /*skip turning bEscape off */
} /* bEscape */
@@ -393,7 +393,7 @@ pattern_t convert_aaregex_to_pcre(const char *aare, int anchor, int glob,
break;
} /* switch (*sptr) */
bEscape = FALSE;
bEscape = false;
++sptr;
} /* while error == e_no_error && *sptr) */
@@ -419,12 +419,12 @@ pattern_t convert_aaregex_to_pcre(const char *aare, int anchor, int glob,
PERROR(_("%s: Unable to parse input line '%s'\n"),
progname, aare);
ret = FALSE;
ret = 0;
goto out;
}
out:
if (ret == FALSE)
if (ret == 0)
ptype = ePatternInvalid;
if (parseopts.dump & DUMP_DFA_RULE_EXPR)
@@ -464,7 +464,7 @@ static void warn_once_xattr(const char *name)
common_warn_once(name, "xattr attachment conditional ignored", &warned_name);
}
static int process_profile_name_xmatch(Profile *prof)
static bool process_profile_name_xmatch(Profile *prof)
{
std::string tbuf;
pattern_t ptype;
@@ -479,7 +479,7 @@ static int process_profile_name_xmatch(Profile *prof)
/* don't filter_slashes for profile names, do on attachment */
name = strdup(local_name(prof->name));
if (!name)
return FALSE;
return false;
}
filter_slashes(name);
ptype = convert_aaregex_to_pcre(name, 0, glob_default, tbuf,
@@ -491,7 +491,7 @@ static int process_profile_name_xmatch(Profile *prof)
PERROR(_("%s: Invalid profile name '%s' - bad regular expression\n"), progname, name);
if (!prof->attachment)
free(name);
return FALSE;
return false;
}
if (!prof->attachment)
@@ -506,11 +506,11 @@ static int process_profile_name_xmatch(Profile *prof)
/* build a dfa */
aare_rules *rules = new aare_rules();
if (!rules)
return FALSE;
return false;
if (!rules->add_rule(tbuf.c_str(), 0, RULE_ALLOW,
AA_MAY_EXEC, 0, parseopts)) {
delete rules;
return FALSE;
return false;
}
if (prof->altnames) {
struct alt_name *alt;
@@ -525,7 +525,7 @@ static int process_profile_name_xmatch(Profile *prof)
RULE_ALLOW, AA_MAY_EXEC,
0, parseopts)) {
delete rules;
return FALSE;
return false;
}
}
}
@@ -567,7 +567,7 @@ static int process_profile_name_xmatch(Profile *prof)
&len);
if (!rules->append_rule(tbuf.c_str(), true, true, parseopts)) {
delete rules;
return FALSE;
return false;
}
}
}
@@ -581,10 +581,10 @@ build:
prof->xmatch = rules->create_dfablob(&prof->xmatch_size, &prof->xmatch_len, prof->xmatch_perms_table, parseopts, false, false, false);
delete rules;
if (!prof->xmatch)
return FALSE;
return false;
}
return TRUE;
return true;
}
static int warn_change_profile = 1;
@@ -606,21 +606,21 @@ static bool is_change_profile_perms(perm32_t perms)
return perms & AA_CHANGE_PROFILE;
}
static int process_dfa_entry(aare_rules *dfarules, struct cod_entry *entry)
static bool process_dfa_entry(aare_rules *dfarules, struct cod_entry *entry)
{
std::string tbuf;
pattern_t ptype;
int pos;
if (!entry) /* shouldn't happen */
return TRUE;
return false;
if (!is_change_profile_perms(entry->perms))
filter_slashes(entry->name);
ptype = convert_aaregex_to_pcre(entry->name, 0, glob_default, tbuf, &pos);
if (ptype == ePatternInvalid)
return FALSE;
return false;
entry->pattern_type = ptype;
@@ -649,13 +649,13 @@ static int process_dfa_entry(aare_rules *dfarules, struct cod_entry *entry)
entry->perms & ~(AA_LINK_BITS | AA_CHANGE_PROFILE),
entry->audit == AUDIT_FORCE ? entry->perms & ~(AA_LINK_BITS | AA_CHANGE_PROFILE) : 0,
parseopts))
return FALSE;
return false;
} else if (!is_change_profile_perms(entry->perms)) {
if (!dfarules->add_rule(tbuf.c_str(), entry->priority,
entry->rule_mode, entry->perms,
entry->audit == AUDIT_FORCE ? entry->perms : 0,
parseopts))
return FALSE;
return false;
}
if (entry->perms & (AA_LINK_BITS)) {
@@ -669,7 +669,7 @@ static int process_dfa_entry(aare_rules *dfarules, struct cod_entry *entry)
filter_slashes(entry->link_name);
ptype = convert_aaregex_to_pcre(entry->link_name, 0, glob_default, lbuf, &pos);
if (ptype == ePatternInvalid)
return FALSE;
return false;
if (entry->subset)
perms |= LINK_TO_LINK_SUBSET(perms);
vec[1] = lbuf.c_str();
@@ -681,7 +681,7 @@ static int process_dfa_entry(aare_rules *dfarules, struct cod_entry *entry)
entry->rule_mode, perms,
entry->audit == AUDIT_FORCE ? perms & AA_LINK_BITS : 0,
2, vec, parseopts, false))
return FALSE;
return false;
}
if (is_change_profile_perms(entry->perms)) {
const char *vec[3];
@@ -702,7 +702,7 @@ static int process_dfa_entry(aare_rules *dfarules, struct cod_entry *entry)
if (entry->onexec) {
ptype = convert_aaregex_to_pcre(entry->onexec, 0, glob_default, xbuf, &pos);
if (ptype == ePatternInvalid)
return FALSE;
return false;
vec[0] = xbuf.c_str();
} else
/* allow change_profile for all execs */
@@ -713,14 +713,14 @@ static int process_dfa_entry(aare_rules *dfarules, struct cod_entry *entry)
if (!parse_label(&stack, &ns, &name,
tbuf.c_str(), false)) {
return FALSE;
return false;
}
if (stack) {
fprintf(stderr,
_("The current kernel does not support stacking of named transitions: %s\n"),
tbuf.c_str());
return FALSE;
return false;
}
if (ns)
@@ -734,13 +734,13 @@ static int process_dfa_entry(aare_rules *dfarules, struct cod_entry *entry)
if (!dfarules->add_rule_vec(entry->priority, entry->rule_mode,
AA_CHANGE_PROFILE | onexec_perms,
0, index - 1, &vec[1], parseopts, false))
return FALSE;
return false;
/* onexec rules - both rules are needed for onexec */
if (!dfarules->add_rule_vec(entry->priority, entry->rule_mode,
onexec_perms,
0, 1, vec, parseopts, false))
return FALSE;
return false;
/**
* pick up any exec bits, from the frontend parser, related to
@@ -750,19 +750,19 @@ static int process_dfa_entry(aare_rules *dfarules, struct cod_entry *entry)
if (!dfarules->add_rule_vec(entry->priority, entry->rule_mode,
onexec_perms, 0, index, vec,
parseopts, false))
return FALSE;
return false;
}
return TRUE;
return true;
}
int post_process_entries(Profile *prof)
bool post_process_entries(Profile *prof)
{
int ret = TRUE;
int ret = true;
struct cod_entry *entry;
list_for_each(prof->entries, entry) {
if (!process_dfa_entry(prof->dfa.rules, entry))
ret = FALSE;
ret = false;
}
return ret;
@@ -815,7 +815,7 @@ out:
return error;
}
int build_list_val_expr(std::string& buffer, struct value_list *list)
bool build_list_val_expr(std::string& buffer, struct value_list *list)
{
struct value_list *ent;
pattern_t ptype;
@@ -823,7 +823,7 @@ int build_list_val_expr(std::string& buffer, struct value_list *list)
if (!list) {
buffer.append(default_match_pattern);
return TRUE;
return true;
}
buffer.append("(");
@@ -840,12 +840,12 @@ int build_list_val_expr(std::string& buffer, struct value_list *list)
}
buffer.append(")");
return TRUE;
return true;
fail:
return FALSE;
return false;
}
int convert_entry(std::string& buffer, char *entry)
bool convert_entry(std::string& buffer, char *entry)
{
pattern_t ptype;
int pos;
@@ -853,12 +853,12 @@ int convert_entry(std::string& buffer, char *entry)
if (entry) {
ptype = convert_aaregex_to_pcre(entry, 0, glob_default, buffer, &pos);
if (ptype == ePatternInvalid)
return FALSE;
return false;
} else {
buffer.append(default_match_pattern);
}
return TRUE;
return true;
}
int clear_and_convert_entry(std::string& buffer, char *entry)
@@ -959,7 +959,7 @@ static std::string generate_regex_range(bignum start, bignum end)
return result.str();
}
int convert_range(std::string& buffer, bignum start, bignum end)
bool convert_range(std::string& buffer, bignum start, bignum end)
{
pattern_t ptype;
int pos;
@@ -969,24 +969,24 @@ int convert_range(std::string& buffer, bignum start, bignum end)
if (!regex_range.empty()) {
ptype = convert_aaregex_to_pcre(regex_range.c_str(), 0, glob_default, buffer, &pos);
if (ptype == ePatternInvalid)
return FALSE;
return false;
} else {
buffer.append(default_match_pattern);
}
return TRUE;
return true;
}
int post_process_policydb_ents(Profile *prof)
bool post_process_policydb_ents(Profile *prof)
{
for (RuleList::iterator i = prof->rule_ents.begin(); i != prof->rule_ents.end(); i++) {
if ((*i)->skip())
continue;
if ((*i)->gen_policy_re(*prof) == RULE_ERROR)
return FALSE;
return false;
}
return TRUE;
return true;
}

View File

@@ -79,7 +79,7 @@ struct var_string *split_out_var(const char *string)
{
struct var_string *n = NULL;
const char *sptr;
BOOL bEscape = 0; /* flag to indicate escape */
bool bEscape = false; /* flag to indicate escape */
if (!string) /* shouldn't happen */
return NULL;
@@ -89,15 +89,11 @@ struct var_string *split_out_var(const char *string)
while (!n && *sptr) {
switch (*sptr) {
case '\\':
if (bEscape) {
bEscape = FALSE;
} else {
bEscape = TRUE;
}
bEscape = !bEscape;
break;
case '@':
if (bEscape) {
bEscape = FALSE;
bEscape = false;
} else if (*(sptr + 1) == '{') {
const char *eptr = get_var_end(sptr + 2);
if (!eptr)
@@ -111,8 +107,7 @@ struct var_string *split_out_var(const char *string)
}
break;
default:
if (bEscape)
bEscape = FALSE;
bEscape = false;
}
sptr++;
}

View File

@@ -704,7 +704,7 @@ rules: rules opt_prefix block
if (($2).priority != 0) {
yyerror(_("priority is not allowed on rule blocks"));
}
PDEBUG("matched: %s%s%sblock\n",
PDEBUG("matched: %s%s%s%sblock\n",
$2.audit == AUDIT_FORCE ? "audit " : "",
$2.rule_mode == RULE_DENY ? "deny " : "",
$2.rule_mode == RULE_PROMPT ? "prompt " : "",

View File

@@ -1,14 +1,14 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) YEAR Canonical Ltd
# This file is distributed under the same license as the PACKAGE package.
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
# Translations for apparmor_parser
# Copyright (C) 2024 YEAR Canonical Ltd
# This file is distributed under the same license as the AppArmor package.
# John Johansen <john.johansen@canonical.com>, 2011.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: PACKAGE VERSION\n"
"Report-Msgid-Bugs-To: apparmor@lists.ubuntu.com\n"
"POT-Creation-Date: 2025-02-18 07:32-0800\n"
"POT-Creation-Date: 2024-08-31 15:55-0700\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
@@ -326,7 +326,7 @@ msgstr ""
#: parser_yacc.y:744 parser_yacc.y:1073 parser_yacc.y:1160 parser_yacc.y:1169
#: parser_yacc.y:1173 parser_yacc.y:1183 parser_yacc.y:1193 parser_yacc.y:1287
#: parser_yacc.y:1365 parser_yacc.y:1561 parser_yacc.y:1569 parser_yacc.y:1619
#: parser_yacc.y:1624 parser_yacc.y:1701 parser_yacc.y:1750 ../network.cc:945
#: parser_yacc.y:1624 parser_yacc.y:1701 parser_yacc.y:1750 ../network.cc:899
#: ../af_unix.cc:197 ../all_rule.cc:102 ../all_rule.cc:131
msgid "Memory allocation error."
msgstr ""
@@ -411,12 +411,12 @@ msgid "AppArmor parser error: %s\n"
msgstr ""
#: ../parser_merge.c:92 ../parser_merge.c:91 ../parser_merge.c:83
#: ../parser_merge.c:71 ../parser_merge.c:77
#: ../parser_merge.c:71 ../parser_merge.c:74
msgid "Couldn't merge entries. Out of Memory\n"
msgstr ""
#: ../parser_merge.c:111 ../parser_merge.c:113 ../parser_merge.c:105
#: ../parser_merge.c:93 ../parser_merge.c:100
#: ../parser_merge.c:93 ../parser_merge.c:97
#, c-format
msgid "profile %s: has merged rule %s with conflicting x modifiers\n"
msgstr ""
@@ -542,7 +542,7 @@ msgstr ""
#: parser_yacc.y:975 parser_yacc.y:985 parser_yacc.y:1057 parser_yacc.y:1067
#: parser_yacc.y:1145 parser_yacc.y:1155 parser_yacc.y:1234 parser_yacc.y:1244
#: ../network.cc:515
#: ../network.cc:484
msgid "Invalid network entry."
msgstr ""
@@ -830,16 +830,16 @@ msgstr ""
msgid "%s: Regex error: trailing '\\' escape character\n"
msgstr ""
#: ../parser_common.c:112 ../parser_common.c:139
#: ../parser_common.c:112 ../parser_common.c:134
#, c-format
msgid "%s from %s (%s%sline %d): %s"
msgstr ""
#: ../parser_common.c:113 ../parser_common.c:140
#: ../parser_common.c:113 ../parser_common.c:135
msgid "Warning converted to Error"
msgstr ""
#: ../parser_common.c:113 ../parser_common.c:140
#: ../parser_common.c:113 ../parser_common.c:135
msgid "Warning"
msgstr ""
@@ -1051,13 +1051,13 @@ msgstr ""
msgid "Internal: unexpected %s perms character '%c' in input"
msgstr ""
#: ../parser_misc.c:1100
#: ../parser_misc.c:1098
msgid ""
"Invalid perms, in deny rules 'x' must not be preceded by exec qualifier 'i', "
"'p', or 'u'"
msgstr ""
#: ../parser_misc.c:1104
#: ../parser_misc.c:1102
msgid "Invalid perms, 'x' must be preceded by exec qualifier 'i', 'p', or 'u'"
msgstr ""
@@ -1091,16 +1091,16 @@ msgstr ""
msgid "attach_disconnected_path value must begin with a /"
msgstr ""
#: ../mount.cc:903
#: ../mount.cc:897
msgid ""
"The use of source as mount point for propagation type flags is deprecated.\n"
msgstr ""
#: ../network.h:202
#: ../network.h:200
msgid "priority prefix not allowed on network rules"
msgstr ""
#: ../network.h:206
#: ../network.h:204
msgid "owner prefix not allowed on network rules"
msgstr ""

View File

@@ -226,13 +226,13 @@ static bool add_proc_access(Profile *prof, const char *rule)
char *buffer = strdup("/proc/*/attr/apparmor/");
if (!buffer) {
PERROR("Memory allocation error\n");
return FALSE;
return false;
}
new_ent = new_entry(buffer, AA_MAY_READ, NULL);
if (!new_ent) {
free(buffer);
PERROR("Memory allocation error\n");
return FALSE;
return false;
}
add_entry_to_policy(prof, new_ent);
@@ -240,13 +240,13 @@ static bool add_proc_access(Profile *prof, const char *rule)
buffer = strdup("/sys/module/apparmor/parameters/enabled");
if (!buffer) {
PERROR("Memory allocation error\n");
return FALSE;
return false;
}
new_ent = new_entry(buffer, AA_MAY_READ, NULL);
if (!new_ent) {
free(buffer);
PERROR("Memory allocation error\n");
return FALSE;
return false;
}
add_entry_to_policy(prof, new_ent);
@@ -254,17 +254,17 @@ static bool add_proc_access(Profile *prof, const char *rule)
buffer = strdup(rule);
if (!buffer) {
PERROR("Memory allocation error\n");
return FALSE;
return false;
}
new_ent = new_entry(buffer, AA_MAY_WRITE, NULL);
if (!new_ent) {
free(buffer);
PERROR("Memory allocation error\n");
return FALSE;
return false;
}
add_entry_to_policy(prof, new_ent);
return TRUE;
return true;
}
#define CHANGEPROFILE_PATH "/proc/*/attr/{apparmor/,}{current,exec}"

View File

@@ -363,7 +363,7 @@ public:
struct cond_entry_list xattrs;
/* char *sub_name; */ /* subdomain name or NULL */
/* int default_deny; */ /* TRUE or FALSE */
/* bool default_deny; */
bool local;
Profile *parent;

View File

@@ -23,7 +23,7 @@
#include <iomanip>
#include <string>
#include <sstream>
#include <map>
#include <unordered_map>
#include "parser.h"
#include "profile.h"
@@ -35,7 +35,7 @@
#define MAXRT_SIG 32 /* Max RT above MINRT_SIG */
/* Signal names mapped to and internal ordering */
static struct signal_map { const char *name; int num; } signal_map[] = {
static unordered_map<string, int> signal_map = {
{"hup", 1},
{"int", 2},
{"quit", 3},
@@ -55,7 +55,8 @@ static struct signal_map { const char *name; int num; } signal_map[] = {
{"chld", 17},
{"cont", 18},
{"stop", 19},
{"stp", 20},
{"stp", 20}, // parser's previous name for SIGTSTP
{"tstp", 20},
{"ttin", 21},
{"ttou", 22},
{"urg", 23},
@@ -64,14 +65,12 @@ static struct signal_map { const char *name; int num; } signal_map[] = {
{"vtalrm", 26},
{"prof", 27},
{"winch", 28},
{"io", 29},
{"io", 29}, // SIGIO == SIGPOLL
{"poll", 29},
{"pwr", 30},
{"sys", 31},
{"emt", 32},
{"exists", 35},
/* terminate */
{NULL, 0}
};
/* this table is ordered post sig_map[sig] mapping */
@@ -96,7 +95,7 @@ static const char *const sig_names[MAXMAPPED_SIG + 1] = {
"chld",
"cont",
"stop",
"stp",
"tstp",
"ttin",
"ttou",
"urg",
@@ -105,7 +104,7 @@ static const char *const sig_names[MAXMAPPED_SIG + 1] = {
"vtalrm",
"prof",
"winch",
"io",
"io", // SIGIO == SIGPOLL
"pwr",
"sys",
"emt",
@@ -130,12 +129,14 @@ int find_signal_mapping(const char *sig)
return -1;
return MINRT_SIG + n;
} else {
for (int i = 0; signal_map[i].name; i++) {
if (strcmp(sig, signal_map[i].name) == 0)
return signal_map[i].num;
// Can't use string_view because that requires C++17
auto sigmap = signal_map.find(string(sig));
if (sigmap != signal_map.end()) {
return sigmap->second;
} else {
return -1;
}
}
return -1;
}
void signal_rule::extract_sigs(struct value_list **list)

View File

@@ -6,7 +6,7 @@ PARSER_BIN=apparmor_parser
PARSER=$(PARSER_DIR)/$(PARSER_BIN)
# parser.conf to use in tests. Note that some test scripts have the parser options hardcoded, so passing PARSER_ARGS=... is not enough to override it.
PARSER_ARGS=--config-file=./parser.conf
PROVE_ARG=-f --directives
PROVE_ARG=-f --directives -j2
ifeq ($(VERBOSE),1)
PROVE_ARG+=-v
@@ -37,6 +37,11 @@ error_output: $(PARSER)
parser_sanity: $(PARSER) gen_xtrans gen_dbus
$(Q)LANG=C APPARMOR_PARSER="$(PARSER)" ${PROVE} ${PROVE_ARG} ${TESTS}
# use this target for faster manual testing if you don't want/need to test all the profiles generated by gen-*.py
parser_sanity-no-gen: clean $(PARSER)
@echo WARNING: not creating the profiles using the gen-*.py scripts
$(Q)LANG=C APPARMOR_PARSER="$(PARSER)" ${PROVE} ${PROVE_ARG} ${TESTS}
caching: $(PARSER)
LANG=C ./caching.py -p "$(PARSER)" $(PYTEST_ARG)

View File

@@ -0,0 +1,8 @@
#
#=DESCRIPTION network port range conditional test - missing end of range
#=EXRESULT FAIL
#
/usr/bin/foo {
network port=22-,
}

View File

@@ -0,0 +1,8 @@
#
#=DESCRIPTION network port range conditional test - missing end of range
#=EXRESULT FAIL
#
/usr/bin/foo {
network peer=(port=2222-),
}

View File

@@ -0,0 +1,8 @@
#
#=DESCRIPTION network port range conditional test - spaces in range not allowed
#=EXRESULT FAIL
#
/usr/bin/foo {
network port=22 - 443,
}

View File

@@ -0,0 +1,8 @@
#
#=DESCRIPTION network port range conditional test - spaces in range not allowed
#=EXRESULT FAIL
#
/usr/bin/foo {
network peer=(port=22 - 443),
}

View File

@@ -0,0 +1,8 @@
#
#=DESCRIPTION network port range conditional test - invalid "--"
#=EXRESULT FAIL
#
/usr/bin/foo {
network port=22--443,
}

View File

@@ -0,0 +1,8 @@
#
#=DESCRIPTION network port range conditional test - invalid "--"
#=EXRESULT FAIL
#
/usr/bin/foo {
network peer=(port=22--443),
}

View File

@@ -0,0 +1,8 @@
#
#=DESCRIPTION network port range conditional test - 3 items in range
#=EXRESULT FAIL
#
/usr/bin/foo {
network port=22-443-1024,
}

View File

@@ -0,0 +1,8 @@
#
#=DESCRIPTION network port range conditional test - 3 items in range
#=EXRESULT FAIL
#
/usr/bin/foo {
network peer=(port=22-443-1024),
}

View File

@@ -0,0 +1,8 @@
#
#=DESCRIPTION network port range conditional test - additional spaces
#=EXRESULT PASS
#
/usr/bin/foo {
network port = 22-443 ,
}

View File

@@ -0,0 +1,8 @@
#
#=DESCRIPTION network port range conditional test - additional spaces
#=EXRESULT PASS
#
/usr/bin/foo {
network peer=( port = 22-443 ),
}

View File

@@ -98,12 +98,11 @@ class AATestTemplate(unittest.TestCase, metaclass=AANoCleanupMetaClass):
except OSError as e:
return 127, str(e), ''
timeout_communicate = TimeoutFunction(sp.communicate, timeout)
out, outerr = (None, None)
try:
out, outerr = timeout_communicate(input)
out, outerr = sp.communicate(input, timeout)
rc = sp.returncode
except TimeoutFunctionException:
except subprocess.TimeoutExpired:
sp.terminate()
outerr = 'test timed out, killed'
rc = TIMEOUT_ERROR_CODE
@@ -117,31 +116,6 @@ class AATestTemplate(unittest.TestCase, metaclass=AANoCleanupMetaClass):
return rc, out, outerr
# Timeout handler using alarm() from John P. Speno's Pythonic Avocado
class TimeoutFunctionException(Exception):
"""Exception to raise on a timeout"""
class TimeoutFunction:
def __init__(self, function, timeout):
self.timeout = timeout
self.function = function
def handle_timeout(self, signum, frame):
raise TimeoutFunctionException()
def __call__(self, *args, **kwargs):
old = signal.signal(signal.SIGALRM, self.handle_timeout)
signal.alarm(self.timeout)
try:
result = self.function(*args, **kwargs)
finally:
signal.signal(signal.SIGALRM, old)
signal.alarm(0)
return result
def filesystem_time_resolution():
"""detect whether the filesystem stores subsecond timestamps"""

View File

@@ -154,11 +154,12 @@ check-logprof: test-dependencies
.PHONY: check-abstractions.d
check-abstractions.d:
@echo "*** Checking if all abstractions (with a few exceptions) contain 'include if exists <abstractions/*.d>'"
@echo "*** Checking if all abstractions (with a few exceptions) contain 'include if exists <abstractions/*.d>' and 'abi <abi/4.0>,'"
$(Q)for file in $$(find ${ABSTRACTIONS_SOURCE} ${EXTRAS_ABSTRACTIONS_SOURCE} -maxdepth 1 -type f) ; do \
case "$${file}" in */ubuntu-browsers | */ubuntu-helpers) continue ;; esac ; \
include="include if exists <abstractions/$$(basename $${file}).d>" ; \
grep -q "^ $${include}\$$" $${file} || { echo "$${file} does not contain '$${include}'"; exit 1; } ; \
grep -q "^ *abi <abi/4.0>," $${file} || { echo "$${file} does not contain 'abi <abi/4.0>,'"; exit 1; } ; \
done
.PHONY: check-tunables.d
@@ -172,9 +173,10 @@ check-tunables.d:
.PHONY: check-local
check-local:
@echo "*** Checking if all profiles contain 'include if exists <local/*>'"
@echo "*** Checking if all profiles contain 'include if exists <local/*>' and 'abi <abi/4.0>,'"
$(Q)for file in $$(find ${PROFILES_SOURCE} ${EXTRAS_SOURCE} -maxdepth 1 -type f) ; do \
case "$${file}" in */README) continue ;; esac ; \
include="include if exists <local/$$(basename $${file})>" ; \
grep -q "^ *$${include}\$$" $${file} || { echo "$${file} does not contain '$${include}'"; exit 1; } ; \
grep -q "^ *abi <abi/4.0>," $${file} || { echo "$${file} does not contain 'abi <abi/4.0>,'"; exit 1; } ; \
done

View File

@@ -1,22 +0,0 @@
# ------------------------------------------------------------------
#
# Copyright (C) 2021 Mikhail Morfikov
# Copyright (C) 2021-2025 Alexandre Pujol <alexandre@pujol.io>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of version 2 of the GNU General Public
# License published by the Free Software Foundation.
#
# ------------------------------------------------------------------
abi <abi/4.0>,
include <abstractions/devices-usb-read>
/dev/bus/usb/@{int}/@{int} wk,
@{sys}/devices/**/usb@{int}/{,**} w,
include if exists <abstractions/devices-usb.d>
# vim:syntax=apparmor

View File

@@ -1,35 +0,0 @@
# ------------------------------------------------------------------
#
# Copyright (C) 2021 Mikhail Morfikov
# Copyright (C) 2021-2025 Alexandre Pujol <alexandre@pujol.io>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of version 2 of the GNU General Public
# License published by the Free Software Foundation.
#
# ------------------------------------------------------------------
abi <abi/4.0>,
/dev/ r,
/dev/bus/usb/ r,
/dev/bus/usb/@{int}/ r,
/dev/bus/usb/@{int}/@{int} r,
@{sys}/class/ r,
@{sys}/class/usbmisc/ r,
@{sys}/bus/ r,
@{sys}/bus/usb/ r,
@{sys}/bus/usb/devices/{,**} r,
@{sys}/devices/**/usb@{int}/{,**} r,
# Udev data about usb devices (~equal to content of lsusb -v)
@{run}/udev/data/+usb:* r,
@{run}/udev/data/c16[6,7]:@{int} r, # USB modems
@{run}/udev/data/c18[0,8,9]:@{int} r, # USB devices & USB serial converters
include if exists <abstractions/devices-usb-read.d>
# vim:syntax=apparmor

View File

@@ -10,6 +10,8 @@
#
# ------------------------------------------------------------------
abi <abi/4.0>,
# Note: executing groff and nroff themself is not included in this abstraction
# so that you can choose to ix, Px or Cx them in your profile

View File

@@ -42,9 +42,6 @@
# have open
@{run}/nscd/db* mix,
# make libnss-libvirt name resolution work.
/var/lib/libvirt/dnsmasq/* r,
# make libnss-libvirt name resolution work.
/var/lib/libvirt/dnsmasq/ r,
/var/lib/libvirt/dnsmasq/*.status r,

View File

@@ -13,19 +13,19 @@
abi <abi/4.0>,
/{usr/,}bin/ r,
/{usr/,}bin/python{2.[4-7],3,3.[0-9],3.1[0-9]} r,
/{usr/,}bin/python{2.[4-7],3,3.[0-9],3.[1-9][0-9]} r,
/usr/{local/,}lib{,32,64}/python{2.[4-7],3,3.[0-9],3.1[0-9]}/**.{pyc,so,so.*[0-9]} mr,
/usr/{local/,}lib{,32,64}/python{2.[4-7],3,3.[0-9],3.1[0-9]}/**.{egg,py,pth} r,
/usr/{local/,}lib{,32,64}/python{2.[4-7],3,3.[0-9],3.1[0-9]}/{site,dist}-packages/ r,
/usr/{local/,}lib{,32,64}/python{2.[4-7],3,3.[0-9],3.1[0-9]}/{site,dist}-packages/**/ r,
/usr/{local/,}lib{,32,64}/python{2.[4-7],3,3.[0-9],3.1[0-9]}/{site,dist}-packages/*.dist-info/{METADATA,namespace_packages.txt} r,
/usr/{local/,}lib{,32,64}/python{2.[4-7],3,3.[0-9],3.1[0-9]}/{site,dist}-packages/*.VERSION r,
/usr/{local/,}lib{,32,64}/python{2.[4-7],3,3.[0-9],3.1[0-9]}/{site,dist}-packages/*.egg-info/PKG-INFO r,
/usr/{local/,}lib{,32,64}/python3.{1,}[0-9]/lib-dynload/*.so mr,
/usr/{local/,}lib{,32,64}/python{2.[4-7],3,3.[0-9],3.[1-9][0-9]}/**.{pyc,so,so.*[0-9]} mr,
/usr/{local/,}lib{,32,64}/python{2.[4-7],3,3.[0-9],3.[1-9][0-9]}/**.{egg,py,pth} r,
/usr/{local/,}lib{,32,64}/python{2.[4-7],3,3.[0-9],3.[1-9][0-9]}/{site,dist}-packages/ r,
/usr/{local/,}lib{,32,64}/python{2.[4-7],3,3.[0-9],3.[1-9][0-9]}/{site,dist}-packages/**/ r,
/usr/{local/,}lib{,32,64}/python{2.[4-7],3,3.[0-9],3.[1-9][0-9]}/{site,dist}-packages/*.dist-info/{METADATA,namespace_packages.txt} r,
/usr/{local/,}lib{,32,64}/python{2.[4-7],3,3.[0-9],3.[1-9][0-9]}/{site,dist}-packages/*.VERSION r,
/usr/{local/,}lib{,32,64}/python{2.[4-7],3,3.[0-9],3.[1-9][0-9]}/{site,dist}-packages/*.egg-info/PKG-INFO r,
/usr/{local/,}lib{,32,64}/python{3.[0-9],3.[1-9][0-9]}/lib-dynload/*.so mr,
# Site-wide configuration
/etc/python{2.[4-7],3.[0-9],3.1[0-9]}/** r,
/etc/python{2.[4-7],3.[0-9],3.[1-9][0-9]}/** r,
# shared python paths
/usr/share/{pyshared,pycentral,python-support}/** r,
@@ -38,12 +38,12 @@
/usr/lib/wx/python/*.pth r,
# python build configuration and headers
/usr/include/python{2.[4-7],3.[0-9],3.1[0-9]}*/pyconfig.h r,
/usr/include/python{2.[4-7],3.[0-9],3.[1-9][0-9]}*/pyconfig.h r,
owner @{HOME}/.local/lib/python{2.[4-7],3,3.[0-9],3.1[0-9]}/**.{pyc,so} mr,
owner @{HOME}/.local/lib/python{2.[4-7],3,3.[0-9],3.1[0-9]}/**.{egg,py,pth} r,
owner @{HOME}/.local/lib/python{2.[4-7],3,3.[0-9],3.1[0-9]}/{site,dist}-packages/ r,
owner @{HOME}/.local/lib/python{2.[4-7],3,3.[0-9],3.1[0-9]}/{site,dist}-packages/**/ r,
owner @{HOME}/.local/lib/python{2.[4-7],3,3.[0-9],3.[1-9][0-9]}/**.{pyc,so} mr,
owner @{HOME}/.local/lib/python{2.[4-7],3,3.[0-9],3.[1-9][0-9]}/**.{egg,py,pth} r,
owner @{HOME}/.local/lib/python{2.[4-7],3,3.[0-9],3.[1-9][0-9]}/{site,dist}-packages/ r,
owner @{HOME}/.local/lib/python{2.[4-7],3,3.[0-9],3.[1-9][0-9]}/{site,dist}-packages/**/ r,
# Starting with Python 3.8, you can use the PYTHONPYCACHEPREFIX environment
# variable to define a cache directory for Python.

View File

@@ -1,3 +1,5 @@
abi <abi/4.0>,
profile snap_browsers {
include if exists <abstractions/snap_browsers.d>
include <abstractions/base>

View File

@@ -2,6 +2,8 @@
# LOGPROF-SUGGEST: no
# Author: Daniel Richard G. <skunk@iSKUNK.ORG>
abi <abi/4.0>,
include <abstractions/base>
include <abstractions/freedesktop.org>
include <abstractions/nameservice>

profiles/apparmor.d/tar (new file, 37 lines)
View File

@@ -0,0 +1,37 @@
#------------------------------------------------------------------
# Copyright (C) 2024 Canonical Ltd.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of version 2 of the GNU General Public
# License published by the Free Software Foundation.
#------------------------------------------------------------------
# vim: ft=apparmor
abi <abi/4.0>,
include <tunables/global>
profile tar /usr/bin/tar {
include <abstractions/base>
# used to extract user files as root
capability chown,
# used to compress user files as root
capability dac_override,
capability dac_read_search,
file rwl /**,
# tar can be made to filter archives through an arbitrary program
/{usr{/local,},}/{bin,sbin}/* ix,
/opt/** ix,
# tar can compress/extract files over rsh/ssh
network stream ip=127.0.0.1,
network stream ip=::1,
# Site-specific additions and overrides. See local/README for details.
include if exists <local/tar>
}

View File

@@ -17,7 +17,6 @@ include <tunables/multiarch>
include <tunables/proc>
include <tunables/alias>
include <tunables/kernelvars>
include <tunables/system>
include <tunables/xdg-user-dirs>
include <tunables/share>
include <tunables/etc>

View File

@@ -1,99 +0,0 @@
# ------------------------------------------------------------------
#
# Copyright (C) 2025 Alexandre Pujol <alexandre@pujol.io>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of version 2 of the GNU General Public
# License published by the Free Software Foundation.
#
# ------------------------------------------------------------------
# Any digit
@{d}=[0-9]
# Any letter
@{l}=[a-zA-Z]
# Single alphanumeric character
@{c}=[0-9a-zA-Z]
# Word character: matches any letter, digit or underscore.
@{w}=[a-zA-Z0-9_]
# Single hexadecimal character
@{h}=[0-9a-fA-F]
# Integer up to 10 digits (0-9999999999)
@{int}=@{d}{@{d},}{@{d},}{@{d},}{@{d},}{@{d},}{@{d},}{@{d},}{@{d},}{@{d},}
# hexadecimal, alphanumeric and word up to 64 characters
@{hex}=@{h}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}
@{rand}=@{c}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}
@{word}=@{w}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}
# Unsigned integer over 8 bits (0...255)
@{u8}=[0-9]{[0-9],} 1[0-9][0-9] 2[0-4][0-9] 25[0-5]
# Unsigned integer over 16 bits (0...65,535 5 digits)
@{u16}={@{d},[1-9]@{d},[1-9][@{d}@{d},[1-9]@{d}@{d}@{d},[1-6]@{d}@{d}@{d}@{d}}
# Unsigned integer over 32 bits (0...4,294,967,295 10 digits)
@{u32}={@{d},[1-9]@{d},[1-9]@{d}@{d},[1-9]@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d}@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d},[1-4]@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}}
# Unsigned integer over 64 bits (0...18,446,744,073,709,551,615 20 digits).
@{u64}={@{d},[1-9]@{d},[1-9]@{d}@{d},[1-9]@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d}@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d},1@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}}
# Any x digits characters
@{int2}=@{d}@{d}
@{int4}=@{int2}@{int2}
@{int6}=@{int4}@{int2}
@{int8}=@{int4}@{int4}
@{int9}=@{int8}@{d}
@{int10}=@{int8}@{int2}
@{int12}=@{int8}@{int4}
@{int15}=@{int8}@{int4}@{int2}@{d}
@{int16}=@{int8}@{int8}
@{int32}=@{int16}@{int16}
@{int64}=@{int32}@{int32}
# Any x hexadecimal characters
@{hex2}=@{h}@{h}
@{hex4}=@{hex2}@{hex2}
@{hex6}=@{hex4}@{hex2}
@{hex8}=@{hex4}@{hex4}
@{hex9}=@{hex8}@{h}
@{hex10}=@{hex8}@{hex2}
@{hex12}=@{hex8}@{hex4}
@{hex15}=@{hex8}@{hex4}@{hex2}@{h}
@{hex16}=@{hex8}@{hex8}
@{hex32}=@{hex16}@{hex16}
@{hex38}=@{hex32}@{hex6}
@{hex64}=@{hex32}@{hex32}
# Any x alphanumeric characters
@{rand2}=@{c}@{c}
@{rand4}=@{rand2}@{rand2}
@{rand6}=@{rand4}@{rand2}
@{rand8}=@{rand4}@{rand4}
@{rand9}=@{rand8}@{c}
@{rand10}=@{rand8}@{rand2}
@{rand12}=@{rand8}@{rand4}
@{rand15}=@{rand8}@{rand4}@{rand2}@{c}
@{rand16}=@{rand8}@{rand8}
@{rand32}=@{rand16}@{rand16}
@{rand64}=@{rand32}@{rand32}
# Any x word characters
@{word2}=@{w}@{w}
@{word4}=@{word2}@{word2}
@{word6}=@{word4}@{word2}
@{word8}=@{word4}@{word4}
@{word9}=@{word8}@{w}
@{word10}=@{word8}@{word2}
@{word12}=@{word8}@{word4}
@{word15}=@{word8}@{word4}@{word2}@{w}
@{word16}=@{word8}@{word8}
@{word32}=@{word16}@{word16}
@{word64}=@{word32}@{word32}
include if exists <tunables/system.d>
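The long definitions in this removed file all follow the same scheme: one mandatory single-character class followed by optional repetitions written as {@{x},}. A small sketch of how such a value could be generated (the helper is hypothetical, not part of AppArmor):

# Rough generator for the "up to N characters" idiom used above:
# one mandatory @{x} followed by N-1 optional {@{x},} groups.
def aa_repeat(var, n):
    return '@{%s}' % var + ('{@{%s},}' % var) * (n - 1)

print('@{int}=' + aa_repeat('d', 10))   # matches 1 to 10 digits
print('@{hex}=' + aa_repeat('h', 64))   # matches 1 to 64 hex characters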

View File

@@ -9,6 +9,8 @@
# ------------------------------------------------------------------
# vim: ft=apparmor
abi <abi/4.0>,
include <tunables/global>
profile dovecot-director /usr/lib*/dovecot/director flags=(attach_disconnected) {

View File

@@ -9,6 +9,8 @@
# ------------------------------------------------------------------
# vim: ft=apparmor
abi <abi/4.0>,
include <tunables/global>
profile dovecot-doveadm-server /usr/lib*/dovecot/doveadm-server flags=(attach_disconnected) {

View File

@@ -12,6 +12,8 @@
# vim: ft=apparmor
# for https://wiki.dovecot.org/Replication
abi <abi/4.0>,
include <tunables/dovecot>
include <tunables/global>

View File

@@ -11,19 +11,17 @@
#
# disabled by default as it can break some use cases on a system that
# doesn't have or has disable user namespace restrictions for unconfined
# use aa-enforce to enable it
abi <abi/4.0>,
include <tunables/global>
profile unshare /usr/bin/unshare flags=(attach_disconnected mediate_deleted) {
# not allow all, to allow for pix stack on systems that don't support
# rule priority.
#
# sadly we have to allow 'm' every where to allow children to work under
# profile stacking atm.
profile unshare /usr/bin/unshare flags=(attach_disconnected) {
# not allow all, to allow for cix transition
# and to limit executable mapping to just unshare
allow capability,
allow file rwmlk /{**,},
allow file rwlk /{**,},
allow network,
allow unix,
allow ptrace,
@@ -35,41 +33,33 @@ profile unshare /usr/bin/unshare flags=(attach_disconnected mediate_deleted) {
allow umount,
allow pivot_root,
allow dbus,
# This will stack a target profile against unpriv_unshare
# Most of the comments for the pix transition in bwrap-userns-restrict
# also apply here, with the exception of unshare not using no-new-privs
# Thus, we only need a two-layer stack instead of a three-layer stack
audit allow pix /** -> &unpriv_unshare,
audit allow cx /** -> unpriv,
allow file m /usr/lib/@{multiarch}/libc.so.6,
allow file m /usr/bin/unshare,
# the local include should not be used without understanding the userns
# restriction.
# Site-specific additions and overrides. See local/README for details.
include if exists <local/unshare-userns-restrict>
}
profile unpriv_unshare flags=(attach_disconnected mediate_deleted) {
# not allow all, to allow for pix stack
allow file rwlkm /{**,},
allow network,
allow unix,
allow ptrace,
allow signal,
allow mqueue,
allow io_uring,
allow userns,
allow mount,
allow umount,
allow pivot_root,
allow dbus,
# Maintain the stack against itself for further transitions
# If done recursively the stack will remove any duplicate
allow pix /** -> &unpriv_unshare,
audit deny capability,
# the local include should not be used without understanding the userns
# restriction.
# Site-specific additions and overrides. See local/README for details.
include if exists <local/unpriv_unshare>
profile unpriv flags=(attach_disconnected) {
# not allow all, to allow for pix stack
allow file rwlkm /{**,},
allow network,
allow unix,
allow ptrace,
allow signal,
allow mqueue,
allow io_uring,
allow userns,
allow mount,
allow umount,
allow pivot_root,
allow dbus,
allow pix /** -> &unshare//unpriv,
audit deny capability,
}
}

View File

@@ -8,6 +8,8 @@
#
# ------------------------------------------------------------------
abi <abi/4.0>,
include <tunables/global>
profile pyzorsocket /usr/bin/pyzorsocket {

View File

@@ -8,6 +8,8 @@
#
# ------------------------------------------------------------------
abi <abi/4.0>,
include <tunables/global>
profile razorsocket /usr/bin/razorsocket {

View File

@@ -8,6 +8,8 @@
#
# ------------------------------------------------------------------
abi <abi/4.0>,
include <tunables/global>
profile clamd /usr/sbin/clamd {

View File

@@ -8,6 +8,8 @@
#
# ------------------------------------------------------------------
abi <abi/4.0>,
include <tunables/global>
profile haproxy /usr/sbin/haproxy {

View File

@@ -66,6 +66,7 @@ backends:
- ubuntu-cloud-24.04:
username: ubuntu
password: ubuntu
workers: 4
manual: true
- ubuntu-cloud-24.10:
username: ubuntu

View File

@@ -0,0 +1,13 @@
summary: smoke test for the tar profile
execute: |
# tar works (this is a very basic test).
# create a text file, archive it and delete the original file
echo "test" > file.txt
tar -czf archive.tar file.txt
rm file.txt
# extract archive, assert content is correct
tar -xzf archive.tar
test "$(cat file.txt)" = "test"
# The profile is attached based on the program path.
"$SPREAD_PATH"/tests/bin/actual-profile-of tar | MATCH 'tar \(enforce\)'

View File

@@ -64,7 +64,7 @@ mount_cleanup() {
}
do_onexit="mount_cleanup"
fallocate -l 512K ${mount_file}
dd if=/dev/zero of=${mount_file} bs=1024 count=512 2> /dev/null
/sbin/mkfs -t${fstype} -F ${mount_file} > /dev/null 2> /dev/null
/bin/mkdir ${mount_point}
/bin/mkdir ${mount_point2}
@@ -77,8 +77,8 @@ if [ ! -b /dev/loop0 ] ; then
fi
# find the next free loop device and mount it
/sbin/losetup -f ${mount_file} || fatalerror 'Unable to set up a loop device'
loop_device="$(/sbin/losetup -n -O NAME -l -j ${mount_file})"
loop_device=$(losetup -f) || fatalerror 'Unable to find a free loop device'
/sbin/losetup "$loop_device" ${mount_file} > /dev/null 2> /dev/null
options=(
# default and non-default options

View File

@@ -83,7 +83,7 @@ environment:
# test is expected to fail.
#
# Error: unix_fd_server passed. Test 'ATTACH_DISCONNECTED (attach_disconnected.path rule at /)' was expected to 'fail'
XFAIL/attach_disconnected: opensuse-cloud-tumbleweed debian-cloud-12 debian-cloud-13 ubuntu-cloud-22.04
XFAIL/attach_disconnected: opensuse-cloud-tumbleweed debian-cloud-12 debian-cloud-13 ubuntu-cloud-22.04 ubuntu-cloud-24.04 ubuntu-cloud-24.10
# Error: unix_fd_server failed. Test 'fd passing; unconfined client' was expected to 'pass'. Reason for failure 'FAIL - bind failed: Permission denied'
# Error: unix_fd_server failed. Test 'fd passing; confined client w/ rw' was expected to 'pass'. Reason for failure 'FAIL - bind failed: Permission denied'
XFAIL/deleted: opensuse-cloud-tumbleweed debian-cloud-12 debian-cloud-13
@@ -102,6 +102,21 @@ environment:
# Error: unix_socket passed. Test 'AF_UNIX pathname socket (seqpacket); confined server w/ a missing af_unix access (create)' was expected to 'fail'
# Error: unix_socket failed. Test 'AF_UNIX pathname socket (seqpacket); confined client w/ access (rw)' was expected to 'pass'. Reason for failure 'FAIL - setsockopt (SO_RCVTIMEO): Permission denied'
XFAIL/unix_socket_pathname: opensuse-cloud-tumbleweed debian-cloud-12 debian-cloud-13
# using ptrace v6 tests ...
# Error: ptrace failed. Test 'test allow all' was expected to 'pass'. Reason for failure 'FAIL: child exec failed - : Permission denied'
# Error: ptrace failed. Test 'test allow all -c' was expected to 'pass'. Reason for failure 'FAIL: child exec failed - : Permission denied'
# Error: ptrace failed. Test 'test allow all -h' was expected to 'pass'. Reason for failure 'FAIL: child exec failed - : Permission denied'
# Error: ptrace failed. Test 'test allow all -hc' was expected to 'pass'. Reason for failure 'FAIL: child exec failed - : Permission denied'
# Error: ptrace failed. Test 'test allow all -h prog' was expected to 'pass'. Reason for failure 'FAIL: child exec failed - : Permission denied'
# Error: ptrace failed. Test 'test allow all -hc prog' was expected to 'pass'. Reason for failure 'FAIL: child exec failed - : Permission denied'
XFAIL/ptrace: debian-cloud-12 debian-cloud-13 ubuntu-cloud-22.04 ubuntu-cloud-24.04 ubuntu-cloud-24.10 opensuse-cloud-tumbleweed
# Error: posix_mq_rcv failed. Test 'POSIX MQUEUE (confined root - allow all)' was expected to 'pass'. Reason for failure 'FAIL 0 - execlp /tmp/apparmor/tests/regression/apparmor/posix_mq_snd /queuename- Permission denied'
# Error: posix_mq_rcv failed. Test 'POSIX MQUEUE (confined root - allow all : mq_notify)' was expected to 'pass'. Reason for failure 'FAIL 0 - execlp /tmp/apparmor/tests/regression/apparmor/posix_mq_snd /queuename 4- Permission denied'
# Error: posix_mq_rcv failed. Test 'POSIX MQUEUE (confined root - allow all : select)' was expected to 'pass'. Reason for failure 'FAIL 0 - execlp /tmp/apparmor/tests/regression/apparmor/posix_mq_snd /queuename- Permission denied'
# Error: posix_mq_rcv failed. Test 'POSIX MQUEUE (confined root - allow all : poll)' was expected to 'pass'. Reason for failure 'FAIL 0 - execlp /tmp/apparmor/tests/regression/apparmor/posix_mq_snd /queuename- Permission denied'
# Error: posix_mq_rcv failed. Test 'POSIX MQUEUE (confined root - allow all : epoll)' was expected to 'pass'. Reason for failure 'FAIL 0 - execlp /tmp/apparmor/tests/regression/apparmor/posix_mq_snd /queuename- Permission denied'
XFAIL/mqueue: debian-cloud-12 debian-cloud-13 ubuntu-cloud-22.04 ubuntu-cloud-24.04 ubuntu-cloud-24.10 opensuse-cloud-tumbleweed
XFAIL/posix_ipc: ubuntu-cloud-22.04 ubuntu-cloud-24.04 ubuntu-cloud-24.10
artifacts:
- bash.log
- bash.err
@@ -123,13 +138,14 @@ execute: |
echo "Test execution logs are in the files bash.{log,err,trace} and are collected as artifacts"
echo "Bash errors are listed below:"
cat bash.err
echo "Tail of the trace is:"
tail bash.trace
exit 1
else
for xfail in ${XFAIL:-}; do
if [ "$SPREAD_SYSTEM" = "$xfail" ]; then
echo "Test $SPREAD_VARIANT has unexpectedly passed"
echo "Test execution logs are in the files bash.{log,err,trace} and are collected as artifacts"
echo "Bash errors are listed below:"
cat bash.err
exit 1
fi
done

View File

@@ -39,10 +39,10 @@ all: docs
docs: ${MANPAGES} ${HTMLMANPAGES}
# need some better way of determining this
DESTDIR?=/
BINDIR?=${DESTDIR}/usr/sbin
CONFDIR?=${DESTDIR}/etc/apparmor
PYPREFIX?=/usr
DESTDIR=/
BINDIR=${DESTDIR}/usr/sbin
CONFDIR=${DESTDIR}/etc/apparmor
PYPREFIX=/usr
po/${NAME}.pot: ${TOOLS} ${PYMODULES}

View File

@@ -139,7 +139,7 @@ ratelimit_saved = sysctl_read(ratelimit_sysctl)
try:
sysctl_write(ratelimit_sysctl, 0)
except OSError: # will fail in lxd
except PermissionError: # will fail in lxd
warn("Can't set printk_ratelimit, some events might be lost")
atexit.register(restore_ratelimit)

View File

@@ -1,7 +1,7 @@
#! /usr/bin/python3
# ----------------------------------------------------------------------
# Copyright (C) 2013 Kshitij Gupta <kgupta8592@gmail.com>
# Copyright (C) 2014-2018 Christian Boltz <apparmor@cboltz.de>
# Copyright (C) 2014-2024 Christian Boltz <apparmor@cboltz.de>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of version 2 of the GNU General Public
@@ -17,7 +17,6 @@ import argparse
import apparmor.aa
import apparmor.cleanprofile as cleanprofile
import apparmor.severity
import apparmor.ui as aaui
from apparmor.fail import enable_aa_exception_handler
from apparmor.translations import init_translation
@@ -114,12 +113,11 @@ class Merge(object):
def ask_merge_questions(self):
other = self.base
log_dict = {'merge': apparmor.aa.split_to_merged(other.aa)}
log_dict = {'merge': other.active_profiles.get_all_profiles()}
apparmor.aa.loadincludes()
if not apparmor.aa.sev_db:
apparmor.aa.sev_db = apparmor.severity.Severity(apparmor.aa.CONFDIR + '/severity.db', _('unknown'))
apparmor.aa.load_sev_db()
# ask about preamble rules
apparmor.aa.ask_rule_questions(

View File

@@ -37,9 +37,11 @@ import os
import pwd
import re
import sys
import tempfile
import time
import subprocess
from collections import defaultdict
import notify2
import psutil
@@ -54,7 +56,7 @@ from apparmor.fail import enable_aa_exception_handler
from apparmor.notify import get_last_login_timestamp
from apparmor.translations import init_translation
from apparmor.logparser import ReadLog
from apparmor.gui import UsernsGUI, ErrorGUI, ShowMoreGUI, set_interface_theme
from apparmor.gui import UsernsGUI, ErrorGUI, ShowMoreGUI, ShowMoreGUIAggregated, set_interface_theme
from apparmor.rule.file import FileRule
from dbus import DBusException
@@ -121,7 +123,7 @@ def is_event_in_filter(event, filters):
return True
def notify_about_new_entries(logfile, filters, wait=0):
def daemonize():
"""Run the notification daemon in the background."""
# Kill other instances of aa-notify if already running
for process in psutil.process_iter():
@@ -144,19 +146,9 @@ def notify_about_new_entries(logfile, filters, wait=0):
except DBusException:
sys.exit(_('Cannot initialize notify2. Please check that your terminal can use a graphical interface'))
try:
thread = threading.Thread(target=start_glib_loop)
thread.daemon = True
thread.start()
for event in follow_apparmor_events(logfile, wait):
if not is_event_in_filter(event, filters):
continue
debug_logger.info(format_event(event, logfile))
yield event, format_event(event, logfile)
except PermissionError:
sys.exit(_("ERROR: Cannot read {}. Please check permissions.").format(logfile))
thread = threading.Thread(target=start_glib_loop)
thread.daemon = True
thread.start()
else:
print(_('Notification emitter started in the background'))
# pids = (os.getpid(), newpid)
@@ -164,6 +156,17 @@ def notify_about_new_entries(logfile, filters, wait=0):
os._exit(0) # Exit child without calling exit handlers etc
def notify_about_new_entries(logfile, filters, wait=0):
try:
for event in follow_apparmor_events(logfile, wait):
if not is_event_in_filter(event, filters):
continue
debug_logger.info(format_event(event, logfile))
yield event, format_event(event, logfile)
except PermissionError:
sys.exit(_("ERROR: Cannot read {}. Please check permissions.").format(logfile))
def show_entries_since_epoch(logfile, epoch_since, filters):
"""Show AppArmor notifications since given timestamp."""
count = 0
@@ -292,6 +295,22 @@ def reopen_logfile_if_needed(logfile, logdata, log_inode, log_size):
return (logdata, log_inode, log_size)
def get_apparmor_events_return(logfile, since=0):
"""Read audit events from log source and return all relevant events."""
out = []
# Get logdata from file
# @TODO Implement more log sources in addition to just the logfile
try:
with open_file_read(logfile) as logdata:
for event in parse_logdata(logdata):
if event.epoch > since:
out.append(event)
return out
except PermissionError:
sys.exit(_("ERROR: Cannot read {}. Please check permissions.".format(logfile)))
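A hypothetical caller of the list-returning variant above (log path and cut-off are placeholders; events are assumed to carry an epoch attribute, as the comparison in the function implies):

import time

one_hour_ago = time.time() - 3600
for ev in get_apparmor_events_return('/var/log/audit/audit.log', since=one_hour_ago):
    print(ev.epoch, ev)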
def get_apparmor_events(logfile, since=0):
"""Read audit events from log source and yield all relevant events."""
@@ -378,6 +397,10 @@ def drop_privileges():
os.setegid(int(next_gid))
os.seteuid(int(next_uid))
# sudo does not preserve DBUS address, so we need to guess it based on UID
if 'DBUS_SESSION_BUS_ADDRESS' not in os.environ:
os.environ['DBUS_SESSION_BUS_ADDRESS'] = 'unix:path=/run/user/{}/bus'.format(os.geteuid())
def raise_privileges():
"""Raise privileges of process.
@@ -477,77 +500,95 @@ def ask_for_user_ns_denied(path, name, interactive=True):
debug_logger.debug('No action from the user for {}'.format(path))
def prompt_userns(ev, special_profiles):
"""If the user namespace creation denial was generated by an unconfined binary, displays a graphical notification.
Creates a new profile to allow userns if the user wants it. Returns whether a notification was displayed to the user
"""
if not is_special_profile_userns(ev, special_profiles):
return False
def can_leverage_userns_event(ev):
if ev['execpath'] is None:
UsernsGUI.show_error_cannot_find_execpath(ev['comm'], os.path.dirname(os.path.abspath(__file__)) + '/default_unconfined.template')
return True
return 'error_cannot_find_path'
aa.update_profiles()
if aa.get_profile_filename_from_profile_name(ev['comm']):
return 'error_userns_profile_exists'
return 'ok'
def prompt_userns(ev):
"""If the user namespace creation denial was generated by an unconfined binary, displays a graphical notification.
Creates a new profile to allow userns if the user wants it. Returns whether a notification was displayed to the user
"""
userns_event_usable = can_leverage_userns_event(ev)
if userns_event_usable == 'error_cannot_find_path':
UsernsGUI.show_error_cannot_find_execpath(ev['comm'], os.path.dirname(os.path.abspath(__file__)) + '/default_unconfined.template')
elif userns_event_usable == 'error_userns_profile_exists':
# There is already a profile with this name: we show an error to the user.
# We could use the full path as profile name like for the old profiles if we want to handle this case
# but if execpath is not supported by the kernel it could also mean that we inferred a bad path
# So we do nothing beyond showing this error.
ErrorGUI(
_('Application {profile} tried to create an user namespace, but a profile already exists with this name.\n'
'This is likely because there is several binaries named {profile} thus the path inferred by AppArmor ({inferred_path}) is not correct.\n'
'You should review your profiles (in {profile_dir}).').format(profile=ev['comm'], inferred_path=ev['execpath'], profile_dir=aa.profile_dir),
'This is likely because there is several binaries named {profile} thus the path inferred by AppArmor ({inferred_path}) is not correct.\n'
'You should review your profiles (in {profile_dir}).').format(profile=ev['comm'], inferred_path=ev['execpath'], profile_dir=aa.profile_dir),
False).show()
return True
elif userns_event_usable == 'ok':
ask_for_user_ns_denied(ev['execpath'], ev['comm'])
ask_for_user_ns_denied(ev['execpath'], ev['comm'])
return True
def get_more_info_about_event(rl, ev, special_profiles, header='', get_clean_rule=False):
out = header
clean_rule = None
for key, value in ev.items():
if value:
out += '\t{} = {}\n'.format(_(key), value)
out += _('\nThe software that declined this operation is {}\n').format(ev['profile'])
rule = rl.create_rule_from_ev(ev)
if rule:
if type(rule) is FileRule and rule.exec_perms == FileRule.ANY_EXEC:
rule.exec_perms = 'Pix'
aa.update_profiles()
if customized_message['userns']['cond'](ev, special_profiles):
profile_path = None
out += _('You may allow it through a dedicated unconfined profile for {}.').format(ev['comm'])
userns_event_usable = can_leverage_userns_event(ev)
if userns_event_usable == 'error_cannot_find_path':
clean_rule = _('# You may allow it through a dedicated unconfined profile for {0}. However, apparmor cannot find {0}. If you want to allow it, please create a profile for it manually.').format(ev['comm'])
elif userns_event_usable == 'error_userns_profile_exists':
clean_rule = _('# You may allow it through a dedicated unconfined profile for {} ({}). However, a profile already exists with this name. If you want to allow it, please create a profile for it manually.').format(ev['comm'], ev['execpath'])
elif userns_event_usable == 'ok':
clean_rule = _('# You may allow it through a dedicated unconfined profile for {} ({})').format(ev['comm'], ev['execpath'])
else:
profile_path = aa.get_profile_filename_from_profile_name(ev['profile'])
clean_rule = rule.get_clean()
if profile_path:
out += _('If you want to allow this operation you can add the line below in profile {}\n').format(profile_path)
out += clean_rule
else:
out += _('However {profile} is not in {profile_dir}\nIt is likely that the profile was not stored in {profile_dir} or was removed.\n').format(profile=ev['profile'], profile_dir=aa.profile_dir)
else: # Should not happen
out += _('ERROR: Could not create rule from event.')
profile_path = None
if get_clean_rule:
return out, profile_path, clean_rule
else:
return out, profile_path
# TODO reuse more code from aa-logprof in callbacks
def cb_more_info(notification, action, _args):
(raw_ev, rl, special_profiles) = _args
(ev, rl, special_profiles) = _args
args.wait = args.min_wait
notification.close()
parsed_event = rl.parse_record(raw_ev)
out = _('Operation denied by AppArmor\n\n')
out, profile_path, clean_rule = get_more_info_about_event(rl, ev, special_profiles, _('Operation denied by AppArmor\n\n'), get_clean_rule=True)
for key, value in parsed_event.items():
if value:
out += '\t{} = {}\n'.format(_(key), value)
out += _('\nThe software that declined this operation is {}\n').format(parsed_event['profile'])
rule = rl.create_rule_from_ev(parsed_event)
# Exec events are created with the default FileRule.ANY_EXEC. We use Pix for actual rules
if type(rule) is FileRule and rule.exec_perms == FileRule.ANY_EXEC:
rule.exec_perms = 'Pix'
if rule:
aa.update_profiles()
if customized_message['userns']['cond'](parsed_event, special_profiles):
profile_path = None
out += _('You may allow it through a dedicated unconfined profile for {}.').format(parsed_event['comm'])
else:
profile_path = aa.get_profile_filename_from_profile_name(parsed_event['profile'])
if profile_path:
out += _('If you want to allow this operation you can add the line below in profile {}\n').format(profile_path)
out += rule.get_clean()
else:
out += _('However {profile} is not in {profile_dir}\nIt is likely that the profile was not stored in {profile_dir} or was removed.\n').format(profile=parsed_event['profile'], profile_dir=aa.profile_dir)
else: # Should not happen
out += _('ERROR: Could not create rule from event.')
return
ans = ShowMoreGUI(profile_path, out, rule.get_clean(), parsed_event['profile'], profile_path is not None).show()
ans = ShowMoreGUI(profile_path, out, clean_rule, ev['profile'], profile_path is not None).show()
if ans == 'add_rule':
add_to_profile(rule.get_clean(), parsed_event['profile'])
add_to_profile(clean_rule, ev['profile'])
elif ans in {'allow', 'deny'}:
create_userns_profile(parsed_event['comm'], parsed_event['execpath'], ans)
create_userns_profile(ev['comm'], ev['execpath'], ans)
def add_to_profile(rule, profile_name):
@@ -573,12 +614,69 @@ def add_to_profile(rule, profile_name):
ErrorGUI(_('Failed to add rule {rule} to {profile}\nError code = {retcode}').format(rule=rule, profile=profile_name, retcode=e.returncode), False).show()
def cb_add_to_profile(notification, action, _args):
(raw_ev, rl, special_profiles) = _args
notification.close()
parsed_event = rl.parse_record(raw_ev)
def create_from_file(file_path):
update_profile_path = update_profile.__file__
command = ['pkexec', '--keep-cwd', update_profile_path, 'from_file', file_path]
try:
subprocess.run(command, check=True)
except subprocess.CalledProcessError as e:
if e.returncode != 126: # return code 126 means the user cancelled the request
ErrorGUI(_('Failed to add some rules'), False).show()
rule = rl.create_rule_from_ev(parsed_event)
def allow_all(clean_rules):
local_template_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'default_unconfined.template')
if os.path.exists(local_template_path): # We are using local aa-notify -> we use local template
template_path = local_template_path
else:
template_path = aa.CONFDIR + '/default_unconfined.template'
tmp = tempfile.NamedTemporaryFile()
with open(tmp.name, mode='w') as f:
profile = None
for line in clean_rules.splitlines():
if line == '':
continue
elif line[0] == '#':
profile = None
pass
elif line[0] != '\t':
profile = line[8:-1] # 8:-1 is to remove 'profile ' and ':'
else:
if line[1] == '#': # Add to userns
if line[-1] == '.': # '.' <==> There is an error: we cannot add the profile automatically
continue
profile_name = line.split()[-2] # line always finishes by <profile_name>
bin_path = line.split()[-1][1:-1] # 1:-1 to remove the parenthesis
profile_path = aa.get_profile_filename_from_profile_name(profile_name, True)
if not profile_path:
ErrorGUI(_('Cannot get profile path for {}.').format(profile_name), False).show()
continue
f.write('create_userns\t{}\t{}\t{}\t{}\t{}\n'.format(template_path, profile_name, bin_path, profile_path, 'allow'))
else:
if profile is not None:
f.write('add_rule\t{}\t{}\n'.format(line[1:], profile))
else:
print(_("Rule {} cannot be added automatically").format(line[1:]), file=sys.stdout)
create_from_file(tmp.name)
# TODO reuse more code from aa-logprof in callbacks
def cb_more_info_aggregated(notification, action, _args):
(to_display, aggregated, clean_rules) = _args
args.wait = args.min_wait
res = ShowMoreGUIAggregated(to_display, aggregated, clean_rules).show()
if res == 'allow_all':
allow_all(clean_rules)
def cb_add_to_profile(notification, action, _args):
(ev, rl, special_profiles) = _args
args.wait = args.min_wait
notification.close()
rule = rl.create_rule_from_ev(ev)
# Exec events are created with the default FileRule.ANY_EXEC. We use Pix for actual rules
if type(rule) is FileRule and rule.exec_perms == FileRule.ANY_EXEC:
@@ -590,10 +688,10 @@ def cb_add_to_profile(notification, action, _args):
aa.update_profiles()
if customized_message['userns']['cond'](parsed_event, special_profiles):
ask_for_user_ns_denied(parsed_event['execpath'], parsed_event['comm'], False)
if customized_message['userns']['cond'](ev, special_profiles):
ask_for_user_ns_denied(ev['execpath'], ev['comm'], False)
else:
add_to_profile(rule.get_clean(), parsed_event['profile'])
add_to_profile(rule.get_clean(), ev['profile'])
customized_message = {
@@ -616,6 +714,83 @@ def start_glib_loop():
loop.run()
def aggregate_event(agg, ev, keys_to_aggregate):
profile = ev['profile']
agg[profile]['count'] += 1
agg[profile]['events'].append(ev)
for key in keys_to_aggregate:
if key in ev:
value = ev[key]
agg[profile]['values'][key][value] += 1
return agg
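aggregate_event() only shows the indexing pattern; below is a sketch of a compatible accumulator plus a tiny usage run (the defaultdict initialization and the sample events are assumptions, not taken from aa-notify):

from collections import defaultdict

def make_aggregator():
    # nested structure matching agg[profile]['count'/'events'/'values'][key][value]
    return defaultdict(lambda: {
        'count': 0,
        'events': [],
        'values': defaultdict(lambda: defaultdict(int)),
    })

agg = make_aggregator()
for ev in ({'profile': 'tar', 'operation': 'open', 'name': '/etc/shadow'},
           {'profile': 'tar', 'operation': 'open', 'name': '/etc/passwd'}):
    aggregate_event(agg, ev, keys_to_aggregate={'operation', 'name'})

print(agg['tar']['count'])                        # 2
print(agg['tar']['values']['operation']['open'])  # 2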
def get_aggregated(rl, agg, max_nb_profiles, keys_to_aggregate, special_profiles):
notification = ''
summary = ''
more_info = ''
clean_rules = ''
summary = _('Notifications were raised for profiles: {}\n').format(', '.join(list(agg.keys())))
sorted_profiles = sorted(agg.items(), key=lambda item: item[1]['count'], reverse=True)
for profile, data in sorted_profiles:
profile_notif = _('profile: {} — {} events\n').format(profile, data['count'])
notification += profile_notif
summary += profile_notif
if len(agg) <= max_nb_profiles:
for key in keys_to_aggregate:
if key in data['values']:
total_key_events = sum(data['values'][key].values())
sorted_values = sorted(data['values'][key].items(), key=lambda item: item[1], reverse=True)
for value, count in sorted_values:
percent = (count / total_key_events) * 100
if percent >= 20: # We exclude rare cases for clarity. 20% is arbitrary
summary += _('\t{} was {} {:.1f}% of the time\n').format(key, value, percent)
summary += '\n'
more_info += _('profile {}, {} events\n').format(profile, data['count'])
rules_for_profiles = set()
found_profile = True
for i, ev in enumerate(data['events']):
more_info_rule, profile_path, clean_rule = get_more_info_about_event(rl, ev, special_profiles, _(' - Event {} -\n').format(i + 1), get_clean_rule=True)
if i != 0:
more_info += '\n\n'
if not profile_path:
found_profile = False
more_info += more_info_rule
if clean_rule:
rules_for_profiles.add(clean_rule)
if rules_for_profiles != set():
if profile not in special_profiles:
if found_profile:
clean_rules += _('profile {}:').format(profile)
else:
clean_rules += _('# Unknown profile {}').format(profile)
else:
clean_rules += _('# unprivileged userns denials ({}):').format(profile)
clean_rules += '\n\t' + '\n\t'.join(rules_for_profiles) + '\n'
return notification, summary, more_info, clean_rules
def display_notification(ev, rl, message, userns_special_profiles):
message = customize_notification_message(ev, message, userns_special_profiles)
n = notify2.Notification(_('AppArmor security notice'), message, 'gtk-dialog-warning')
if can_allow_rule(ev, userns_special_profiles):
n.add_action('clicked', 'Allow', cb_add_to_profile, (ev, rl, userns_special_profiles))
n.add_action('more_clicked', 'Show More', cb_more_info, (ev, rl, userns_special_profiles))
n.show()
def display_aggregated_notification(rl, aggregated, maximum_number_notification_profiles, keys_to_aggregate, special_profiles):
notification, summary, more_info, clean_rules = get_aggregated(rl, aggregated, maximum_number_notification_profiles, keys_to_aggregate, special_profiles)
n = notify2.Notification(_('AppArmor security notice'), notification, 'gtk-dialog-warning')
n.add_action('more_aggregated_clicked', 'Show More Info', cb_more_info_aggregated, (summary, more_info, clean_rules))
n.show()
def main():
"""Run aa-notify.
@@ -652,6 +827,7 @@ def main():
parser.add_argument('-v', '--verbose', action='store_true', help=_('show messages with stats'))
parser.add_argument('-u', '--user', type=str, help=_('user to drop privileges to when not using sudo'))
parser.add_argument('-w', '--wait', type=int, metavar=('NUM'), help=_('wait NUM seconds before displaying notifications (with -p)'))
parser.add_argument('-m', '--merge-notifications', action='store_true', help=_('Merge notification for improved readability (with -p)'))
parser.add_argument('--prompt-filter', type=str, metavar=('PF'), help=_('kind of operations which display a popup prompt'))
parser.add_argument('--debug', action='store_true', help=_('debug mode'))
parser.add_argument('--configdir', type=str, help=argparse.SUPPRESS)
@@ -736,6 +912,8 @@ def main():
- ignore_denied_capability
- interface_theme
- prompt_filter
- maximum_number_notification_profiles
- keys_to_aggregate
- filter.profile,
- filter.operation,
- filter.name,
@@ -756,6 +934,8 @@ def main():
'show_notifications',
'message_body',
'message_footer',
'maximum_number_notification_profiles',
'keys_to_aggregate',
'filter.profile',
'filter.operation',
'filter.name',
@@ -852,6 +1032,16 @@ def main():
if unsupported:
sys.exit(_('ERROR: using an unsupported prompt filter: {}\nSupported values: {}').format(', '.join(unsupported), ', '.join(supported_prompt_filter)))
if 'maximum_number_notification_profiles' in config['']:
maximum_number_notification_profiles = int(config['']['maximum_number_notification_profiles'].strip())
else:
maximum_number_notification_profiles = 2
if 'keys_to_aggregate' in config['']:
keys_to_aggregate = config['']['keys_to_aggregate'].strip().split(',')
else:
keys_to_aggregate = {'operation', 'class', 'name', 'denied', 'target'}
if args.file:
logfile = args.file
elif os.path.isfile('/var/run/auditd.pid') and os.path.isfile('/var/log/audit/audit.log'):
@@ -888,49 +1078,76 @@ def main():
# Initialize the list of profiles for can_allow_rule
aa.read_profiles()
# At this point this script needs to be able to read 'logfile' but once
# the for loop starts, privileges can be dropped since the file descriptor
# has been opened and access granted. Further reads of the file will not
# trigger any new permission checks.
# @TODO Plan to catch PermissionError here or..?
for (event, message) in notify_about_new_entries(logfile, filters, args.wait):
ev = rl.parse_record(event)
drop_privileges()
daemonize()
raise_privileges()
# @TODO redo special behaviours with a more regular function
# We ignore capability denials for binaries in ignore_denied_capability
if ev['operation'] == 'capable' and ev['comm'] in ignore_denied_capability:
continue
if args.merge_notifications:
if not args.wait or args.wait == 0:
# args.wait now uses an exponential backoff.
# If there are several notifications in a time period, the period doubles to avoid flooding.
# If there are no notifications in a time period, the period is halved.
args.wait = 5
args.min_wait = args.wait
args.max_wait = args.wait * 2**5 # Arbitrary power of two (2 minutes 40 if args.wait is 5 seconds)
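# Illustration of the resulting bounds, assuming the default wait of 5 seconds
# (numbers derived from the doubling/halving logic above, not measured):
#   repeated bursts : 5s -> 10s -> 20s -> 40s -> 80s -> 160s (capped at max_wait)
#   quiet periods   : 160s -> 80s -> 40s -> 20s -> 10s -> 5s (never below min_wait)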
# Special behavior for userns:
if args.prompt_filter and 'userns' in args.prompt_filter and customized_message['userns']['cond'](ev, userns_special_profiles):
if prompt_userns(ev, userns_special_profiles):
old_time = int(time.time())
while True:
raw_evs = get_apparmor_events_return(logfile, old_time)
drop_privileges()
if len(raw_evs) == 1: # Single event: we handle it without aggregation
raw_ev = raw_evs[0]
ev = rl.parse_record(raw_ev)
display_notification(ev, rl, format_event(raw_ev, logfile), userns_special_profiles)
elif len(raw_evs) > 1:
if args.wait < args.max_wait:
args.wait *= 2
aggregated = defaultdict(lambda: {'count': 0, 'values': defaultdict(lambda: defaultdict(int)), 'events': []})
for raw_ev in raw_evs:
ev = rl.parse_record(raw_ev)
aggregate_event(aggregated, ev, keys_to_aggregate)
display_aggregated_notification(rl, aggregated, maximum_number_notification_profiles, keys_to_aggregate, userns_special_profiles)
else:
if args.wait > args.min_wait:
args.wait /= 2
old_time = int(time.time())
# When the notification has been sent, raise privileges back to root if the
# original effective user id was zero (to be able to read AppArmor logs)
raise_privileges()
time.sleep(args.wait)
else:
args.min_wait = args.wait
# At this point this script needs to be able to read 'logfile' but once
# the for loop starts, privileges can be dropped since the file descriptor
# has been opened and access granted. Further reads of the file will not
# trigger any new permission checks.
# @TODO Plan to catch PermissionError here or..?
for (event, message) in notify_about_new_entries(logfile, filters, args.wait):
ev = rl.parse_record(event)
# @TODO redo special behaviours with a more regular function
# We ignore capability denials for binaries in ignore_denied_capability
if ev['operation'] == 'capable' and ev['comm'] in ignore_denied_capability:
continue
# Special behavior for userns:
if args.prompt_filter and 'userns' in args.prompt_filter and customized_message['userns']['cond'](ev, userns_special_profiles):
prompt_userns(ev)
continue # Notification already displayed for this event, we go to the next one.
# Notifications should not be run as root, since root probably is
# the wrong desktop user and not the one getting the notifications.
drop_privileges()
# Notifications should not be run as root, since root probably is
# the wrong desktop user and not the one getting the notifications.
drop_privileges()
# sudo does not preserve DBUS address, so we need to guess it based on UID
if 'DBUS_SESSION_BUS_ADDRESS' not in os.environ:
os.environ['DBUS_SESSION_BUS_ADDRESS'] = 'unix:path=/run/user/{}/bus'.format(os.geteuid())
display_notification(ev, rl, message, userns_special_profiles)
message = customize_notification_message(ev, message, userns_special_profiles)
n = notify2.Notification(
_('AppArmor security notice'),
message,
'gtk-dialog-warning'
)
if can_allow_rule(ev, userns_special_profiles):
n.add_action('clicked', 'Allow', cb_add_to_profile, (event, rl, userns_special_profiles))
n.add_action('more_clicked', 'Show More', cb_more_info, (event, rl, userns_special_profiles))
n.show()
# When the notification has been sent, raise privileges back to root if the
# original effective user id was zero (to be able to read AppArmor logs)
raise_privileges()
# When the notification has been sent, raise privileges back to root if the
# original effective user id was zero (to be able to read AppArmor logs)
raise_privileges()
elif args.since_last:
show_entries_since_last_login(logfile, filters)
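The polling paths above rely on the fact that file access is checked at open() time, so the log can be opened with root privileges and then read after dropping them; a minimal sketch of that pattern, assuming the process starts with an effective UID of 0 and a hypothetical desktop-user UID of 1000:
import os
logfile = open('/var/log/audit/audit.log')   # opened while euid == 0
os.seteuid(1000)                             # drop to the desktop user for notifications
line = logfile.readline()                    # still allowed: permissions were checked at open()
os.seteuid(0)                                # raise back to root before the next privileged read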


@@ -3,7 +3,7 @@ Type=Application
Name=AppArmor Notify
Comment=Receive on screen notifications of AppArmor denials
TryExec=/usr/bin/aa-notify
Exec=/usr/bin/aa-notify -p -s 1 -w 60
Exec=/usr/bin/aa-notify --poll --merge-notifications --since-days 1 --wait 5
StartupNotify=false
NoDisplay=true
X-Ubuntu-Gettext-Domain=aa-notify


@@ -1,6 +1,6 @@
# ----------------------------------------------------------------------
# Copyright (C) 2013 Kshitij Gupta <kgupta8592@gmail.com>
# Copyright (C) 2014-2021 Christian Boltz <apparmor@cboltz.de>
# Copyright (C) 2014-2024 Christian Boltz <apparmor@cboltz.de>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of version 2 of the GNU General Public
@@ -69,6 +69,7 @@ use_abstractions = True
include = dict()
active_profiles = ProfileList()
original_profiles = ProfileList()
extra_profiles = ProfileList()
# To store the globs entered by users so they can be provided again
@@ -78,9 +79,6 @@ user_globs = {}
# let ask_addhat() remember answers for already-seen change_hat events
transitions = {}
aa = {} # Profiles originally in sd, replace by aa
original_aa = hasher()
changed = dict()
created = []
helpers = dict() # Preserve this between passes # was our
@@ -92,12 +90,11 @@ def reset_aa():
Used by aa-mergeprof and some tests.
"""
global aa, include, active_profiles, original_aa
global include, active_profiles, original_profiles
aa = {}
include = dict()
active_profiles = ProfileList()
original_aa = hasher()
original_profiles = ProfileList()
def on_exit():
@@ -417,6 +414,7 @@ def create_new_profile(localfile, is_stub=False):
full_hat = combine_profname((localfile, hat))
if not local_profile.get(full_hat, False):
local_profile[full_hat] = ProfileStorage('NEW', hat, 'create_new_profile() required_hats')
local_profile[full_hat]['parent'] = localfile
local_profile[full_hat]['is_hat'] = True
local_profile[full_hat]['flags'] = 'complain'
@@ -428,30 +426,10 @@ def create_new_profile(localfile, is_stub=False):
return local_profile
def delete_profile(local_prof):
"""Deletes the specified file from the disk and remove it from our list"""
profile_file = get_profile_filename_from_profile_name(local_prof, True)
if os.path.isfile(profile_file):
os.remove(profile_file)
if aa.get(local_prof, False):
aa.pop(local_prof)
# prof_unload(local_prof)
def confirm_and_abort():
ans = aaui.UI_YesNo(_('Are you sure you want to abandon this set of profile changes and exit?'), 'n')
if ans == 'y':
aaui.UI_Info(_('Abandoning all changes.'))
for prof in created:
delete_profile(prof)
sys.exit(0)
def get_profile(prof_name):
"""search for inactive/extra profile, and ask if it should be used"""
if not extra_profiles.profiles.get(prof_name, False):
if not extra_profiles.profile_exists(prof_name):
return None # no inactive profile found
# TODO: search based on the attachment, not (only?) based on the profile name
@@ -524,14 +502,14 @@ def autodep(bin_name, pname=''):
file = get_profile_filename_from_profile_name(pname, True)
profile_data[pname]['filename'] = file # change filename from extra_profile_dir to /etc/apparmor.d/
attach_profile_data(aa, profile_data)
attach_profile_data(original_aa, profile_data)
for p in profile_data.keys():
original_profiles.add_profile(file, p, profile_data[p]['attachment'], deepcopy(profile_data[p]))
attachment = profile_data[pname]['attachment']
if not attachment and pname.startswith('/'):
attachment = pname # use name as name and attachment
active_profiles.add_profile(file, pname, attachment)
active_profiles.add_profile(file, pname, attachment, profile_data[pname])
if os.path.isfile(profile_dir + '/abi/4.0'):
active_profiles.add_abi(file, AbiRule('abi/4.0', False, True))
@@ -581,6 +559,8 @@ def change_profile_flags(prof_filename, program, flag, set_flag):
for lineno, line in enumerate(f_in):
if RE_PROFILE_START.search(line):
depth += 1
# TODO: hand over profile and hat (= parent profile)
# (and find out why it breaks test-aa.py with several "a child profile inside another child profile is not allowed" errors when doing so)
(profile, hat, prof_storage) = ProfileStorage.parse(line, prof_filename, lineno, '', '')
old_flags = prof_storage['flags']
newflags = ', '.join(add_or_remove_flag(old_flags, flag, set_flag))
@@ -601,6 +581,7 @@ def change_profile_flags(prof_filename, program, flag, set_flag):
line = '%s\n' % line[0]
elif RE_PROFILE_HAT_DEF.search(line):
depth += 1
# TODO: hand over profile and hat (= parent profile)
(profile, hat, prof_storage) = ProfileStorage.parse(line, prof_filename, lineno, '', '')
old_flags = prof_storage['flags']
newflags = ', '.join(add_or_remove_flag(old_flags, flag, set_flag))
@@ -610,6 +591,7 @@ def change_profile_flags(prof_filename, program, flag, set_flag):
line = '%s\n' % line[0]
elif RE_PROFILE_END.search(line):
depth -= 1
# TODO: restore 'profile' and 'hat' to previous values (not really needed/used for aa-complain etc., but can't hurt)
f_out.write(line)
os.rename(temp_file.name, prof_filename)
@@ -621,23 +603,6 @@ def change_profile_flags(prof_filename, program, flag, set_flag):
raise AppArmorException("%(file)s doesn't contain a valid profile for %(profile)s (syntax error?)" % {'file': prof_filename, 'profile': program})
def profile_exists(program):
"""Returns True if profile exists, False otherwise"""
# Check cache of profiles
if active_profiles.filename_from_attachment(program):
return True
# Check the disk for profile
prof_path = get_profile_filename_from_attachment(program, True)
# print(prof_path)
if os.path.isfile(prof_path):
# Add to cache of profile
raise AppArmorBug('Reached strange condition in profile_exists(), please open a bugreport!')
# active_profiles[program] = prof_path
# return True
return False
def build_x_functions(default, options, exec_toggle):
ret_list = []
fallback_toggle = False
@@ -692,7 +657,7 @@ def ask_addhat(hashlog):
for full_hat in hashlog[aamode][profile]['change_hat']:
hat = full_hat.split('//')[-1]
if aa[profile].get(hat, False):
if active_profiles.profile_exists(full_hat):
continue # no need to ask if the hat already exists
default_hat = None
@@ -729,18 +694,26 @@ def ask_addhat(hashlog):
transitions[context] = ans
filename = active_profiles.filename_from_profile_name(profile) # filename of parent profile, will be used for new hats
if ans == 'CMD_ADDHAT':
aa[profile][hat] = ProfileStorage(profile, hat, 'ask_addhat addhat')
aa[profile][hat]['flags'] = aa[profile][profile]['flags']
hashlog[aamode][full_hat]['final_name'] = '%s//%s' % (profile, hat)
hat_obj = ProfileStorage(profile, hat, 'ask_addhat addhat')
hat_obj['parent'] = profile
hat_obj['flags'] = active_profiles[profile]['flags']
new_full_hat = combine_profname([profile, hat])
active_profiles.add_profile(filename, new_full_hat, hat, hat_obj)
hashlog[aamode][full_hat]['final_name'] = new_full_hat
changed[profile] = True
elif ans == 'CMD_USEDEFAULT':
hat = default_hat
hashlog[aamode][full_hat]['final_name'] = '%s//%s' % (profile, default_hat)
if not aa[profile].get(hat, False):
new_full_hat = combine_profname([profile, hat])
hashlog[aamode][full_hat]['final_name'] = new_full_hat
if not active_profiles.profile_exists(full_hat):
# create default hat if it doesn't exist yet
aa[profile][hat] = ProfileStorage(profile, hat, 'ask_addhat default hat')
aa[profile][hat]['flags'] = aa[profile][profile]['flags']
hat_obj = ProfileStorage(profile, hat, 'ask_addhat default hat')
hat_obj['parent'] = profile
hat_obj['flags'] = active_profiles[profile]['flags']
active_profiles.add_profile(filename, new_full_hat, hat, hat_obj)
changed[profile] = True
elif ans == 'CMD_DENY':
# As unknown hat is denied no entry for it should be made
@@ -748,7 +721,7 @@ def ask_addhat(hashlog):
continue
def ask_exec(hashlog):
def ask_exec(hashlog, default_ans=''):
"""ask the user about exec events (requests to execute another program) and which exec mode to use"""
for aamode in hashlog:
@@ -763,14 +736,14 @@ def ask_exec(hashlog):
raise AppArmorBug(
'exec permissions requested for directory %s (profile %s). This should not happen - please open a bugreport!' % (exec_target, full_profile))
if not aa.get(profile):
if not active_profiles.profile_exists(profile):
continue # ignore log entries for non-existing profiles
if not aa[profile].get(hat):
if not active_profiles.profile_exists(full_profile):
continue # ignore log entries for non-existing hats
exec_event = FileRule(exec_target, None, FileRule.ANY_EXEC, FileRule.ALL, owner=False, log_event=True)
if is_known_rule(aa[profile][hat], 'file', exec_event):
if is_known_rule(active_profiles[full_profile], 'file', exec_event):
continue
# nx is not used in profiles but in log files.
@@ -836,7 +809,10 @@ def ask_exec(hashlog):
# ask user about the exec mode to use
ans = ''
while ans not in ('CMD_ix', 'CMD_px', 'CMD_cx', 'CMD_nx', 'CMD_pix', 'CMD_cix', 'CMD_nix', 'CMD_ux', 'CMD_DENY'): # add '(I)gnore'? (hotkey conflict with '(i)x'!)
ans = q.promptUser()[0]
if default_ans:
ans = default_ans
else:
ans = q.promptUser()[0]
if ans.startswith('CMD_EXEC_IX_'):
exec_toggle = not exec_toggle
@@ -871,20 +847,22 @@ def ask_exec(hashlog):
elif ans in ('CMD_px', 'CMD_cx', 'CMD_pix', 'CMD_cix'):
exec_mode = ans.replace('CMD_', '')
px_msg = _(
"Should AppArmor sanitise the environment when\n"
"switching profiles?\n"
"Should AppArmor enable secure-execution mode\n"
"when switching profiles?\n"
"\n"
"Sanitising environment is more secure,\n"
"but some applications depend on the presence\n"
"of LD_PRELOAD or LD_LIBRARY_PATH.")
"Doing so is more secure, but some applications\n"
"depend on the presence of LD_PRELOAD or\n"
"LD_LIBRARY_PATH, which would be sanitized by\n"
"enabling secure-execution mode.")
if parent_uses_ld_xxx:
px_msg = _(
"Should AppArmor sanitise the environment when\n"
"switching profiles?\n"
"Should AppArmor enable secure-execution mode\n"
"when switching profiles?\n"
"\n"
"Sanitising environment is more secure,\n"
"Doing so is more secure,\n"
"but this application appears to be using LD_PRELOAD\n"
"or LD_LIBRARY_PATH and sanitising the environment\n"
"or LD_LIBRARY_PATH, and sanitising those environment\n"
"variables by enabling secure-execution mode\n"
"could cause functionality problems.")
ynans = aaui.UI_YesNo(px_msg, 'y')
@@ -918,7 +896,7 @@ def ask_exec(hashlog):
file_perm = 'mr'
else:
if ans == 'CMD_DENY':
aa[profile][hat]['file'].add(FileRule(exec_target, None, 'x', FileRule.ALL, owner=False, log_event=True, deny=True))
active_profiles[full_profile]['file'].add(FileRule(exec_target, None, 'x', FileRule.ALL, owner=False, log_event=True, deny=True))
changed[profile] = True
if target_profile and hashlog[aamode].get(target_profile):
hashlog[aamode][target_profile]['final_name'] = ''
@@ -931,7 +909,7 @@ def ask_exec(hashlog):
else:
rule_to_name = FileRule.ALL
aa[profile][hat]['file'].add(FileRule(exec_target, file_perm, exec_mode, rule_to_name, owner=False, log_event=True))
active_profiles[full_profile]['file'].add(FileRule(exec_target, file_perm, exec_mode, rule_to_name, owner=False, log_event=True))
changed[profile] = True
@@ -942,16 +920,16 @@ def ask_exec(hashlog):
exec_target_rule = FileRule(exec_target, 'r', None, FileRule.ALL, owner=False)
interpreter_rule = FileRule(interpreter_path, None, 'ix', FileRule.ALL, owner=False)
if not is_known_rule(aa[profile][hat], 'file', exec_target_rule):
aa[profile][hat]['file'].add(exec_target_rule)
if not is_known_rule(aa[profile][hat], 'file', interpreter_rule):
aa[profile][hat]['file'].add(interpreter_rule)
if not is_known_rule(active_profiles[full_profile], 'file', exec_target_rule):
active_profiles[full_profile]['file'].add(exec_target_rule)
if not is_known_rule(active_profiles[full_profile], 'file', interpreter_rule):
active_profiles[full_profile]['file'].add(interpreter_rule)
if abstraction:
abstraction_rule = IncludeRule(abstraction, False, True)
if not aa[profile][hat]['inc_ie'].is_covered(abstraction_rule):
aa[profile][hat]['inc_ie'].add(abstraction_rule)
if not active_profiles[full_profile]['inc_ie'].is_covered(abstraction_rule):
active_profiles[full_profile]['inc_ie'].add(abstraction_rule)
# Update tracking info based on kind of change
@@ -991,19 +969,21 @@ def ask_exec(hashlog):
if to_name:
exec_target = to_name
if not aa[profile].get(exec_target, False):
full_exec_target = combine_profname([profile, exec_target])
if not active_profiles.profile_exists(full_exec_target):
ynans = 'y'
if 'i' in exec_mode:
ynans = aaui.UI_YesNo(_('A profile for %s does not exist.\nDo you want to create one?') % exec_target, 'n')
if ynans == 'y':
if not aa[profile].get(exec_target, False):
stub_profile = merged_to_split(create_new_profile(exec_target, True))
aa[profile][exec_target] = stub_profile[exec_target][exec_target]
if not active_profiles.profile_exists(full_exec_target):
stub_profile = create_new_profile(exec_target, True)
for p in stub_profile:
active_profiles.add_profile(prof_filename, p, stub_profile[p]['attachment'], stub_profile[p])
if profile != exec_target:
aa[profile][exec_target]['flags'] = aa[profile][profile]['flags']
active_profiles[full_exec_target]['flags'] = active_profiles[profile]['flags']
aa[profile][exec_target]['flags'] = 'complain'
active_profiles[full_exec_target]['flags'] = 'complain'
if target_profile and hashlog[aamode].get(target_profile):
hashlog[aamode][target_profile]['final_name'] = '%s//%s' % (profile, exec_target)
@@ -1056,8 +1036,8 @@ def ask_the_questions(log_dict):
else:
sev_db.set_variables({})
if aa.get(profile): # only continue/ask if the parent profile exists
if not aa[profile].get(hat, {}).get('file'):
if active_profiles.profile_exists(profile): # only continue/ask if the parent profile exists # XXX check direct parent or top-level? Also, get rid of using "profile" here!
if not active_profiles.profile_exists(full_profile):
if aamode != 'merge':
# Ignore log events for a non-existing profile or child profile. Such events can occur
# after deleting a profile or hat manually, or when processing a foreign log.
@@ -1090,18 +1070,21 @@ def ask_the_questions(log_dict):
continue # don't ask about individual rules if the user doesn't want the additional subprofile/hat
if log_dict[aamode][full_profile]['is_hat']:
aa[profile][hat] = ProfileStorage(profile, hat, 'mergeprof ask_the_questions() - missing hat')
aa[profile][hat]['is_hat'] = True
prof_obj = ProfileStorage(profile, hat, 'mergeprof ask_the_questions() - missing hat')
prof_obj['is_hat'] = True
else:
aa[profile][hat] = ProfileStorage(profile, hat, 'mergeprof ask_the_questions() - missing subprofile')
aa[profile][hat]['is_hat'] = False
prof_obj = ProfileStorage(profile, hat, 'mergeprof ask_the_questions() - missing subprofile')
prof_obj['is_hat'] = False
prof_obj['parent'] = profile
active_profiles.add_profile(prof_filename, full_profile, hat, prof_obj)
# check for and ask about conflicting exec modes
ask_conflict_mode(aa[profile][hat], log_dict[aamode][full_profile])
ask_conflict_mode(active_profiles[full_profile], log_dict[aamode][full_profile])
prof_changed, end_profiling = ask_rule_questions(
log_dict[aamode][full_profile], combine_name(profile, hat),
aa[profile][hat], ruletypes)
log_dict[aamode][full_profile], full_profile,
active_profiles[full_profile], ruletypes)
if prof_changed:
changed[profile] = True
if end_profiling:
@@ -1114,7 +1097,7 @@ def ask_rule_questions(prof_events, profile_name, the_profile, r_types):
parameter typical value
prof_events log_dict[aamode][full_profile]
profile_name profile name (possible profile//hat)
the_profile aa[profile][hat] -- will be modified
the_profile active_profiles[full_profile] -- will be modified
r_types ruletypes
returns:
@@ -1481,14 +1464,11 @@ def set_logfile(filename):
def do_logprof_pass(logmark='', out_dir=None):
# set up variables for this pass
global active_profiles
global sev_db
# aa = hasher()
# changed = dict()
aaui.UI_Info(_('Reading log entries from %s.') % logfile)
if not sev_db:
sev_db = apparmor.severity.Severity(CONFDIR + '/severity.db', _('unknown'))
load_sev_db()
# print(pid)
# print(active_profiles)
@@ -1508,7 +1488,7 @@ def do_logprof_pass(logmark='', out_dir=None):
def save_profiles(is_mergeprof=False, out_dir=None):
# Ensure the changed profiles are actual active profiles
for prof_name in changed.keys():
if not aa.get(prof_name, False):
if not active_profiles.profile_exists(prof_name):
print("*** save_profiles(): removing %s" % prof_name)
print('*** This should not happen. Please open a bugreport!')
changed.pop(prof_name)
@@ -1545,19 +1525,19 @@ def save_profiles(is_mergeprof=False, out_dir=None):
elif ans == 'CMD_VIEW_CHANGES':
oldprofile = None
if aa[profile_name][profile_name].get('filename', False):
oldprofile = aa[profile_name][profile_name]['filename']
if active_profiles[profile_name].get('filename', False):
oldprofile = active_profiles[profile_name]['filename']
else:
oldprofile = get_profile_filename_from_attachment(profile_name, True)
serialize_options = {'METADATA': True}
newprofile = serialize_profile(split_to_merged(aa), profile_name, serialize_options)
newprofile = serialize_profile(active_profiles, profile_name, serialize_options)
aaui.UI_Changes(oldprofile, newprofile, comments=True)
elif ans == 'CMD_VIEW_CHANGES_CLEAN':
oldprofile = serialize_profile(split_to_merged(original_aa), profile_name, {})
newprofile = serialize_profile(split_to_merged(aa), profile_name, {})
oldprofile = serialize_profile(original_profiles, profile_name, {})
newprofile = serialize_profile(active_profiles, profile_name, {})
aaui.UI_Changes(oldprofile, newprofile)
@@ -1590,9 +1570,9 @@ def collapse_log(hashlog, ignore_null_profiles=True):
profile, hat = split_name(final_name) # XXX limited to two levels to avoid an Exception on nested child profiles or nested null-*
# TODO: support nested child profiles
# used to avoid to accidentally initialize aa[profile][hat] or calling is_known_rule() on events for a non-existing profile
# used to avoid calling is_known_rule() on events for a non-existing profile
hat_exists = False
if aa.get(profile) and aa[profile].get(hat):
if active_profiles.profile_exists(profile) and active_profiles.profile_exists(final_name): # we need to check for the target profile here
hat_exists = True
if not log_dict[aamode].get(final_name):
@@ -1601,7 +1581,7 @@ def collapse_log(hashlog, ignore_null_profiles=True):
for ev_type, ev_class in ReadLog.ruletypes.items():
for rule in ev_class.from_hashlog(hashlog[aamode][full_profile][ev_type]):
if not hat_exists or not is_known_rule(aa[profile][hat], ev_type, rule):
if not hat_exists or not is_known_rule(active_profiles[full_profile], ev_type, rule):
log_dict[aamode][final_name][ev_type].add(rule)
return log_dict
@@ -1621,9 +1601,8 @@ def read_profiles(ui_msg=False, skip_profiles=()):
#
# The skip_profiles parameter should only be specified by tests.
global aa, original_aa
aa = {}
original_aa = hasher()
global original_profiles
original_profiles = ProfileList()
if ui_msg:
aaui.UI_Info(_('Updating AppArmor profiles in %s.') % profile_dir)
@@ -1677,7 +1656,7 @@ def read_inactive_profiles(skip_profiles=()):
read_profile(full_file, False)
def read_profile(file, active_profile, read_error_fatal=False):
def read_profile(file, is_active_profile, read_error_fatal=False):
data = None
try:
with open_file_read(file) as f_in:
@@ -1695,30 +1674,17 @@ def read_profile(file, active_profile, read_error_fatal=False):
if not profile_data:
return
if active_profile:
attach_profile_data(aa, profile_data)
attach_profile_data(original_aa, profile_data)
for profile in profile_data:
attachment = profile_data[profile]['attachment']
filename = profile_data[profile]['filename']
for profile in profile_data:
if '//' in profile:
continue # TODO: handle hats/child profiles independent of main profiles
attachment = profile_data[profile]['attachment']
filename = profile_data[profile]['filename']
if not attachment and profile.startswith('/'):
attachment = profile # use profile as name and attachment
active_profiles.add_profile(filename, profile, attachment)
else:
for profile in profile_data:
attachment = profile_data[profile]['attachment']
filename = profile_data[profile]['filename']
if not attachment and profile.startswith('/'):
attachment = profile # use profile as name and attachment
if not attachment and profile.startswith('/'):
attachment = profile # use profile as name and attachment
if is_active_profile:
active_profiles.add_profile(filename, profile, attachment, profile_data[profile])
original_profiles.add_profile(filename, profile, attachment, deepcopy(profile_data[profile]))
else:
extra_profiles.add_profile(filename, profile, attachment, profile_data[profile])
@@ -1896,6 +1862,7 @@ def parse_profile_data(data, file, do_include, in_preamble):
profname = combine_profname((parsed_prof, hat))
if not profile_data.get(profname, False):
profile_data[profname] = ProfileStorage(parsed_prof, hat, 'parse_profile_data() required_hats')
profile_data[profname]['parent'] = parsed_prof
profile_data[profname]['is_hat'] = True
# End of file reached but we're stuck in a profile
@@ -1965,23 +1932,6 @@ def merged_to_split(profile_data):
return compat
def split_to_merged(profile_data):
"""(temporary) helper function to convert a traditional compat['foo']['bar'] to a profile['foo//bar'] list"""
merged = {}
for profile in profile_data:
for hat in profile_data[profile]:
if profile == hat:
merged_name = profile
else:
merged_name = combine_profname((profile, hat))
merged[merged_name] = profile_data[profile][hat]
return merged
def write_piece(profile_data, depth, name, nhat):
pre = ' ' * depth
data = []
@@ -2030,7 +1980,8 @@ def write_piece(profile_data, depth, name, nhat):
def serialize_profile(profile_data, name, options):
string = ''
''' combine the preamble and profiles in a file to a string (to be written to the profile file) '''
data = []
if not isinstance(options, dict):
@@ -2039,12 +1990,11 @@ def serialize_profile(profile_data, name, options):
include_metadata = options.get('METADATA', False)
if include_metadata:
string = '# Last Modified: %s\n' % time.asctime()
data.extend(['# Last Modified: %s' % time.asctime()])
# if profile_data[name].get('initial_comment', False):
# comment = profile_data[name]['initial_comment']
# comment.replace('\\n', '\n')
# string += comment + '\n'
# data.append(comment)
if options.get('is_attachment'):
prof_filename = get_profile_filename_from_attachment(name, True)
@@ -2055,23 +2005,29 @@ def serialize_profile(profile_data, name, options):
# Here should be all the profiles from the files added write after global/common stuff
for prof in sorted(active_profiles.profiles_in_file(prof_filename)):
if active_profiles.profiles[prof]['parent']:
continue # child profile or hat, already part of its parent profile
# aa-logprof asks to save each file separately. Therefore only update the given profile, and keep the original version of other profiles in the file
if prof != name:
if original_aa[prof][prof].get('initial_comment', False):
comment = original_aa[prof][prof]['initial_comment']
comment.replace('\\n', '\n')
data.append(comment + '\n')
data.extend(write_piece(split_to_merged(original_aa), 0, prof, prof))
if original_profiles.profile_exists(prof) and original_profiles[prof].get('initial_comment'):
comment = original_profiles[prof]['initial_comment']
data.extend([comment, ''])
data.extend(write_piece(original_profiles.get_profile_and_childs(prof), 0, prof, prof))
else:
if profile_data[name].get('initial_comment', False):
comment = profile_data[name]['initial_comment']
comment.replace('\\n', '\n')
data.append(comment + '\n')
data.extend([comment, ''])
data.extend(write_piece(profile_data, 0, name, name))
# write_piece() expects a dict, not a ProfileList - TODO: change write_piece()?
if type(profile_data) is dict:
data.extend(write_piece(profile_data, 0, name, name))
else:
data.extend(write_piece(profile_data.get_profile_and_childs(name), 0, name, name))
string += '\n'.join(data)
return string + '\n'
return '\n'.join(data) + '\n'
def write_profile_ui_feedback(profile, is_attachment=False, out_dir=None):
@@ -2080,15 +2036,15 @@ def write_profile_ui_feedback(profile, is_attachment=False, out_dir=None):
def write_profile(profile, is_attachment=False, out_dir=None):
if aa[profile][profile].get('filename', False):
prof_filename = aa[profile][profile]['filename']
if active_profiles[profile]['filename']:
prof_filename = active_profiles[profile]['filename']
elif is_attachment:
prof_filename = get_profile_filename_from_attachment(profile, True)
else:
prof_filename = get_profile_filename_from_profile_name(profile, True)
serialize_options = {'METADATA': True, 'is_attachment': is_attachment}
profile_string = serialize_profile(split_to_merged(aa), profile, serialize_options)
profile_string = serialize_profile(active_profiles, profile, serialize_options)
try:
with NamedTemporaryFile('w', suffix='~', delete=False, dir=out_dir or profile_dir) as newprof:
@@ -2113,7 +2069,9 @@ def write_profile(profile, is_attachment=False, out_dir=None):
else:
debug_logger.info("Unchanged profile written: %s (not listed in 'changed' list)", profile)
original_aa[profile] = deepcopy(aa[profile])
for full_profile in active_profiles.get_profile_and_childs(profile):
if profile == full_profile or active_profiles[full_profile]['parent']: # copy main profile and childs, but skip external hats
original_profiles.replace_profile(full_profile, deepcopy(active_profiles[full_profile]))
def include_list_recursive(profile, in_preamble=False):
@@ -2143,10 +2101,8 @@ def include_list_recursive(profile, in_preamble=False):
def is_known_rule(profile, rule_type, rule_obj):
# XXX get rid of get() checks after we have a proper function to initialize a profile
if profile.get(rule_type, False):
if profile[rule_type].is_covered(rule_obj, False):
return True
if profile[rule_type].is_covered(rule_obj, False):
return True
includelist = include_list_recursive(profile)
@@ -2415,3 +2371,10 @@ def init_aa(confdir=None, profiledir=None):
parser = conf.find_first_file(cfg['settings'].get('parser')) or '/sbin/apparmor_parser'
if not os.path.isfile(parser) or not os.access(parser, os.EX_OK):
raise AppArmorException("Can't find apparmor_parser at %s" % (parser))
def load_sev_db():
global sev_db
if not sev_db:
sev_db = apparmor.severity.Severity(CONFDIR + '/severity.db', _('unknown'))


@@ -1,6 +1,6 @@
# ----------------------------------------------------------------------
# Copyright (C) 2013 Kshitij Gupta <kgupta8592@gmail.com>
# Copyright (C) 2014-2015 Christian Boltz <apparmor@cboltz.de>
# Copyright (C) 2014-2024 Christian Boltz <apparmor@cboltz.de>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of version 2 of the GNU General Public
@@ -18,7 +18,6 @@ import apparmor.aa as apparmor
class Prof:
def __init__(self, filename):
apparmor.init_aa()
self.aa = apparmor.aa
self.active_profiles = apparmor.active_profiles
self.include = apparmor.include
self.filename = filename
@@ -36,7 +35,7 @@ class CleanProf:
deleted += self.other.active_profiles.delete_preamble_duplicates(self.other.filename)
for profile in self.profile.aa.keys():
for profile in self.profile.active_profiles.get_all_profiles():
deleted += self.remove_duplicate_rules(profile)
return deleted
@@ -50,22 +49,22 @@ class CleanProf:
deleted += self.profile.active_profiles.delete_preamble_duplicates(self.profile.filename)
# Process every hat in the profile individually
for hat in sorted(self.profile.aa[program].keys()):
includes = self.profile.aa[program][hat]['inc_ie'].get_all_full_paths(apparmor.profile_dir)
for full_profile in sorted(self.profile.active_profiles.get_profile_and_childs(program)):
includes = self.profile.active_profiles[full_profile]['inc_ie'].get_all_full_paths(apparmor.profile_dir)
# Clean up superfluous rules from includes in the other profile
for inc in includes:
if not self.profile.include.get(inc, {}).get(inc, False):
apparmor.load_include(inc)
if self.other.aa[program].get(hat): # carefully avoid to accidentally initialize self.other.aa[program][hat]
deleted += apparmor.delete_all_duplicates(self.other.aa[program][hat], inc, apparmor.ruletypes)
if self.other.active_profiles.profile_exists(full_profile):
deleted += apparmor.delete_all_duplicates(self.other.active_profiles[full_profile], inc, apparmor.ruletypes)
# Clean duplicate rules in other profile
for ruletype in apparmor.ruletypes:
if not self.same_file:
if self.other.aa[program].get(hat): # carefully avoid to accidentally initialize self.other.aa[program][hat]
deleted += self.other.aa[program][hat][ruletype].delete_duplicates(self.profile.aa[program][hat][ruletype])
if self.other.active_profiles.profile_exists(full_profile):
deleted += self.other.active_profiles[full_profile][ruletype].delete_duplicates(self.profile.active_profiles[full_profile][ruletype])
else:
deleted += self.other.aa[program][hat][ruletype].delete_duplicates(None)
deleted += self.other.active_profiles[full_profile][ruletype].delete_duplicates(None)
return deleted


@@ -40,8 +40,8 @@ class GUI:
self.label_frame = ttk.Frame(self.master, padding=(20, 10))
self.label_frame.pack()
self.button_frame = ttk.Frame(self.master, padding=(0, 10))
self.button_frame.pack()
self.button_frame = ttk.Frame(self.master, padding=(10, 10))
self.button_frame.pack(fill='x', expand=True)
def show(self):
self.master.mainloop()
@@ -86,6 +86,75 @@ class ShowMoreGUI(GUI):
self.do_nothing_button.pack(side=tk.LEFT, fill=tk.BOTH, expand=True, padx=5, pady=5)
class ShowMoreGUIAggregated(GUI):
def __init__(self, summary, detailed_text, clean_rules):
self.summary = summary
self.detailed_text = detailed_text
self.clean_rules = clean_rules
self.states = {
'summary': {
'msg': self.summary,
'btn_left': _('Show more details'),
'btn_right': _('Show rules only')
},
'detailed': {
'msg': self.detailed_text,
'btn_left': _('Show summary'),
'btn_right': _('Show rules only')
},
'rules_only': {
'msg': self.clean_rules,
'btn_left': _('Show more details'),
'btn_right': _('Show summary')
}
}
self.state = 'rules_only'
super().__init__()
self.master.title(_('AppArmor - More info'))
self.text_display = tk.Text(self.label_frame, height=40, width=100, wrap='word')
if ttkthemes:
self.text_display.configure(background=self.bg_color, foreground=self.fg_color)
self.text_display.insert('1.0', self.states[self.state]['msg'])
self.text_display['state'] = 'disabled'
self.scrollbar = ttk.Scrollbar(self.label_frame, command=self.text_display.yview)
self.text_display['yscrollcommand'] = self.scrollbar.set
self.scrollbar.pack(side='right', fill='y')
self.text_display.pack(side='left', fill='both', expand=True)
self.btn_left = ttk.Button(self.button_frame, text=self.states[self.state]['btn_left'], width=1, command=lambda: self.change_view('btn_left'))
self.btn_left.grid(row=0, column=0, padx=5, pady=5, sticky="ew")
self.btn_right = ttk.Button(self.button_frame, text=self.states[self.state]['btn_right'], width=1, command=lambda: self.change_view('btn_right'))
self.btn_right.grid(row=0, column=1, padx=5, pady=5, sticky="ew")
self.btn_allow_all = ttk.Button(self.button_frame, text="Allow All", width=1, command=lambda: self.set_result('allow_all'))
self.btn_allow_all.grid(row=0, column=2, padx=5, pady=5, sticky="ew")
for i in range(3):
self.button_frame.grid_columnconfigure(i, weight=1)
def change_view(self, action):
if action == 'btn_left':
self.state = 'detailed' if self.state != 'detailed' else 'summary'
elif action == 'btn_right':
self.state = 'rules_only' if self.state != 'rules_only' else 'summary'
self.btn_left['text'] = self.states[self.state]['btn_left']
self.btn_right['text'] = self.states[self.state]['btn_right']
self.text_display['state'] = 'normal'
self.text_display.delete('1.0', 'end')
self.text_display.insert('1.0', self.states[self.state]['msg'])
self.text_display['state'] = 'disabled'
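For reference, the view transitions implied by change_view() above:
# current view    btn_left ->    btn_right ->
# summary         detailed       rules_only
# detailed        summary        rules_only
# rules_only      detailed       summary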
class UsernsGUI(GUI):
def __init__(self, name, path):
self.name = name


@@ -1,5 +1,5 @@
# ----------------------------------------------------------------------
# Copyright (C) 2018-2020 Christian Boltz <apparmor@cboltz.de>
# Copyright (C) 2018-2024 Christian Boltz <apparmor@cboltz.de>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of version 2 of the GNU General Public
@@ -52,6 +52,12 @@ class ProfileList:
name = type(self).__name__
return '\n<%s>\n%s\n</%s>\n' % (name, '\n'.join(self.files), name)
def __getitem__(self, key):
if key in self.profiles:
return self.profiles[key]
else:
raise AppArmorBug('attempt to read unknown profile %s' % key)
def init_file(self, filename):
if self.files.get(filename):
return # don't re-initialize / overwrite existing data
@@ -63,7 +69,7 @@ class ProfileList:
for rule in preamble_ruletypes:
self.files[filename][rule] = preamble_ruletypes[rule]['ruleset']()
def add_profile(self, filename, profile_name, attachment, prof_storage=None):
def add_profile(self, filename, profile_name, attachment, prof_storage):
"""Add the given profile and attachment to the list"""
if not filename:
@@ -72,7 +78,7 @@ class ProfileList:
if not profile_name and not attachment:
raise AppArmorBug('Neither profile name or attachment given')
if type(prof_storage) is not ProfileStorage and prof_storage is not None:
if type(prof_storage) is not ProfileStorage:
raise AppArmorBug('Invalid profile type: %s' % type(prof_storage))
if profile_name in self.profile_names:
@@ -101,6 +107,21 @@ class ProfileList:
self.files[filename]['profiles'].append(attachment)
self.profiles[attachment] = prof_storage
def replace_profile(self, profile_name, prof_storage):
"""Replace the given profile in the profile list"""
if profile_name not in self.profiles:
raise AppArmorBug('Attempt to replace non-existing profile %s' % profile_name)
if type(prof_storage) is not ProfileStorage:
raise AppArmorBug('Invalid profile type: %s' % type(prof_storage))
# we might lift this restriction later, but for now, forbid changing the attachment instead of updating self.attachments
if self.profiles[profile_name]['attachment'] != prof_storage['attachment']:
raise AppArmorBug('Attempt to change attachment while replacing profile %s - original: %s, new: %s' % (profile_name, self.profiles[profile_name]['attachment'], prof_storage['attachment']))
self.profiles[profile_name] = prof_storage
def add_rule(self, filename, ruletype, rule):
"""Store the given rule for the given profile filename preamble"""
@@ -168,6 +189,9 @@ class ProfileList:
return deleted
def get_all_profiles(self):
return self.profiles
def get_profile_and_childs(self, profile_name):
found = {}
for prof in self.profiles:
@@ -283,6 +307,9 @@ class ProfileList:
return merged_variables
def profile_exists(self, profile_name):
return profile_name in self.profiles
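Taken together, these accessors give ProfileList a small dict-like API; a minimal usage sketch (the filename, profile name and prof_storage object below are hypothetical):
from copy import deepcopy
plist = ProfileList()
plist.add_profile('/etc/apparmor.d/usr.bin.foo', '/usr/bin/foo', '/usr/bin/foo', prof_storage)
if plist.profile_exists('/usr/bin/foo'):
    prof = plist['/usr/bin/foo']                            # __getitem__, raises AppArmorBug for unknown names
    plist.replace_profile('/usr/bin/foo', deepcopy(prof))   # must keep the same attachment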
def profiles_in_file(self, filename):
"""Return list of profiles in the given file"""
if not self.files.get(filename):


@@ -1,6 +1,6 @@
# ----------------------------------------------------------------------
# Copyright (C) 2013 Kshitij Gupta <kgupta8592@gmail.com>
# Copyright (C) 2014-2021 Christian Boltz <apparmor@cboltz.de>
# Copyright (C) 2014-2024 Christian Boltz <apparmor@cboltz.de>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of version 2 of the GNU General Public
@@ -77,6 +77,7 @@ class ProfileStorage:
data['filename'] = ''
data['logprof_suggest'] = '' # set in abstractions that should be suggested by aa-logprof
data['parent'] = '' # parent profile, or '' for top-level profiles and external hats
data['name'] = ''
data['attachment'] = ''
data['xattrs'] = ''
@@ -221,11 +222,13 @@ class ProfileStorage:
_('%(profile)s profile in %(file)s contains syntax errors in line %(line)s: a child profile inside another child profile is not allowed.')
% {'profile': profile, 'file': file, 'line': lineno + 1})
parent = profile
hat = matches['profile']
prof_or_hat_name = hat
pps_set_hat_external = False
else: # stand-alone profile
parent = ''
profile = matches['profile']
prof_or_hat_name = profile
if len(profile.split('//')) > 2:
@@ -241,6 +244,7 @@ class ProfileStorage:
prof_storage = cls(profile, hat, cls.__name__ + '.parse()')
prof_storage['parent'] = parent
prof_storage['name'] = prof_or_hat_name
prof_storage['filename'] = file
prof_storage['external'] = pps_set_hat_external


@@ -1,6 +1,6 @@
# ----------------------------------------------------------------------
# Copyright (C) 2013 Kshitij Gupta <kgupta8592@gmail.com>
# Copyright (C) 2015-2023 Christian Boltz <apparmor@cboltz.de>
# Copyright (C) 2015-2024 Christian Boltz <apparmor@cboltz.de>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of version 2 of the GNU General Public
@@ -64,7 +64,7 @@ class aa_tools:
prof_filename = apparmor.get_profile_filename_from_attachment(fq_path, True)
else:
which_ = which(p)
if self.name == 'cleanprof' and p in apparmor.aa:
if self.name == 'cleanprof' and apparmor.active_profiles.profile_exists(p):
program = p # not really correct, but works
profile = p
prof_filename = apparmor.get_profile_filename_from_profile_name(profile)
@@ -104,14 +104,14 @@ class aa_tools:
if program is None:
program = profile
if not program or not (os.path.exists(program) or profile in apparmor.aa):
if not program or not (os.path.exists(program) or apparmor.active_profiles.profile_exists(profile)):
if program and not program.startswith('/'):
program = aaui.UI_GetString(_('The given program cannot be found, please try with the fully qualified path name of the program: '), '')
else:
aaui.UI_Info(_("%s does not exist, please double-check the path.") % program)
sys.exit(1)
if program and profile in apparmor.aa:
if program and apparmor.active_profiles.profile_exists(profile):
self.clean_profile(program, profile, prof_filename)
else:
@@ -207,8 +207,8 @@ class aa_tools:
apparmor.write_profile_ui_feedback(profile)
self.reload_profile(prof_filename)
elif ans == 'CMD_VIEW_CHANGES':
# oldprofile = apparmor.serialize_profile(apparmor.split_to_merged(apparmor.original_aa), profile, {})
newprofile = apparmor.serialize_profile(apparmor.split_to_merged(apparmor.aa), profile, {}) # , {'is_attachment': True})
# oldprofile = apparmor.serialize_profile(apparmor.original_profiles, profile, {})
newprofile = apparmor.serialize_profile(apparmor.active_profiles, profile, {}) # , {'is_attachment': True})
aaui.UI_Changes(prof_filename, newprofile, comments=True)
def unload_profile(self, prof_filename):


@@ -29,15 +29,15 @@ def create_userns(template_path, name, bin_path, profile_path, decision):
def add_to_profile(rule, profile_name):
aa.init_aa()
aa.read_profiles()
aa.update_profiles()
rule_type, rule_class = ReadLog('', '', '').get_rule_type(rule)
rule_obj = rule_class.create_instance(rule)
if profile_name not in aa.aa or profile_name not in aa.aa[profile_name]:
if not aa.active_profiles.profile_exists(profile_name):
exit(_('Cannot find {} in profiles').format(profile_name))
aa.aa[profile_name][profile_name][rule_type].add(rule_obj, cleanup=True)
aa.active_profiles[profile_name][rule_type].add(rule_obj, cleanup=True)
# Save changes
aa.write_profile_ui_feedback(profile_name)
@@ -48,31 +48,50 @@ def usage(is_help):
print('This tool is a low level tool - do not use it directly')
print('{} create_userns <template_path> <name> <bin_path> <profile_path> <decision>'.format(sys.argv[0]))
print('{} add_rule <rule> <profile_name>'.format(sys.argv[0]))
print('{} from_file <file>'.format(sys.argv[0]))
if is_help:
exit(0)
else:
exit(1)
def create_from_file(file_path):
with open(file_path) as file:
for line in file:
args = line[:-1].split('\t')
if len(args) > 1:
command = args[0]
else:
command = None # Handle the case where no command is provided
do_command(command, args)
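create_from_file() expects one tab-separated command per line, matching the argument layouts handled below; a hypothetical input file (with <TAB> standing for a literal tab, and all paths and names as placeholders) could look like:
create_userns<TAB>/usr/share/apparmor/userns.template<TAB>firefox<TAB>/usr/bin/firefox<TAB>/etc/apparmor.d/firefox<TAB>allow
add_rule<TAB>capability net_admin,<TAB>firefox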
def do_command(command, args):
if command == 'from_file':
if not len(args) == 2:
usage(False)
create_from_file(args[1])
elif command == 'create_userns':
if not len(args) == 6:
usage(False)
create_userns(args[1], args[2], args[3], args[4], args[5])
elif command == 'add_rule':
if not len(args) == 3:
usage(False)
add_to_profile(args[1], args[2])
elif command == 'help':
usage(True)
else:
usage(False)
def main():
if len(sys.argv) > 1:
command = sys.argv[1]
else:
command = None # Handle the case where no command is provided
if command == 'create_userns':
if not len(sys.argv) == 7:
usage(False)
create_userns(sys.argv[2], sys.argv[3], sys.argv[4], sys.argv[5], sys.argv[6])
elif command == 'add_rule':
if not len(sys.argv) == 4:
usage(False)
add_to_profile(sys.argv[2], sys.argv[3])
elif command == 'help':
usage(True)
else:
usage(False)
do_command(command, sys.argv[1:])
if __name__ == '__main__':


@@ -4,7 +4,7 @@
"http://www.freedesktop.org/standards/PolicyKit/1/policyconfig.dtd">
<policyconfig>
<action id="net.apparmor.pkexec.aa-notify.modify_profile">
<action id="com.ubuntu.pkexec.aa-notify.modify_profile">
<description>AppArmor: modifying security profile</description>
<message>To modify an AppArmor security profile, you need to authenticate.</message>
<defaults>
@@ -15,7 +15,7 @@
<annotate key="org.freedesktop.policykit.exec.path">{LIB_PATH}apparmor/update_profile.py</annotate>
<annotate key="org.freedesktop.policykit.exec.argv1">add_rule</annotate>
</action>
<action id="net.apparmor.pkexec.aa-notify.create_userns">
<action id="com.ubuntu.pkexec.aa-notify.create_userns">
<description>AppArmor: adding userns profile</description>
<message>To allow a program to use unprivileged user namespaces, you need to authenticate.</message>
<defaults>
@@ -26,5 +26,16 @@
<annotate key="org.freedesktop.policykit.exec.path">{LIB_PATH}apparmor/update_profile.py</annotate>
<annotate key="org.freedesktop.policykit.exec.argv1">create_userns</annotate>
</action>
<action id="com.ubuntu.pkexec.aa-notify.from_file">
<description>AppArmor: Modifying profile from file</description>
<message>To modify an AppArmor security profile from file, you need to authenticate.</message>
<defaults>
<allow_any>auth_admin</allow_any>
<allow_inactive>auth_admin</allow_inactive>
<allow_active>auth_admin</allow_active>
</defaults>
<annotate key="org.freedesktop.policykit.exec.path">{LIB_PATH}apparmor/update_profile.py</annotate>
<annotate key="org.freedesktop.policykit.exec.argv1">from_file</annotate>
</action>
</policyconfig>


@@ -1,424 +0,0 @@
;;; apparmor-mode.el --- Major mode for editing AppArmor policy files -*- lexical-binding: t; -*-
;; Copyright (c) 2018 Alex Murray
;; Author: Alex Murray <murray.alex@gmail.com>
;; Maintainer: Alex Murray <murray.alex@gmail.com>
;; URL: https://gitlab.com/apparmor/apparmor
;; Version: 0.8.2
;; Package-Requires: ((emacs "26.1"))
;; This file is not part of GNU Emacs.
;; This program is free software: you can redistribute it and/or modify
;; it under the terms of the GNU General Public License as published by
;; the Free Software Foundation, either version 3 of the License, or
;; (at your option) any later version.
;; This program is distributed in the hope that it will be useful,
;; but WITHOUT ANY WARRANTY; without even the implied warranty of
;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
;; GNU General Public License for more details.
;; You should have received a copy of the GNU General Public License
;; along with this program. If not, see <https://www.gnu.org/licenses/>.
;;; Commentary:
;; The following documentation sources were used:
;; https://gitlab.com/apparmor/apparmor/wikis/QuickProfileLanguage
;; https://gitlab.com/apparmor/apparmor/wikis/ProfileLanguage
;; TODO:
;; - do smarter completion syntactically via regexps
;; - decide if to use entire line regexp for statements or
;; - not (ie just a subset?) if we use regexps above then
;; - should probably keep full regexps here so can reuse
;; - expand highlighting of mount rules (options=...) similar to dbus
;; - add tests via ert etc
;;;; Setup
;; (require 'apparmor-mode)
;;; Code:
;; for flycheck integration
(declare-function flycheck-define-command-checker "ext:flycheck.el"
(symbol docstring &rest properties))
(declare-function flycheck-valid-checker-p "ext:flycheck.el")
(defvar flycheck-checkers)
(defgroup apparmor nil
"Major mode for editing AppArmor policies."
:group 'tools)
(defcustom apparmor-mode-indent-offset 2
"Indentation offset in `apparmor-mode' buffers."
:type 'integer
:group 'apparmor)
(defvar apparmor-mode-keywords '("all" "audit" "capability" "chmod" "delegate"
"dbus" "deny" "file" "flags" "io_uring" "include"
"include if exists" "link" "mount" "mqueue"
"network" "on" "owner" "pivot_root" "profile"
"quiet" "remount" "rlimit" "safe" "subset" "to"
"umount" "unsafe" "userns"))
(defvar apparmor-mode-profile-flags '("enforce" "complain" "debug" "kill"
"chroot_relative" "namespace_relative"
"attach_disconnected" "no_attach_disconnected"
"chroot_attach" "chroot_no_attach"
"unconfined"))
(defvar apparmor-mode-capabilities '("audit_control" "audit_write" "chown"
"dac_override" "dac_read_search" "fowner"
"fsetid" "ipc_lock" "ipc_owner" "kill"
"lease" "linux_immutable" "mac_admin"
"mac_override" "mknod" "net_admin"
"net_bind_service" "net_broadcast"
"net_raw" "setfcap" "setgid" "setpcap"
"setuid" "syslog" "sys_admin" "sys_boot"
"sys_chroot" "sys_module" "sys_nice"
"sys_pacct" "sys_ptrace" "sys_rawio"
"sys_resource" "sys_time"
"sys_tty_config"))
(defvar apparmor-mode-network-permissions '("create" "accept" "bind" "connect"
"listen" "read" "write" "send"
"receive" "getsockname" "getpeername"
"getsockopt" "setsockopt" "fcntl"
"ioctl" "shutdown" "getpeersec"
"sqpoll" "override_creds"))
(defvar apparmor-mode-network-domains '("inet" "ax25" "ipx" "appletalk" "netrom"
"bridge" "atmpvc" "x25" "inet6" "rose"
"netbeui" "security" "key" "packet"
"ash" "econet" "atmsvc" "sna" "irda"
"pppox" "wanpipe" "bluetooth" "unix"))
(defvar apparmor-mode-network-types '("stream" "dgram" "seqpacket" "raw" "rdm"
"packet" "dccp"))
;; TODO: this is not complete since is not fully documented
(defvar apparmor-mode-network-protocols '("tcp" "udp" "icmp"))
(defvar apparmor-mode-dbus-permissions '("r" "w" "rw" "send" "receive"
"acquire" "bind" "read" "write"))
(defvar apparmor-mode-rlimit-types '("fsize" "data" "stack" "core" "rss" "as"
"memlock" "msgqueue" "nofile" "locks"
"sigpending" "nproc" "rtprio" "cpu"
"nice"))
(defvar apparmor-mode-abi-regexp "^\\s-*\\(#?abi\\)\\s-+\\([<\"][[:graph:]]+[\">]\\)")
(defvar apparmor-mode-include-regexp "^\\s-*\\(#?include\\( if exists\\)?\\)\\s-+\\([<\"][[:graph:]]+[\">]\\)")
(defvar apparmor-mode-capability-regexp (concat "^\\s-*\\(capability\\)\\s-+\\("
(regexp-opt apparmor-mode-capabilities)
"\\s-*\\)*"))
(defvar apparmor-mode-variable-name-regexp "@{[[:alpha:]_]+}")
(defvar apparmor-mode-variable-regexp
(concat "^\\s-*\\(" apparmor-mode-variable-name-regexp "\\)\\s-*\\(\\+?=\\)\\s-*\\([[:graph:]]+\\)\\(\\s-+\\([[:graph:]]+\\)\\)?\\s-*\\(#.*\\)?$"))
(defvar apparmor-mode-profile-name-regexp "[[:alnum:]]+")
(defvar apparmor-mode-profile-attachment-regexp "[][[:alnum:]*@/_{},-.?]+")
(defvar apparmor-mode-profile-flags-regexp
(concat "\\(flags\\)=(\\(" (regexp-opt apparmor-mode-profile-flags) "\\s-*\\)*)") )
(defvar apparmor-mode-profile-regexp
(concat "^\\s-*\\(\\(profile\\)\\s-+\\(\\(" apparmor-mode-profile-name-regexp "\\)\\s-+\\)?\\)?\\(\\^?" apparmor-mode-profile-attachment-regexp "\\)\\(\\s-+" apparmor-mode-profile-flags-regexp "\\)?\\s-+{\\s-*$"))
(defvar apparmor-mode-file-rule-permissions-regexp "[CPUaciklmpruwx]+")
(defvar apparmor-mode-file-rule-permissions-prefix-regexp
(concat "^\\s-*\\(\\(audit\\|owner\\|deny\\)\\s-+\\)*\\(file\\s-+\\)?"
"\\(" apparmor-mode-file-rule-permissions-regexp "\\)\\s-+"
"\\(" apparmor-mode-profile-attachment-regexp "\\)\\s-*"
"\\(->\\s-+\\(" apparmor-mode-profile-attachment-regexp "\\)\\)?\\s-*"
","))
(defvar apparmor-mode-file-rule-permissions-suffix-regexp
(concat "^\\s-*\\(\\(audit\\|owner\\|deny\\)\\s-+\\)*\\(file\\s-+\\)?"
"\\(" apparmor-mode-profile-attachment-regexp "\\)\\s-+"
"\\(" apparmor-mode-file-rule-permissions-regexp "\\)\\s-*"
"\\(->\\s-+\\(" apparmor-mode-profile-attachment-regexp "\\)\\)?\\s-*"
","))
(defvar apparmor-mode-network-rule-regexp
(concat
"^\\s-*\\(\\(audit\\|quiet\\|deny\\)\\s-+\\)*network\\s-*"
"\\(" (regexp-opt apparmor-mode-network-permissions 'words) "\\)?\\s-*"
"\\(" (regexp-opt apparmor-mode-network-domains 'words) "\\)?\\s-*"
"\\(" (regexp-opt apparmor-mode-network-types 'words) "\\)?\\s-*"
"\\(" (regexp-opt apparmor-mode-network-protocols 'words) "\\)?\\s-*"
;; TODO: address expression
"\\(delegate to\\s-+\\(" apparmor-mode-profile-attachment-regexp "\\)\\)?\\s-*"
","))
(defvar apparmor-mode-dbus-rule-regexp
(concat
"^\\s-*\\(\\(audit\\|deny\\)\\s-+\\)?dbus\\s-*"
"\\(\\(bus\\)=\\(system\\|session\\)\\)?\\s-*"
"\\(\\(dest\\)=\\([[:alpha:].]+\\)\\)?\\s-*"
"\\(\\(path\\)=\\([[:alpha:]/]+\\)\\)?\\s-*"
"\\(\\(interface\\)=\\([[:alpha:].]+\\)\\)?\\s-*"
"\\(\\(method\\)=\\([[:alpha:]_]+\\)\\)?\\s-*"
;; permissions - either a single permission or multiple permissions in
;; parentheses with commas and whitespace separating
"\\("
(regexp-opt apparmor-mode-dbus-permissions 'words)
"\\|"
"("
(regexp-opt apparmor-mode-dbus-permissions 'words)
"\\("
(regexp-opt apparmor-mode-dbus-permissions 'words) ",\\s-+"
"\\)"
"\\)?\\s-*"
","))
(defvar apparmor-mode-font-lock-defaults
`(((,(regexp-opt apparmor-mode-keywords 'symbols) . font-lock-keyword-face)
(,(regexp-opt apparmor-mode-rlimit-types 'symbols) . font-lock-type-face)
;; comma at end-of-line
(",\\s-*$" . 'font-lock-builtin-face)
;; TODO be more specific about where these are valid
("->" . 'font-lock-builtin-face)
("[=\\+()]" . 'font-lock-builtin-face)
("+=" . 'font-lock-builtin-face)
("<=" . 'font-lock-builtin-face) ; rlimit
;; abi
(,apparmor-mode-abi-regexp
(1 font-lock-preprocessor-face t)
(2 font-lock-string-face t))
;; includes
(,apparmor-mode-include-regexp
(1 font-lock-preprocessor-face t)
(3 font-lock-string-face t))
;; variables
(,apparmor-mode-variable-name-regexp 0 font-lock-variable-name-face t)
;; profiles
(,apparmor-mode-profile-regexp
(4 font-lock-function-name-face t nil)
(5 font-lock-variable-name-face t))
;; capabilities
(,apparmor-mode-capability-regexp 2 font-lock-type-face t)
;; file rules
(,apparmor-mode-file-rule-permissions-prefix-regexp
(3 font-lock-keyword-face nil t) ; class
(4 font-lock-constant-face t) ; permissions
(7 font-lock-function-name-face nil t)) ;profile
(,apparmor-mode-file-rule-permissions-suffix-regexp
(3 font-lock-keyword-face nil t) ; class
(5 font-lock-constant-face t) ; permissions
(7 font-lock-function-name-face nil t)) ;profile
;; network rules
(,apparmor-mode-network-rule-regexp
(3 font-lock-constant-face t) ;permissions
(4 font-lock-function-name-face t) ;domain
(5 font-lock-variable-name-face t) ;type
(6 font-lock-type-face t)) ; protocol
;; dbus rules
(,apparmor-mode-dbus-rule-regexp
(4 font-lock-variable-name-face t) ;bus
(5 font-lock-constant-face t) ;system/session
(7 font-lock-variable-name-face t) ;dest
(10 font-lock-variable-name-face t)
(13 font-lock-variable-name-face t)
(16 font-lock-variable-name-face t)))))
(defvar apparmor-mode-syntax-table
(let ((table (make-syntax-table)))
;; # is comment start
(modify-syntax-entry ?# "<" table)
;; newline finishes comment line
(modify-syntax-entry ?\n ">" table)
;; / and + are used in path names, which we want to treat as entire words
(modify-syntax-entry ?/ "w" table)
(modify-syntax-entry ?+ "w" table)
table))
(defun apparmor-mode-complete-include (prefix &optional local)
"Return list of completions of include for PREFIX which could be LOCAL."
(let* ((file-name (file-name-base prefix))
(parent (file-name-directory prefix))
(directory (concat (if local default-directory "/etc/apparmor.d") "/"
parent)))
;; need to prepend the directory part of the prefix to each completion
(mapcar (lambda (f) (concat parent f))
(file-name-all-completions file-name directory))))
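;; Example (hypothetical contents of /etc/apparmor.d/abstractions/): completing
;; after "include <abstractions/na" calls
;; (apparmor-mode-complete-include "abstractions/na"), which might return
;; candidates like ("abstractions/nameservice").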
;; TODO - make this a lot smarter than just keywords - complete paths from the
;; system when the prefix looks like a path, do sub-completion based on the
;; current line's keyword etc - ie. match against the syntax highlighting
;; regexps and use those to complete further
(defun apparmor-mode-completion-at-point ()
"`completion-at-point' function for `apparmor-mode'."
(let ((prefix (or (thing-at-point 'word t) ""))
(bounds (bounds-of-thing-at-point 'word))
(bol (save-excursion (beginning-of-line) (point)))
(candidates nil))
(setq candidates
(cond ((looking-back "#?include\\s-+\\([<\"]\\)[[:graph:]]*" bol)
(apparmor-mode-complete-include
prefix (string= (match-string 1) "\"")))
(t apparmor-mode-keywords)))
(list (car bounds) ; start
(cdr bounds) ; end
candidates
:company-docsig #'identity)))
(defun apparmor-mode-indent-line ()
"Indent current line in `apparmor-mode'."
(interactive)
(if (bolp)
(apparmor-mode--indent-line)
(save-excursion
(apparmor-mode--indent-line))))
(defun apparmor-mode--indent-line ()
"Indent current line in `apparmor-mode'."
(beginning-of-line)
(cond
((bobp)
;; simple case indent to 0
(indent-line-to 0))
((looking-at "^\\s-*}\\s-*$")
;; block closing, deindent relative to previous line
(indent-line-to (save-excursion
(forward-line -1)
(max 0 (- (current-indentation) apparmor-mode-indent-offset)))))
;; other cases need to look at previous lines
(t
(indent-line-to (save-excursion
(forward-line -1)
;; keep going backwards until we have a line with actual
;; content since blank lines don't count
(while (and (looking-at "^\\s-*$")
(not (bobp)))
(forward-line -1))
(cond
((looking-at "\\(^.*{[^}]*$\\)")
;; previous line opened a block, indent to that line
(+ (current-indentation) apparmor-mode-indent-offset))
(t
;; default case, indent the same as previous line
(current-indentation))))))))
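;; Rough sketch of the indentation behaviour implemented above: a line whose
;; previous non-blank line opens a block with "{" is indented one
;; `apparmor-mode-indent-offset' deeper than that line, a lone "}" is dedented
;; one offset relative to the immediately preceding line, and any other line
;; copies the previous non-blank line's indentation.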
;;;###autoload
(define-derived-mode apparmor-mode prog-mode "AppArmor"
"Major mode for editing AppArmor profiles."
:syntax-table apparmor-mode-syntax-table
(setq font-lock-defaults apparmor-mode-font-lock-defaults)
(setq-local indent-line-function #'apparmor-mode-indent-line)
(add-to-list 'completion-at-point-functions #'apparmor-mode-completion-at-point)
(setq imenu-generic-expression `(("Profiles" ,apparmor-mode-profile-regexp 5)))
(setq comment-start "#")
(setq comment-end "")
(when (require 'flycheck nil t)
(unless (flycheck-valid-checker-p 'apparmor)
(flycheck-define-command-checker 'apparmor
"A checker using apparmor_parser. "
:command '("apparmor_parser"
"-Q" ;; skip kernel load
"-K" ;; skip cache
source)
:error-patterns '((error line-start "AppArmor parser error at line "
line ": " (message)
line-end)
(error line-start "AppArmor parser error for "
(one-or-more not-newline)
" in profile " (file-name)
" at line " line ": " (message)
line-end))
:modes '(apparmor-mode)))
(add-to-list 'flycheck-checkers 'apparmor t)))
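;; Note: the flycheck checker above is only defined when the optional
;; `flycheck' package can be loaded; diagnostics appear once `flycheck-mode'
;; is enabled in the buffer and apparmor_parser is installed.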
;; flymake integration
(defvar-local apparmor-mode--flymake-proc nil)
(defun apparmor-mode-flymake (report-fn &rest _args)
"`flymake' backend function for `apparmor-mode' to report errors via REPORT-FN."
;; signal an error (so flymake disables this backend) if apparmor_parser is unavailable
(unless (executable-find "apparmor_parser")
(error "Cannot find apparmor_parser"))
;; kill any existing running instance
(when (process-live-p apparmor-mode--flymake-proc)
(kill-process apparmor-mode--flymake-proc))
(let ((source (current-buffer))
(contents (buffer-substring (point-min) (point-max))))
;; when the current buffer is an abstraction, wrap it in a fake profile so
;; that it can still be checked
(when (and (buffer-file-name)
(string-match-p ".*/abstractions/.*" (buffer-file-name)))
(setq contents (format "profile %s { %s }" (buffer-name) contents)))
(save-restriction
(widen)
;; Reset `apparmor-mode--flymake-proc' to a new process that runs
;; apparmor_parser on the buffer contents.
(setq
apparmor-mode--flymake-proc
(make-process
:name "apparmor-mode-flymake" :noquery t :connection-type 'pipe
;; Make output go to a temporary buffer.
:buffer (generate-new-buffer " *apparmor-mode-flymake*")
;; TODO: specify the base directory so that includes resolve correctly
;; rather than using the system ones
:command '("apparmor_parser" "-Q" "-K" "/dev/stdin")
:sentinel
(lambda (proc _event)
(when (memq (process-status proc) '(exit signal))
(unwind-protect
;; Only proceed if `proc' is the same as
;; `apparmor-mode--flymake-proc', which indicates that
;; `proc' is not an obsolete process.
;;
(if (with-current-buffer source (eq proc apparmor-mode--flymake-proc))
(with-current-buffer (process-buffer proc)
(goto-char (point-min))
;; Parse the output buffer for diagnostics'
;; messages and locations, collect them in a list
;; of objects, and call `report-fn'.
;;
(cl-loop
while (search-forward-regexp
"^\\(AppArmor parser error \\(?:for /dev/stdin in profile .*\\)?at line \\)\\([0-9]+\\): \\(.*\\)$"
nil t)
for msg = (match-string 3)
for (beg . end) = (flymake-diag-region
source
(string-to-number (match-string 2)))
for type = :error
collect (flymake-make-diagnostic source beg end type msg)
into diags
finally (funcall report-fn diags)))
(flymake-log :warning "Cancelling obsolete check %s" proc))
;; Cleanup the temporary buffer used to hold the
;; check's output.
(kill-buffer (process-buffer proc)))))))
(process-send-string apparmor-mode--flymake-proc contents)
(process-send-eof apparmor-mode--flymake-proc))))
;;;###autoload
(defun apparmor-mode-setup-flymake-backend ()
"Setup the `flymake' backend for `apparmor-mode'."
(add-hook 'flymake-diagnostic-functions 'apparmor-mode-flymake nil t))
;;;###autoload
(add-hook 'apparmor-mode-hook 'apparmor-mode-setup-flymake-backend)
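;; Note: as with any flymake backend, diagnostics from `apparmor-mode-flymake'
;; only appear once `flymake-mode' is enabled in the buffer.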
;;;###autoload
(add-to-list 'auto-mode-alist '("\\`/etc/apparmor\\.d/" . apparmor-mode))
;;;###autoload
(add-to-list 'auto-mode-alist '("\\`/var/lib/snapd/apparmor/profiles/" . apparmor-mode))
(provide 'apparmor-mode)
;;; apparmor-mode.el ends here


@@ -23,6 +23,12 @@ ignore_denied_capability="sudo,su"
# OPTIONAL - kind of operations which display a popup prompt.
# prompt_filter="userns"
# OPTIONAL - Maximum number of profiles that can send notifications before they are merged
# maximum_number_notification_profiles=2
# OPTIONAL - Keys to aggregate when merging events
# keys_to_aggregate="operation,class,name,denied,target"
# OPTIONAL - restrict using aa-notify to users in the given group
# (if not set, everybody who has permissions to read the logfile can use it)
# use_group="admin"


@@ -1,10 +1,8 @@
 GENERATING TRANSLATION MESSAGES
-To generate the messages.pot file:
+To generate the .pot file, run the following command in the po directory:
-Run the following command in Translate.
-python pygettext.py ../apparmor/*.py ../Tools/aa*
+pygettext3 -o apparmor-utils.pot ../apparmor/*.py $(find .. -executable -name 'aa-*')
-It will generate the messages.pot file in the Translate directory.
+It will generate the apparmor-utils.pot file in the po directory.
-You might need to provide the full path to pygettext.py from your python installation. It will typically be in the /path/to/python/libs/Tools/i18n/pygettext.py

File diff suppressed because it is too large

Some files were not shown because too many files have changed in this diff