mirror of https://gitlab.com/apparmor/apparmor synced 2025-09-01 23:05:11 +00:00

Compare commits


347 Commits

Author SHA1 Message Date
John Johansen
c72c15cb27 Prepare for 4.1.0~beta5 release
- bump version

Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-18 07:44:22 -08:00
John Johansen
d2707329ba apparmor: update translation pot files
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-18 07:39:35 -08:00
John Johansen
3aa8c4959f Merge tests: provide better output on failures
When a test fails because of an unexpected success (XFAIL), do not display the empty error log as that may confuse the reader just as it had confused the author.

In addition, when something legitimately fails, display the tail of the trace log, as that may show some useful information.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1548
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit 8711c7754b)
2025-02-18 07:24:04 -08:00
Zygmunt Krynicki
12941af65f tests: display tail of bash.trace on failure
Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit c268e5d11b)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-18 07:24:04 -08:00
Zygmunt Krynicki
d0e07e542b tests: do not display bash.err on XFAIL passes
This makes no sense since the test has passed and there's nothing to look at in the log.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit 473e791e4e)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-18 07:24:03 -08:00
John Johansen
2acd4f5c7d Merge tests: mark three regression tests as fixed
The `attach_disconnected` test is now passing on Ubuntu 24.04+.
The `posix_ipc` test is passing everywhere.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1547
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit 84bf3dee2d)
2025-02-18 07:23:52 -08:00
Zygmunt Krynicki
ae4f303907 tests: remove XFAIL/mqeue, stale
There is no mqueue in Makefile TESTS anywhere. This is a red herring.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit c56cbad5ea)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-18 07:23:52 -08:00
Zygmunt Krynicki
b5e216a8de tests: mark ptrace test as fixed
Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit 5f8863c7ca)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-18 07:23:52 -08:00
Zygmunt Krynicki
187a1700fa tests: mark posix_ipc test as fixed
The test used to fail on some versions of Ubuntu but it now passes
everywhere.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit 083dc9652b)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-18 07:23:52 -08:00
Zygmunt Krynicki
0179d52a9f tests: mark attach_disconnected as fixed
The test is now passing on Ubuntu 24.04+

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit 3987bf0f33)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-18 07:23:52 -08:00
John Johansen
5184066d81 Merge aa-notify: rename polkit files and template info from com.ubuntu
We should be using apparmor controlled domains for these files.

Rename the template file from
  com.ubuntu.pkexec.aa-notify.policy
to
  net.apparmor.pkexec.aa-notify.policy

And update the template file and the install file so that the files
that are generated use net.apparmor instead of com.ubuntu

Signed-off-by: John Johansen <john.johansen@canonical.com>

Closes #486
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1541
Approved-by: Ryan Lee <rlee287@yahoo.com>
Merged-by: John Johansen <john@jjmx.net>
(cherry picked from commit e085a23b40)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-14 18:05:30 -08:00
John Johansen
9890acd0c5 aa-notify: rename polkit files and template info from com.ubuntu
We should be using apparmor controlled domains for these files.

Rename the template file from
  com.ubuntu.pkexec.aa-notify.policy
to
  net.apparmor.pkexec.aa-notify.policy

And update the template file and the install file so that the files
that are generated use net.apparmor instead of com.ubuntu

Signed-off-by: John Johansen <john.johansen@canonical.com>
(cherry picked from commit a410f347a3)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-14 18:03:46 -08:00
John Johansen
819b60f37d aa-notify: fix package build install of polkit files
The install of the polkit action files for aa-notify leaks build root
information.

From OBS
  apparmor-utils.noarch: E: file-contains-buildroot (Badness: 10000) /usr/share/polkit-1/actions/com.ubuntu.pkexec.aa-notify.policy

this is present on Ubuntu as well
    <annotate key="org.freedesktop.policykit.exec.path">/build/apparmor-ZUzkoL/apparmor-4.1.0~beta4/debian/tmp/usr/lib/python3/dist-packages/apparmor/update_profile.py</annotate>

This occurs because the {LIB_PATH} template variable is being replaced
with self.install_lib. Make sure we strip the build prefix if
we are generating the files in a build environment instead of doing
a direct install.

Closes: https://gitlab.com/apparmor/apparmor/-/issues/486
Co-Author: Ryan Lee <ryan.lee@canonical.com>
Signed-off-by: John Johansen <john.johansen@canonical.com>
(cherry picked from commit b4e6f0449b)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-14 11:41:10 -08:00
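The build-root stripping described above can be sketched as a small helper. This is a hypothetical illustration of the approach, not the actual code from the MR; the function name and arguments are invented for the example.

```python
def strip_build_root(install_lib: str, build_root: str) -> str:
    """Return install_lib with any leading build_root prefix removed.

    During a package build, paths like the install lib dir carry the
    build root (e.g. debian/tmp); substituting them verbatim into a
    polkit template would leak that path into the installed file.
    """
    if build_root and install_lib.startswith(build_root):
        stripped = install_lib[len(build_root):]
        return stripped if stripped.startswith("/") else "/" + stripped
    return install_lib
```

For a direct (non-packaged) install, build_root is empty and the path is used unchanged.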
John Johansen
1c3672a644 Merge aa-notify: fix package build install of polkit files
The install of the polkit action files for aa-notify leaks build root
information.

From OBS
  apparmor-utils.noarch: E: file-contains-buildroot (Badness: 10000) /usr/share/polkit-1/actions/com.ubuntu.pkexec.aa-notify.policy

this is present on Ubuntu as well
    <annotate key="org.freedesktop.policykit.exec.path">/build/apparmor-ZUzkoL/apparmor-4.1.0~beta4/debian/tmp/usr/lib/python3/dist-packages/apparmor/update_profile.py</annotate>

This occurs because the {LIB_PATH} template variable is being replaced
with self.install_lib. Make sure we strip the build prefix if
we are generating the files in a build environment instead of doing
a direct install.

Closes: https://gitlab.com/apparmor/apparmor/-/issues/486
Signed-off-by: John Johansen <john.johansen@canonical.com>

Closes #486
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1540
Approved-by: Ryan Lee <rlee287@yahoo.com>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit 697e53d752)
2025-02-14 11:41:10 -08:00
John Johansen
529f541e7a Merge tunable: add letter, alphanumeric character, hex and words variables.
Follow up from !1544 with the other basic variables.

Variables such as `@{rand6}` and `@{word6}` are very commonly used, as they let us tighten overly broad rules such as `/tmp/*` or `/tmp/??????`.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1546
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit b5ff20b5f1)
2025-02-14 11:40:59 -08:00
Alexandre Pujol
c91730e8ca tunable: add letter, alphanumeric character, hex and words variables.
(cherry picked from commit 8af71cd5f5)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-14 11:40:59 -08:00
John Johansen
c0f4a91181 Merge abstraction: add devices-usb & devices-usb-read
Needed for https://gitlab.com/apparmor/apparmor/-/merge_requests/1433

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1545
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit dc583bc1d4)
2025-02-14 11:40:49 -08:00
Alexandre Pujol
0a549886d4 abstraction: add devices-usb & devices-usb-read
(cherry picked from commit 4591ed63ba)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-14 11:40:49 -08:00
John Johansen
7f97215c75 Merge tunable: add int variable
This PR only adds the digit `@{d}` and integer `@{int}` variables.

It provides two improvements over the `[0-9]*` glob:
- security: the glob means "a digit followed by anything but `/`", whereas `@{int}` means "up to 10 digits"
- stability: using a glob in a path alongside `x` rules can lead to path conflicts; removing the glob fixed a lot of issues.

These variables are used by a lot of abstractions that could be upstreamed here from apparmor.d (PR will follow). It is an import from 33681e14f2/apparmor.d/tunables/multiarch.d/system where other similar variables are in use: `@{hex}`, `@{rand}`, `@{word}`, `@{u8}`, `@{u16}`, `@{u64}`, `@{int2}...@{int64}` ...
They could all be upstreamed here as well.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1544
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit 783f012074)
2025-02-14 10:39:26 -08:00
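The difference between the glob and the variable can be illustrated with a policy fragment. The rule paths below are hypothetical, chosen only to show the contrast described in the commit:

```
# illustrative only; the paths are not from the actual tunables
/proc/[0-9]*/status r,    # glob: a digit followed by anything except '/'
/proc/@{int}/status r,    # variable: a bounded run of digits only
```

The glob form would also match unexpected names like `/proc/1foo/status`, which the bounded variable rules out.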
Alexandre Pujol
00c84dc82b tunable: add int variable
(cherry picked from commit d7a73847de)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-14 10:39:26 -08:00
John Johansen
48d837d829 Merge utils: aa-genprof fails on lxd with OSError: Read-only file system
On certain lxc containers, when aa-genprof tries to set
printk_ratelimit, it fails with the OSError exception, with the
message "OSError: [Errno 30] Read-only file system" instead of
PermissionError.

Since PermissionError is a subclass of OSError, replace it with the
broader OSError exception to include both cases in which running
aa-genprof fails.

Reported-by: Paulo Flabiano Smorigo <paulo.smorigo@canonical.com>
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1539
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit 226ab5f050)
2025-02-13 14:35:19 -08:00
Georgia Garcia
65971c8764 utils: aa-genprof fails on lxd with OSError: Read-only file system
On certain lxc containers, when aa-genprof tries to set
printk_ratelimit, it fails with the OSError exception, with the
message "OSError: [Errno 30] Read-only file system" instead of
PermissionError.

Since PermissionError is a subclass of OSError, replace it with the
broader OSError exception to include both cases in which running
aa-genprof fails.

Reported-by: Paulo Flabiano Smorigo <paulo.smorigo@canonical.com>
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
(cherry picked from commit e1ae6fa81c)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-13 14:35:19 -08:00
John Johansen
042d3b783f Merge utils: allow install locations to be overridden in Makefile
Instead of setting those variables unconditionally, set them if they
aren't externally set by environment variables. This will allow for usages
like DESTDIR=/some/other/dir make install in the utils directory.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1542
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit 49bc2d855f)
2025-02-13 13:56:10 -08:00
Ryan Lee
6089302f0b utils: allow install locations to be overridden in Makefile
Instead of setting those variables unconditionally, set them if they
aren't externally set by environment variables. This will allow for usages
like DESTDIR=/some/other/dir make install in the utils directory.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 2747013d9b)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-13 13:56:10 -08:00
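The pattern described above is make's conditional assignment. A minimal sketch, with illustrative variable names (the actual Makefile may use different ones):

```
# assign only when not already set by the environment or command line,
# so `DESTDIR=/some/other/dir make install` overrides the defaults
DESTDIR ?=
PREFIX  ?= /usr
BINDIR  ?= ${DESTDIR}${PREFIX}/bin
```

With `?=`, an externally provided value wins; a plain `=` would clobber it.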
John Johansen
47096faadd aa-notify: make ttkthemes conditional - partial backport of MR1324
ttkthemes may not be installed on some systems, and if not present
will cause aa-notify to fail. Instead of making ttkthemes a required
dependency, make its use conditional on it being present.

Backport by: Christian Boltz <apparmor@cboltz.de>
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-13 00:50:24 -08:00
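The conditional-dependency pattern described above is the usual try/except import guard. A minimal sketch (the flag name is invented for the example; the actual backport may structure this differently):

```python
try:
    from ttkthemes import ThemedTk  # optional; may not be installed
    HAVE_TTKTHEMES = True
except ImportError:
    HAVE_TTKTHEMES = False
```

Code constructing the GUI can then pick `ThemedTk` when the flag is set and fall back to plain `tkinter.Tk` otherwise, instead of failing at import time.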
John Johansen
0587826fb4 Merge libapparmor: swig: various build fixes for 32-bit systems and older systems
Changes include:
 - using `long` instead of `intmax_t` for `pid_t` typemap (32-bit build failure); see commit message for more details
 - specifying messages for `static_assert` declarations (required up until C23, was accepted as a compiler extension on the systems I had tested this on previously)
 - removing label-followed-by-declaration instance (also a C23 feature supported as extension)

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1536
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit 55889ef783)
2025-02-13 00:40:51 -08:00
Ryan Lee
f7503ca183 libapparmor: swig: remove instance of label followed by declaration
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit af883bb706)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-13 00:40:51 -08:00
Ryan Lee
908606e8a1 libapparmor: swig: specify message for static_assert usages
The message being optional is apparently a C23 thing that was available as an extension on the systems I tested on previously

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 87b60e4e94)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-13 00:40:51 -08:00
Ryan Lee
ecf89330d4 libapparmor: use long as the intermediate pid_t conversion type
The previous code using intmax_t failed to build on armhf because
intmax_t was long long int instead of long int on that platform.
As to shrinking down to a long: not only does SWIG lack a
SWIG_AsVal_intmax_t, but aalogparse also assumes PIDs fit in a long
by storing them as unsigned longs in aa_log_record. Thus, we can
assume that sizeof(pid_t) <= sizeof(long) right now and deal with
the big headache that a change to pid_t would cause if it becomes
larger than a long in the future.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit c5016e1227)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-13 00:40:51 -08:00
John Johansen
e931561129 Merge man apparmor.d: document how variable expansion and path sanitization works
The documentation was missing information about path sanitization, and
why you shouldn't do a leading @{VAR} on path rules. While the example
doing this was fixed, actual information about why you shouldn't do
this was missing.

Document how apparmor will collapse consecutive / characters into a
single character for paths, except when this occurs at the start of
the path.

Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1532
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit 0c4e452b46)
2025-02-13 00:40:28 -08:00
John Johansen
cb2b8aef20 man apparmor.d: document how variable expansion and path sanitization works
The documentation was missing information about path sanitization, and
why you shouldn't do a leading @{VAR} on path rules. While the example
doing this was fixed, actual information about why you shouldn't do
this was missing.

Document how apparmor will collapse consecutive / characters into a
single character for paths, except when this occurs at the start of
the path.

Signed-off-by: John Johansen <john.johansen@canonical.com>
(cherry picked from commit cce5bd6e95)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-13 00:40:28 -08:00
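The collapsing rule documented above can be modeled in a few lines. This is a simplified sketch of the described behavior for illustration, not AppArmor's actual implementation:

```python
import re

def collapse_slashes(path: str) -> str:
    """Model of the documented sanitization: runs of '/' collapse to
    one, except that exactly two leading slashes are preserved. This
    is why a leading @{VAR} in a path rule is risky: if the variable
    expands to '/', the result starts with '//' and no longer matches
    paths written with a single leading '/'."""
    if re.match(r"^//(?!/)", path):
        return "//" + re.sub(r"/+", "/", path[2:])
    return re.sub(r"/+", "/", path)
```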
John Johansen
e0e464b757 Merge profiles: fix non-user-namespace-related sandbox bypass in unshare profile
The unshare-userns-restrict profile contained a cx transition to
transition to a profile that allows most things while denying
capabilities:

audit allow cx /** -> unpriv,

However, this transition does not stack the unshare//unpriv profile
against any other profile the target binary might have had. As a result,
the lack of stacking resulted in a non-namespace-related sandboxing
bypass in which attachments of other profiles that should have confined
the target binary do not get applied. Instead, we adopt a stack similar
to the one in bwrap-userns-restrict, with the exception that unshare
does not use no-new-privs and therefore only needs a two-layer stack
instead of a three-layer stack.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1533
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit 8e586e5492)
2025-02-13 00:39:52 -08:00
Ryan Lee
f7c3a28901 Remove no-longer-true aa-enforce line from unshare-userns-restrict
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit c6ba1bd2fb)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-13 00:39:52 -08:00
Ryan Lee
e3b23b1598 profiles: fix non-user-namespace-related sandbox bypass in unshare profile
The unshare-userns-restrict profile contained a cx transition to
transition to a profile that allows most things while denying
capabilities:

audit allow cx /** -> unpriv,

However, this transition does not stack the unshare//unpriv profile
against any other profile the target binary might have had. As a result,
the lack of stacking resulted in a non-namespace-related sandboxing
bypass in which attachments of other profiles that should have confined
the target binary do not get applied. Instead, we adopt a stack similar
to the one in bwrap-userns-restrict, with the exception that unshare
does not use no-new-privs and therefore only needs a two-layer stack
instead of a three-layer stack.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit ab3ca1a93f)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-13 00:39:52 -08:00
John Johansen
f308742119 Prepare for 4.1.0~beta3 release
- bump version

Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:36:52 -08:00
John Johansen
7b01cd51e8 libapparmor: bump library version preparing for release
There are minor tweaks around the lib, constifying some vars etc., that
don't justify a large bump. But there have been changes to swig
such that we want to force it to be linked against the new lib.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:31:44 -08:00
John Johansen
afb3866c0a Merge parser: fix priority so it is handled on a per permission basis
The current behavior of priority rules can be non-intuitive, with
higher priority rules completely overriding lower priority rules even in
permissions not held in common. This behavior does have use cases, but
it can be very confusing and does not match normal policy behavior.

    Eg.
      priority=0 allow r /**,
      priority=1 deny  w /**,

will result in no allowed permissions even though the deny rule is
only removing the w permission, because the higher priority rule
completely overrides lower priority permission sets (including
non-shared permissions).

Instead move to tracking the priority at a per permission level. This
allows the w permission to still override at priority 1, while the
read permission is allowed at priority 0.

The final constructed state will still drop priority for the final
permission set on the state.

Note: this patch updates the equality tests for the cases where
      the complete override behavior was being tested for.

      The complete override behavior will be reintroduced in a future
      patch with a keyword extension, enabling that behavior to be used
      for ordered blocks etc.

Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1522
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit 0ab4fc0580)
2025-02-11 15:19:32 -08:00
John Johansen
2ab1941d9d parser: change priority so that it accumulates based on permissions
The current behavior of priority rules can be non-intuitive, with
higher priority rules completely overriding lower priority rules even in
permissions not held in common. This behavior does have use cases, but
it can be very confusing and does not match normal policy behavior.

Eg.
  priority=0 allow r /**,
  priority=1 deny  w /**,

will result in no allowed permissions even though the deny rule is
only removing the w permission, because the higher priority rule
completely overrides lower priority permission sets (including
non-shared permissions).

Instead move to tracking the priority at a per permission level. This
allows the w permission to still override at priority 1, while the
read permission is allowed at priority 0.

The final constructed state will still drop priority for the final
permission set on the state.

Note: this patch updates the equality tests for the cases where
the complete override behavior was being tested for.

The complete override behavior will be reintroduced in a future
patch with a keyword extension, enabling that behavior to be used
for ordered blocks etc.

Signed-off-by: John Johansen <john.johansen@canonical.com>
(cherry picked from commit 1ebd991155)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:19:32 -08:00
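The per-permission resolution described above can be captured in a toy model. This is an illustrative sketch of the semantics, not the parser's actual implementation; it assumes deny wins over allow at equal priority, which matches normal AppArmor behavior:

```python
def resolve(rules):
    """rules: iterable of (priority, action, perms) tuples, where
    action is 'allow' or 'deny' and perms is a set of permission
    characters. Track the winning (priority, action) per permission
    rather than letting one high-priority rule wipe out a whole
    lower-priority permission set."""
    best = {}  # perm -> (priority, action)
    for prio, action, perms in rules:
        for p in perms:
            cur = best.get(p)
            if cur is None or prio > cur[0] or (prio == cur[0] and action == "deny"):
                best[p] = (prio, action)
    return {p for p, (_, action) in best.items() if action == "allow"}
```

With the rules from the example, `resolve([(0, "allow", {"r"}), (1, "deny", {"w"})])` keeps `r` allowed at priority 0 while `w` is denied at priority 1, whereas the old whole-rule behavior dropped `r` as well.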
John Johansen
6e37bc0067 parser: fix prefix dump to include priority
The original patch adding priority to the set of prefixes failed to
update the prefix dump to include the priority field.

Fixes: e3fca60d1 ("parser: add the ability to specify a priority prefix to rules")

Signed-off-by: John Johansen <john.johansen@canonical.com>
(cherry picked from commit e56dbc2084)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:19:32 -08:00
John Johansen
29f66c3828 parser: drop priority from state permissions
The priority field is only used during state construction, and can
even prevent later optimizations like minimization. The parser already
explicitly clears the state's priority field as the last thing
done during construction so it doesn't prevent minimization
optimizations.

This means the state priority not only wastes storage because it is
unused post construction but if used it could introduce regressions,
or other issues.

The change to the minimization tests just removes looking for the
priority field that is no longer reported.

Signed-off-by: John Johansen <john.johansen@canonical.com>
(cherry picked from commit cc31a0da22)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:19:32 -08:00
John Johansen
71dbc73532 parser: stop using dynamic_cast for prompt permission calculations
Like was done for the other MatchFlags switch to using a node type
instead of dynamic_cast as this will result in a performance
improvement.

Signed-off-by: John Johansen <john.johansen@canonical.com>
(cherry picked from commit 9221d291ec)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:19:32 -08:00
John Johansen
d4ee66e8f4 Merge tests: run regression tests with spread (self-hosted)
This requires a runner with the tags: linux, x86_64, kvm. One needs to
be provisioned for the AppArmor project for the pipeline to function.

It is possible to run the same tests on SAAS runners offered by GitLab
but due to issue gitlab-org/gitlab-runner#6208 there is no way to expose
/dev/kvm on the host to the guest. Without this feature emulation works
but is so slow as to be impractical.

Note that there's some overlap between the build-all job and spread that
might be avoided in the future. At present this is made more difficult
by the fact that the path where build-all job builds libapparmor is
stored internally by autotools. This prevents us from using GitLab
artifacts from moving the built files across to the spread testing jobs
without extra work.

In addition to adding the spread job, remove test-build-regression job.
This job is now redundant since the same operation is done when spread
builds and runs regression tests.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1512
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit 12787648a7)
2025-02-11 15:18:28 -08:00
Zygmunt Krynicki
a2335e9395 tests: show timestamps of image-garden files
Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit 5a44cbe661)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:18:28 -08:00
Zygmunt Krynicki
fbb2ae8b05 tests: explicitly cache cloud-init files
We were not building or caching the .seed.iso target, causing make to re-create
the image, as seen in the make --debug --dry-run output:
```
Updating goal targets....
      File ubuntu-cloud-24.04.user-data does not exist.
     Must remake target ubuntu-cloud-24.04.user-data.
echo "${USER_DATA}" | tee ubuntu-cloud-24.04.user-data
     Successfully remade target file ubuntu-cloud-24.04.user-data.
      File ubuntu-cloud-24.04.meta-data does not exist.
     Must remake target ubuntu-cloud-24.04.meta-data.
echo "${META_DATA}" | tee ubuntu-cloud-24.04.meta-data
     Successfully remade target file ubuntu-cloud-24.04.meta-data.
     Prerequisite ubuntu-cloud-24.04.user-data is newer than target ubuntu-cloud-24.04.seed.iso.
     Prerequisite ubuntu-cloud-24.04.meta-data is newer than target ubuntu-cloud-24.04.seed.iso.
    Must remake target ubuntu-cloud-24.04.seed.iso.
/usr/bin/genisoimage \
	-input-charset utf-8 \
	-output ubuntu-cloud-24.04.seed.iso \
	-volid CIDATA \
	-joliet \
	-rock \
	-graft-points \
	user-data=ubuntu-cloud-24.04.user-data \
	meta-data=ubuntu-cloud-24.04.meta-data
    Successfully remade target file ubuntu-cloud-24.04.seed.iso.
   Prerequisite ubuntu-cloud-24.04.seed.iso is newer than target ubuntu-cloud-24.04.x86_64.qcow2.
```

Build and cache the cloud-init seed iso to prevent that.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit 4cfeb4a9ad)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:18:28 -08:00
Zygmunt Krynicki
e6d4f79919 tests: debug image reuse logic
We are seeing images cached and then re-constructed as if something had
changed in the meantime. Debug image construction with make --dry-run --debug.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit b3ce87af23)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:18:28 -08:00
Zygmunt Krynicki
2845e42c5e tests: quote CI_NODE_INDEX
Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit 62f93b400e)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:18:28 -08:00
Zygmunt Krynicki
ef44d8e177 tests: reorganize spread pipeline a little
This way there's somewhat less repetition and the flow of job definitions is,
at least to me, easier to read.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit bcf8c968db)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:18:28 -08:00
Zygmunt Krynicki
f255fcfcde tests: compress cache faster
Our cache is rather compressed already, so this should help
a little with wall-clock time.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit ebb82952bc)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:18:28 -08:00
Zygmunt Krynicki
21922fea25 tests: improve image caching performance
A new explicit, non-parallel job is injected when the .image-garden.mk or
.spread.yaml file changes. This job warms up the cache for the subsequent
parallel testing jobs.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit 14ceb92ca0)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:18:28 -08:00
Zygmunt Krynicki
daaf768b3f tests: allow non-default branches to push spread cache
As a security measure, GitLab splits cache into two broad pools: protected and
non-protected. Any job running in a protected branch has access to the
protected cache pool. All other jobs run in the non-protected cache pool.

This effectively forces us to push to cache in non-protected branches, like all
the merge requests, in order to actually use the cache.

Ideally we'd disable this protection and only push from the default branch and
pull otherwise, as changes to the dependency set are rather rare.

[1] https://docs.gitlab.com/ee/ci/caching/#use-the-same-cache-for-all-branches

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit a0adb01631)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:18:28 -08:00
Zygmunt Krynicki
1d31e9e3ba tests: remove test-build-regression job
This job is now redundant since the same operation is done when spread
builds and runs regression tests.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit f82c8471f5)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:18:28 -08:00
Zygmunt Krynicki
40510ba5c0 tests: run regression and profile tests with spread
This requires a runner with the tags: linux, x86_64, kvm. One needs to
be provisioned for the AppArmor project for the pipeline to function.

It is possible to run the same tests on SAAS runners offered by GitLab
but due to issue gitlab-org/gitlab-runner#6208 there is no way to expose
/dev/kvm on the host to the guest. Without this feature emulation works
but is so slow as to be impractical.

Note that there's some overlap between the build-all job and spread that
might be avoided in the future. At present this is made more difficult
by the fact that the path where build-all job builds libapparmor is
stored internally by autotools. This prevents us from using GitLab
artifacts for moving the built files across to the spread testing jobs
without extra work.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit 7f68ed174c)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:18:28 -08:00
Zygmunt Krynicki
87d2513823 tests: use one spread worker for ubuntu-cloud-24.04
There's contention between running spread across many nodes, in chunks,
in a CI/CD pipeline, and running spread on one machine, across many
instances at the same time. The case with CI/CD needs one worker, as
parallelism is provided by GitLab. The case with local spread needs many
workers as parallelism is provided locally by spread allocating new
instances.

At present we need to focus on the CI/CD case. I have a plan on how to
avoid the problem entirely down the line, by running multiple copies of
spread locally, as if everything was done in a CI/CD pipeline.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit dfa331dfff)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:18:28 -08:00
John Johansen
6593912bef Merge profiles: fix unshare for deleted files
Unfortunately similar to bwrap unshare will need the mediate_deleted
flag in some cases.

see
  commit 6488e1fb7 "profiles: add mediate_deleted to bwrap"

Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1521
Approved-by: Ryan Lee <rlee287@yahoo.com>
Merged-by: Ryan Lee <rlee287@yahoo.com>

(cherry picked from commit b5b1944f58)
2025-02-11 15:18:11 -08:00
John Johansen
2270a4f44e profiles: fix unshare for deleted files
Unfortunately similar to bwrap unshare will need the mediate_deleted
flag in some cases.

see
  commit 6488e1fb7 "profiles: add mediate_deleted to bwrap"

Signed-off-by: John Johansen <john.johansen@canonical.com>
(cherry picked from commit c157eb0cb6)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:18:11 -08:00
John Johansen
325143a3e8 Merge Some updates to modernize the mount regression test
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1449
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit 5bc1cd763c)
2025-02-11 15:17:41 -08:00
Ryan Lee
77962f6de3 Replace dd with fallocate for faster file setup
This allows the use of sparse allocation on filesystems that support it,
with a fallback to dd when the underlying filesystem doesn't.
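A minimal sketch of the pattern (file name and size are illustrative):

```shell
img="$(mktemp)"
# fallocate reserves the space instantly on filesystems that support it;
# fall back to dd where it is unsupported (e.g. some tmpfs/NFS setups)
fallocate -l 1M "$img" 2>/dev/null ||
    dd if=/dev/zero of="$img" bs=1M count=1 status=none
stat -c %s "$img"   # 1048576 on either path
```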

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 98c60e477d)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:17:41 -08:00
Ryan Lee
a6bb35dbe7 Fix race condition in loop device setup in mount regression test
Calling losetup -f first and passing its result to create the loop device
creates a race condition in which the loop device might be claimed first
in between the two losetup calls. Instead, create the device atomically
and then obtain the loop device /dev/ handle afterwards.
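A sketch of the two patterns (the image path is illustrative; actually attaching a device requires root):

```shell
# Racy: the free device reported by the first call can be claimed by
# another process before the second call attaches the image.
#   dev="$(losetup -f)"
#   losetup "$dev" disk.img
#
# Atomic: attach and print the allocated device in a single call.
attach_loop() {
    losetup -f --show "$1"    # prints e.g. /dev/loop3
}
```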

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 95f3bdf66b)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:17:41 -08:00
John Johansen
0da5f211b3 Merge spread: Add support for EXPECT_DENIALS in profile tests
This commit adds support for EXPECT_DENIALS in profile tests. Any test
that sets the EXPECT_DENIALS environment variable is expected to trigger
AppArmor denials and will fail if none is generated.

This allows testing that problematic behaviors are correctly blocked.

Signed-off-by: Maxime Bélair <maxime.belair@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1515
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Christian Boltz <apparmor@cboltz.de>

(cherry picked from commit 002bf1339c)
2025-02-11 15:16:33 -08:00
Maxime Bélair
73188a0da1 spread: Add support for EXPECT_DENIALS in profile tests
Introduce the EXPECT_DENIALS environment variable for profile tests.
Each line of EXPECT_DENIALS is a regex that must match an AppArmor
denial raised by the corresponding test and, conversely, each denial
must be matched by one of the regexes.

This ensures that problematic behaviors are correctly blocked and logged.
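For illustration, a hypothetical spread task.yaml fragment showing the shape (the denial regex and surrounding task definition are illustrative, not taken from the actual tests):

```
environment:
    EXPECT_DENIALS: 'apparmor="DENIED" operation="open" .* name="/etc/shadow"'
```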

Signed-off-by: Maxime Bélair <maxime.belair@canonical.com>
(cherry picked from commit fc3f27e255)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:16:33 -08:00
John Johansen
535da1dbea Merge parser: misc fixes on apparmor.d man page
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1516
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit 4765bcd7bc)
2025-02-11 15:15:46 -08:00
Georgia Garcia
181f49b20f parser: misc fixes on apparmor.d man page
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
(cherry picked from commit 998ee0595e)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:15:46 -08:00
John Johansen
0390e2a7ec Merge tests/spread: fix debian system name
Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1511
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Zygmunt Krynicki <me@zygoon.pl>

(cherry picked from commit 54561af112)
2025-02-11 15:15:15 -08:00
Zygmunt Krynicki
42313d81c7 tests/spread: fix debian system name
Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit 8967dee5b9)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:15:14 -08:00
John Johansen
3f40d58642 Merge tests: unify formatting of .gitlab-ci.yml
We had some mixture of indent styles.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1510
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Zygmunt Krynicki <me@zygoon.pl>

(cherry picked from commit 39cd3f6f21)
2025-02-11 15:14:53 -08:00
Zygmunt Krynicki
3164268b4a tests: unify formatting of .gitlab-ci.yml
We had some mixture of indent styles.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit d4582f232f)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:14:53 -08:00
John Johansen
74ed54eb28 Merge tests: mark more regression tests as known-failures
A number of tests are failing and, since spread does not contain a native
XFAIL facility, we have to maintain a silent-failure feature in the test code
ourselves. A few of those have been fixed since the first iteration of
this patch. The remaining known failures are being fixed.

Later on I would like to separate XFAIL from SKIP so that if a test is
known to exercise kernel feature unavailable on the given system, the
test is just not executed.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1483
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit d482aab419)
2025-02-11 15:14:34 -08:00
Zygmunt Krynicki
6e078296bc tests: exclude debian systems from toybox test
This is so that we get a baseline that passes to enable testing in CI/CD
but also to spark a discussion around what to do with a profile that
indirectly relies on a kernel feature that is not available on a given
system.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit 32bf95bb1e)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:14:34 -08:00
Zygmunt Krynicki
ff3db97d5d tests: mark more regression tests as known-failures
A number of tests are failing and, since spread does not contain a native
XFAIL facility, we have to maintain a silent-failure feature in the test code
ourselves. A few of those have been fixed since the first iteration of
this patch. The remaining known failures are being fixed.

Later on I would like to separate XFAIL from SKIP so that if a test is
known to exercise kernel feature unavailable on the given system, the
test is just not executed.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit b0422d5572)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:14:34 -08:00
John Johansen
5f8c2c5fc9 Merge utils: adjusts aa-notify tests to handle Python 3.13+
Python 3.13 changes the formatting of long-short option pairs that use a
meta-variable. Up until 3.13 the meta-variable was repeated. Since
Python change [1] the meta-var is only printed once.

[1] https://github.com/python/cpython/pull/103372

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1495
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Zygmunt Krynicki <me@zygoon.pl>

(cherry picked from commit 219626c503)
2025-02-11 15:14:05 -08:00
Zygmunt Krynicki
ee2dc1bd64 utils: abbreviate delta for Python 3.12 argparse
Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit 0acc138712)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:14:05 -08:00
Zygmunt Krynicki
161863ea4f utils: adjusts aa-notify tests to handle Python 3.13+
Python 3.13 changes the formatting of long-short option pairs that use a
meta-variable. Up until 3.13 the meta-variable was repeated. Since
Python change [1] the meta-var is only printed once.

[1] https://github.com/python/cpython/pull/103372

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit 6336465edf)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:14:05 -08:00
John Johansen
b04faf1afc Merge tests: add fuse-overlayfs to cloud-init
This is a dependency of the overlayfs_fuse regression test.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1509
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>

(cherry picked from commit 6405608442)
2025-02-11 15:13:30 -08:00
Zygmunt Krynicki
9e94256fff tests: add fuse-overlayfs to cloud-init
This is a dependency of the overlayfs_fuse regression test.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit 237b5c0f73)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:13:30 -08:00
John Johansen
5776e5c9df Merge libapparmor: fixes to the SWIG bindings for SWIG 4.3 and later
Unfortunately we are affected by the backwards-incompatible change introduced by https://github.com/swig/swig/pull/2907

This MR contains fixes to keep the Python-side API the same on systems using SWIG 4.3 or later, e.g. Ubuntu Plucky.

Fixes https://gitlab.com/apparmor/apparmor/-/issues/475.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

Closes #475
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1504
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit 265a1656d1)
2025-02-11 15:11:55 -08:00
Ryan Lee
410e486cde Replace aa_find_mountpoint cstring_output_allocate due to $isvoid issue
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 3fa40935f5)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:11:55 -08:00
Ryan Lee
306d11538a Replace simple %append_output uses with ISVOID helpers for SWIG 4.3
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 1620887463)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:11:55 -08:00
Ryan Lee
7cbbcdad42 Create %append_output compatibility wrappers for SWIG 4.3
Unfortunately we are affected by the backwards-incompatible change introduced by https://github.com/swig/swig/pull/2907

These wrappers will be needed to fix tests on systems using SWIG 4.3 or later, e.g. Ubuntu Plucky.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 1b46ab10fd)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:11:55 -08:00
John Johansen
0a0e920c0c Merge Python SWIG binding fixes (API breaking)
Changes to Python SWIG bindings that are breaking changes but that fix bindings that were previously unusable.

This MR also depends on !1334 and !1337 being merged first, though ~~I can rebase this one if necessary~~ this MR has now been rebased after those two were merged.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1338
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>

(cherry picked from commit b2f713dd83)
2025-02-11 15:10:29 -08:00
Ryan Lee
0b27a5e0cb Remove aa_query_file_{path,link}_len wrappers
The prefix can be done in higher-level languages via slicing, and having an explicit length exposes an out-of-bounds memory read footgun to those higher-level languages

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit a2df3143d1)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:10:29 -08:00
Ryan Lee
82f815c587 Write test for aa_gettaskcon SWIG wrapper
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 53e3116350)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:10:29 -08:00
Ryan Lee
e9429a9eaa Write custom SWIG typemap for pid_t
Surprisingly, SWIG did not pick up the "typedef int pid_t" from the C headers.
As such, we need to provide our own wrapper. We don't just replicate the typedef
because we still support systems that have 16-bit PIDs.
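A minimal sketch of what such a typemap can look like in the SWIG interface file (not the actual commit; the real typemap may differ):

```
/* convert Python ints to/from pid_t without assuming its exact width */
%typemap(in) pid_t {
    $1 = (pid_t) PyLong_AsLong($input);
}
%typemap(out) pid_t {
    $result = PyLong_FromLong((long) $1);
}
```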

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit d199c2ae33)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:10:29 -08:00
Ryan Lee
934c41c1e8 Test SWIG Python bindings for aa_query_file_path
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 2ce217b873)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:10:29 -08:00
Ryan Lee
519b7cf4a4 SWIG aa_query helper bitmask constants and stdint header
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit edb4a72c8c)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:10:29 -08:00
Ryan Lee
3ab5d7871f SWIG Python test for change_hat type signatures
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 5db4908fd7)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:10:29 -08:00
Ryan Lee
c2e99bfbfe SWIG Python test refactoring of AppArmor enabled checks
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 930fca1e39)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:10:29 -08:00
Ryan Lee
e8cb8da296 Test aa_getcon SWIG bindings and leave some comments for untested ones
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 369c9e73de)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:10:29 -08:00
Ryan Lee
ae08d09995 Write a test for aa_splitcon's SWIG bindings
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 48901f2118)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:10:29 -08:00
Ryan Lee
dce45c5c4f Typemaps for allowed, audited outputs of query functions
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit c471acbe44)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:10:29 -08:00
Ryan Lee
7423e4199a Add typemap for Python SWIG aa_change_hatv so it can take a string list
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit cdb3e4a14e)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:10:29 -08:00
Ryan Lee
c671d6c9cc Write basic test for Python aa_find_mountpoint
Also exercises aa_is_enabled

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit ea2c957f14)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:10:29 -08:00
Ryan Lee
cc40903e99 Write custom typemap for aa_splitcon
Can't use %cstring_mutable because aa_splitcon also returns a ptr

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 04da4c86b0)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:10:29 -08:00
Ryan Lee
9ce2a7d83a aa_is_enabled now returns a boolean in Python
Because booleans are a subclass of ints in Python, this isn't a breaking change

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit f05112b5e9)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:10:29 -08:00
Ryan Lee
e5a86a096c Write an output typemap for errno-based functions
In Python, return status is signalled by exceptions (or lack thereof)
instead of int. Keep the typemap portable for any other languages we may
add in the future.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit a15768b0bf)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:10:29 -08:00
Ryan Lee
26818e3747 Include cstring.i and some cstring output typemaps for libapparmor SWIG
This includes a custom typemap to handle (char **label, char **mode)
pairs and a cstring_output_allocate declaration for char **mnt.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 50d26beb00)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:10:29 -08:00
John Johansen
731782ae47 Merge Rename aa_log_record struct fields (C only) to allow inclusion in C++
Do an identifier rename combined with preprocessor directives and SWIG directives to allow the header to be included in C++ while keeping backwards compatibility to the extent possible.

Closes: #439

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

Closes #439
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1342
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit 254b324a83)
2025-02-11 15:08:15 -08:00
Ryan Lee
09cdb28270 Basic test that uses aa_log_record struct fields via old, C++-incompatible names
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 2d7440350f)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:08:15 -08:00
Ryan Lee
39af57ff40 Basic test that invokes aalogparse functions from C++ code
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 645b1406d1)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:08:15 -08:00
Ryan Lee
db8dd88f44 Add extern "C" decls to aalogparse.h for C++ usage of aalogparse
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 3cb61b6b41)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:08:15 -08:00
Ryan Lee
d7b1b24736 Add SWIG renames for fields to preserve backcompat
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit e2c407c614)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:08:14 -08:00
Ryan Lee
7be9a394f8 Rename aa_log_record struct fields (C only) to allow inclusion in C++
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 3f5180527d)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:08:14 -08:00
John Johansen
6f1c29eca0 Merge Remove aa_query_label from SWIG bindings
This is one of those functions that never worked anyways, because it
modified the passed-in label in place. Moreover, it is a low-level
interface that requires its callers to manually construct a binary query.
As such, it would be better not to expose it and to add wrappers like
aa_query_file_path for the other query classes if that functionality is
needed later.

The removal of this function from the bindings was dropped from !1337 because it exposed functionality that was not present in wrappers around aa_query_label. However, upon further discussion, we decided that it'd be better to remove it now and add other wrappers to libapparmor itself if the functionality provided by the existing wrappers became insufficient.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1352
Approved-by: John Johansen <john@jjmx.net>
Merged-by: Ryan Lee <rlee287@yahoo.com>

(cherry picked from commit 5b141dd580)
2025-02-11 15:06:58 -08:00
Ryan Lee
07b0cbfafb Remove aa_query_label from SWIG bindings
This is one of those functions that never worked anyways, because it
modified the passed-in label in place. Moreover, it is a low-level
interface that requires its callers to manually construct a binary query.
As such, it would be better not to expose it and to add wrappers like
aa_query_file_path for the other query classes if that functionality is
needed later.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit d3603a1f20)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:06:58 -08:00
John Johansen
77455d848c Merge Remove broken SWIG functions that we don't actually want to expose
It doesn't make sense to expose the *_raw functions or the vararg version
of aa_change_hatv to higher-level languages. While technically a breaking
change, the generated bindings for these functions never actually worked
anyways:

 - aa_change_hat_vargs uses C varargs, which SWIG passes in NULL for by
   default. It does not attempt to process the passed-in arguments at all
   (and in fact caused an unused-argument compiler warning when compiling
   the generated bindings).
 - aa_getprocattr_raw and aa_getpeercon_raw both place output into a ``char
   **mode`` pointer. SWIG by default generates these as opaque pointer
   object arguments, rendering them unusable for getting output. Future
   patches would be needed to fix ``char**`` arguments for the other functions
   that use them. Moreover, these functions expect their caller to handle
   memory allocation, which is also not possible from a higher-level
   language point of view.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1337
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Ryan Lee <rlee287@yahoo.com>

(cherry picked from commit d35a6939be)
2025-02-11 15:05:40 -08:00
Ryan Lee
956dc6e9c0 Remove private _aa_is_blacklisted from SWIG bindings
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit bdc8889cc0)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:05:40 -08:00
Ryan Lee
74c69b23eb Remove SWIG aa_change_hat_vargs, aa_get_procattr_raw, aa_get_peercon_raw
It doesn't make sense to expose the *_raw functions or the vararg version
of aa_change_hatv to higher-level languages. While technically a breaking
change, the generated bindings for these functions never actually worked
anyways:

 - aa_change_hat_vargs uses C varargs, which SWIG passes in NULL for by
   default. It does not attempt to process the passed-in arguments at all
   (and in fact caused an unused-argument compiler warning when compiling
   the generated bindings).
 - aa_getprocattr_raw and aa_getpeercon_raw both place output into a char
   **mode pointer. SWIG by default generates these as opaque pointer
   object arguments, rendering them unusable for getting output. Future
   patches would be needed to fix char** arguments for the other functions
   that use them. Moreover, these functions expect their caller to handle
   memory allocation, which is also not possible from a higher-level
   language point of view.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 2bd1884654)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:05:40 -08:00
John Johansen
d149113594 Merge Improvements to the SWIG binding handling of aa_log_record and %exception memory management
This patchset adds annotations so that SWIG can automatically manage the memory lifetimes of aa_log_record objects, and ensures proper cleanup is done in the %exception handler.

This is the first of a sequence of MRs to overhaul the SWIG bindings and fix pieces that never actually worked in the first place. As fixing those other pieces will require breaking changes, I am separating out the non-breaking changes into separate MRs.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1334
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Ryan Lee <rlee287@yahoo.com>

(cherry picked from commit bcab725670)
2025-02-11 15:03:20 -08:00
Ryan Lee
2396b4ff14 Apply 1 suggestion(s) to 1 file(s)
Co-authored-by: Georgia Garcia <georgia.garcia@canonical.com>
(cherry picked from commit 61b1501f48)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:03:20 -08:00
Ryan Lee
cbbe950898 Add DeprecationWarning emission to Python free_record wrapper
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 398f0790de)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:03:20 -08:00
Ryan Lee
6ddb51e10e Make Python-side free_record a no-op to prevent double-free
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 4a7a8fa213)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:03:20 -08:00
Ryan Lee
aa9e33283e Annotate SWIG aa_log_record alloc+dealloc
SWIG generates a "thisown" attribute, which is an escape hatch in case
higher-level code does something weird and needs to tell SWIG whether to
free the C object when Python garbage collects it. Adding this attribute
is not a breaking change w.r.t. access to the other attributes of the parsed
record.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit e5fd0fc636)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:03:20 -08:00
Ryan Lee
6fe9d2c6a3 Use SWIG_fail in %exception upon throwing OSError for errno
Unfortunately SWIG_exception does not support throwing OSError, so this
still requires Python-specific code.

Unlike just returning NULL, this will clean up intermediate allocations.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 436ebda9b5)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:03:20 -08:00
John Johansen
15748e2785 libapparmor: merge Rename aa_query_label allow and audit params in headers
This change matches the names in the .c source and the man page for aa_query_label,
and also simplifies the typemap annotations needed to make the SWIG versions usable.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1339
Merged-by: Steve Beattie <steve+gitlab@nxnw.org>

(cherry picked from commit 65e6620014)
2025-02-11 15:01:48 -08:00
Ryan Lee
d3e3aa87a1 Rename aa_query_label allow and audit params in headers
This change matches the names in the .c source and the man page for aa_query_label,
and also simplifies the typemap annotations needed to make the SWIG versions usable.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 0c4cda2f1c)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:01:48 -08:00
John Johansen
cf7f0584dc Merge Change swig prototype of aa_getprocattr to match argname
This will matter later on for adding SWIG annotations

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1329
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>

(cherry picked from commit 7dc167ea48)
2025-02-11 15:00:48 -08:00
Ryan Lee
2333d48880 Change swig prototype of aa_getprocattr to match argname
This will matter later on for adding SWIG annotations

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 80bdd22ed7)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 15:00:48 -08:00
John Johansen
14b54439d9 Merge aa-load documentation improvements
This MR includes copyediting of the `aa-load --help` text as well as a man page based on the help text.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1505
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit c81eacacac)
2025-02-11 14:52:44 -08:00
Ryan Lee
5f3879fce4 Write a man page for aa-load based on the help text
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit ee8300545e)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:52:44 -08:00
Ryan Lee
cd01b4be6a Copyedit the help text for aa-load
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 6592daff90)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:52:44 -08:00
John Johansen
fff8ea6d0e Merge Set up overlayfs_fuse test that uses a FUSE implementation of overlayfs
This also reorganizes the overlayfs tests slightly in order to maximize code reuse between the old test and the new one.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1503
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>

(cherry picked from commit dfb7abf2a6)
2025-02-11 14:52:08 -08:00
Ryan Lee
25740f2b97 Move most file setup and creation to before the overlay mount call
Kernel overlayfs propagates the changes, while fuse_overlayfs doesn't.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit be38da7570)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:52:08 -08:00
Ryan Lee
88020379ca Add fuse_overlayfs to apt dependency list of Gitlab CI test-build-regression
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit ed8b6cb663)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:52:08 -08:00
Ryan Lee
48d8ec1774 Set up an overlayfs_fuse regression test by using the other path of the overlayfs_common.inc helper
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 9e05668d5a)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:52:08 -08:00
Ryan Lee
d532104072 Wire up the kernel/fuse argument switch in overlayfs_common.inc regression tests
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit a0f551d5b7)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:52:08 -08:00
Ryan Lee
221e711cd4 Move overlayfs test into include helper and wrap in overlayfs_kernel
By making the test a file to be included as a helper, we can reuse most of the code for a fuse_overlayfs test without copy-pasting

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 9413658277)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:52:08 -08:00
John Johansen
ba0704c206 Merge Update man apparmor.d to highlight pivot_root limitation
As pointed out by https://bugs.launchpad.net/apparmor/+bug/2087875,
profile transitions with pivot_root are currently not supported on any
kernel.

This commit makes this limitation more obvious to users.

Signed-off-by: Maxime Bélair <maxime.belair@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1436
Approved-by: Ryan Lee <rlee287@yahoo.com>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit dcce4bc62f)
2025-02-11 14:51:27 -08:00
Maxime Bélair
3f15ce23ba Update man apparmor.d to highlight pivot_root limitation
As pointed out by https://bugs.launchpad.net/apparmor/+bug/2087875,
profile transitions with pivot_root are currently not supported on any
kernel.

This commit makes this limitation more obvious to users.

Signed-off-by: Maxime Bélair <maxime.belair@canonical.com>
(cherry picked from commit cf51f7aadd)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:51:27 -08:00
John Johansen
6077cf37c6 Merge tests: unify CI/CD preparation phase
We now have GitLab CI/CD pipeline co-existing with spread, coupled with
image-garden and the cloud-init profile defined for each distribution.

To avoid duplicating the list of required dependencies, re-use the
cloud-init profile as the reference list of dependencies (the superset
between build and test) to install.

In addition to the dependency list, the build_all job now re-uses the
spread prepare section in a similar fashion. If it builds in spread, it
should build in CI as well.

As a small quality-of-life improvement, a collapsible section around
dependency installation should make reading job logs easier.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1494
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Zygmunt Krynicki <me@zygoon.pl>

(cherry picked from commit 4c8c4a1d77)
2025-02-11 14:50:49 -08:00
Zygmunt Krynicki
0ea717b352 tests: put logs from apt-get in a collapsed section
This is a small quality-of-life improvement when looking at CI/CD logs
on GitLab.
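GitLab renders collapsed sections from `section_start`/`section_end` escape markers emitted around a command's output; a sketch of the technique (the helper name, section name, and wrapped command are illustrative):

```shell
run_in_section() {
    # emit GitLab CI collapsed-section markers around a command's output
    local name="$1"; shift
    printf '\033[0Ksection_start:%s:%s[collapsed=true]\r\033[0K%s\n' \
        "$(date +%s)" "$name" "$name"
    "$@"
    printf '\033[0Ksection_end:%s:%s\r\033[0K\n' "$(date +%s)" "$name"
}
out="$(run_in_section install_deps echo 'apt-get output goes here')"
```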

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit 29c618a11b)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:50:49 -08:00
Zygmunt Krynicki
05e42b6a84 tests: unify CI/CD preparation phase
We now have GitLab CI/CD pipeline co-existing with spread, coupled with
image-garden and the cloud-init profile defined for each distribution.

To avoid duplicating the list of required dependencies, re-use the
cloud-init profile as the reference list of dependencies (the superset
between build and test) to install.

In addition to the dependency list, the build_all job now re-uses the
spread prepare section in a similar fashion. If it builds in spread, it
should build in CI as well.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit f01a40a77c)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:50:49 -08:00
John Johansen
6956eef4cc Merge tests: skip profile tests on Fedora
Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1501
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>

(cherry picked from commit c80ef6fb59)
2025-02-11 14:48:56 -08:00
Zygmunt Krynicki
0667dc7318 tests: skip profile tests on Fedora
Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

(cherry picked from commit 065c1d67ca)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:48:56 -08:00
John Johansen
859fb4ab72 Merge tests: add tool for observing the profile of a given command
Using gdb in batch mode, put a breakpoint on _start and spawn the
process.  Then, using the built-in python interpreter, print the
confinement label of the process and terminate everything.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1500
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>

(cherry picked from commit e750c6c66c)
2025-02-11 14:48:37 -08:00
Zygmunt Krynicki
b548d02bd8 tests: measure toybox with actual-profile-of
This should be a more readable example to follow in other tests.  The
toybox test was special given the fact that it is a shell itself, and is
fairly programmable.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

(cherry picked from commit ffd38b7ac4)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:48:37 -08:00
Zygmunt Krynicki
15bbe786f9 tests: add tool for observing the profile of a given command
Using gdb in batch mode, put a breakpoint on _start and spawn the
process.  Then, using the built-in python interpreter, print the
confinement label of the process and terminate everything.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit 23df780544)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:48:37 -08:00
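A rough, hypothetical sketch of the gdb-based technique described above (the tool's real name and exact commands live in the tree; this block only assembles and prints a plausible invocation, so gdb itself is not required to run it):

```shell
# Hypothetical sketch: assemble a gdb batch command line that breaks at
# _start, runs the target, then reads the confinement label from procfs.
prog=/bin/true
gdb_cmd=(gdb --batch
  -ex 'break _start'
  -ex 'run'
  -ex 'python print(open("/proc/%d/attr/apparmor/current" % gdb.selected_inferior().pid).read().strip())'
  -ex 'kill'
  "$prog")
printf '%s\n' "${gdb_cmd[@]}"
```

Breaking at _start (rather than main) matters because the AppArmor exec transition has already happened by then, while the program has not yet had a chance to change its own label.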
John Johansen
654b5a2499 Merge tests: add httpd-devel and pam-devel to fedora cloud-init profile
Those are needed to build the two extension modules.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1499
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Zygmunt Krynicki <me@zygoon.pl>

(cherry picked from commit f98c1098b0)
2025-02-11 14:48:12 -08:00
Zygmunt Krynicki
c7574c8687 tests: add httpd-devel and pam-devel to fedora cloud-init profile
Those are needed to build the two extension modules.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit a2ace0d5d7)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:48:12 -08:00
John Johansen
fce197e45d Merge tests: add integration test for toybox
This is something that was done interactively as a part of a training
session.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1487
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>

(cherry picked from commit 25676c4694)
2025-02-11 14:47:43 -08:00
Zygmunt Krynicki
c35eebf008 tests: add integration test for toybox
Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

(cherry picked from commit be47567d27)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:47:43 -08:00
Zygmunt Krynicki
5481571cca tests: add suite with profile tests
Hopefully more and more profiles will come with smoke tests. Since the
pattern of those tests is likely to be very similar (compile profile,
run some programs, remove profile) it will be good to check if the
profile had caused any denials to be logged. Having this at the suite
level should make writing actual tests easier.

The prepare-each and restore-each logic compile the profile, check for
errors and finally remove the profile. The debug-each logic shows the
program name (with full path).

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit 2ab2c8f8a1)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:47:43 -08:00
Zygmunt Krynicki
ccdd3c8353 profiles: attach toybox profile to /usr/bin/toybox
This is the actual path used on Debian and derivatives.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

(cherry picked from commit 5c17df0219)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:47:43 -08:00
John Johansen
5abcb72699 Merge tests: enable build tests on Fedora 41
Tests that interact with the kernel are skipped (tests/regression and
tests/snapd) but everything else is green. Most of the tests are
actually passing. The only exception is the aa-notify test that was
broken by Python 3.13 stdlib change. The fix for that has been posted
separately.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1496
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Zygmunt Krynicki <me@zygoon.pl>

(cherry picked from commit 1462e1c4b0)
2025-02-11 14:47:15 -08:00
Zygmunt Krynicki
1d999a1735 tests: enable build tests on Fedora 41
Tests that interact with the kernel are skipped (tests/regression and
tests/snapd) but everything else is green. Most of the tests are
actually passing. The only exception is the aa-notify test that was
broken by Python 3.13 stdlib change. The fix for that has been posted
separately.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit 7ce6819c53)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:47:15 -08:00
John Johansen
b6ea99bb43 Merge tests: build PAM and apparmor modules in spread
Those fell under the radar during the initial push to expose all of
the tests to spread.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1493
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Zygmunt Krynicki <me@zygoon.pl>

(cherry picked from commit 03215f46c4)
2025-02-11 14:46:50 -08:00
Zygmunt Krynicki
b191574d8f tests: build PAM and apparmor modules in spread
Those fell under the radar during the initial push to expose all of
the tests to spread.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit 42c8745e73)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:46:50 -08:00
John Johansen
5efed44a32 Merge tests: switch tumbleweed to boot with security=apparmor
The openSUSE project has decided to switch to security=selinux by
default. For the purpose of continuing to test AppArmor on the
distribution, alter the cloud-init profile to switch to booting with
security=apparmor.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1492
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Zygmunt Krynicki <me@zygoon.pl>

(cherry picked from commit ef880d325f)
2025-02-11 14:45:58 -08:00
Zygmunt Krynicki
79abf37d55 tests: switch tumbleweed to boot with security=apparmor
The openSUSE project has decided to switch to security=selinux by
default. For the purpose of continuing to test AppArmor on the
distribution, alter the cloud-init profile to switch to booting with
security=apparmor.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit 2b44cc09a6)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:45:58 -08:00
John Johansen
d597549a73 Merge tests: pair of cleanups for the coverity job
Avoid a deprecated feature and reduce YAML complexity.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1491
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>

(cherry picked from commit 85d57b7f06)
2025-02-11 14:45:28 -08:00
Zygmunt Krynicki
c07a77bcc4 tests: inline .send-to-coverity command
There is no other use of this yaml fragment in the project so inline it
for simplicity.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit 5abbf31ce1)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:45:28 -08:00
Zygmunt Krynicki
965b78b347 tests: rewrite coverity job to avoid deprecated "only" feature
The "only" feature has been deprecated for a while. The standard
replacement is the rules:if feature.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit 61d75a11ef)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:45:28 -08:00
John Johansen
85fddb9e69 gitlab-ci: Build regression test suite in CI
Even if we can't run the regression tests in our GitLab CI environment, we can at least ensure the binaries in the regression test suite compile successfully.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1414
Approved-by: Steve Beattie <steve+gitlab@nxnw.org>
Merged-by: Steve Beattie <steve+gitlab@nxnw.org>

(cherry picked from commit 5b98577a4d)
2025-02-11 14:44:53 -08:00
Ryan Lee
c36660c394 Build regression tests in GitLab CI
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 630b38238d)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:44:53 -08:00
John Johansen
467ddd97b0 Merge Use parallelism and make --touch when building in GitLab CI for faster CI times
As per https://docs.gitlab.com/ee/ci/pipelines/compute_minutes.html#gitlab-hosted-runner-cost-factors, GitLab CI computes minutes as wall clock time per stage * a constant cost factor derived from the runner type, so using parallelism in `make -j $(nproc)` will reduce the time it takes for GitLab CI to complete without increasing usage of GitLab CI minutes.

When investigating this, I also found out that the test stages needlessly rebuilt large parts of the C code base due to mtimes not being preserved when artifacts are restored from the build stage. Adding `make --touch` updates the mtimes so that the subsequent tests do not need to rebuild binaries needlessly.

The combined changes in this MR reduce the CI time from 13 minutes and 57 seconds (cb0f84e101 of `master`, https://gitlab.com/rlee287/apparmor/-/pipelines/1501017669 on my own fork without Coverity) to 12 minutes and 49 seconds (https://gitlab.com/rlee287/apparmor/-/pipelines/1502723883). This comparison omits the `make -j $(nproc)` addition to cov-build since I do not have a way of testing its effectiveness.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1387
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit 8d6270e1fe)
2025-02-11 14:41:45 -08:00
Ryan Lee
b0ccb9bdf1 Pass -j flag for cov-build as well
This is separated out because I have no way of testing this

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 01435aaaa3)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:41:45 -08:00
Ryan Lee
ac0d740110 GitLab CI: touch built files in test stages before running tests
The artifact restoration step does not preserve mtimes, leaving source files newer than built files and causing a needless rebuild of everything before the tests actually run.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 030f991320)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:41:45 -08:00
Ryan Lee
d09df550f1 Invoke tst_binaries target with parallelism in GitLab CI
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit c47943f1af)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:41:45 -08:00
Ryan Lee
3e8f851691 Add a tst_binaries target to the parser to build tst binaries
This allows building the tst_* binaries in parallel independently of running the parser test suite

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 2e841655cf)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:41:45 -08:00
Ryan Lee
1555b8371b Update .gitlab-ci.yml file with -j $(nproc) lines for faster building
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 88287d4eec)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:41:45 -08:00
John Johansen
1500022fa8 Merge gitlab-ci.yml: only run coverity in the upstream project
This pipeline only makes sense to run in the upstream project where
the coverity variables are defined, so it currently fails in forks.

Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1351
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>

(cherry picked from commit 7867a46e2e)
2025-02-11 14:37:22 -08:00
Georgia Garcia
e38516993c gitlab-ci.yml: only run coverity in the upstream project
This pipeline only makes sense to run in the upstream project where
the coverity variables are defined, so it currently fails in forks.

Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
(cherry picked from commit c382efe119)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:37:22 -08:00
John Johansen
4ef5ac8399 Merge tests: snapd/mount-control: assorted fixes
This makes the snapd/mount-control test pass on all the currently tested systems. Note that there's a somewhat complex problem with the new mount APIs (https://lwn.net/Articles/753473/) from 2018 that are now being used on, for example, Debian 13.

I will need to make similar changes to the profiles generated by snapd, so any insight on what to do there is strongly appreciated.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1479
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit f171f5ebc8)
2025-02-11 14:23:35 -08:00
Zygmunt Krynicki
5e42f492f6 tests: snapd/mount-control: allow paths used on openSUSE
In addition allow linking to libeconf, generalize locale paths to cover
values other than C.UTF-8 and allow reading system-wide locale.alias and
gconv modules.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit cff25b8d17)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:23:35 -08:00
Zygmunt Krynicki
e9c76f03c8 tests: snapd/mount-control: stop/start auditd
This is needed on openSUSE Tumbleweed.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit 8ed810756b)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:23:35 -08:00
Zygmunt Krynicki
b8cd4c9df9 tests: snapd/mount-control: allow new mount APIs
This is not the best of fixes but it seems that on Debian 13, with new
libmount calling fsopen/fsconfig/move_mount, the current apparmor mount
rule is insufficient to allow the call to go through.

The key problems are:
- the fstype is not visible to LSM
- the source directory is an empty string
- the mount is moved to final position

I don't know the extent of "new" mount API coverage by LSM hooks but
I think we should either synthesize new permissions from old rules,
e.g. match each of the system calls against the mount class
expression, or somehow handle the exceptions better.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit 5556de53c0)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:23:35 -08:00
Zygmunt Krynicki
29f6786eeb tests: snapd/mount-control: fix bash syntax.
This masked failures that were already occurring.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit 32116a50b0)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:23:35 -08:00
John Johansen
88c5565552 Merge tests: add dosfstools to image-garden cloud-init
The package is required by the file_unbindable_mount regression test.
To properly re-generate affected images please update image-garden
to version containing 9714dc45d0ef06862ffe7037193dc43386db48ea
(Tie .user-data and .meta-data to MAKEFILE_LIST).

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1480
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Zygmunt Krynicki <me@zygoon.pl>

(cherry picked from commit 43355fada5)
2025-02-11 14:17:55 -08:00
Zygmunt Krynicki
494afc470e tests: sort cloud-init package lists
Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit 699b598593)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:17:55 -08:00
Zygmunt Krynicki
847233b6d6 tests: add dosfstools to image-garden cloud-init
The package is required by the file_unbindable_mount regression test.
To properly re-generate affected images please update image-garden
to version containing 9714dc45d0ef06862ffe7037193dc43386db48ea
(Tie .user-data and .meta-data to MAKEFILE_LIST).

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit 215fab71a5)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:17:55 -08:00
John Johansen
07e4acfd26 Merge tests: regression: separate bash traces from errors
The BASH_XTRACEFD variable can be used to redirect "set -x" traces
to a dedicated file. We can use it to split the execution trace
(what has actually happened) from the failure messages.

On a failing test this does provide improved clarity when debugging
interactively with "spread -debug". On non-interactive runs the now
shorter error list is also implicitly printed.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1481
Approved-by: Christian Boltz <apparmor@cboltz.de>
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Christian Boltz <apparmor@cboltz.de>

(cherry picked from commit b4cb33b488)
2025-02-11 14:17:25 -08:00
Zygmunt Krynicki
6f2e854320 tests: regression: separate bash traces from errors
The BASH_XTRACEFD variable can be used to redirect "set -x" traces
to a dedicated file. We can use it to split the execution trace
(what has actually happened) from the failure messages.

On a failing test this does provide improved clarity when debugging
interactively with "spread -debug". On non-interactive runs the now
shorter error list is also implicitly printed.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit 2c2e0478f8)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:17:25 -08:00
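For reference, the mechanism used here is bash's BASH_XTRACEFD variable; a self-contained sketch (file names are illustrative), bash-specific because of the automatic fd allocation:

```shell
# Sketch (bash-specific): send "set -x" traces to a dedicated file so
# stderr carries only genuine failure messages.
tracefile=$(mktemp)
exec {trace_fd}>"$tracefile"   # open a new fd for the trace output
BASH_XTRACEFD=$trace_fd
set -x
true "running a test step"     # traced to $tracefile, not stderr
set +x
grep -c 'running a test step' "$tracefile"
```

With this split, a failing test can print the short error stream immediately and keep the full execution trace in a separate file for interactive debugging.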
John Johansen
c5286ff4df Merge tests: run autotools test verbosely
Instead of showing just the summary, display the actual test log as well.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1482
Approved-by: Christian Boltz <apparmor@cboltz.de>
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Christian Boltz <apparmor@cboltz.de>

(cherry picked from commit 7fa4b82235)
2025-02-11 14:16:41 -08:00
Zygmunt Krynicki
00d3e750e6 tests: run autotools test verbosely
Instead of showing just the summary, display the actual test log as well.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit fa33d7199b)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:16:41 -08:00
John Johansen
efb951c2a8 Merge parser: add a hfa dump that matches the renumbered chfa
Construction of the chfa can reorder states from the numbering given
during hfa construction, because of reordering for better compression,
dead-state removal to ensure better packing, etc.

This, however, makes the dfa dump difficult (though possible, using
multiple dumps) to match up with the chfa that the kernel is using.
Make this easier by letting the dfa dump take the remapping as input,
and provide an option to dump the chfa-equivalent hfa.

Renumbered states will show up as {new <== {orig}} in the dump

Eg.
```
-D dfa-states
{1} <== priority (allow/deny/prompt/audit/quiet)
{5} 0 (0x 4/0//0/0/0)

{1} perms: none
    0x2 -> {5}  0 (0x 4/0//0/0/0)
    0x4 -> {5}  0 (0x 4/0//0/0/0)
    \a 0x7 -> {5}  0 (0x 4/0//0/0/0)
    \t 0x9 -> {5}  0 (0x 4/0//0/0/0)
    \n 0xa -> {5}  0 (0x 4/0//0/0/0)
    \  0x20 -> {5}  0 (0x 4/0//0/0/0)
    4 0x34 -> {3}
{3} perms: none
    0x0 -> {6}
{6} perms: none
    1 0x31 -> {5}  0 (0x 4/0//0/0/0)
```

```
-D dfa-compressed-states
{1} <== priority (allow/deny/prompt/audit/quiet)
{2 == {5}} 0 (0x 4/0//0/0/0)

{1} perms: none
    0x2 -> {2 == {5}}  0 (0x 4/0//0/0/0)
    0x4 -> {2 == {5}}  0 (0x 4/0//0/0/0)
    \a 0x7 -> {2 == {5}}  0 (0x 4/0//0/0/0)
    \t 0x9 -> {2 == {5}}  0 (0x 4/0//0/0/0)
    \n 0xa -> {2 == {5}}  0 (0x 4/0//0/0/0)
    \  0x20 -> {2 == {5}}  0 (0x 4/0//0/0/0)
    4 0x34 -> {3}
{3} perms: none
    0x0 -> {4 == {6}}
{4 == {6}} perms: none
    1 0x31 -> {2 == {5}}  0 (0x 4/0//0/0/0)
```

Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1474
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit 72f9952a5f)
2025-02-11 14:16:06 -08:00
John Johansen
f19ec79869 parser: add a hfa dump that matches the renumbered chfa
Construction of the chfa can reorder states from the numbering given
during hfa construction, because of reordering for better compression,
dead-state removal to ensure better packing, etc.

This, however, makes the dfa dump difficult (though possible, using
multiple dumps) to match up with the chfa that the kernel is using.
Make this easier by letting the dfa dump take the remapping as input,
and provide an option to dump the chfa-equivalent hfa.

Renumbered states will show up as {new <== {orig}} in the dump

Eg.
-D dfa-states
{1} <== priority (allow/deny/prompt/audit/quiet)
{5} 0 (0x 4/0//0/0/0)

{1} perms: none
    0x2 -> {5}  0 (0x 4/0//0/0/0)
    0x4 -> {5}  0 (0x 4/0//0/0/0)
    \a 0x7 -> {5}  0 (0x 4/0//0/0/0)
    \t 0x9 -> {5}  0 (0x 4/0//0/0/0)
    \n 0xa -> {5}  0 (0x 4/0//0/0/0)
    \  0x20 -> {5}  0 (0x 4/0//0/0/0)
    4 0x34 -> {3}
{3} perms: none
    0x0 -> {6}
{6} perms: none
    1 0x31 -> {5}  0 (0x 4/0//0/0/0)

-D dfa-compressed-states
{1} <== priority (allow/deny/prompt/audit/quiet)
{2 == {5}} 0 (0x 4/0//0/0/0)

{1} perms: none
    0x2 -> {2 == {5}}  0 (0x 4/0//0/0/0)
    0x4 -> {2 == {5}}  0 (0x 4/0//0/0/0)
    \a 0x7 -> {2 == {5}}  0 (0x 4/0//0/0/0)
    \t 0x9 -> {2 == {5}}  0 (0x 4/0//0/0/0)
    \n 0xa -> {2 == {5}}  0 (0x 4/0//0/0/0)
    \  0x20 -> {2 == {5}}  0 (0x 4/0//0/0/0)
    4 0x34 -> {3}
{3} perms: none
    0x0 -> {4 == {6}}
{4 == {6}} perms: none
    1 0x31 -> {2 == {5}}  0 (0x 4/0//0/0/0)

Signed-off-by: John Johansen <john.johansen@canonical.com>
(cherry picked from commit 50452e1147)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:16:06 -08:00
John Johansen
eec48458ac Merge .gitlab-ci.yml: run pipeline in merge requests too
Hopefully this will allow us to run pipelines in regular branches but
also run it on merge requests on the parent project. This is needed
for users that are not verified by Gitlab.
https://docs.gitlab.com/ee/ci/pipelines/merge_request_pipelines.html#run-pipelines-in-the-parent-project

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1346
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit bb460ba467)
2025-02-11 14:12:30 -08:00
Georgia Garcia
58250a5ca3 .gitlab-ci.yml: run pipeline in merge requests too
Hopefully this will allow us to run pipelines in regular branches but
also run it on merge requests on the parent project. This is needed
for users that are not verified by Gitlab.
https://docs.gitlab.com/ee/ci/pipelines/merge_request_pipelines.html#run-pipelines-in-the-parent-project

(cherry picked from commit 248e5673ef)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-02-11 14:12:30 -08:00
Christian Boltz
e475b3e2f2 Merge Fix leading slash var typo in apparmor.d var example
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1527
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>


(cherry-picked from commit b4caf8782c)

41be573b Fix leading slash var typo in apparmor.d var example

Co-authored-by: John Johansen <john@jjmx.net>
2025-02-07 20:22:26 +00:00
Georgia Garcia
9a3f7a1f6e Merge utils: test: account for last cmd format change in test-aa-notify
The "last" command, which was supplied by util-linux in older Ubuntu
versions, is now supplied by wtmpdb in Oracular and Plucky. Unfortunately,
this changed the output format and broke our column based parsing.

While the wtmpdb upstream has added json support at
https://github.com/thkukuk/wtmpdb/issues/20, we cannot use it because
we need to support systems that do not have this new feature added.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1508
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>


(cherry picked from commit 3b7ee81f04)

afd6aa05 utils: test: account for last cmd format change in test-aa-notify

Co-authored-by: John Johansen <john@jjmx.net>
2025-01-28 12:35:01 +00:00
Georgia Garcia
728145f3fb Merge utils: look for 'file' class when parsing logs
Since kernel commit 8c4b785a86be the class is included in the log,
making it possible to check which class an event belongs to. This
fixes cases where the logparser was not able to distinguish between
network and file operations.

This issue does not manifest in releases up to and including
apparmor-4.0, because we did not process audit logs then.

Fixes: https://gitlab.com/apparmor/apparmor/-/issues/478
Reported-by: vyomydv <vyom.yadav@canonical.com>
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>

This patch should be cherry-picked to apparmor-4.1

Closes #478
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1507
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>


(cherry picked from commit 5f06df3868)

af6dfe5b utils: look for 'file' class when parsing logs

Co-authored-by: Georgia Garcia <georgia.garcia@canonical.com>
2025-01-27 19:34:38 +00:00
Georgia Garcia
51325b3ab7 Merge Allow overrides and preservation of some environment variables in utils make check
Our Ubuntu packaging builds Python-enabled libapparmor copies in the directories `libapparmor/libapparmor.python[version_identifier]`. In order for the utils' `make check` to pick up the correct libapparmor during the Ubuntu build process, we need the ability to override its search path. This patch introduces a `LIBAPPARMOR_BASEDIR` variable to allow for that.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1497
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>


(cherry picked from commit 17a09d2987)

90143494 Allow overrides and preservation of some environment variables in utils make check

Co-authored-by: Georgia Garcia <georgia.garcia@canonical.com>
2025-01-23 19:12:17 +00:00
Georgia Garcia
f4a07a07c2 Merge utils: test: various fixes for utils testing in Ubuntu packaging
The first patch fixes a `test-aa-notify.py` `TypeError` when `APPARMOR_NOTIFY` and `__AA_CONFDIR` are both specified, which is something that was broken all this time.

The second patch ensures that `aa-notify` in the test suite is run using the same Python interpreter that the test suite itself is run with, which is necessary for testing the utils under different Pythons.

The third patch makes analogous modifications to the minitools tests that launch `aa-audit`, `aa-complain`, etc.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1498
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>


(cherry picked from commit 625a919bb8)

3365e492 utils: test: test-aa-notify: Ensure aanotify_bin is always a list
77cabf7d utils: test: use sys.executable when launching aa-notify in tests
e32c2673 utils: test: use sys.executable when launching minitools in tests

Co-authored-by: Georgia Garcia <georgia.garcia@canonical.com>
2025-01-23 19:07:44 +00:00
Ryan Lee
b59626a224 Merge regression tests: fix the overlayfs mv test failures
The file being moved from needs rw permissions and not just w permissions.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1488
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>

(cherry picked from commit a12004f96c)
2025-01-21 11:12:40 -08:00
Ryan Lee
d72fa8834c regression tests: fix the overlayfs mv test failures
The file being moved from needs rw permissions and not just w permissions

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 63c944a01a)
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-21 11:12:40 -08:00
Ryan Lee
f6c7899f36 Merge Add overlayfs regression tests
These tests exercise various common file operations on files in an overlayfs.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1461
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit cd4bb05f20)
2025-01-21 11:12:20 -08:00
Ryan Lee
eb89538cab Shellcheck pass over overlayfs.sh
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 1d3d48cc2a)
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-21 11:12:20 -08:00
Ryan Lee
049ad49ffb Extend overlayfs test with more file ops
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit b24a820e7a)
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-21 11:12:20 -08:00
Ryan Lee
150d81a705 Add more operations to the regression test complain binary
This extra functionality is to be used in a different regression test that reuses the binary

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 8212fa8be4)
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-21 11:12:20 -08:00
Ryan Lee
1567a2de16 Add the overlayfs regression test to task.yaml
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit e0127767fd)
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-21 11:12:20 -08:00
Ryan Lee
0f2509d74a Add the overlayfs regression test to the Makefile
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 1cb11f5a89)
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-21 11:12:20 -08:00
Ryan Lee
24e7b806cc Add a basic overlayfs regression test
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 2fdb5c799c)
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-21 11:12:20 -08:00
Christian Boltz
db93b6c639 Merge postfix-showq profile fix
Allow reading queue ID files from /var/spool/postfix/incoming/.

Similar to 3c2aae3.

Example error:

```
type=AVC msg=audit(1737094364.337:12023): apparmor="DENIED" operation="open" profile="postfix-showq" name="/var/spool/postfix/incoming/B7E4C12C784A" pid=17879 comm="showq" requested_mask="r" denied_mask="r" fsuid=91 ouid=91FSUID="postfix" OUID="postfix"
```

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1489
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Christian Boltz <apparmor@cboltz.de>


(cherry picked from commit 817d5eed1d)

ba765e0e postfix-showq profile fix

Co-authored-by: Christian Boltz <apparmor@cboltz.de>
2025-01-18 13:10:03 +00:00
Christian Boltz
4c849a9c9e Merge Add support for lastlog2 to get last login
lastlog2 is the 2038-safe replacement for wtmp, and in the meantime
became part of util-linux.

Adjust get_last_login_timestamp() to use the lastlog2 database
(/var/lib/lastlog/lastlog2.db) if it exists, and adjust
get_last_login_timestamp_lastlog2() to actually do that.

(If lastlog2.db doesn't exist, aa-notify will read wtmp as usual.)

Unfortunately lastlog2 doesn't have a way to get machine-readable output
(for example json), therefore - after trying and failing to parse the
lastlog2 output - directly read from lastlog2.db. Let's hope the format
never changes ;-)

Fixes: https://bugzilla.opensuse.org/show_bug.cgi?id=1228378

Fixes: https://bugzilla.opensuse.org/show_bug.cgi?id=1216660

Fixes: https://gitlab.com/apparmor/apparmor/-/issues/372

I propose this patch for 4.0 and master.

Closes #372
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1282
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Christian Boltz <apparmor@cboltz.de>


(cherry picked from commit 692e6850ba)

7d537efc Rename get_last_login_timestamp to get_last_login_timestamp_wtmp
371a9ff9 Add support for lastlog2 to get last login
45e4c27c Add support for lastlog2 to get last login

Co-authored-by: Christian Boltz <apparmor@cboltz.de>
2025-01-14 19:13:15 +00:00
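The fallback described above — reading lastlog2.db directly after failing to parse the lastlog2 text output — can be sketched with Python's sqlite3 module. The table and column names below (`Lastlog2`, `Name`, `Time`) are assumptions based on the util-linux lastlog2 schema, not something stated in this log:

```python
import sqlite3


def get_last_login_timestamp_lastlog2(db_path, user):
    """Return the last-login epoch for ``user`` from a lastlog2 database.

    Assumes the util-linux lastlog2 schema: a ``Lastlog2`` table with
    ``Name`` (text) and ``Time`` (integer epoch) columns.  Returns None
    if the user has no recorded login.
    """
    con = sqlite3.connect(db_path)
    try:
        row = con.execute(
            "SELECT Time FROM Lastlog2 WHERE Name = ?", (user,)
        ).fetchone()
    finally:
        con.close()
    return row[0] if row else None
```

If the database file does not exist, aa-notify would fall back to reading wtmp as before.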
Christian Boltz
6de66daba4 Merge Support unloading profiles in kill and prompt mode
... in aa-teardown (actually everything that uses rc.apparmor.functions)
and aa-remove-unknown.

Fixes: https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/2093797

I propose this fix for 3.0..master, since the apparmor.d manpage in all these branches mentions the `kill` flag.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1484
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Approved-by: Ryan Lee <rlee287@yahoo.com>
Merged-by: Christian Boltz <apparmor@cboltz.de>


(cherry picked from commit 9629bc8b6f)

1c2d79de Support unloading profiles in kill and prompt mode

Co-authored-by: Christian Boltz <apparmor@cboltz.de>
2025-01-14 18:24:57 +00:00
Ryan Lee
2e41f447d2 Merge Make libaalogparse fully reentrant by removing its globals
Tested by using Valgrind's Helgrind and DRD against the reentrancy test that I wrote: they both report no errors with the changes while reporting many errors with the old versions.

Commits "Inline _parse_yacc in libaalogparse" and "Make parse_record take a const char pointer since it never modified str anyways" have a tiny potential to be backwards-incompatible changes: I have justified why they shouldn't be in the commit messages, but it's worth looking over in case I was mistaken and we need to back those out.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1322
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit 37cac653d1)
2025-01-09 11:16:07 -08:00
Ryan Lee
97051875d0 Remove remnants of comments regarding old apparmor log format
The entry AA_RECORD_SYNTAX_V1 is only there for API compatibility reasons.
If we wanted to remove it, we could just renumber the other two entries
to preserve ABI compatibility. However, it seems easier to just delete the
entry if we ever break backcompat with a libapparmor2.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 79670745d6)
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-09 11:16:07 -08:00
Ryan Lee
43c759afc6 Make parse_record take a const char pointer since it never modified str anyways
This shouldn't be a breaking change because it's fine to pass a
non-const pointer to a function taking a const pointer, but not the other way round

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 78f138c37f)
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-09 11:16:07 -08:00
Ryan Lee
a6b9fc49d2 Add an aalogparse reentrancy test for simultaneous log parsing from different threads
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 66e1439293)
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-09 11:16:07 -08:00
Ryan Lee
fcbfaa29b2 Inline _parse_yacc in libaalogparse
This function was only ever called once inside libaalogparse.c, and it looks
simple enough to not need to be split out into its own helper function.

As this function was never exposed publicly in installed header files, removing it
is not a breaking API change.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 6a55fb5613)
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-09 11:16:07 -08:00
Ryan Lee
08cd2271ed Remove manual YYDEBUG define in grammar.y
The generated grammar.h already sets the correct YYDEBUG value regardless
of whether parse.trace is defined

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 7ff045583d)
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-09 11:16:07 -08:00
Ryan Lee
2571d5bbc0 Also make the bison parser of libaalogparse fully reentrant
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit dba7669443)
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-09 11:16:07 -08:00
Ryan Lee
dde841575e Silence -Wyacc because we rely on GNU bison extensions to yacc
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit c5c7565357)
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-09 11:16:07 -08:00
Ryan Lee
4b290a922a Make libaalogparse lexer fully reentrant by removing its globals
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit e0504e697a)
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
2025-01-09 11:16:07 -08:00
John Johansen
8d9a061a45 Prepare for 4.1.0~beta3 release
- bump version

Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 02:47:26 -08:00
John Johansen
94ea0f00b1 Merge parser: convert uint to unsigned int
As reported in https://gitlab.com/apparmor/apparmor/-/merge_requests/1475
uint requires the inclusion of sys/types.h for use in musl libc.
Including that would be fine but since it is only used for the
cast for the owner type comparison, just convert to use a more
standard type.

Reported-by: @fossd <fossdd@pwned.life>
Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1478
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit cd8b75abc0)
2025-01-09 02:46:06 -08:00
John Johansen
99e919c288 parser: convert uint to unsigned int
As reported in https://gitlab.com/apparmor/apparmor/-/merge_requests/1475
uint requires the inclusion of sys/types.h for use in musl libc.
Including that would be fine but since it is only used for the
cast for the owner type comparison, just convert to use a more
standard type.

Reported-by: @fossd <fossdd@pwned.life>
Signed-off-by: John Johansen <john.johansen@canonical.com>
(cherry picked from commit ff03702fde)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 02:46:06 -08:00
John Johansen
d805b5c3f8 Merge cupsd: Add /etc/paperspecs and convert to @etc_ro/rw
I had this message in my log

```
Dez 30 08:14:46 kernel: audit: type=1400 audit(1735542886.787:307): apparmor="DENIED" operation="open" class="file" profile="/usr/sbin/cupsd" name="/etc/paperspecs" pid=317509 comm="cupsd" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
```

If the second commit is bad, I can drop it.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1472
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit e5a960a685)
2025-01-09 02:26:24 -08:00
Jörg Sommer
2aa7fe4659 cupsd: convert profile to @etc_ro/rw
While cups itself writes to /etc, the others require only read-only access
and might therefore live in /usr/etc.

(cherry picked from commit c3af6228fd)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 02:26:24 -08:00
Jörg Sommer
c456101ebb cupsd: Add /etc/paperspecs read access
Cups uses libpaper which accesses /etc/paperspecs.

ce42216e2e/lib/libpaper.c.in.in (L419)
(cherry picked from commit 97d7fa3f5f)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 02:26:24 -08:00
John Johansen
9875ba19ef Merge Allow write access to /run/user/*/dconf/user
Gtk applications like Firefox request write access to the file
`/run/user/1000/dconf/user`. The code in `dconf_shm_open` opens the file
with `O_RDWR | O_CREAT`.

4057f8c84f/shm/dconf-shm.c (L68)

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1471
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit 0eca26c6c2)
2025-01-09 02:26:07 -08:00
Jörg Sommer
ab15e29654 Allow write access to /run/user/*/dconf/user
Gtk applications like Firefox request write access to the file
`/run/user/1000/dconf/user`. The code in `dconf_shm_open` opens the file
with `O_RDWR | O_CREAT`.

4057f8c84f/shm/dconf-shm.c (L68)
(cherry picked from commit 318fb30446)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 02:26:07 -08:00
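The access pattern the rule covers — `dconf_shm_open` opening the shm file read-write and creating it if absent — is why plain read access is not enough. A Python illustration of the same `O_RDWR | O_CREAT` combination (the C code in dconf is the authoritative source):

```python
import os


def open_rw_create(path, mode=0o600):
    """Open ``path`` for read/write, creating it if it does not exist.

    This is the flag combination (O_RDWR | O_CREAT) used by
    dconf_shm_open, which is why the profile needs 'w' access to
    /run/user/*/dconf/user and not just 'r'.
    """
    return os.open(path, os.O_RDWR | os.O_CREAT, mode)
```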
John Johansen
320a2a5155 Merge parser: fix priority for file rules.
Fix priority for file rules, and the ability to dump the dfa at different stages, and update and fix the equality tests.

This in particular adds the ability to better debug the equality tests. Instead of just piping the parser output into the hash, it creates a tmp dir and drops the binary files there so they can be manually examined. It adds new options, particularly the -r option, which makes the tests exit on first failure so it is easier to isolate and examine a failure.

Eg.
```
./equality.sh -r -d -v
Equality Tests:
................................................................................................................................................................................................................................
Binary inequality 'priority=-1'x'priority=-1' change_hat rules automatically inserted
FAIL: Hash values match
parser: ./../apparmor_parser -QKSq --features-file=./features_files/features.all
known-good (ee4f926922ecd341f1389a79dd155879) == profile-under-test (ee4f926922ecd341f1389a79dd155879) for the following profiles:
known-good         /t { priority=-1 owner /proc/[0-9]*/attr/{apparmor/,}current a, ^test { priority=-1 owner /proc/[0-9]*/attr/{apparmor/,}current a, /f r, }}
profile-under-test /t { priority=-1 owner /proc/[0-9]*/attr/{apparmor/,}current w, ^test { priority=-1 owner /proc/[0-9]*/attr/{apparmor/,}current w, /f r, }}

  files retained in "/tmp/eq.3240859-deHu10/"
```

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1455
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit 40e9b2a961)
2025-01-09 01:44:18 -08:00
John Johansen
00dc6794f5 parser: equality tests: convert to using sha256sum for the hashes
There is a general industry-wide effort to move off of md5 and even
sha1 (see recent kernel changes). While in this particular use case it
doesn't make a difference (besides slightly lowering the chance of a
collision), switch to sha256sum to make sure our code doesn't depend on
tools that are deprecated and slated for removal.

Signed-off-by: John Johansen <john.johansen@canonical.com>
(cherry picked from commit 027b508da8)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:44:18 -08:00
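The equality tests reduce to comparing digests of the compiled policy blobs, so the md5sum → sha256sum switch only changes the digest function. A minimal sketch, with hashlib standing in for the external *sum tools:

```python
import hashlib


def policy_digest(compiled: bytes) -> str:
    """Hex digest used to compare two compiled profiles for equality."""
    return hashlib.sha256(compiled).hexdigest()


def binary_equality(blob_a: bytes, blob_b: bytes) -> bool:
    """Two compilations are 'binary equal' iff their digests match."""
    return policy_digest(blob_a) == policy_digest(blob_b)
```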
John Johansen
958a77a2db parser: equality tests: fix r carve out tests
Similar to the deny x permission tests, the tests that test carving
out r permissions need to be updated to be conditional on what
priority is being used on the rule.

Signed-off-by: John Johansen <john.johansen@canonical.com>
(cherry picked from commit bf7b80c478)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:44:18 -08:00
John Johansen
b4aa2cfde4 parser: equality tests: update deny x perm carve out test
With priority rules, deny does not carve out permissions from the
higher priority rule. Technically it doesn't from lower priority either
as it completely overrides them, but that case already results in
an inequality so does not cause the tests to fail.

Signed-off-by: John Johansen <john.johansen@canonical.com>
(cherry picked from commit 25f16b239d)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:44:18 -08:00
John Johansen
86273b746a parser: equality tests: fix cx specified profile transition
cx rules using a specified profile transition, may be emulated by
using px and a hierarchical profile name. That is

  cx -> b

may be transformed into

  px -> profile//b

which will generate an xtable entry of

  profile//b

which means the previous patch using

  pivot_root -> b,

to reliably add b to the xtable will not cover this case.

transition to using two pivot_root rules to provide the xtable entries
  pivot_root /a -> b,
  pivot_root /c -> /t//b,

the paths /a and /c are irrelevant as long as they don't have an
overlap with the generic globbing expression in the test. Two table
entries will be generated. We guarantee no overlap by converting the

  /** to /f**

Also the xtable reserving rules are moved to the end of the profile so
the table order can be reliably created. A follow-on MR around xtable
improvements should add reliability to xtable order.

Signed-off-by: John Johansen <john.johansen@canonical.com>
(cherry picked from commit 369029dc07)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:44:18 -08:00
John Johansen
6a26d1f58c parser: equality tests: fix equality failure due to xtable
exec rules that specify an specific target profile generate an entry
in the xtable. The test entries containing " -> b" are an example of
this.

Currently the parser allocates the xtable entry before priorities are
applied in the backend, or minimization is done. Furthermore, the
parser does not ref count the xtable entry to know when it is no
longer referenced.

The equality tests generate rules that are designed to completely
override a lower priority rule and remove it. E.g.

  /t { priority=1 /* ux, /f px -> b, }

and then compares the generated profile to the functionally equivalent
profile eg.

  /t { priority=1 /* ux, }

To verify the overridden rule has been completely removed.
Unfortunately the compilation is not removing the unused xtable entry
for the specified transition, causing the equality comparison to fail.

Ideally the parser should be fixed so unused xtable entries are removed,
but that should be done in a different MR, and have its own test.

To fix the current tests, add another rule that adds an xtable entry
to the same target that cannot be overridden by the x rule, using
pivot_root. The parser will dedup the xtable entry, resulting in the
known and test profiles both having the same xtable. So the test will
pass and meet the original goal of verifying the x rule being overridden
and eliminated.

Signed-off-by: John Johansen <john.johansen@canonical.com>
(cherry picked from commit 84650beb2f)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:44:18 -08:00
John Johansen
17d3545d07 parser: equality tests: rework and add debug features
Failed equality tests can be hard to debug. The profiles aren't always
enough to figure out what is going on. Add several options that will
help in debugging, and developing new tests.

Add switches and arg parsing.

Add the ability to run tests individually

Add a -r flag to allow retaining the test and output
similar to the regression tests, so the exact output from the
tests can be examined.

Add a -d flag to dump dfa build information.

Allow overriding the parser, features, and description for a given
test run.

Signed-off-by: John Johansen <john.johansen@canonical.com>
(cherry picked from commit cca842b897)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:44:18 -08:00
John Johansen
640c3dde26 parser: equality tests: wrap test run in function
In preparation for some additional abilities wrap the current tests in
a function.

Signed-off-by: John Johansen <john.johansen@canonical.com>
(cherry picked from commit 05ddc61246)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:44:18 -08:00
John Johansen
380a5c8a72 parser: equality tests: consistently dump error output to stderr
printf of failure/error info should be going to stderr. Unfortunately
the test has a mix of 2>&1 and 1>&2. Having a mix is just wrong; we
could standardize on either, but since the info is error info, 1>&2
seems to be the better choice.

Signed-off-by: John Johansen <john.johansen@canonical.com>
(cherry picked from commit 31e60baab2)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:44:18 -08:00
John Johansen
f26f577742 parser: equality tests: fix failing overlapping x rule tests
The test was passing because of the bug where the file priority was
always zero, resulting in the priority rule always being correctly
combined with the specific match x rule instead of overriding it.

Signed-off-by: John Johansen <john.johansen@canonical.com>
(cherry picked from commit 57c57f198c)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:44:18 -08:00
John Johansen
2700e58755 parser: equality tests: fix change_hat priority test
The test was passing because of the bug where the file priority was
always zero: the supplied rule always had the same priority as the
implied rule, resulting in binary_equality always passing even though
the specified priority should have resulted in a failure.

Fix this by checking whether the priority equals that of the implied
rule; otherwise it should result in an inequality.

Signed-off-by: John Johansen <john.johansen@canonical.com>
(cherry picked from commit 4b410b67f1)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:44:18 -08:00
John Johansen
427a895288 parser: equality tests: output parser, config and features info
When there is a failure output the exact call info used to invoke the
parser. To facilitate manually recreating the test.

Signed-off-by: John Johansen <john.johansen@canonical.com>
(cherry picked from commit d275dfdd42)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:44:18 -08:00
John Johansen
dc0a9dc599 parser: equality tests: convert xequality tests to equality
With the file priority fix the xequality (expected equal but known
failure) tests are now passing. So convert them to regular equality
tests.

Signed-off-by: John Johansen <john.johansen@canonical.com>
(cherry picked from commit fcee32a37e)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:44:18 -08:00
John Johansen
74219b34dc parser: add some new dfa dump options.
The dfa goes through several stages during the build. Allow dumping it
at the various stages instead of only at the end.

Signed-off-by: John Johansen <john.johansen@canonical.com>
(cherry picked from commit 5d2a38e816)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:44:18 -08:00
John Johansen
5aaa45e4ce parser: fix priority for file rules.
File rules could drop priority info when a rule matched another rule
that was the same except for having a different priority. For now,
fix this by treating them as different rules.

The priority was also dropped when add_prefix was used to
add the priority during the parse, resulting in file rules always
getting a default priority of 0.

Signed-off-by: John Johansen <john.johansen@canonical.com>
(cherry picked from commit 9d5b86bc9d)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:44:18 -08:00
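The dedup hazard described above can be modeled schematically: if the merge key ignores priority, two otherwise-identical rules collapse and the priority info is lost; keying on (match, priority) keeps them distinct. A hypothetical helper, not the parser's actual C++ rule-merge code:

```python
def merge_file_rules(rules):
    """Merge duplicate file rules by OR-ing their permission bits.

    Keys on (glob, priority), so two otherwise-identical rules with
    different priorities stay distinct -- the fix described above.
    Each rule is a (glob, perms, priority) tuple; a schematic model
    only.
    """
    merged = {}
    for glob, perms, priority in rules:
        key = (glob, priority)
        merged[key] = merged.get(key, 0) | perms
    return merged
```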
John Johansen
0c02c8afe1 Merge Allow python cache under the @{HOME}/.cache/ dir
Starting with Python 3.8, you can use the PYTHONPYCACHEPREFIX environment
variable to define a cache directory for Python [1]. I think most people would set
this dir to @{HOME}/.cache/python/ , so the python abstraction should allow
writing to this location.

[1]: https://docs.python.org/3/using/cmdline.html#envvar-PYTHONPYCACHEPREFIX

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1467
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit 8c799f4eec)
2025-01-09 01:43:42 -08:00
Mikhail Morfikov
70ed8d6f38 Allow python cache under the @{HOME}/.cache/ dir
Starting with Python 3.8, you can use the PYTHONPYCACHEPREFIX environment
variable to define a cache directory for Python [1]. I think most people would set
this dir to @{HOME}/.cache/python/ , so the python abstraction should allow
writing to this location.

[1]: https://docs.python.org/3/using/cmdline.html#envvar-PYTHONPYCACHEPREFIX

(cherry picked from commit 03b5a29b05)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:43:42 -08:00
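PYTHONPYCACHEPREFIX feeds `sys.pycache_prefix` in the interpreter (Python 3.8+), which is a quick way to confirm the cache location the abstraction must allow; the `~/.cache/python` path is the common choice assumed in the commit message:

```python
import os
import subprocess
import sys


def pycache_prefix_for(prefix):
    """Report the __pycache__ location a fresh interpreter would use
    when PYTHONPYCACHEPREFIX is set to ``prefix`` (Python 3.8+)."""
    env = dict(os.environ, PYTHONPYCACHEPREFIX=prefix)
    out = subprocess.run(
        [sys.executable, "-c", "import sys; print(sys.pycache_prefix)"],
        env=env, capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()
```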
John Johansen
5751614928 Merge regression tests: make loop device size more generous
Depending on the system, copying echo to the loop device fails because the echo binary is too large.
Especially on systems where echo is just a symlink to coreutils (e.g. busybox), as opposed to echo being its own binary, 16k is just not enough.
2M seems fine on my system, but this might yet need a higher value depending on what coreutils other people actually run.

The crash in question:
```
cp: error writing '/tmp/sdtest.3937422-31490-Bxvi6g/mount_target/echo': No space left on device
Fatal Error (file_unbindable_mount): Unexpected shell error. Run with -x to debug
rm: cannot remove '/tmp/sdtest.3937422-31490-Bxvi6g/mount_target': Device or resource busy
```

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1469
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit 8e431ebcd9)
2025-01-09 01:43:11 -08:00
Grimmauld
73842b54f7 regression tests: make loop device size more generous
Depending on the system, copying echo to the loop device fails because the echo binary is too large.
Especially on systems where echo is just a symlink to coreutils (e.g. busybox), 16k is just not enough.
2M seems fine on my system, but this might yet need a higher value depending on what coreutils other people actually run.
The actual loop device needs to be larger to properly fit the allocated file size. Testing shows 4M is sufficient, but this is basically arbitrary.

(cherry picked from commit 1cc2a3bd86)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:43:11 -08:00
John Johansen
54f1cf8dca Merge Write a regression test for mediating file access in private mounts
This test, as is, emits an execname warning which is due to a bug in the `prologue.inc` infrastructure (see !1450 for a fix to this issue).

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1448
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit ba60bfff85)
2025-01-09 01:42:41 -08:00
Ryan Lee
2de3b84de2 Shellcheck fix pass over file_unbindable_mount test
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit fa58d3611a)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:42:41 -08:00
Ryan Lee
9fc848be81 Add file_unbindable_mount to regression task.yaml
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit c768a7dc79)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:42:41 -08:00
Ryan Lee
fefbf514f7 Add file_unbindable_mount to regression test Makefile
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 049b35dff0)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:42:41 -08:00
Ryan Lee
ae0c588acb Write a regression test for mediating file access in unbindable mounts
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit f249c6d58f)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:42:41 -08:00
John Johansen
0af8c5e26f Merge aa-status: fix json generation
- previously, aa-status --json --show profiles would return non-standard json
- adding the --pretty flag would crash completely
- closes #470

Things done:
- removed trailing ", " in json generation
- generate json separator (", ") for each new json field
  (profiles/processes) after the header if json is enabled

Tested on NixOS and apparmor 4.0.3 base, but should work on any version the patch applies on.

Closes #470
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1451
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit c489631770)
2025-01-09 01:42:18 -08:00
Grimmauld
f4deae6759 aa-status: fix json output with --count flag
(cherry picked from commit 9967ba9873)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:42:18 -08:00
Grimmauld
0691cfcf3c aa-status: fix json generation
- previously, aa-status --json --show profiles would return non-standard json
- adding the --pretty flag would crash completely
- closes #470

Things done:
- removed trailing ", " in json generation
- generate json separator (", ") for each new json field
  (profiles/processes) after the header if json is enabled

Tested on NixOS and apparmor 4.0.3 base, but should work on any version the patch applies on.

(cherry picked from commit 4f006a660c)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:42:18 -08:00
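The bug pattern being fixed — emitting ", " after every field rather than between fields, which leaves a trailing separator and non-standard JSON — and the fix can be sketched like this (simplified; aa-status itself emits its JSON incrementally in C):

```python
def emit_json_fields(fields):
    """Build a JSON object body from (name, rendered_value) pairs.

    Emits the ", " separator *before* each field after the first,
    instead of after every field -- so no trailing ", " is left
    dangling before the closing brace.
    """
    out = []
    first = True
    for name, rendered in fields:
        if not first:
            out.append(", ")
        first = False
        out.append('"%s": %s' % (name, rendered))
    return "{" + "".join(out) + "}"
```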
John Johansen
760ddaeb80 Merge fixes on the testing infrastructure
This MR is meant to resolve warnings such as "Warning: execname '/home/username/Documents/apparmor/tests/regression/apparmor/file_unbindable_mount': no such file or directory" when running tests like the one in the current version of !1448.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1450
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit 59957aa1d8)
2025-01-09 01:42:03 -08:00
Georgia Garcia
4e46df38cf tests: fix profile name when wrapper is specified
When settest was called with two parameters, one for the test name and
the other for the test wrapper/binary, the profile created with
genprofile would show the test name, causing an error if the file
didn't exist.

Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
(cherry picked from commit b4adff2ce0)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:42:03 -08:00
Georgia Garcia
e9858c0c43 tests: add option to append a profile to a profile already generated
Some of the tests using the --stdin option of mkprofile.pl are adding
more than one profile at a time. Whenever a profile is created in the
test, its name is added to the file profile.names so the test
infrastructure can tell whether the profile is loaded or removed
appropriately. The issue is that the name of the second profile
created by --stdin is not added, so these checks are not applied.

This patch adds the option of appending a second profile (not rules).
The option --append was used instead of a short -A because the short
options are arguments of mkprofile.pl, which --append is not.

Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
(cherry picked from commit 0307619ed9)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:42:03 -08:00
Georgia Garcia
0e59b99623 tests: remove outdated restriction on image name specification
Due to how the tests were implemented in the past, permissions could
be passed along with the image name, and the permission part would be
discarded. The issue is that permissions are usually separated by ':',
but namespaces also contain ':', which would cause a conflict.

Since permissions are no longer passed as part of the image name,
remove that description so profile names in namespaces can be
supported.

Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
(cherry picked from commit 9cc40e2dca)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:42:03 -08:00
John Johansen
9a2f0ff702 Merge profiles: transmission-gtk needs attach_disconnected
From LP: #2085377, when using ip netns to torrent traffic through a
VPN, attach_disconnected is needed by the policy because ip netns sets
up a mount namespace.

Fixes: https://bugs.launchpad.net/bugs/2085377
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1395
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit 50f260df51)
2025-01-09 01:41:36 -08:00
Georgia Garcia
c153a6916f profiles: transmission-gtk needs attach_disconnected
From LP: #2085377, when using ip netns to torrent traffic through a
VPN, attach_disconnected is needed by the policy because ip netns sets
up a mount namespace.

Fixes: https://bugs.launchpad.net/bugs/2085377
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
(cherry picked from commit f9edc7d4c1)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:41:35 -08:00
John Johansen
2316ad42d4 Merge Allow make-* flags with remount operations
While the mount syscall documentation disallows this, the kernel silently
ignores make-* flags when doing a remount, and real applications were
passing this conflicting set of flags. Because changing the kernel to
reject this combination would break userspace, we should allow them
instead.

For an example: see https://bugs.launchpad.net/apparmor/+bug/2091424.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1466
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit 3ed5adb665)
2025-01-09 01:41:22 -08:00
Ryan Lee
e46ca918a2 Add a regression test for allowing rprivate with conflicting options
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 83270fcf68)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:41:22 -08:00
Ryan Lee
610d383de2 Allow make-* flags with remount operations
While the mount syscall documentation disallows this, the kernel silently
ignores make-* flags when doing a remount, and real applications were
passing this conflicting set of flags. Because changing the kernel to
reject this combination would break userspace, we should allow them
instead.

For an example: see https://bugs.launchpad.net/apparmor/+bug/2091424.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 52babe8054)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:41:22 -08:00
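The kernel behavior being matched — propagation (make-*) flags silently ignored when MS_REMOUNT is present, rather than rejected — can be modeled as below. Flag values are from `<linux/mount.h>`; this is a sketch of the accept-and-ignore rule, not the parser's actual code:

```python
# Flag values as defined in <linux/mount.h>
MS_REMOUNT = 1 << 5
MS_UNBINDABLE = 1 << 17
MS_PRIVATE = 1 << 18
MS_SLAVE = 1 << 19
MS_SHARED = 1 << 20
PROPAGATION_FLAGS = MS_UNBINDABLE | MS_PRIVATE | MS_SLAVE | MS_SHARED


def effective_remount_flags(flags):
    """Mirror the kernel: on remount, propagation (make-*) flags are
    silently dropped instead of rejected, so policy should accept the
    combination rather than treat it as a conflict."""
    if flags & MS_REMOUNT:
        return flags & ~PROPAGATION_FLAGS
    return flags
```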
John Johansen
5ae6f202f8 Merge Add separator between mount flags in dump_flags
The previous code would concatenate all of them together without spacing.
While dump_flags and the corresponding operator<< function aren't currently used,
this will help for when dump_flags is used to debug parser problems.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1465
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>

(cherry picked from commit 67ee5f8b39)
2025-01-09 01:40:41 -08:00
Ryan Lee
d96d69a60c Add separator between mount flags in dump_flags
The previous code would concatenate all of them together without spacing.
While dump_flags and the corresponding operator<< function aren't currently used,
this will help for when dump_flags is used to debug parser problems.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 96718ea4d1)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:40:41 -08:00
John Johansen
164526d16a Merge Update fs type comment in swap regression test
As per https://gitlab.com/apparmor/apparmor/-/merge_requests/1463#note_2259888640: this really should have been a part of !1463, except that cboltz only pointed this out after the MR was already merged. Better late than never, nevertheless.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1464
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Christian Boltz <apparmor@cboltz.de>

(cherry picked from commit f2c398405b)
2025-01-09 01:39:20 -08:00
Ryan Lee
5267a7eb14 Update fs type comment in swap regression test
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 5cd3362a81)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:39:20 -08:00
John Johansen
fd24c230c9 Merge Fix swap regression test on btrfs
As per !1462 it turns out that the swap regression test on btrfs also needs special casing in order to work properly. This is an analogous patch to check for btrfs.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1463
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>

(cherry picked from commit 6d7b5df947)
2025-01-09 01:39:00 -08:00
Ryan Lee
14933dc768 Fix swap regression test on btrfs
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 90c7af69c5)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:39:00 -08:00
John Johansen
9bf91bbe40 Merge fix swap test on zfs file system
Swap on ZFS is *weird*. Getting it working needs some special casing, see e.g. https://askubuntu.com/questions/1198903/can-not-use-swap-file-on-zfs-files-with-holes

Currently, the swap regression test fails on my system (with /tmp in zfs):
```bash
tests/regression/apparmor ❯ ./swap.sh
Error: swap failed. Test 'SWAPON (unconfined)' was expected to 'pass'. Reason for failure 'FAIL: swapon /tmp/sdtest.872368-19048-kN4FN2/swapfile failed - Invalid argument'
Error: swap failed. Test 'SWAPOFF (unconfined)' was expected to 'pass'. Reason for failure 'FAIL: swapoff /tmp/sdtest.872368-19048-kN4FN2/swapfile failed - Invalid argument'
swapon: /tmp/sdtest.872368-19048-kN4FN2/swapfile: skipping - it appears to have holes.
Fatal Error (swap): Unexpected shell error. Run with -x to debug
```

However, just doing a file mount does make the test work on zfs, similar to how it is done with tmpfs. This means we don't need any special-casing for zfs beyond what is already there for working around (similar) tmpfs limitations.

Also, while researching this, it is possible a similar patch is needed for btrfs, but I currently don't have an easy way to test that.
This is non-breaking for anyone *not* using zfs, and it is currently broken with zfs anyway.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1462
Approved-by: Ryan Lee <rlee287@yahoo.com>
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>

(cherry picked from commit e8f1ac4791)
2025-01-09 01:38:43 -08:00
Grimmauld
8597b04aac fix swap test on zfs file system
(cherry picked from commit 9a1b538298)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:38:43 -08:00
John Johansen
28537ff8ec Merge limit buildpath.py setuptools version check to the relevant bits
Previously, this check would fail if the setuptools version contained non-integers.
On my system, that is the case: `setuptools.__version__` is `'75.1.0.post0'`
I believe it is entirely fair to just check the relevant bits and refuse to continue if those cannot be checked properly.
Having some extra slug on the version should not immediately cause issues (e.g. the `post0` here, or slugs like `beta`, `alpha` and the likes).
Probably only very few systems are running setuptools with weird version info, but supporting this is a simple one-line change, so I figured I might as well open an MR.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1460
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Christian Boltz <apparmor@cboltz.de>

(cherry picked from commit b3de4ef022)
2025-01-09 01:38:28 -08:00
Grimmauld
f90a041921 limit buildpath.py setuptools version check to the relevant bits
Previously, this check would fail if the setuptools version contained non-integers.
On my system, that is the case: `setuptools.__version__` is `'75.1.0.post0'`
I believe it is entirely fair to just check the relevant bits and refuse to continue if those cannot be checked properly.
But having something extra on the version should not immediately cause issues (e.g. the `post0` here, or slugs like `beta`, `alpha` and the likes).
Probably only very few systems are running setuptools with weird version info, but supporting this doesn't cost much, I believe.

(cherry picked from commit 3302ae98e4)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:38:27 -08:00
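As an illustration of the idea in the commit above (a shell rendition only; the actual change lives in `buildpath.py` and is written in Python, and the 62.1 minimum used here is made up), comparing just the leading numeric components tolerates suffixes such as `post0`:

```shell
# Illustrative sketch: extract only the leading numeric components of a
# setuptools-style version string, so suffixes like "post0" are ignored.
ver="75.1.0.post0"
major=${ver%%.*}      # text before the first dot  -> 75
rest=${ver#*.}        # text after the first dot   -> 1.0.post0
minor=${rest%%.*}     # text before the next dot   -> 1

echo "major=$major minor=$minor"

# hypothetical minimum version 62.1, for illustration only
if [ "$major" -gt 62 ] || { [ "$major" -eq 62 ] && [ "$minor" -ge 1 ]; }; then
    echo "setuptools is new enough"
fi
```

Anything after the leading numeric components is simply never consulted, so `75.1.0.post0`, `75.1.0beta` and plain `75.1` all compare the same way.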
John Johansen
ae0d1aafda Merge postfix-smtp profile fix
Allow locking for /var/spool/postfix/pid/unix.relay.

Example log entry: `type=AVC msg=audit(1733851239.685:8882): apparmor="DENIED" operation="file_lock" profile="postfix-smtp" name="/var/spool/postfix/pid/unix.relay" pid=14222 comm="smtp" requested_mask="k" denied_mask="k" fsuid=91 ouid=0FSUID="postfix" OUID="root"`

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1459
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>

(cherry picked from commit 8a6eb170e1)
2025-01-09 01:37:32 -08:00
pyllyukko
403b3cad10 postfix-smtp profile fix
Allow locking for /var/spool/postfix/pid/unix.relay.

(cherry picked from commit 76dcf46d4f)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:37:32 -08:00
John Johansen
851f6013f6 Merge Use MS_SYNCHRONOUS instead of MS_SYNC
MS_SYNC is a flag for msync(2) while MS_SYNCHRONOUS is a flag for mount(2).
The header used to define MS_SYNC but IMO this is confusing since that's an
unrelated flag.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1458
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit 60f1b55ab5)
2025-01-09 01:36:14 -08:00
Zygmunt Krynicki
0838496c32 Use MS_SYNCHRONOUS instead of MS_SYNC
MS_SYNC is a flag for msync(2) while MS_SYNCHRONOUS is a flag for mount(2).
The header used to define MS_SYNC but IMO this is confusing since that's an
unrelated flag.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit d164e877f5)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:36:14 -08:00
John Johansen
191f01b749 Merge Allow spread to use locally-provided kernel
By placing a bzImage into the top level of the AppArmor git repository one can
instruct spread and image-garden to use that image instead of booting
traditionally with an EFI / full disk image pair.

In addition, make error handling in qemu more robust, so failures are both
surfaced and do not cause endless attempts to allocate.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1452
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit 239ae21b69)
2025-01-09 01:35:52 -08:00
Zygmunt Krynicki
788d29aacb Allow spread to use locally-provided kernel
By placing a bzImage into the top level of the AppArmor git repository one can
instruct spread and image-garden to use that image instead of booting
traditionally with an EFI / full disk image pair.

In addition, make error handling in qemu more robust, so failures are both
surfaced and do not cause endless attempts to allocate.

Please update image-garden to at least 5a00ead9964df6463e19432ae50e7760fc6da755

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit 7031b5aeee)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:35:52 -08:00
John Johansen
1b70d1e9c2 Merge tests: add regression tests for snapd mount-control
The test adds a very small and simple smoke test that shows that a mount rule
with both fstype and options allows mounts to be performed on a real running
kernel.

The test is structured in a way that should make it easy to extend with new
variants (flags, fstype) in the future.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1445
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>

(cherry picked from commit 11d121409d)
2025-01-09 01:35:27 -08:00
Zygmunt Krynicki
00a5c07db5 tests: add regression tests for snapd mount-control
The test adds a very small and simple smoke test that shows that a mount rule
with both fstype and options allows mounts to be performed on a real running
kernel.

The test is structured in a way that should make it easy to extend with new
variants (flags, fstype) in the future.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit 1f60021979)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:35:27 -08:00
John Johansen
6876448a24 Merge Allow running tests with spread
Spread is a full-system, or integration test suite runner initially developed
to test snapd. Over time it has spread to other projects where it provides a
structured way to organize, run and debug complex full-system interactions.
Spread is documented on https://github.com/canonical/spread and has been used
in production since late 2016.

Spread has a notion of backends which are responsible for allocating and
discarding test machines. For the purpose of running AppArmor regression tests,
I've combined spread with my own tool, image garden. The tool provides
off-the-shelf images, constructed on-the-fly from freely available images, and
makes them easily available to spread.

The reason for doing it this way is so that using non-free cloud systems is not
required and anyone can repeat the test process locally, on their own computer.
Vanilla spread is somewhat limited to x86-64 systems but the way I've used it
here makes it equally possible to test x86_64 *and* aarch64 systems. I've done
most of the development on an ARM single-board-computer running on my desk.

Spread requires a top-level spread.yaml file and a collection of task.yaml
files that describe individual tasks (for us, those are just tests). Tasks have
no implied dependency except that to reach a given task, spread will run all
the _prepare_ statements leading to that task, starting from the project, test
suite and then task. With proper care one can then run a specific individual
test with a one-line command, for example:

```
spread -v garden:ubuntu-cloud-24.04:tests/regression/apparmor:at_secure
```

This will prepare a fresh ubuntu-cloud-24.04 system (matching the CPU
architecture of the host), copy the project tree into the test machine, install
all the build dependencies, build all the parts of apparmor and then run one
specific variant of the regression test, namely the at_secure program.
Importantly the same test can also run on, say debian-cloud-13 (Debian Trixie),
but also, if you have a Google cloud account, on Google Compute Engine or in
one of the other backends either built into spread or available as a fork of
spread or as a helper for ad-hoc backend. Spread can also create more than one
worker per system and distribute the tests to all of the available instances.
In no way are we locking ourselves out of the ability to run our test suite on
our target of choice.

Spread has other useful switches, such as:
- `-reuse` for keeping machines around until discarded with -discard
- `-resend` for re-sending updated copy of the project (useful for -reuse)
- `-debug` for starting an interactive shell on any failure
- `-shell` for starting an interactive shell instead of the `execute` phase

This first patch contains just the spread elements, assuming that both spread
and image-garden are externally installed. A GitLab continuous integration
installing everything required and running a subset of tests will follow
shortly.

I've expanded the initial selection of systems to allow running all the tests
on several versions of Ubuntu, Debian and openSUSE, mainly as a sanity check
but also to showcase how practical spread is at covering real-world systems.

A number of tests are currently failing:

    - garden:debian-cloud-12:tests/regression/apparmor:attach_disconnected
    - garden:debian-cloud-12:tests/regression/apparmor:deleted
    - garden:debian-cloud-12:tests/regression/apparmor:unix_fd_server
    - garden:debian-cloud-12:tests/regression/apparmor:unix_socket_pathname
    - garden:debian-cloud-13:tests/regression/apparmor:attach_disconnected
    - garden:debian-cloud-13:tests/regression/apparmor:deleted
    - garden:debian-cloud-13:tests/regression/apparmor:unix_fd_server
    - garden:debian-cloud-13:tests/regression/apparmor:unix_socket_pathname
    - garden:opensuse-cloud-15.6:tests/regression/apparmor:attach_disconnected
    - garden:opensuse-cloud-15.6:tests/regression/apparmor:deleted
    - garden:opensuse-cloud-15.6:tests/regression/apparmor:e2e
    - garden:opensuse-cloud-15.6:tests/regression/apparmor:unix_fd_server
    - garden:opensuse-cloud-15.6:tests/regression/apparmor:unix_socket_pathname
    - garden:opensuse-cloud-15.6:tests/regression/apparmor:xattrs_profile

In addition, only on openSUSE, I've skipped the entire test suite of the utils
directory, as it requires python3 ttk themes, which I cannot find in packaged
form.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1432
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>

(cherry picked from commit d9304c7653)
2025-01-09 01:31:39 -08:00
Zygmunt Krynicki
297cd44aff Document spread tests in README.md
Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit d27377a62f)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:31:39 -08:00
Zygmunt Krynicki
32da740f1b Third iteration of spread support
- Tests defined in utils/test are now described by a task.yaml in the same
  directory and can run concurrently across many machines.
- Tests for utils/ are now executed on openSUSE Tumbleweed since ttk themes is
  no longer a hard dependency in master.
- Tests no longer run on openSUSE Leap 15.6 due to the age of the default
  Python (3.6) and gcc/g++, and the tight integration with SWIG, which does
  not seem to support other Python versions very well. Perl hard-codes the
  old GCC for extension modules. The upcoming openSUSE Leap 16 should be
  a viable target. In the meantime we can still test everything through
  rolling-release Tumbleweed.
- Formatting of YAML files is now more uniform, at four spaces per tab.
- The run-spread.sh script is now in the root of the tree. The script allows
  running all spread tests sequentially on one system, while collecting logs
  and artifacts for convenient analysis after the fact.
- All systems are adjusted to run _four_ workers in parallel with _two_ virtual
  cores each and equipped with 1.5GB of virtual memory. This aims to best
  utilize the capacity of a typical CI worker with two to four cores and about
  8GB of available memory.
- Failing tests are marked as such, so that as a whole the entire spread suite
  can pass and be useful at catching regressions.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit 1df91e2c8c)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:31:39 -08:00
Zygmunt Krynicki
b24f0bbfa8 Second iteration of spread support
Compared to v1 the following improvements have been made:

- The cost of installing packages have been shifted from each startup to image
  preparation phase, thanks to the integration of custom cloud-init profiles
  into image-garden. This has dramatic impact on iteration time while also
  entirely removing requirement to be online to run once a prepared image is
  available.

- Support for running on Google Compute Engine has been removed since it would
  not be able to use cloud-init the same way and would currently only
  complicate setup.

- The number of workers have been tuned for local iteration, aiming for
  comfortable work with 16GB of memory on the host. Once CI/CD pipeline
  support is introduced I will add a dedicated entry so that resources are
  utilized well both locally and when running in CI.

- The set of regression tests listed in tests/regression/apparmor/task.yaml is
  now cross-checked so introduction of a new test to the makefile there is
  automatically flagged and causes spread to fail with a clear message.

- The task tests/unit/utils has been improved to generate profiles. Thanks to
  Christian Boltz for explaining this relationship between tests.

- A number of comments have been improved and cleaned up for readability,
  accuracy and sometimes better grammar.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit c95ac4d350)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:31:39 -08:00
Zygmunt Krynicki
5a4ddbeaeb Allow running tests with spread
Spread is a full-system, or integration test suite runner initially developed
to test snapd. Over time it has spread to other projects where it provides a
structured way to organize, run and debug complex full-system interactions.
Spread is documented on https://github.com/canonical/spread and has been used
in production since late 2016.

Spread has a notion of backends which are responsible for allocating and
discarding test machines. For the purpose of running AppArmor regression tests,
I've combined spread with my own tool, image garden. The tool provides
off-the-shelf images, constructed on-the-fly from freely available images, and
makes them easily available to spread.

The reason for doing it this way is so that using non-free cloud systems is not
required and anyone can repeat the test process locally, on their own computer.
Vanilla spread is somewhat limited to x86-64 systems but the way I've used it
here makes it equally possible to test x86_64 *and* aarch64 systems. I've done
most of the development on an ARM single-board-computer running on my desk.

Spread requires a top-level spread.yaml file and a collection of task.yaml
files that describe individual tasks (for us, those are just tests). Tasks have
no implied dependency except that to reach a given task, spread will run all
the _prepare_ statements leading to that task, starting from the project, test
suite and then task. With proper care one can then run a specific individual
test with a one-line command, for example:

```
spread -v garden:ubuntu-cloud-24.04:tests/regression/apparmor:at_secure
```

This will prepare a fresh ubuntu-cloud-24.04 system (matching the CPU
architecture of the host), copy the project tree into the test machine, install
all the build dependencies, build all the parts of apparmor and then run one
specific variant of the regression test, namely the at_secure program.
Importantly the same test can also run on, say debian-cloud-13 (Debian Trixie),
but also, if you have a Google cloud account, on Google Compute Engine or in
one of the other backends either built into spread or available as a fork of
spread or as a helper for ad-hoc backend. Spread can also create more than one
worker per system and distribute the tests to all of the available instances.
In no way are we locking ourselves out of the ability to run our test suite on
our target of choice.

Spread has other useful switches, such as:
- `-reuse` for keeping machines around until discarded with -discard
- `-resend` for re-sending updated copy of the project (useful for -reuse)
- `-debug` for starting an interactive shell on any failure
- `-shell` for starting an interactive shell instead of the `execute` phase

This first patch contains just the spread elements, assuming that both spread
and image-garden are externally installed. A GitLab continuous integration
installing everything required and running a subset of tests will follow
shortly.

I've expanded the initial selection of systems to allow running all the tests
on several versions of Ubuntu, Debian and openSUSE, mainly as a sanity check
but also to showcase how practical spread is at covering real-world systems.

A number of systems and tests are currently failing:

- garden:debian-cloud-12:tests/regression/apparmor:attach_disconnected
- garden:debian-cloud-12:tests/regression/apparmor:deleted
- garden:debian-cloud-12:tests/regression/apparmor:unix_fd_server
- garden:debian-cloud-12:tests/regression/apparmor:unix_socket_pathname
- garden:debian-cloud-13:tests/regression/apparmor:attach_disconnected
- garden:debian-cloud-13:tests/regression/apparmor:deleted
- garden:debian-cloud-13:tests/regression/apparmor:unix_fd_server
- garden:debian-cloud-13:tests/regression/apparmor:unix_socket_pathname
- garden:opensuse-cloud-15.6:tests/regression/apparmor:attach_disconnected
- garden:opensuse-cloud-15.6:tests/regression/apparmor:deleted
- garden:opensuse-cloud-15.6:tests/regression/apparmor:e2e
- garden:opensuse-cloud-15.6:tests/regression/apparmor:unix_fd_server
- garden:opensuse-cloud-15.6:tests/regression/apparmor:unix_socket_pathname
- garden:opensuse-cloud-15.6:tests/regression/apparmor:xattrs_profile
- garden:opensuse-cloud-tumbleweed:tests/regression/apparmor:attach_disconnected
- garden:opensuse-cloud-tumbleweed:tests/regression/apparmor:deleted
- garden:opensuse-cloud-tumbleweed:tests/regression/apparmor:unix_fd_server
- garden:opensuse-cloud-tumbleweed:tests/regression/apparmor:unix_socket_pathname
- garden:ubuntu-cloud-22.04:tests/regression/apparmor:attach_disconnected

In addition, only on openSUSE, I've skipped the entire test suite of the utils
directory, as it requires python3 ttk themes, which I cannot find in packaged
form.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit cc04181578)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:31:39 -08:00
Zygmunt Krynicki
fd253d1c31 Allow running exactly one test in utils/test
The new check-one-test-% pattern rule allows running individual test scripts.
This allows them to be tested in parallel across many Make worker threads or
across many distinct machines with spread.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit 9588b06e0f)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:31:39 -08:00
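The pattern rule described above can be sketched roughly as follows (this is an illustration, not the actual utils/test Makefile; the `$(PYTHON)` variable and the test-script naming are assumptions):

```make
# Hypothetical sketch: run a single test script with, for example,
#   make check-one-test-test-example.py
check-one-test-%:
	$(PYTHON) $*
```

Because each script becomes its own make target, independent scripts can be dispatched to separate Make jobs or separate spread workers.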
John Johansen
505faeff10 Merge Add explicit test for parser priority-based carveouts
Tests #466 but is marked as expected fail due to that bug not being resolved.

Depends on !1441 which adds the xfail infrastructure to the parser equality testing framework, and should be rebased on top of master once that MR is merged.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1443
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit e1d8bf1888)
2025-01-09 01:30:12 -08:00
Ryan Lee
3d5346b48e parser equality tests: print both profiles upon test failure
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit b925d8acff)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:30:12 -08:00
Ryan Lee
1d9e28df35 Add explicit test for parser priority-based carveouts
These are marked as expected fail due to a bug in the parser's priority
handling.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit 7b5f4c0d6f)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:30:12 -08:00
John Johansen
9995e36347 Merge parser: equality tests: add the ability to have tests that are a known problem
Currently the equality tests require the tests to PASS as known equality
or inequality. Add the ability to add tests that are a known problem
and are expected to fail the equality or inequality test.

This is done by using

   verify_binary_xequality
   verify_binary_xinequality

This allows new tests to be added to document a known issue, without
having to develop the fix for the issue. The use of this facility
is expected to be temporary, so any test marked as xequality or
xinequality will be noisy but not fail the other tests until they
are fixed, at which point they will cause the tests to fail to
force them to be updated to the correct equality or inequality
test.

Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1441
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit 53e322b755)
2025-01-09 01:27:50 -08:00
John Johansen
08aeeedc69 parser: equality tests: add the ability to have tests that are a known problem
Currently the equality tests require the tests to PASS as known equality
or inequality. Add the ability to add tests that are a known problem
and are expected to fail the equality or inequality test.

This is done by using

   verify_binary_xequality
   verify_binary_xinequality

This allows new tests to be added to document a known issue, without
having to develop the fix for the issue. The use of this facility
is expected to be temporary, so any test marked as xequality or
xinequality will be noisy but not fail the other tests until they
are fixed, at which point they will cause the tests to fail to
force them to be updated to the correct equality or inequality
test.

Signed-off-by: John Johansen <john.johansen@canonical.com>
(cherry picked from commit b81ea65c1c)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:27:50 -08:00
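The general expected-failure pattern that `verify_binary_xequality` / `verify_binary_xinequality` follow can be sketched as a generic shell helper (an illustration of the idea, not the actual equality test code):

```shell
# Generic "xfail" wrapper: a known-problem test is noisy when it fails as
# expected, and errors out once it unexpectedly starts passing, forcing the
# test to be promoted to a plain equality/inequality check.
xfail() {
    if "$@"; then
        echo "UNEXPECTED PASS (update the test marking): $*"
        return 1
    else
        echo "known problem, expected failure: $*"
        return 0
    fi
}

xfail false   # a known-broken check: noisy, but does not fail the run
```

`xfail true`, by contrast, would return non-zero, which is what eventually flips a fixed test back to the strict verifier.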
John Johansen
d426129baf Merge profiles: update bwrap profile
Update the bwrap profile so that it will attach to application profiles
if present.

Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1435
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit 662a26d133)
2025-01-09 01:26:51 -08:00
John Johansen
cda7af8561 profiles: update bwrap profile
Update the bwrap profile so that it will attach to application profiles
if present.

Signed-off-by: John Johansen <john.johansen@canonical.com>
(cherry picked from commit 1979af7710)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:26:51 -08:00
John Johansen
f5844dc267 Merge regression tests: Add test to check for DAC permissions to the testsuite
The regression test suite uses root with capabilities restricted in
several tests. This can cause the test suite to fail in weird and
confusing ways.

Add a test to check for DAC permissions from / to the testsuite
and abort running the tests with an error message if DAC permissions
are going to cause the test suite to fail.

Currently the test is pretty basic, but is better than nothing.

Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1411
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit c5b17d85ea)
2025-01-09 01:21:11 -08:00
John Johansen
52b83aeac4 regression tests: Add test to check for DAC permissions to the testsuite
The regression test suite uses root with capabilities restricted in
several tests. This can cause the test suite to fail in weird and
confusing ways.

Add a test to check for DAC permissions from / to the testsuite
and abort running the tests with an error message if DAC permissions
are going to cause the test suite to fail.

Currently the test is pretty basic, but is better than nothing.

Signed-off-by: John Johansen <john.johansen@canonical.com>
(cherry picked from commit 82e4b4ba00)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:21:11 -08:00
John Johansen
89fd37abbf Merge Assorted fixes for test suite portability
I've been working on improved end-to-end testing of AppArmor on a number
of popular Linux distributions. My first run contains Debian, Ubuntu and openSUSE.

This branch contains three small fixes that, mainly, allow running more tests on
openSUSE Tumbleweed.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1431
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Christian Boltz <apparmor@cboltz.de>

(cherry picked from commit 6f5cdb7b44)
2025-01-09 01:20:44 -08:00
Zygmunt Krynicki
ffff25e21b On openSUSE 15.6 make fails to find awk
Using this version of make:
```
GNU Make 4.2.1
Built for x86_64-suse-linux-gnu
```
I'm not entirely sure why but the alternative syntax I've used works correctly.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit 4caf0aff81)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:20:44 -08:00
Zygmunt Krynicki
0e7e509ba8 Use larger loop device in mult_mount.sh test
This fixes the test to pass on openSUSE Tumbleweed, where the small size
prevented allocation of an inode for the `lost+found` directory:

```
garden:opensuse-cloud-tumbleweed .../tests/regression/apparmor# mkfs.ext2 -F -m 0 -N 10 /tmp/sdtest.32929-21402-6x826m/image.ext3
mke2fs 1.47.0 (5-Feb-2023)
Discarding device blocks: done
Creating filesystem with 512 1k blocks and 8 inodes

Allocating group tables: done
Writing inode tables: done
ext2fs_mkdir: Could not allocate inode in ext2 filesystem while creating /lost+found
```

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit 32ee85cef8)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:20:44 -08:00
Zygmunt Krynicki
4722ff8e65 Quote trailing backslash in test case
This fixes an error with Python 3.11:

```
test/test-parser-simple-tests.py:420:21: E502 the backslash is redundant between brackets
```

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit 92fcdcab9e)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:20:44 -08:00
Zygmunt Krynicki
d88c6d3bca Use command -v rather than which
`which` is technically not POSIX, while `command -v` works everywhere. This fixes
building and running the test suite on openSUSE Tumbleweed.

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit 4b0adc63f5)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:20:44 -08:00
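The portable form can be sketched like this (`have` is an illustrative helper, not something from the test suite):

```shell
# `command -v` is specified by POSIX and built into the shell; `which` is an
# external program that is not guaranteed to be installed (e.g. on minimal
# openSUSE images).
have() {
    command -v "$1" >/dev/null 2>&1
}

if have sh; then echo "sh found"; fi
if ! have definitely-not-a-real-program; then echo "missing tool handled"; fi
```

Because `command -v` is a builtin, the check also avoids a fork/exec per lookup.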
Zygmunt Krynicki
37ea52db0c parser: quote BISON_MAJOR in case it is empty
On a test system without bison installed, make setup fails with:

  /bin/sh: 1: bison: not found
  /bin/sh: 1: test: -ge: unexpected operator

Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
(cherry picked from commit f58fe9cd52)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:20:44 -08:00
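The failure mode and the quoting fix can be reproduced in any POSIX shell (the variable name is taken from the commit; the version floor of 3 is illustrative):

```shell
BISON_MAJOR=""   # bison not installed, so the version probe produced nothing

# Unquoted, `test $BISON_MAJOR -ge 3` expands to `test -ge 3`, which is the
# "unexpected operator" syntax error seen above. Quoted, it is a well-formed
# comparison that merely fails (stderr suppressed for the non-integer case).
if test "$BISON_MAJOR" -ge 3 2>/dev/null; then
    echo "bison is new enough"
else
    echo "bison missing or too old"
fi
```

With the quotes in place, `make setup` degrades to the "missing or too old" branch instead of aborting on a shell syntax error.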
John Johansen
da9c59ab09 Merge regression tests: check for setfattr binary used by xattrs_profile
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1412
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit 0828ab67b2)
2025-01-09 01:20:26 -08:00
Ryan Lee
8fde25d828 regression tests: check for setfattr binary used by xattrs_profile
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit b39a535cb9)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-09 01:20:26 -08:00
John Johansen
5aa7d046db Merge Write basic file complain-mode regression tests
The test "Complain mode profile (file exec cx permission entry)" currently will only pass on an Ubuntu Oracular system due to a kernel bugfix patch that has not yet been upstreamed or backported.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1415
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit 926929da16)
2025-01-08 23:06:13 -08:00
Ryan Lee
9c3ac976ec Write basic file complain-mode regression tests
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit cb110eaf98)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-08 23:06:13 -08:00
John Johansen
f433acb219 Merge Dovecot profile: Allow reading of /proc/sys/kernel/core_pattern
See <https://dovecot.org/bugreport.html>

(the link describes how Dovecot requires access to `core_pattern`)

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1331
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Christian Boltz <apparmor@cboltz.de>

(cherry picked from commit 2b45586fa9)
2025-01-08 23:01:29 -08:00
pyllyukko
5ee3c03101 Dovecot profile: Allow reading of /proc/sys/kernel/core_pattern
See <https://dovecot.org/bugreport.html>

(cherry picked from commit 0a5a9c465f)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-08 23:01:29 -08:00
John Johansen
6fdc08a5a5 Merge tests: fix incorrect setfattr call in xattrs_profile
The filename was quoted together with the space that followed it, breaking the test.

Signed-off-by: Zygmunt Krynicki <me@zygoon.pl>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1429
Approved-by: Ryan Lee <rlee287@yahoo.com>
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>

(cherry picked from commit a2d52fedb2)
2025-01-08 23:01:06 -08:00
Zygmunt Krynicki
e931449ffc tests: fix incorrect setfattr call in xattrs_profile
The filename was quoted together with the space that followed it, breaking the test.
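The general failure mode can be sketched as follows (the exact original `setfattr` call is not reproduced here; this only illustrates how a space captured inside the quotes changes the target path):

```shell
# A trailing space inside the quotes makes the shell operate on a
# different (non-existent) file than the one that was created.
f=$(mktemp)
target="$f "                 # note the trailing space inside the quotes

[ -e "$target" ] || echo "wrong target: '$target' does not exist"
[ -e "$f" ] && echo "correct target exists"

rm -f "$f"
```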

Signed-off-by: Zygmunt Krynicki <me@zygoon.pl>
(cherry picked from commit 8c16fb2700)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-08 23:01:06 -08:00
John Johansen
203b4994e9 Merge Quote some variables in regression test suite to allow for spaces
This is not a complete fix for the spaces issue, but it is the next simple step that can be taken before the more difficult work of finding the remaining bugs in each shell script.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1424
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Christian Boltz <apparmor@cboltz.de>

(cherry picked from commit e27b0ad2b6)
2025-01-08 23:00:44 -08:00
Ryan Lee
7b53763f92 Quote some variables in regression test suite to allow for spaces
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
(cherry picked from commit e7ec01f075)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-08 23:00:44 -08:00
John Johansen
ddb33d348c Merge parser: fix expr MatchFlag dump
Match Flags switch the output stream to hex but don't restore it after
outputting the flag, resulting in subsequent numbers being hex encoded.
This results in dumps that can be confusing, e.g.

rule: \d2  ->  \x2 priority=1001 (0x4/0)< 0x4>

rule: \d7  ->  \a priority=3e9 (0x4/0)< 0x4>

rule: \d10  ->  \n priority=3e9 (0x4/0)< 0x4>

rule: \d9  ->  \t priority=3e9 (0x4/0)< 0x4>

rule: \d14  ->  \xe priority=1001 (0x4/0)< 0x4>

where priority=3e9 is the hex encoded priority 1001.

Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1419
Approved-by: Maxime Bélair <maxime.belair@canonical.com>
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit 2e77129e15)
2025-01-08 22:59:44 -08:00
John Johansen
8274bff547 parser: fix expr MatchFlag dump
Match Flags switch the output stream to hex but don't restore it after
outputting the flag, resulting in subsequent numbers being hex encoded.
This results in dumps that can be confusing, e.g.

rule: \d2  ->  \x2 priority=1001 (0x4/0)< 0x4>

rule: \d7  ->  \a priority=3e9 (0x4/0)< 0x4>

rule: \d10  ->  \n priority=3e9 (0x4/0)< 0x4>

rule: \d9  ->  \t priority=3e9 (0x4/0)< 0x4>

rule: \d14  ->  \xe priority=1001 (0x4/0)< 0x4>

where priority=3e9 is the hex encoded priority 1001.

Signed-off-by: John Johansen <john.johansen@canonical.com>
(cherry picked from commit a31343c5f7)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-08 22:59:44 -08:00
John Johansen
b17750163b Merge Partial fix for regression tests if parent directory contains spaces
Most `tests/regression/apparmor/*.sh` scripts contain

    . $bin/prologue.inc

This will explode if one of the parent directories contains a space.

Minimized reproducer:

```
# cat test.sh
pwd=`dirname $0`
pwd=`cd $pwd ; /bin/pwd`
bin=$pwd
echo "pwd: $bin"
. $bin/prologue.inc
# ./test.sh
pwd: /tmp/foo bar
./test.sh: line 9: /tmp/foo: No such file or directory
```

Notice that test.sh tries to source `/tmp/foo` instead of `/tmp/foo bar/prologue.inc`.

The fix is to quote the prologue.inc path:

    . "$bin/prologue.inc"

While on it, also fix other uses of $bin - directly and indirectly - by quoting them.
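A minimal sketch of the difference the quoting makes, using a throwaway directory with a space in its name:

```shell
# Unquoted expansion of a path containing a space word-splits, so the
# shell would try to source "/tmp/foo" instead of the real file.
dir=$(mktemp -d "/tmp/foo bar.XXXXXX")
printf 'echo sourced\n' > "$dir/prologue.inc"

# . $dir/prologue.inc        # unquoted: "/tmp/foo: No such file or directory"
. "$dir/prologue.inc"        # quoted: the whole path stays one word

rm -rf "$dir"
```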

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1418
Approved-by: Ryan Lee <rlee287@yahoo.com>
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit a422d2ea17)
2025-01-08 22:58:38 -08:00
Christian Boltz
9c229d1452 Quote indirect uses of $bin and ${bin}
... to avoid issues with spaces in a parent directory's name.

"Indirect uses" means usage of $bin via another variable, for example
`foo=$bin/whatever`

(cherry picked from commit 55db4af979)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-08 22:58:38 -08:00
Christian Boltz
702f2863a4 Quote all uses of $bin and ${bin}
... to avoid issues with spaces in a parent directory's name.

(cherry picked from commit 22cf88b7c7)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-08 22:58:38 -08:00
Christian Boltz
989bf0b3ed Fix sourcing prologue.inc if parent directory contains spaces
Most `tests/regression/apparmor/*.sh` scripts contain

    . $bin/prologue.inc

This will explode if one of the parent directories contains a space.

Minimized reproducer:

```
# cat test.sh
pwd=`dirname $0`
pwd=`cd $pwd ; /bin/pwd`
bin=$pwd
echo "pwd: $bin"
. $bin/prologue.inc
# ./test.sh
pwd: /tmp/foo bar
./test.sh: line 9: /tmp/foo: No such file or directory
```

Notice that test.sh tries to source `/tmp/foo` instead of `/tmp/foo bar/prologue.inc`.

The fix - as done in this commit - is to quote the prologue.inc path:

    . "$bin/prologue.inc"

(cherry picked from commit e1972eb22f)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-08 22:58:38 -08:00
John Johansen
9b1d0ea3d8 Merge parser: improve libapparmor_re build and dump info
Fix libapparmor_re/Makefile so it works correctly with rebuilds and
improve state machine dump information, to aid with debugging of
permission handling during the compile.

Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1410
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
(cherry picked from commit 015b41aeb4)
2025-01-08 12:24:33 -08:00
John Johansen
a577d92c7b parser: add the ability to dump the permissions table
Instead of encoding permissions in the accept and accept2 tables
extended perms uses a permissions table and accept becomes an index
into the table.

Add the ability to dump the permissions table so that it can be
compared and debugged.

Signed-off-by: John Johansen <john.johansen@canonical.com>
(cherry picked from commit 45964d34e7)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-08 12:22:58 -08:00
John Johansen
a16aff8e20 parser: add the accept2 table entry to the chfa dump
The chfa dump is missing information about the accept2 entry. The
accept2 information is necessary to help with debugging state machine
builds as accept2 is used to store quiet and audit information in the
old format or conditional information in the extended perms format.

Signed-off-by: John Johansen <john.johansen@canonical.com>
(cherry picked from commit 00dedf10ad)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-08 12:22:45 -08:00
John Johansen
4099bf6574 parser: fix and cleanup libapparmor_re/Makefile
The Makefile is missing some of its .h dependencies, causing compiles
to either fail or, worse, produce bad builds when rebuilding in an
already built tree.

Move the header dependencies into a variable and use it for each
target. While some targets don't need every include as a dependency
and this will result in unnecessary rebuilds in some cases, it makes
the Makefile cleaner, easier to maintain and makes sure a dependency
isn't accidentally missed.

Signed-off-by: John Johansen <john.johansen@canonical.com>
(cherry picked from commit 7cc7f47424)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-08 12:22:28 -08:00
John Johansen
a102e9dc55 Merge parser: fix mapping of AA_CONT_MATCH for policydb compat entries
The mapping of AA_CONT_MATCH was being dropped, resulting in the
tcp tests failing because they would only match up to the first conditional
match check in the layout.

Bug: https://gitlab.com/apparmor/apparmor/-/issues/462
Fixes: e29f5ce5f ("parser: if extended perms are supported by the kernel build a permstable")
Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1409
Approved-by: Ryan Lee <rlee287@yahoo.com>
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
(cherry picked from commit f24fc4841f)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-08 11:44:22 -08:00
John Johansen
7c1eff3867 Prepare for 4.1.0~beta2 release
- bump version

Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-08 10:58:20 -08:00
John Johansen
d69d4d3ddf Merge parser: bug fix do not change auditing information when applying deny
The parser recently changed how/where deny information is applied.
commit 1fa45b7c1 ("parser: dfa minimization prepare for extended
permissions") removed the implicit filtering of explicit denies during
the minimization pass. The implicit clear allowed the explicit
information to be carried into the minimization pass and merged with
implicit denies. The end result being a minimized dfa with the explicit
deny information available to be applied post minimization, and
then dropped later at permission encoding in the accept entries.

Extended permissions, however, enable carrying explicit deny information
into the kernel to fix certain bugs, like complain mode not being
able to distinguish between implicit and explicit deny rules (i.e.
deny rules get ignored in complain mode). However, keeping explicit
deny information when it is unnecessary results in a larger state
machine and slower compiles.

commit 179c1c1ba ("parser: fix minimization check for filtering_deny")
moved the explicit apply_and_clear_deny() pass to before minimization
to restore minimization's ability to create a minimized dfa with
explicit and implicit deny information merged, but this also cleared
the explicit deny information that used to be carried through
minimization. This meant that when the deny information was applied
post minimization, it resulted in the audit and quiet information
being cleared.

This resulted in the query_label tests failing, as they check
for the expected audit information in the permissions.

Fixes: 179c1c1ba ("parser: fix minimization check for filtering_deny")
Bug: https://gitlab.com/apparmor/apparmor/-/issues/461
Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1408
Approved-by: Ryan Lee <rlee287@yahoo.com>
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
(cherry picked from commit eb365b374d)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-08 10:58:20 -08:00
John Johansen
5c04b791d2 Merge aa-mergeprof: prevent backtrace if file not found
If a user specifies a non-existing file to merge into the profiles
(`aa-mergeprof /file/not/found`), this results in a backtrace showing an
AppArmorBug because that file unsurprisingly doesn't end up in the
active_profiles filelist.

Handle this more gracefully by adding a read_error_fatal parameter to
read_profile() that, if set, forwards the exception. With that,
aa-mergeprof doesn't try to list the profiles in this non-existing file.

Note that all other callers of read_profile() continue to ignore read
errors, because aborting just because a single file in /etc/apparmor.d/
(for example a broken symlink) isn't readable would be a bad idea.

This bug was introduced in 4e09f315c3, therefore I propose this patch for 3.0..master

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1403
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
(cherry picked from commit 5ebbe788ea)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-08 10:58:20 -08:00
John Johansen
be8d85603e Merge php-fpm: widen allowed socket paths
It is common for packaged PHP applications to ship a PHP-FPM
configuration using a scheme of "$app.sock" or "$app.socket" instead
of using a generic FPM socket.

Signed-off-by: Georg Pfuetzenreuter <mail@georg-pfuetzenreuter.net>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1406
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
(cherry picked from commit bfa9147182)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-08 10:58:20 -08:00
Georgia Garcia
450813869a profiles: update dconf abstraction to use @{etc_ro}
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
(cherry picked from commit cbe8d295a5)
Signed-off-by: John Johansen <john.johansen@canonical.com>
(cherry picked from commit 740d7ddae14b69568eaede1b05fe560ae53762df)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-08 10:58:20 -08:00
John Johansen
65c41b7fac Merge Misc small fixes to resolve some compiler warnings in regression test suite
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1407
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
(cherry picked from commit 14f54f3df2)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-08 10:58:20 -08:00
Georgia Garcia
d19e5e8990 Merge Improvements to Postfix profiles
* Support /usr/libexec/postfix/ path
* Added abstractions/{nameservice,postfix-common} to postfix-postscreen
* Added postfix-tlsproxy, postscreen & spawn to postfix-master
    * Added missing postfix-tlsproxy profile
* Added postscreen cache map (see <https://www.postfix.org/postconf.5.html#postscreen_cache_map>)
* Added /{var/spool/postfix/,}pid/pass.smtpd to postfix-smtpd

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1330
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
(cherry picked from commit f7b5d0e783)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-08 10:58:20 -08:00
Giampaolo Fresi Roglia
c11ad3e675 abstractions/nameservice: tighten libnss_libvirt file access
(cherry picked from commit 5be4295b5a)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-08 10:58:20 -08:00
Georgia Garcia
de2bb16ad6 Merge Support name resolution via libnss-libvirt
Add support for hostname resolution via libnss-libvirt. This change has been tested against the latest oracular version 10.6.0-1ubuntu3.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1362
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
(cherry picked from commit a522e11129)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-08 10:58:20 -08:00
Georg Pfuetzenreuter
4f1d2ac549 zgrep: deny passwd access
Bash will try to read the passwd database to find the shell of a user if
$SHELL is not set. This causes zgrep to trigger

```
apparmor="DENIED" operation="open" class="file" profile="zgrep" name="/etc/nsswitch.conf" comm="zgrep" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
apparmor="DENIED" operation="open" class="file" profile="zgrep" name="/etc/passwd" comm="zgrep" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
```

if called in a sanitized environment. As the functionality of zgrep is
not impacted by a limited Bash environment, add deny rules to avoid the
potentially misleading AVC messages.

Signed-off-by: Georg Pfuetzenreuter <mail@georg-pfuetzenreuter.net>
(cherry picked from commit 48483f2ff8)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-08 10:58:20 -08:00
Christian Boltz
cc9f0ed538 ProfileStorage: store correct name
Instead of always storing the name of the main profile, store the child
profile/hat name if we are in a child profile or hat.

As a result, we always get the correct "profile xy" header even for
child profiles when dumping the ProfileStorage object.

Also extend the tests to check that the name gets stored correctly.

(cherry picked from commit cb943e4efc)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-08 10:58:20 -08:00
Steve Beattie
1b8afda407 .gitignore: add mod_apparmor and pam_apparmor files
... that are generated during `make`

I propose this patch for 3.x..master.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1374
Approved-by: Ryan Lee <rlee287@yahoo.com>
Approved-by: Steve Beattie <steve+gitlab@nxnw.org>
Merged-by: Steve Beattie <steve+gitlab@nxnw.org>
(cherry picked from commit 3478558904)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-08 10:58:20 -08:00
Christian Boltz
9aae96356e Merge apparmor.vim: Add missing units for rlimit cpu and rttime
... and allow whitespace between the number and the unit.

I propose this patch for 3.x, 4.0 and master.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1336
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Christian Boltz <apparmor@cboltz.de>
(cherry picked from commit 247bdd5deb)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-08 10:58:20 -08:00
John Johansen
b5db2361f3 Merge profiles: add support for ArchLinux php-legacy package to php-fpm
ArchLinux ships a secondary PHP package called php-legacy with different
paths. As of now, the php-fpm profile will cover this binary but
inadequately restrict it.

Fixes: #454

Closes #454
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1401
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit 3d1a3493af)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-08 10:58:20 -08:00
John Johansen
5db4a1e7ca Merge abstractions/nameservice: include nameservice-strict
... and drop all rules it contains from abstractions/nameservice.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1373
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
(cherry picked from commit 4fe3e30abc)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2025-01-08 10:58:20 -08:00
Christian Boltz
230a975916 Merge smbd: allow capability chown
This is needed for "inherit owner = yes" in smb.conf.

From man smb.conf:

    inherit owner (S)

    The ownership of new files and directories is normally governed by
    effective uid of the connected user. This option allows the Samba
    administrator to specify that the ownership for new files and
    directories should be controlled by the ownership of the parent
    directory.

Fixes: https://bugzilla.suse.com/show_bug.cgi?id=1234327

I propose this fix for 3.x, 4.x and master.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1456
Approved-by: Ryan Lee <rlee287@yahoo.com>
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>


(cherry picked from commit a315d89a2b)

d3050285 smbd: allow capability chown

Co-authored-by: John Johansen <john@jjmx.net>
2024-12-10 12:50:24 +00:00
Christian Boltz
aa3592a57e Merge postfix-showq profile fix
Allow reading queue ID files from /var/spool/postfix/hold/.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1454
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Christian Boltz <apparmor@cboltz.de>


(cherry picked from commit dfe771602d)

3c2aae3a postfix-showq profile fix

Co-authored-by: Christian Boltz <apparmor@cboltz.de>
2024-12-09 18:56:41 +00:00
Christian Boltz
a9aef6f37b Merge python 3.13 fixes/workarounds
Fixes/workarounds for python 3.13 support.

fail.py: handle missing cgitb - workaround for https://gitlab.com/apparmor/apparmor/-/issues/447

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1439
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Christian Boltz <apparmor@cboltz.de>


(cherry picked from commit 5fb91616e3)

434e34bb fail.py: handle missing cgitb

Co-authored-by: Christian Boltz <apparmor@cboltz.de>
2024-12-05 17:36:32 +00:00
Georgia Garcia
c71c486313 Merge Remove match statements in utils for older Python compatibility
Somehow the use of new match statements slipped by review despite our commitment to supporting older Python versions. Replace them with an unfortunately-needed if-elif chain.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1440
Approved-by: Christian Boltz <apparmor@cboltz.de>
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
(cherry picked from commit 36ae21e3fa)
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
2024-12-03 10:00:25 -03:00
Christian Boltz
4213dcc586 Merge aa-remove-unknown: fix readability check [upstreaming]
I am upstreaming this patch, which has been part of the nix package of apparmor for close to a year now.
This fixes the issue at https://github.com/NixOS/nixpkgs/issues/273164 for more distros than just NixOS.
The original merge request on the nix side patching this was https://github.com/NixOS/nixpkgs/pull/285915.
However, people had issues with gitlab, so this never hit apparmor upstream until now. This does, however, also mean that the patch has seen production and seems to work quite well.

## Original reasoning/message of the patch author:

This check is intended for ensuring that the profiles file can actually
be opened.  The *actual* check is performed by the shell, not the read
utility, which won't even be executed if the input redirection (and
hence the test) fails.

If the test succeeds, though, using `read` here might actually
jeopardize the test result if there are no profiles loaded and the file
is empty.

This commit fixes that case by simply using `true` instead of `read`.
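The difference can be sketched like this: the redirection performs the open, so `true` suffices, while `read` additionally fails on an empty file:

```shell
# The `< "$f"` redirection is what actually tests readability; the
# command it feeds can simply be `true`. `read` returns failure (EOF)
# on an empty file even though the open succeeded.
f=$(mktemp)                   # empty stand-in for the loaded-profiles file

if read -r _ < "$f"; then echo "read: ok"; else echo "read: fails on empty file"; fi
if true < "$f"; then echo "true: file is readable"; fi

rm -f "$f"
```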

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1438
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: Christian Boltz <apparmor@cboltz.de>


(cherry picked from commit 93c7035148)

b4aa00de aa-remove-unknown: fix readability check

Co-authored-by: Christian Boltz <apparmor@cboltz.de>
2024-12-01 16:11:25 +00:00
Christian Boltz
ff7f0ff0ea Merge test-logprof: Increase timeout once more
Builds for riscv64 are much slower than on other architectures (4-5
seconds with qemu-user or on Litchi Pi 4A).

Since the timeout is only meant as a safety net, increase it generously,
and hopefully for the last time.

Fixes: https://gitlab.com/apparmor/apparmor/-/issues/463

I propose this patch for 4.0 and master.

Closes #463
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1417
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>


(cherry picked from commit 4c32ad8fb7)

508ace45 test-logprof: Increase timeout once more

Co-authored-by: John Johansen <john@jjmx.net>
2024-11-10 16:34:48 +00:00
John Johansen
9da0f6d3db Merge zgrep: allow reading /etc/nsswitch.conf and /etc/passwd
Seen on various VMs, my guess is that bash wants to translate a uid to a
username.

Log events (slightly shortened)

apparmor="DENIED" operation="open" class="file" profile="zgrep" name="/etc/nsswitch.conf" comm="zgrep" requested_mask="r" denied_mask="r" fsuid=0 ouid=0

apparmor="DENIED" operation="open" class="file" profile="zgrep" name="/etc/passwd" comm="zgrep" requested_mask="r" denied_mask="r" fsuid=0 ouid=0

I propose this patch for 3.0..master

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1357
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
(cherry picked from commit ab16377838)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-10-30 02:29:38 -07:00
John Johansen
0c1c186267 Merge Fix memory leak in aare_rules UniquePermsCache
When the find fails but the insertion also fails, we leak the new node
that we generated. Delete the new node in this case to avoid leaking
memory.

The question remains, however, as to whether we should implement `operator==` in addition to `operator<` so that they are consistent with each other and `find` works correctly.

Signed-off-by: Ryan Lee <ryan.lee@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1399
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>
(cherry picked from commit 99261bad11)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-10-30 02:26:54 -07:00
Steve Beattie
bfd2a0e014 profiles: transmission-daemon needs attach_disconnected
Systemd's PrivateTmp= in the transmission service is causing mount namespaces to be used, leading to disconnected paths:

[395201.414562] audit: type=1400 audit(1727277774.392:573): apparmor="ALLOWED" operation="sendmsg" class="file" info="Failed name lookup - disconnected path" error=-13 profile="transmission-daemon" name="run/systemd/notify" pid=193060 comm="transmission-da" requested_mask="w" denied_mask="w" fsuid=114 ouid=0

Fixes: https://bugs.launchpad.net/bugs/2083548
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1355
Approved-by: Ryan Lee <rlee287@yahoo.com>
Merged-by: Steve Beattie <steve+gitlab@nxnw.org>
(cherry picked from commit 4d3b094d9e)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-10-30 02:24:50 -07:00
Georgia Garcia
c87fb0a8c1 Merge abstraction: add nameservice-strict.
I have read multiple MRs mentioning `nameservice-strict`, so I thought it would make sense to directly import it here.

To give some context, this abstraction is probably the most commonly included abstraction (after `base`). In `apparmor.d`, it is used by over 700 profiles (only counting direct imports). Therefore, adding new rules can have an important impact over a lot of profiles.

Note: the abstraction is a direct import from https://gitlab.com/roddhjav/apparmor.d. The license is the same, and I obviously kept Morfikov's copyright line. However, I am not sure whether or not the SPDX identifier can be used here.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1368
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Approved-by: Christian Boltz <apparmor@cboltz.de>
Approved-by: Ryan Lee <rlee287@yahoo.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
(cherry picked from commit 68376e7fee)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-10-30 02:19:06 -07:00
Georgia Garcia
05debdb2d8 Merge Support name resolution via libnss-libvirt
Add support for hostname resolution via libnss-libvirt. This change has been tested against the latest oracular version 10.6.0-1ubuntu3.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1362
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
(cherry picked from commit a522e11129)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-10-30 02:18:32 -07:00
Christian Boltz
8b06f61bea Merge ping: allow reading /proc/sys/net/ipv6/conf/all/disable_ipv6
Fixes: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1082190

I propose this patch for 3.0..master.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1340
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Christian Boltz <apparmor@cboltz.de>
(cherry picked from commit 4b6df10fe3)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-10-30 02:16:50 -07:00
Christian Boltz
34a706f566 Merge abstractions/mesa: allow ~/.cache/mesa_shader_cache_db/
... which is used by Mesa 24.2.2

Reported by darix.

Fixes: https://bugs.launchpad.net/bugs/2081692

I propose this addition for 3.x, 4.0 and master

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1333
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: Christian Boltz <apparmor@cboltz.de>
(cherry picked from commit 62ff290c02)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-10-30 02:16:30 -07:00
Georgia Garcia
a31cbd07aa profiles: enable php-fpm in /usr/bin and /usr/sbin
To enable the profile in distros that merge sbin into bin.

Fixes: https://gitlab.com/apparmor/apparmor/-/issues/421
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
(cherry picked from commit 2083994513)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-10-30 02:09:01 -07:00
Akihiro Suda
c4c020cdc0 profiles: slirp4netns: allow pivot_root
`pivot_root` is required for running `slirp4netns --enable-sandbox` inside LXD.
- https://github.com/rootless-containers/slirp4netns/issues/348
- https://github.com/rootless-containers/slirp4netns/blob/v1.3.1/sandbox.c#L101-L234

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit bf5db67284)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-10-30 02:08:47 -07:00
John Johansen
3c00ed7c85 Merge parser: fix minimization check for filtering deny
commit 1fa45b7c1 ("parser: dfa minimization prepare for extended
permissions") removed implicit filtering of explicit denies in the
minimization pass (the information was ignored in building the set of
final accept states).

The filtering of explicit denies reduces the size of the produced
dfa. Since we need to be smarter about when explicit denies are
kept (e.g. during complain mode), and most dfas are limited to 65k
states, we currently need to filter explicit deny perms by default.

To compensate, commit 2737cb2c2 ("parser: minimization - remove
unnecessary second minimization pass") moved apply_and_clear_deny()
to before minimization. However, its check for applying deny removal
before minimization is broken. Stop having minimization trigger
apply_and_clear_deny() and just set the FILTER_DENY flag
by default, until we have better selection of rules/conditions where
explicit deny information should be carried through to the backend.

Fixes: 2737cb2c2 ("parser: minimization - remove unnecessary second minimization pass")
Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1397
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>
(cherry picked from commit e9d6e0ba14)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-10-28 04:58:49 -07:00
John Johansen
bf8fd8cfac Merge parser: fix integer overflow bug in rule priority comparisons
There is an integer overflow when comparing priorities when cmp is
used, because it uses subtraction to find less-than, equal, and
greater-than in one operation.

But INT_MAX and INT_MIN are being used by priorities and this results
in INT_MAX - INT_MIN and INT_MIN - INT_MAX which are both overflows
causing an incorrect comparison result and selection of the wrong
rule permission.

Closes: https://gitlab.com/apparmor/apparmor/-/issues/452
Fixes: e3fca60d1 ("parser: add the ability to specify a priority prefix to rules")
Signed-off-by: John Johansen <john.johansen@canonical.com>

Closes #452
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1396
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit a5da9d5b5d)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-10-28 04:58:35 -07:00
John Johansen
2ca7d30590 Merge utils: catch TypeError exception for binary logs
When a log like system.journal is passed on to aa-genprof, for
example, the user receives a TypeError exception: in method
'parse_record', argument 1 of type 'char *'

This patch catches that exception and displays a more meaningful
message.

Fixes: https://gitlab.com/apparmor/apparmor/-/issues/436
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>

Closes #436
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1354
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit cb0f84e101)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-10-28 04:57:50 -07:00
John Johansen
5a39ae82fe Merge Fix ABI break for aa_log_record
Commit 3c825eb001 adds a field called `execpath` to the `aa_log_record` struct. This field was added in the middle of the struct instead of the end, causing an ABI break in libapparmor without a corresponding major version number bump.
Bug report: https://bugs.launchpad.net/apparmor/+bug/2083435
This is fixed by simply moving execpath at the end of the struct.

Signed-off-by: Maxime Bélair <maxime.belair@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1345
Approved-by: Ryan Lee <rlee287@yahoo.com>
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
(cherry picked from commit 07fe0e9a1b)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-10-28 04:56:24 -07:00
John Johansen
c7ce54bcbf Merge aa-notify: Simplify user interfaces and update man page
aa-notify: Simplify user interfaces and update man page

In notifications, clicking on "allow" now directly adds the rule without
an intermediate window, leading to a smoother UX.
Aligns the man page with notify.conf.

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1313
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>
(cherry picked from commit b5af7d5492)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-10-28 04:55:20 -07:00
John Johansen
2318ba598c Merge utils: fixes when handling owner file rules
Fixes: https://gitlab.com/apparmor/apparmor/-/issues/429
Fixes: https://gitlab.com/apparmor/apparmor/-/issues/430

Closes #429 and #430
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1320
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit 1940b1b7cd)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-10-28 04:55:00 -07:00
John Johansen
3d403fe2a7 Merge parser: add port range support on network policy
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1321
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
(cherry picked from commit ebeb89cbce)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-10-28 04:53:20 -07:00
John Johansen
9074b20a95 Merge utils: ignore peer when parsing logs for non-peer access modes
utils: ignore peer when parsing logs for non-peer access modes

Some access modes (create, setopt, getopt, bind, shutdown, listen,
getattr, setattr) cannot be used with a peer in network rules.

Due to how auditing is implemented in the kernel, the peer information
might be available in the log (as faddr= but not daddr=), which causes
a failure in log parsing.

When parsing the log, check if that's the case and ignore the peer,
avoiding the exception on the NetworkRule constructor.

Fixes: https://gitlab.com/apparmor/apparmor/-/issues/427

Reported-by: Evan Caville <evan.caville@canonical.com>

Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>

Closes #427
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1314
Approved-by: Christian Boltz <apparmor@cboltz.de>
Merged-by: John Johansen <john@jjmx.net>

(cherry picked from commit ab5f180b08)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-10-28 04:52:21 -07:00
John Johansen
7949339b93 Merge parser: fix rule priority destroying rule permissions for some classes
io_uring and userns mediation are encoding permissions on the class
byte. This is a mistake that should never have been allowed.

With the addition of rule priorities the class byte mediates rule,
that ensure the kernel can determine a class is being mediated is
given the highest priority possible, to ensure class mediation can not
be removed by a deny rule. See
  61b7568e1 ("parser: bug fix mediates_X stub rules.")
for details.

Unfortunately this breaks rule classes that encode permissions on the
class byte, because those rules will always have a lower priority and
the class mediates rule will always be selected over them resulting in
only the class mediates permission being on the rule class state.

Fix this by adding the mediates class rules for these rule classes
with the lowest priority possible. This means that any rule mediating
the class will wipe out the mediates class rule. So add a new mediates
class rule at the same priority as the rule being added.

This is a naive implementation and does result in more mediates rules
being added than necessary. The rule class could keep track of the
highest priority rule that had been added, and use that to reduce the
number of mediates rules it adds for the class.

Technically we could also get away with not adding the rules for allow
rules, as the kernel doesn't actually check the encoded permission but
whether the class state is not the trap state. But it is required with
deny rules to ensure the deny rule doesn't result in permissions being
removed from the class, resulting in the kernel thinking it is
unmediated. We also want to ensure that mediation is encoded for other
rule types like prompt, and in the future the kernel could check the
permission so we do want to guarantee that the class state has the
MAY_READ permission on it.

Note: there is another set of classes (file, mqueue, dbus, ...) which
encodes a default rule permission as

  class .* <perm>

this encoding is unfortunate in that it will also add the permission
to the class byte, but also sets up following states with the permission.
Thankfully, while this accepts anything, including nothing, the nothing
case generally isn't valid (eg. a file without any absolute name). For
this set of classes, the high priority mediates rule just ensures
that the null match case does not have permission.

Fixes: 61b7568e1 parser: bug fix mediates_X stub rules.
Signed-off-by: John Johansen <john.johansen@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1307
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>
(cherry picked from commit b6e9df3495)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-10-28 04:51:48 -07:00
Georgia Garcia
80b6e4ddff Merge libapparmor: make af_protos.h consistent in different archs
af_protos.h is a generated table of the protocols created by looking
for definitions of IPPROTO_* in netinet/in.h. Depending on the
architecture, the order of the table may change when using -dM in the
compiler during the extraction of the defines.

This causes an issue because there is more than one IPPROTO_* define
with the value 0: IPPROTO_IP and IPPROTO_HOPOPTS, which is a header
extension used by IPv6. So if IPPROTO_HOPOPTS was first in the table,
then protocol=0 in the audit logs would be translated to hopopts.

This caused a failure in arm 32bit:

Output doesn't match expected data:
--- ./test_multi/testcase_unix_01.out	2024-08-15 01:47:53.000000000 +0000
+++ ./test_multi/out/testcase_unix_01.out	2024-08-15 23:42:10.187416392 +0000
@@ -12,7 +12,7 @@
 Peer Addr: @test_abstract_socket
 Network family: unix
 Socket type: stream
-Protocol: ip
+Protocol: hopopts
 Class: net
 Epoch: 1711454639
 Audit subid: 322

By the time protocol is resolved in grammar.y, we don't have
access to the net family to check if it's inet6. Instead of making
protocol dependent on the net family, make the order of the
af_protos.h table consistent between architectures using -dD.

Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1309
Approved-by: John Johansen <john@jjmx.net>
Merged-by: Georgia Garcia <georgia.garcia@canonical.com>
(cherry picked from commit 0ec0e2b035)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-10-28 04:51:23 -07:00
John Johansen
d824adcf93 Merge utils: change os.mkdir to self.mkpath to create intermediary dirs
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1306
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
(cherry picked from commit 4c8a27457e)
Signed-off-by: John Johansen <john.johansen@canonical.com>
2024-10-28 04:50:24 -07:00
112 changed files with 2658 additions and 2817 deletions

.gitignore (3 changes)

@@ -1,4 +1,4 @@
apparmor-*
apparmor-
cscope.*
binutils/aa-enabled
binutils/aa-enabled.1
@@ -203,6 +203,7 @@ utils/apparmor/*.pyc
utils/apparmor/rule/*.pyc
utils/apparmor.egg-info/
utils/build/
!utils/emacs/apparmor-mode.el
utils/htmlcov/
utils/test/common_test.pyc
utils/test/.coverage


@@ -13,6 +13,7 @@ workflow:
stages:
- build
- test
- spread
.ubuntu-common:
before_script:
@@ -126,19 +127,6 @@ test-profiles:
- make -C profiles check-abstractions.d
- make -C profiles check-local
# Build the regression tests (don't run them because that needs kernel access)
test-build-regression:
stage: test
needs: ["build-all"]
extends:
- .ubuntu-common
script:
# Additional dependencies required by regression tests
- printf '\e[0K%s:%s:%s[collapsed=true]\r\e[0K%s\n' section_start "$(date +%s)" install_extra_deps "Installing additional dependencies..."
- apt-get install --no-install-recommends -y attr fuse-overlayfs libdbus-1-dev liburing-dev
- printf '\e[0K%s:%s:%s\r\e[0K\n' section_end "$(date +%s)" install_extra_deps
- make -C tests/regression/apparmor -j $(nproc)
shellcheck:
stage: test
needs: []
@@ -196,3 +184,123 @@ coverity:
- "apparmor-*.tar.gz"
rules:
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH && $CI_PROJECT_PATH == "apparmor/apparmor"
.image-garden-x86_64:
stage: spread
# TODO: use tagged release once container tagging is improved upstream.
image: registry.gitlab.com/zygoon/image-garden:latest
tags:
- linux
- x86_64
- kvm
variables:
ARCH: x86_64
GARDEN_DL_DIR: dl
CACHE_POLICY: pull-push
CACHE_COMPRESSION_LEVEL: fastest
before_script:
# Prepare the image in dry-run mode. This helps in debugging cache misses
# when files are not cached correctly by the runner, causing the build section
# below to always do heavy-duty work.
- printf '\e[0K%s:%s:%s[collapsed=true]\r\e[0K%s\n' section_start "$(date +%s)" prepare_image_dry_run "Prepare image (dry run)"
- image-garden make --dry-run --debug "$GARDEN_SYSTEM.$ARCH.run" "$GARDEN_SYSTEM.$ARCH.qcow2" "$GARDEN_SYSTEM.seed.iso" "$GARDEN_SYSTEM.user-data" "$GARDEN_SYSTEM.meta-data"
- printf '\e[0K%s:%s:%s\r\e[0K\n' section_end "$(date +%s)" prepare_image_dry_run
script:
# Prepare the image, for real.
- printf '\e[0K%s:%s:%s[collapsed=true]\r\e[0K%s\n' section_start "$(date +%s)" prepare_image "Prepare image"
- image-garden make "$GARDEN_SYSTEM.$ARCH.run" "$GARDEN_SYSTEM.$ARCH.qcow2" "$GARDEN_SYSTEM.seed.iso" "$GARDEN_SYSTEM.user-data" "$GARDEN_SYSTEM.meta-data"
- printf '\e[0K%s:%s:%s\r\e[0K\n' section_end "$(date +%s)" prepare_image
cache:
# Cache the base image (pre-customization).
- key: image-garden-base-${GARDEN_SYSTEM}.${ARCH}
policy: $CACHE_POLICY
when: always
paths:
- $GARDEN_DL_DIR
# Those are never mutated so they are safe to share.
- efi-code.*.img
- efi-vars.*.img
# Cache the customized system. This cache depends on .image-garden.mk file
# so that any customization updates are immediately acted upon.
- key:
prefix: image-garden-custom-${GARDEN_SYSTEM}.${ARCH}-
files:
- .image-garden.mk
policy: $CACHE_POLICY
when: always
paths:
- $GARDEN_SYSTEM.*
- $GARDEN_SYSTEM.seed.iso
- $GARDEN_SYSTEM.meta-data
- $GARDEN_SYSTEM.user-data
# This job builds and caches the image that the job below looks at.
image-ubuntu-cloud-24.04-x86_64:
extends: .image-garden-x86_64
variables:
GARDEN_SYSTEM: ubuntu-cloud-24.04
needs: []
dependencies: []
rules:
- if: $CI_COMMIT_TAG
- if: $CI_PIPELINE_SOURCE == "merge_request_event" || $CI_COMMIT_BRANCH
changes:
paths:
- .image-garden.mk
- .gitlab-ci.yml
compare_to: "refs/heads/master"
.spread-x86_64:
extends: .image-garden-x86_64
variables:
# GitLab project identifier of zygoon/spread-dist can be seen on
# https://gitlab.com/zygoon/spread-dist, under the three-dot menu on
# top-right.
SPREAD_GITLAB_PROJECT_ID: "65375371"
# Git revision of spread to install.
# This must have been built via spread-dist.
# TODO: switch to upstream 1.0 release when available.
SPREAD_REV: 413817eda7bec07a3885e0717c178b965f8924e1
# Run all the tasks for a given system.
SPREAD_ARGS: "garden:$GARDEN_SYSTEM:"
SPREAD_GOARCH: amd64
before_script:
# Prepare the image in dry-run mode. This helps in debugging cache misses
# when files are not cached correctly by the runner, causing the build section
# below to always do heavy-duty work.
- printf '\e[0K%s:%s:%s[collapsed=true]\r\e[0K%s\n' section_start "$(date +%s)" prepare_image_dry_run "Prepare image (dry run)"
- image-garden make --dry-run --debug "$GARDEN_SYSTEM.$ARCH.run" "$GARDEN_SYSTEM.$ARCH.qcow2" "$GARDEN_SYSTEM.seed.iso" "$GARDEN_SYSTEM.user-data" "$GARDEN_SYSTEM.meta-data"
- stat .image-garden.mk "$GARDEN_SYSTEM".* || true
- printf '\e[0K%s:%s:%s\r\e[0K\n' section_end "$(date +%s)" prepare_image_dry_run
# Install the selected revision of spread.
- printf '\e[0K%s:%s:%s[collapsed=true]\r\e[0K%s\n' section_start "$(date +%s)" install_spread "Installing spread..."
# Install pre-built spread from https://gitlab.com/zygoon/spread-dist generic package repository.
- |
curl --header "JOB-TOKEN: ${CI_JOB_TOKEN}" --location --output spread "${CI_API_V4_URL}/projects/${SPREAD_GITLAB_PROJECT_ID}/packages/generic/spread/${SPREAD_REV}/spread.${SPREAD_GOARCH}"
- chmod +x spread
- printf '\e[0K%s:%s:%s\r\e[0K\n' section_end "$(date +%s)" install_spread
script:
- printf '\e[0K%s:%s:%s\r\e[0K%s\n' section_start "$(date +%s)" run_spread "Running spread for $GARDEN_SYSTEM..."
# TODO: transform to inject ^...$ to properly select jobs to run.
- mkdir -p spread-logs spread-artifacts
- ./spread -list $SPREAD_ARGS |
split --number=l/"${CI_NODE_INDEX:-1}"/"${CI_NODE_TOTAL:-1}" |
xargs --verbose ./spread -v -artifacts ./spread-artifacts -v | tee spread-logs/"$GARDEN_SYSTEM".log
- printf '\e[0K%s:%s:%s\r\e[0K\n' section_end "$(date +%s)" run_spread
artifacts:
paths:
- spread-logs
- spread-artifacts
when: always
spread-ubuntu-cloud-24.04-x86_64:
extends: .spread-x86_64
variables:
GARDEN_SYSTEM: ubuntu-cloud-24.04
SPREAD_ARGS: garden:$GARDEN_SYSTEM:tests/regression/ garden:$GARDEN_SYSTEM:tests/profiles/
CACHE_POLICY: pull
dependencies: []
needs:
- job: image-ubuntu-cloud-24.04-x86_64
optional: true
parallel: 4


@@ -111,21 +111,13 @@ $ export PYTHON_VERSION=3
$ export PYTHON_VERSIONS=python3
```
Note that, in general, the build steps can be run in parallel, while the test
steps do not gain much speedup from being run in parallel. This is because the
test steps spawn a handful of long-lived test runner processes that mostly
run their tests sequentially and do not use `make`'s jobserver. Moreover,
process spawning overhead constitutes a significant part of test runtime, so
reworking the test harnesses to add parallelism (which would be a major undertaking
for the harnesses that do not have it already) would not produce much of a speedup.
### libapparmor:
```
$ cd ./libraries/libapparmor
$ sh ./autogen.sh
$ sh ./configure --prefix=/usr --with-perl --with-python # see below
$ make -j $(nproc)
$ make
$ make check
$ make install
```
@@ -138,7 +130,7 @@ generate Ruby bindings to libapparmor.]
```
$ cd binutils
$ make -j $(nproc)
$ make
$ make check
$ make install
```
@@ -147,8 +139,7 @@ $ make install
```
$ cd parser
$ make -j $(nproc) # depends on libapparmor having been built first
$ make -j $(nproc) tst_binaries # a build step of make check that can be parallelized
$ make # depends on libapparmor having been built first
$ make check
$ make install
```
@@ -158,7 +149,7 @@ $ make install
```
$ cd utils
$ make -j $(nproc)
$ make
$ make check PYFLAKES=/usr/bin/pyflakes3
$ make install
```
@@ -167,7 +158,7 @@ $ make install
```
$ cd changehat/mod_apparmor
$ make -j $(nproc) # depends on libapparmor having been built first
$ make # depends on libapparmor having been built first
$ make install
```
@@ -176,7 +167,7 @@ $ make install
```
$ cd changehat/pam_apparmor
$ make -j $(nproc) # depends on libapparmor having been built first
$ make # depends on libapparmor having been built first
$ make install
```
@@ -243,7 +234,7 @@ To run:
### Regression tests - using apparmor userspace installed on host
```
$ cd tests/regression/apparmor (requires root)
$ make -j $(nproc) USE_SYSTEM=1
$ make USE_SYSTEM=1
$ sudo make tests USE_SYSTEM=1
$ sudo bash open.sh -r # runs and saves the last testcase from open.sh
```
@@ -256,7 +247,7 @@ $ sudo bash open.sh -r # runs and saves the last testcase from open.sh
```
$ cd tests/regression/apparmor (requires root)
$ make -j $(nproc)
$ make
$ sudo make tests
$ sudo bash open.sh -r # runs and saves the last testcase from open.sh
```


@@ -20,8 +20,6 @@
#include <ctype.h>
#include <dirent.h>
#include <regex.h>
#include <libintl.h>
#define _(s) gettext(s)
#include <sys/apparmor.h>
#include <sys/apparmor_private.h>
@@ -133,7 +131,7 @@ const char *process_statuses[] = {"enforce", "complain", "prompt", "kill", "unco
#define eprintf(...) \
do { \
if (!quiet) \
fprintf(stderr, __VA_ARGS__); \
fprintf(stderr, __VA_ARGS__); \
} while (0)
#define dprintf(...) \
@@ -158,14 +156,14 @@ static int open_profiles(FILE **fp)
ret = stat("/sys/module/apparmor", &st);
if (ret != 0) {
eprintf(_("apparmor not present.\n"));
eprintf("apparmor not present.\n");
return AA_EXIT_DISABLED;
}
dprintf(_("apparmor module is loaded.\n"));
dprintf("apparmor module is loaded.\n");
ret = aa_find_mountpoint(&apparmorfs);
if (ret == -1) {
eprintf(_("apparmor filesystem is not mounted.\n"));
eprintf("apparmor filesystem is not mounted.\n");
return AA_EXIT_NO_CONTROL;
}
@@ -178,9 +176,9 @@ static int open_profiles(FILE **fp)
*fp = fopen(apparmor_profiles, "r");
if (*fp == NULL) {
if (errno == EACCES) {
eprintf(_("You do not have enough privilege to read the profile set.\n"));
eprintf("You do not have enough privilege to read the profile set.\n");
} else {
eprintf(_("Could not open %s: %s"), apparmor_profiles, strerror(errno));
eprintf("Could not open %s: %s", apparmor_profiles, strerror(errno));
}
return AA_EXIT_NO_PERM;
}
@@ -353,7 +351,7 @@ static int get_processes(struct profile *profiles,
continue;
} else if (rc == -1 ||
asprintf(&exe, "/proc/%s/exe", entry->d_name) == -1) {
eprintf(_("ERROR: Failed to allocate memory\n"));
eprintf("ERROR: Failed to allocate memory\n");
ret = AA_EXIT_INTERNAL_ERROR;
goto exit;
} else if (mode) {
@@ -376,7 +374,7 @@ static int get_processes(struct profile *profiles,
// ensure enough space for NUL terminator
real_exe = calloc(PATH_MAX + 1, sizeof(char));
if (real_exe == NULL) {
eprintf(_("ERROR: Failed to allocate memory\n"));
eprintf("ERROR: Failed to allocate memory\n");
ret = AA_EXIT_INTERNAL_ERROR;
goto exit;
}
@@ -600,7 +598,7 @@ static int detailed_profiles(FILE *outf, filters_t *filters, bool json,
*/
subfilters.mode = &mode_filter;
if (regcomp(&mode_filter, profile_statuses[i], REG_NOSUB) != 0) {
eprintf(_("Error: failed to compile sub filter '%s'\n"),
eprintf("Error: failed to compile sub filter '%s'\n",
profile_statuses[i]);
return AA_EXIT_INTERNAL_ERROR;
}
@@ -666,7 +664,7 @@ static int detailed_processes(FILE *outf, filters_t *filters, bool json,
*/
subfilters.mode = &mode_filter;
if (regcomp(&mode_filter, process_statuses[i], REG_NOSUB) != 0) {
eprintf(_("Error: failed to compile sub filter '%s'\n"),
eprintf("Error: failed to compile sub filter '%s'\n",
profile_statuses[i]);
return AA_EXIT_INTERNAL_ERROR;
}
@@ -728,7 +726,7 @@ exit:
static int print_legacy(const char *command)
{
printf(_("Usage: %s [OPTIONS]\n"
printf("Usage: %s [OPTIONS]\n"
"Legacy options and their equivalent command\n"
" --profiled --count --profiles\n"
" --enforced --count --profiles --mode=enforced\n"
@@ -736,8 +734,8 @@ static int print_legacy(const char *command)
" --kill --count --profiles --mode=kill\n"
" --prompt --count --profiles --mode=prompt\n"
" --special-unconfined --count --profiles --mode=unconfined\n"
" --process-mixed --count --ps --mode=mixed\n"),
command);
" --process-mixed --count --ps --mode=mixed\n",
command);
exit(0);
return 0;
@@ -747,7 +745,7 @@ static int usage_filters(void)
{
long unsigned int i;
printf(_("Usage of filters\n"
printf("Usage of filters\n"
"Filters are used to reduce the output of information to only\n"
"those entries that will match the filter. Filters use posix\n"
"regular expression syntax. The possible values for exes that\n"
@@ -757,7 +755,7 @@ static int usage_filters(void)
" --filter.profiles: regular expression to match displayed profile names\n"
" --filter.pid: regular expression to match displayed processes pids\n"
" --filter.exe: regular expression to match executable\n"
));
);
for (i = 0; i < ARRAY_SIZE(process_statuses); i++) {
printf("%s%s", i ? ", " : "", process_statuses[i]);
}
@@ -775,7 +773,7 @@ static int print_usage(const char *command, bool error)
status = EXIT_FAILURE;
}
printf(_("Usage: %s [OPTIONS]\n"
printf("Usage: %s [OPTIONS]\n"
"Displays various information about the currently loaded AppArmor policy.\n"
"Default if no options given\n"
" --show=all\n\n"
@@ -792,8 +790,8 @@ static int print_usage(const char *command, bool error)
" --verbose (default) displays data points about loaded policy set\n"
" --quiet don't output error messages\n"
" -h[(legacy|filters)] this message, or info on the specified option\n"
" --help[=(legacy|filters)] this message, or info on the specified option\n"),
command);
" --help[=(legacy|filters)] this message, or info on the specified option\n",
command);
exit(status);
@@ -869,7 +867,7 @@ static int parse_args(int argc, char **argv)
} else if (strcmp(optarg, "filters") == 0) {
usage_filters();
} else {
eprintf(_("Error: Invalid --help option '%s'.\n"), optarg);
eprintf("Error: Invalid --help option '%s'.\n", optarg);
print_usage(argv[0], true);
break;
}
@@ -937,7 +935,7 @@ static int parse_args(int argc, char **argv)
} else if (strcmp(optarg, "processes") == 0) {
opt_show = SHOW_PROCESSES;
} else {
eprintf(_("Error: Invalid --show option '%s'.\n"), optarg);
eprintf("Error: Invalid --show option '%s'.\n", optarg);
print_usage(argv[0], true);
break;
}
@@ -959,7 +957,7 @@ static int parse_args(int argc, char **argv)
break;
default:
eprintf(_("Error: Invalid command.\n"));
eprintf("Error: Invalid command.\n");
print_usage(argv[0], true);
break;
}
@@ -984,7 +982,7 @@ int main(int argc, char **argv)
if (argc > 1) {
int pos = parse_args(argc, argv);
if (pos < argc) {
eprintf(_("Error: Unknown options.\n"));
eprintf("Error: Unknown options.\n");
print_usage(progname, true);
}
} else {
@@ -996,24 +994,24 @@ int main(int argc, char **argv)
init_filters(&filters, &filter_set);
if (regcomp(filters.mode, opt_mode, REG_NOSUB) != 0) {
eprintf(_("Error: failed to compile mode filter '%s'\n"),
eprintf("Error: failed to compile mode filter '%s'\n",
opt_mode);
return AA_EXIT_INTERNAL_ERROR;
}
if (regcomp(filters.profile, opt_profiles, REG_NOSUB) != 0) {
eprintf(_("Error: failed to compile profiles filter '%s'\n"),
eprintf("Error: failed to compile profiles filter '%s'\n",
opt_profiles);
ret = AA_EXIT_INTERNAL_ERROR;
goto out;
}
if (regcomp(filters.pid, opt_pid, REG_NOSUB) != 0) {
eprintf(_("Error: failed to compile ps filter '%s'\n"),
eprintf("Error: failed to compile ps filter '%s'\n",
opt_pid);
ret = AA_EXIT_INTERNAL_ERROR;
goto out;
}
if (regcomp(filters.exe, opt_exe, REG_NOSUB) != 0) {
eprintf(_("Error: failed to compile exe filter '%s'\n"),
eprintf("Error: failed to compile exe filter '%s'\n",
opt_exe);
ret = AA_EXIT_INTERNAL_ERROR;
goto out;
@@ -1028,7 +1026,7 @@ int main(int argc, char **argv)
outf_save = outf;
outf = open_memstream(&buffer, &buffer_size);
if (!outf) {
eprintf(_("Failed to open memstream: %m\n"));
eprintf("Failed to open memstream: %m\n");
return AA_EXIT_INTERNAL_ERROR;
}
}
@@ -1039,7 +1037,7 @@ int main(int argc, char **argv)
*/
ret = get_profiles(fp, &profiles, &nprofiles);
if (ret != 0) {
eprintf(_("Failed to get profiles: %d....\n"), ret);
eprintf("Failed to get profiles: %d....\n", ret);
goto out;
}
@@ -1068,7 +1066,7 @@ int main(int argc, char **argv)
ret = get_processes(profiles, nprofiles, &processes, &nprocesses);
if (ret != 0) {
eprintf(_("Failed to get processes: %d....\n"), ret);
eprintf("Failed to get processes: %d....\n", ret);
} else if (opt_count) {
ret = simple_filtered_process_count(outf, &filters, opt_json,
processes, nprocesses);
@@ -1094,14 +1092,14 @@ int main(int argc, char **argv)
outf = outf_save;
json = cJSON_Parse(buffer);
if (!json) {
eprintf(_("Failed to parse json output"));
eprintf("Failed to parse json output");
ret = AA_EXIT_INTERNAL_ERROR;
goto out;
}
pretty = cJSON_Print(json);
if (!pretty) {
eprintf(_("Failed to print pretty json"));
eprintf("Failed to print pretty json");
ret = AA_EXIT_INTERNAL_ERROR;
goto out;
}


@@ -1,14 +1,14 @@
# Translations for aa_enabled
# Copyright (C) 2024 Canonical Ltd
# This file is distributed under the same license as the AppArmor package.
# John Johansen <john.johansen@canonical.com>, 2020.
# SOME DESCRIPTIVE TITLE.
# Copyright (C) YEAR Canonical Ltd
# This file is distributed under the same license as the PACKAGE package.
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: PACKAGE VERSION\n"
"Report-Msgid-Bugs-To: apparmor@lists.ubuntu.com\n"
"POT-Creation-Date: 2024-08-31 15:59-0700\n"
"POT-Creation-Date: 2020-10-14 03:52-0700\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"


@@ -1,14 +1,14 @@
# Translations for aa_exec
# Copyright (C) 2024 Canonical Ltd
# This file is distributed under the same license as the AppArmor package.
# John Johansen <john.johansen@canonical.com>, 2020.
# SOME DESCRIPTIVE TITLE.
# Copyright (C) YEAR Canonical Ltd
# This file is distributed under the same license as the PACKAGE package.
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: PACKAGE VERSION\n"
"Report-Msgid-Bugs-To: apparmor@lists.ubuntu.com\n"
"POT-Creation-Date: 2024-08-31 15:59-0700\n"
"POT-Creation-Date: 2020-10-14 03:52-0700\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"


@@ -1,14 +1,14 @@
# Translations for aa_features_abi
# Copyright (C) 2024 Canonical Ltd
# This file is distributed under the same license as the AppArmor package.
# John Johansen <john.johansen@canonical.com>, 2011.
# SOME DESCRIPTIVE TITLE.
# Copyright (C) YEAR Canonical Ltd
# This file is distributed under the same license as the PACKAGE package.
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: PACKAGE VERSION\n"
"Report-Msgid-Bugs-To: apparmor@lists.ubuntu.com\n"
"POT-Creation-Date: 2024-08-31 15:59-0700\n"
"POT-Creation-Date: 2020-10-14 03:52-0700\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"


@@ -1,14 +1,14 @@
# Translations for aa_load
# Copyright (C) 2024 Canonical Ltd
# This file is distributed under the same license as the AppArmor package.
# John Johansen <john.johansen@canonical.com>, 2020.
# SOME DESCRIPTIVE TITLE.
# Copyright (C) YEAR Canonical Ltd
# This file is distributed under the same license as the PACKAGE package.
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: PACKAGE VERSION\n"
"Report-Msgid-Bugs-To: apparmor@lists.ubuntu.com\n"
"POT-Creation-Date: 2024-08-31 15:59-0700\n"
"POT-Creation-Date: 2025-02-18 07:37-0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"


@@ -1,165 +0,0 @@
# Translations for aa_status
# Copyright (C) 2024 Canonical Ltd
# This file is distributed under the same license as the AppArmor package.
# John Johansen <john.johansen@canonical.com>, 2024.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: PACKAGE VERSION\n"
"Report-Msgid-Bugs-To: apparmor@lists.ubuntu.com\n"
"POT-Creation-Date: 2024-08-31 17:49-0700\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"Language: \n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=CHARSET\n"
"Content-Transfer-Encoding: 8bit\n"
#: ../aa_status.c:161
msgid "apparmor not present.\n"
msgstr ""
#: ../aa_status.c:164
msgid "apparmor module is loaded.\n"
msgstr ""
#: ../aa_status.c:168
msgid "apparmor filesystem is not mounted.\n"
msgstr ""
#: ../aa_status.c:181
msgid "You do not have enough privilege to read the profile set.\n"
msgstr ""
#: ../aa_status.c:183
#, c-format
msgid "Could not open %s: %s"
msgstr ""
#: ../aa_status.c:356 ../aa_status.c:379
msgid "ERROR: Failed to allocate memory\n"
msgstr ""
#: ../aa_status.c:587 ../aa_status.c:653
#, c-format
msgid "Error: failed to compile sub filter '%s'\n"
msgstr ""
#: ../aa_status.c:715
#, c-format
msgid ""
"Usage: %s [OPTIONS]\n"
"Legacy options and their equivalent command\n"
" --profiled --count --profiles\n"
" --enforced --count --profiles --mode=enforced\n"
" --complaining --count --profiles --mode=complain\n"
" --kill --count --profiles --mode=kill\n"
" --prompt --count --profiles --mode=prompt\n"
" --special-unconfined --count --profiles --mode=unconfined\n"
" --process-mixed --count --ps --mode=mixed\n"
msgstr ""
#: ../aa_status.c:734
#, c-format
msgid ""
"Usage of filters\n"
"Filters are used to reduce the output of information to only\n"
"those entries that will match the filter. Filters use posix\n"
"regular expression syntax. The possible values for exes that\n"
"support filters are below\n"
"\n"
" --filter.mode: regular expression to match the profile "
"mode modes: enforce, complain, kill, unconfined, mixed\n"
" --filter.profiles: regular expression to match displayed profile names\n"
" --filter.pid: regular expression to match displayed processes pids\n"
" --filter.exe: regular expression to match executable\n"
msgstr ""
#: ../aa_status.c:762
#, c-format
msgid ""
"Usage: %s [OPTIONS]\n"
"Displays various information about the currently loaded AppArmor policy.\n"
"Default if no options given\n"
" --show=all\n"
"\n"
"OPTIONS (one only):\n"
" --enabled returns error code if AppArmor not enabled\n"
" --show=X What information to show. {profiles,processes,all}\n"
" --count print the number of entries. Implies --quiet\n"
" --filter.mode=filter see filters\n"
" --filter.profiles=filter see filters\n"
" --filter.pid=filter see filters\n"
" --filter.exe=filter see filters\n"
" --json displays multiple data points in machine-readable JSON "
"format\n"
" --pretty-json same data as --json, formatted for human consumption as "
"well\n"
" --verbose (default) displays data points about loaded policy set\n"
" --quiet don't output error messages\n"
" -h[(legacy|filters)] this message, or info on the specified option\n"
" --help[=(legacy|filters)] this message, or info on the specified option\n"
msgstr ""
#: ../aa_status.c:856
#, c-format
msgid "Error: Invalid --help option '%s'.\n"
msgstr ""
#: ../aa_status.c:924
#, c-format
msgid "Error: Invalid --show option '%s'.\n"
msgstr ""
#: ../aa_status.c:946
msgid "Error: Invalid command.\n"
msgstr ""
#: ../aa_status.c:971
msgid "Error: Unknown options.\n"
msgstr ""
#: ../aa_status.c:983
#, c-format
msgid "Error: failed to compile mode filter '%s'\n"
msgstr ""
#: ../aa_status.c:988
#, c-format
msgid "Error: failed to compile profiles filter '%s'\n"
msgstr ""
#: ../aa_status.c:994
#, c-format
msgid "Error: failed to compile ps filter '%s'\n"
msgstr ""
#: ../aa_status.c:1000
#, c-format
msgid "Error: failed to compile exe filter '%s'\n"
msgstr ""
#: ../aa_status.c:1015
#, c-format
msgid "Failed to open memstream: %m\n"
msgstr ""
#: ../aa_status.c:1026
#, c-format
msgid "Failed to get profiles: %d....\n"
msgstr ""
#: ../aa_status.c:1050
#, c-format
msgid "Failed to get processes: %d....\n"
msgstr ""
#: ../aa_status.c:1076
msgid "Failed to parse json output"
msgstr ""
#: ../aa_status.c:1083
msgid "Failed to print pretty json"
msgstr ""

View File

@@ -1 +1 @@
4.1.0~beta1
4.1.0~beta5

View File

@@ -22,15 +22,15 @@
=head1 NAME
aa_change_hat - change to or from a "hat" within a AppArmor profile
=head1 SYNOPSIS
B<#include E<lt>sys/apparmor.hE<gt>>
B<int aa_change_hat (const char *subprofile, unsigned long magic_token);>
B<int aa_change_hat (char *subprofile, unsigned long magic_token);>
B<int aa_change_hatv (const char *subprofiles[], unsigned long magic_token);>
B<int aa_change_hatv (char *subprofiles[], unsigned long magic_token);>
B<int aa_change_hat_vargs (unsigned long magic_token, ...);>

View File

@@ -22,7 +22,7 @@
=head1 NAME
aa_change_profile, aa_change_onexec - change a task's profile
aa_change_profile, aa_change_onexec - change a tasks profile
=head1 SYNOPSIS
@@ -58,8 +58,8 @@ The aa_change_onexec() function is like the aa_change_profile() function
except it specifies that the profile transition should take place on the
next exec instead of immediately. The delayed profile change takes
precedence over any exec transition rules within the confining profile.
Delaying the profile boundary has a couple of advantages: it removes the
need for stub transition profiles, and the exec boundary is a natural security
Delaying the profile boundary has a couple of advantages, it removes the
need for stub transition profiles and the exec boundary is a natural security
layer where potentially sensitive memory is unmapped.
=head1 RETURN VALUE

View File

@@ -54,7 +54,7 @@ B<typedef struct aa_features aa_features;>
B<int aa_features_new(aa_features **features, int dirfd, const char *path);>
B<int aa_features_new_from_file(aa_features **features, int file);>
B<int aa_features_new_from_file(aa_features **features, int fd);>
B<int aa_features_new_from_string(aa_features **features, const char *string, size_t size);>

View File

@@ -58,9 +58,6 @@ appropriately.
=head1 ERRORS
# podchecker warns about duplicate link targets for EACCES, EBUSY, ENOENT,
# and ENOMEM, but this is a warning that is safe to ignore.
B<aa_is_enabled>
=over 4

View File

@@ -41,7 +41,7 @@ result is an intersection of all profiles which are stacked. Stacking profiles
together is desirable when wanting to ensure that confinement will never become
more permissive. When changing between two profiles, as performed with
aa_change_profile(2), there is always the possibility that the new profile is
more permissive than the old profile, but that possibility is eliminated when
more permissive than the old profile but that possibility is eliminated when
using aa_stack_profile().
To stack a profile with the current confinement context, a task can use the
@@ -68,7 +68,7 @@ The aa_stack_onexec() function is like the aa_stack_profile() function
except it specifies that the stacking should take place on the next exec
instead of immediately. The delayed profile change takes precedence over any
exec transition rules within the confining profile. Delaying the stacking
boundary has a couple of advantages: it removes the need for stub transition
boundary has a couple of advantages, it removes the need for stub transition
profiles and the exec boundary is a natural security layer where potentially
sensitive memory is unmapped.

View File

@@ -32,10 +32,10 @@ INCLUDES = $(all_includes)
#
# After changing the AA_LIB_* variables, also update EXPECTED_SO_NAME.
AA_LIB_CURRENT = 20
AA_LIB_REVISION = 0
AA_LIB_AGE = 19
EXPECTED_SO_NAME = libapparmor.so.1.19.0
AA_LIB_CURRENT = 25
AA_LIB_REVISION = 1
AA_LIB_AGE = 24
EXPECTED_SO_NAME = libapparmor.so.1.24.1
SUFFIXES = .pc.in .pc

View File

@@ -1,3 +1,3 @@
SUBDIRS = perl python ruby
EXTRA_DIST = SWIG/*.i
EXTRA_DIST = SWIG/*.i java/Makefile.am

View File

@@ -258,7 +258,13 @@ extern int aa_is_enabled(void);
* allocation uninitialized (0) != SWIG_NEWOBJ
*/
%#if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L
static_assert(SWIG_NEWOBJ != 0);
/*
* Some older versions of SWIG place this right after a goto label
* This would then be a label followed by a declaration, a C23 extension (!)
* To ensure this works for older SWIG versions and older compilers,
* make this a block element with curly braces.
*/
{static_assert(SWIG_NEWOBJ != 0, "SWIG_NEWOBJ is 0");}
%#endif
if ($1 != NULL && alloc_tracking$argnum != NULL) {
for (Py_ssize_t i=0; i<seq_len$argnum; i++) {
@@ -315,10 +321,17 @@ extern int aa_stack_onexec(const char *profile);
* We can't use "typedef int pid_t" because we still support systems
* with 16-bit PIDs and SWIG can't find sys/types.h
*
* Capture the passed-in value as an intmax_t because pid_t is guaranteed
* to be a signed integer
* Capture the passed-in value as a long because pid_t is guaranteed
* to be a signed integer and because the aalogparse struct uses
* (unsigned) longs to store pid values. While intmax_t would be more
* technically correct, if sizeof(pid_t) > sizeof(long) then aalogparse
* itself would also need fixing.
*/
%typemap(in,noblock=1,fragment="SWIG_AsVal_long") pid_t (int conv_pid, intmax_t pid_large) {
%typemap(in,noblock=1,fragment="SWIG_AsVal_long") pid_t (int conv_pid, long pid_large) {
%#if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L
static_assert(sizeof(pid_t) <= sizeof(long),
"pid_t type is too large to be stored in a long");
%#endif
conv_pid = SWIG_AsVal_long($input, &pid_large);
if (!SWIG_IsOK(conv_pid)) {
%argument_fail(conv_pid, "pid_t", $symname, $argnum);
@@ -328,7 +341,7 @@ extern int aa_stack_onexec(const char *profile);
* Technically this is implementation-defined behaviour but we should be fine
*/
$1 = (pid_t) pid_large;
if ((intmax_t) $1 != pid_large) {
if ((long) $1 != pid_large) {
SWIG_exception_fail(SWIG_OverflowError, "pid_t is too large");
}
}

View File

@@ -0,0 +1,21 @@
WRAPPERFILES = apparmorlogparse_wrap.c
BUILT_SOURCES = apparmorlogparse_wrap.c
all-local: apparmorlogparse_wrap.o
$(CC) -module apparmorlogparse_wrap.o -o libaalogparse.so
apparmorlogparse_wrap.o: apparmorlogparse_wrap.c
$(CC) -c apparmorlogparse_wrap.c $(CFLAGS) -I../../src -I/usr/include/classpath -fno-strict-aliasing -o apparmorlogparse_wrap.o
clean-local:
rm -rf org
apparmorlogparse_wrap.c: org/aalogparse ../SWIG/*.i
$(SWIG) -java -I../SWIG -I../../src -outdir org/aalogparse \
-package org.aalogparse -o apparmorlogparse_wrap.c libaalogparse.i
org/aalogparse:
mkdir -p org/aalogparse
EXTRA_DIST = $(BUILT_SOURCES)

View File

@@ -1,4 +1,2 @@
/home/cb/bin/hello.sh {
/usr/bin/rm mrix,
}

View File

@@ -1,4 +1,2 @@
/usr/bin/wireshark {
/usr/lib64/wireshark/extcap/androiddump mrix,
}

View File

@@ -1,4 +1,4 @@
/bin/ping {
/bin/ping mrix,
ping2 ix,
}

View File

@@ -1,4 +1,4 @@
/bin/ping {
/bin/ping mrix,
/bin/ping ix,
}

View File

@@ -1,4 +1,4 @@
/bin/ping {
/bin/ping mrix,
/bin/ping ix,
}

View File

@@ -1,4 +0,0 @@
/home/steve/aa-regression-tests/link {
/tmp/sdtest.8236-29816-IN8243/target l,
}

View File

@@ -1,4 +1,3 @@
/tmp/apparmor-2.8.0/tests/regression/apparmor/dbus_service {
dbus send bus=system path=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=LookupDynamicUserByName peer=(label=unconfined),
dbus send bus=system path=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=LookupDynamicUserByName peer=( name=org.freedesktop.systemd1, label=unconfined),
}

View File

@@ -388,7 +388,7 @@ aa_change_hat(2) can take advantage of subprofiles to run under different
confinements, dependent on program logic. Several aa_change_hat(2)-aware
applications exist, including an Apache module, mod_apparmor(5); a PAM
module, pam_apparmor; and a Tomcat valve, tomcat_apparmor. Applications
written or modified to use aa_change_profile(2) transition permanently to the
written or modified to use change_profile(2) transition permanently to the
specified profile. libvirt is one such application.
=head2 Profile Head
@@ -604,7 +604,7 @@ modes:
=item B<Ux>
- unconfined execute -- use ld.so(8) secure-execution mode
- unconfined execute -- scrub the environment
=item B<px>
@@ -612,7 +612,7 @@ modes:
=item B<Px>
- discrete profile execute -- use ld.so(8) secure-execution mode
- discrete profile execute -- scrub the environment
=item B<cx>
@@ -620,7 +620,7 @@ modes:
=item B<Cx>
- transition to subprofile on execute -- use ld.so(8) secure-execution mode
- transition to subprofile on execute -- scrub the environment
=item B<ix>
@@ -632,7 +632,7 @@ modes:
=item B<Pix>
- discrete profile execute with inherit fallback -- use ld.so(8) secure-execution mode
- discrete profile execute with inherit fallback -- scrub the environment
=item B<cix>
@@ -640,7 +640,7 @@ modes:
=item B<Cix>
- transition to subprofile on execute with inherit fallback -- use ld.so(8) secure-execution mode
- transition to subprofile on execute with inherit fallback -- scrub the environment
=item B<pux>
@@ -648,7 +648,7 @@ modes:
=item B<PUx>
- discrete profile execute with fallback to unconfined -- use ld.so(8) secure-execution mode
- discrete profile execute with fallback to unconfined -- scrub the environment
=item B<cux>
@@ -656,7 +656,7 @@ modes:
=item B<CUx>
- transition to subprofile on execute with fallback to unconfined -- use ld.so(8) secure-execution mode
- transition to subprofile on execute with fallback to unconfined -- scrub the environment
=item B<deny x>
@@ -715,20 +715,20 @@ constrained, see the apparmor(7) man page.
B<WARNING> 'ux' should only be used in very special cases. It enables the
designated child processes to be run without any AppArmor protection.
'ux' does not use ld.so(8) secure-execution mode to clear variables such as
LD_PRELOAD; as a result, the calling domain may have an undue amount of
influence over the callee. Use this mode only if the child absolutely must be
'ux' does not scrub the environment of variables such as LD_PRELOAD;
as a result, the calling domain may have an undue amount of influence
over the callee. Use this mode only if the child absolutely must be
run unconfined and LD_PRELOAD must be used. Any profile using this mode
provides negligible security. Use at your own risk.
Incompatible with other exec transition modes and the deny qualifier.
=item B<Ux - unconfined execute -- use ld.so(8) secure-execution mode>
=item B<Ux - unconfined execute -- scrub the environment>
'Ux' allows the named program to run in 'ux' mode, but AppArmor
will invoke the Linux Kernel's B<unsafe_exec> routines to set ld.so(8)
secure-execution mode and clear environment variables such as LD_PRELOAD,
similar to setuid programs. (See ld.so(8) for more information.)
will invoke the Linux Kernel's B<unsafe_exec> routines to scrub
the environment, similar to setuid programs. (See ld.so(8) for some
information on setuid/setgid environment scrubbing.)
B<WARNING> 'Ux' should only be used in very special cases. It enables the
designated child processes to be run without any AppArmor protection.
@@ -743,18 +743,18 @@ This mode requires that a discrete security profile is defined for a
program executed and forces an AppArmor domain transition. If there is
no profile defined then the access will be denied.
B<WARNING> 'px' does not use ld.so(8) secure-execution mode to clear variables
such as LD_PRELOAD; as a result, the calling domain may have an undue amount of
B<WARNING> 'px' does not scrub the environment of variables such as
LD_PRELOAD; as a result, the calling domain may have an undue amount of
influence over the callee.
Incompatible with other exec transition modes and the deny qualifier.
=item B<Px - Discrete Profile execute mode -- use ld.so(8) secure-execution mode>
=item B<Px - Discrete Profile execute mode -- scrub the environment>
'Px' allows the named program to run in 'px' mode, but AppArmor
will invoke the Linux Kernel's B<unsafe_exec> routines to set ld.so(8)
secure-execution mode and clear environment variables such as LD_PRELOAD,
similar to setuid programs. (See ld.so(8) for more information.)
will invoke the Linux Kernel's B<unsafe_exec> routines to scrub
the environment, similar to setuid programs. (See ld.so(8) for some
information on setuid/setgid environment scrubbing.)
Incompatible with other exec transition modes and the deny qualifier.
@@ -764,18 +764,18 @@ This mode requires that a local security profile is defined and forces an
AppArmor domain transition to the named profile. If there is no profile
defined then the access will be denied.
B<WARNING> 'cx' does not use ld.so(8) secure-execution mode to clear variables
such as LD_PRELOAD; as a result, the calling domain may have an undue amount of
B<WARNING> 'cx' does not scrub the environment of variables such as
LD_PRELOAD; as a result, the calling domain may have an undue amount of
influence over the callee.
Incompatible with other exec transition modes and the deny qualifier.
=item B<Cx - Transition to Subprofile execute mode -- use ld.so(8) secure-execution mode>
=item B<Cx - Transition to Subprofile execute mode -- scrub the environment>
'Cx' allows the named program to run in 'cx' mode, but AppArmor
will invoke the Linux Kernel's B<unsafe_exec> routines to set ld.so(8)
secure-execution mode and clear environment variables such as LD_PRELOAD,
similar to setuid programs. (See ld.so(8) for more information.)
will invoke the Linux Kernel's B<unsafe_exec> routines to scrub
the environment, similar to setuid programs. (See ld.so(8) for some
information on setuid/setgid environment scrubbing.)
Incompatible with other exec transition modes and the deny qualifier.
@@ -788,7 +788,7 @@ will inherit the current profile.
This mode is useful when a confined program needs to call another
confined program without gaining the permissions of the target's
profile, or losing the permissions of the current profile. There is no
version to set secure-execution mode because 'ix' executions don't change
version to scrub the environment because 'ix' executions don't change
privileges.
Incompatible with other exec transition modes and the deny qualifier.
@@ -1690,11 +1690,11 @@ rule set. Eg.
change_profile /bin/bash -> {new_profile1,new_profile2,new_profile3},
The exec mode dictates whether or not the Linux Kernel's B<unsafe_exec>
routines should be used to set ld.so(8) secure-execution mode and clear
environment variables such as LD_PRELOAD, similar to setuid programs.
(See ld.so(8) for more information.) The B<safe> mode sets up secure-execution
mode for the new application, and B<unsafe> mode disables AppArmor's
requirement for it (the kernel and/or libc may still turn it on). An
routines should be used to scrub the environment, similar to setuid programs.
(See ld.so(8) for some information on setuid/setgid environment scrubbing.) The
B<safe> mode sets up environment scrubbing to occur when the new application is
executed and B<unsafe> mode disables AppArmor's requirement for environment
scrubbing (the kernel and/or libc may still require environment scrubbing). An
exec mode can only be specified when an exec condition is present.
change_profile safe /bin/bash -> new_profile,
@@ -1796,6 +1796,61 @@ F</etc/apparmor.d/tunables/xdg-user-dirs.d> for B<@{XDG_*}>.
The special B<@{profile_name}> variable is set to the profile name and may be
used in all policy.
=head3 Notes on variable expansion and the / character
It is important to note that how AppArmor performs variable expansion
depends on the context where a variable is used. When a variable is
expanded it can result in a string with multiple path characters
next to each other, in a way that is not evident when looking at
policy.
Eg.
=over 4
Given the following variable definition and rule
@{HOME}=/home/*/
file rw @{HOME}/*,
The variable expansion results in a rule of
file rw /home/*//*.
=back
When this occurs in a context where a path is expected, AppArmor will
canonicalize the path by collapsing consecutive / characters into
a single character. For the above example, this would be
file rw /home/*/*,
There is one exception to this rule, when the consecutive / characters
are at the beginning of a path, this indicates a posix namespace
and the characters will not be collapsed.
Eg.
=over 4
@{HOME}=/home/*/
file rw /@{HOME}/*,
will result in an expansion of
file rw //home/*//*,
which is collapsed to
file rw //home/*/*,
Note: that the leading // in the above example is not collapsed to a
single /. However the second // (that was also seen in the first
example) is collapsed.
=back
=head2 Alias rules
AppArmor also provides alias rules for remapping paths for site-specific
@@ -2097,7 +2152,7 @@ An example AppArmor profile:
/usr/lib/** r,
/tmp/foo.pid wr,
/tmp/foo.* lrw,
/@{HOME}/.foo_file rw,
@{HOME}/.foo_file rw,
/usr/bin/baz Cx -> baz,
# a comment about foo's hat (subprofile), bar.
@@ -2159,7 +2214,7 @@ negative values match when specifying one or the other. Eg, 'rw' matches when
=head1 SEE ALSO
apparmor(7), apparmor_parser(8), apparmor_xattrs(7), aa-complain(1),
aa-enforce(1), aa_change_hat(2), aa_change_profile(2), mod_apparmor(5), and
aa-enforce(1), aa_change_hat(2), mod_apparmor(5), and
L<https://wiki.apparmor.net>.
=cut

View File

@@ -206,8 +206,8 @@ which can help debugging profiles.
=head2 Enable debug mode
When debug mode is enabled, AppArmor will log a few extra messages to
dmesg (not via the audit subsystem). For example, the logs will state when
ld.so(8) secure-execution mode has been applied in a profile transition.
dmesg (not via the audit subsystem). For example, the logs will tell
whether environment scrubbing has been applied.
To enable debug mode, run:

View File

@@ -63,6 +63,7 @@ typedef enum capability_flags {
} capability_flags;
int name_to_capability(const char *keyword);
void capabilities_init(void);
void __debug_capabilities(uint64_t capset, const char *name);
bool add_cap_feature_mask(struct aa_features *features, capability_flags flags);
void clear_cap_flag(capability_flags flags);

View File

aare_rules.{h,cc} - code that binds parse -> expr-tree -> hfa generation
-> chfa generation into a basic interface for converting
rules to a runtime ready state machine.
Notes on the compiler pipeline order
============================================
Front End: Program driver logic and policy text parsing into an
abstract syntax tree.
Middle Layer: Transforms and operations on the abstract syntax tree.
Converts syntax tree into expression tree for back end.
Back End: transforms of syntax tree, and creation of policy HFA from
expression trees and HFAs.
Basic order of the backend of the compiler pipe line and where the
dump information occurs in the pipeline.
===== Front End (parse -> AST ================
|
v
yyparse
|
+--->--+-->-+
| |
| +-->---- +---------------------------<-----------------------+
| | | |
| | v |
| | yylex |
| | | |
| ^ token match |
| | | |
| | +----------------------------+ |
| | | | ^
| | v v |
| +-<- rule match? preprocess |
| | | |
| early var expansion +----------+-----------+ |
| | | | | |
^ v v v v |
| new rule() / new ent include variable conditional |
| | | | | |
| v +---->-----+----->-----+----->----+
| new rule semantic check
| |
+-----<-----+
|
----------- | ------ End of Parse --------------------
|
v
post_parse_profile semantic check
|
v
post_process
|
v
add implied rules()
|
v
process_profile_variables()
|
v
rule->expand_variables()
|
+--------+
|
v
replace aliases (to be moved to backend rewrite)
|
v
merge rules
|
v
profile->merge_rules()
|
v
+-->--rule->is_mergeable()
| |
^ v
| add to table
| |
+-------+--------+
|
v
sort->cmp()/oper<()
|
rule->merge()
|
+------------+
|
v
process_profile_rules
|
v
rule->gen_policy_re()
|
v
===== Mid layer (AST -> expr tree) =================
|
+-> add_rule() (aare_rules.{h,cc})
| |
| v
| rule parse (parse.y)
| | |
| | v
| | expr tree (expr-tree.{h,cc})
| | |
| v |
| unique perms | (aare_rules.{h,cc})
| | |
| +------ +
| |
| v
| add to rules expr tree (aare_rules.{h,c})
| |
+------+
|
+------------------+
|
v
create_dfablob()
|
v
expr tree
|
v
create_chfa() (aare_rules.cc)
|
v
expr normalization (expr-tree.{h,cc})
|
v
expr simplification (expr-tree.{h,c})
|
+- D expr-tree
|
+- D expr-simplified
|
==== Back End - Create cHFA out of expr tree and other HFAs ====
v
hfa creation (hfa.{h,cc})
|
+- D dfa-node-map
|
+- D dfa-uniq-perms
|
+- D dfa-states-initial
|
v
hfa rewrite (not yet implemented)
|
v
filter deny (hfa.{h,cc})
|
+- D dfa-states-post-filter
|
v
minimization (hfa.{h,cc})
|
+- D dfa-minimize-partitions
|
+- D dfa-minimize-uniq-perms
|
+- D dfa-states-post-minimize
|
v
unreachable state removal (hfa.{h,cc})
|
+- D dfa-states-post-unreachable
|
+- D dfa-states constructed hfa
|
+- D dfa-graph
|
v
equivalence class construction
|
+- D equiv
|
diff encode (hfa.{h,cc})
|
+- D diff-encode
|
compute perms table
|
+- D compressed-dfa == perm table dump
|
compressed hfa (chfa.{h,cc}
|
+- D compressed-dfa == transition tables
|
+- D dfa-compressed-states - compress HFA in state form
|
v
Return to Mid Layer
Notes on the compress hfa file format (chfa)
==============================================

View File

@@ -25,8 +25,6 @@
#include <iostream>
#include <fstream>
#include <limits>
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
@@ -594,11 +592,10 @@ void CHFA::weld_file_to_policy(CHFA &file_chfa, size_t &new_start,
// to repeat
assert(accept.size() == old_base_size);
accept.resize(accept.size() + file_chfa.accept.size());
assert(policy_perms.size() < std::numeric_limits<ssize_t>::max());
ssize_t size = (ssize_t) policy_perms.size();
size_t size = policy_perms.size();
policy_perms.resize(size*2 + file_perms.size());
// shift and double the policy perms
for (ssize_t i = size - 1; i >= 0; i--) {
for (size_t i = size - 1; size >= 0; i--) {
policy_perms[i*2] = policy_perms[i];
policy_perms[i*2 + 1] = policy_perms[i];
}

View File

@@ -558,14 +558,6 @@ void DFA::dump_uniq_perms(const char *s)
//TODO: add prompt
}
// make sure work_queue and reachable insertion are always done together
static void push_reachable(set<State *> &reachable, list<State *> &work_queue,
State *state)
{
work_queue.push_back(state);
reachable.insert(state);
}
/* Remove dead or unreachable states */
void DFA::remove_unreachable(optflags const &opts)
{
@@ -573,18 +565,19 @@ void DFA::remove_unreachable(optflags const &opts)
/* find the set of reachable states */
reachable.insert(nonmatching);
push_reachable(reachable, work_queue, start);
work_queue.push_back(start);
while (!work_queue.empty()) {
State *from = work_queue.front();
work_queue.pop_front();
reachable.insert(from);
if (from->otherwise != nonmatching &&
reachable.find(from->otherwise) == reachable.end())
push_reachable(reachable, work_queue, from->otherwise);
work_queue.push_back(from->otherwise);
for (StateTrans::iterator j = from->trans.begin(); j != from->trans.end(); j++) {
if (reachable.find(j->second) == reachable.end())
push_reachable(reachable, work_queue, j->second);
work_queue.push_back(j->second);
}
}
@@ -1539,9 +1532,7 @@ int accept_perms(optflags const &opts, NodeVec *state, perms_t &perms,
{
int error = 0;
perms_t exact;
// size of vector needs to be number of bits in the data type
// being used for the permission set.
std::vector<int> priority(sizeof(perm32_t)*8, MIN_INTERNAL_PRIORITY);
std::vector<int> priority(sizeof(perm32_t)*8, MIN_INTERNAL_PRIORITY); // 32 but wasn't tied to perm32_t
perms.clear();
if (!state)

View File

@@ -275,7 +275,7 @@ public:
ostream &dump(ostream &os)
{
os << *this << "\n";
cerr << *this << "\n";
for (StateTrans::iterator i = trans.begin(); i != trans.end(); i++) {
os << " " << i->first.c << " -> " << *i->second << "\n";
}

View File

@@ -355,8 +355,7 @@ int is_valid_mnt_cond(const char *name, int src)
static unsigned int extract_flags(struct value_list **list, unsigned int *inv)
{
unsigned int flags = 0, invflags = 0;
if (inv)
*inv = 0;
*inv = 0;
struct value_list *entry, *tmp, *prev = NULL;
list_for_each_safe(*list, entry, tmp) {
@@ -369,7 +368,11 @@ static unsigned int extract_flags(struct value_list **list, unsigned int *inv)
" => req: 0x%x inv: 0x%x\n",
entry->value, mnt_opts_table[i].set,
mnt_opts_table[i].clear, flags, invflags);
list_remove_at(*list, prev, entry);
if (prev)
prev->next = tmp;
if (entry == *list)
*list = tmp;
entry->next = NULL;
free_value_list(entry);
} else
prev = entry;
@@ -680,7 +683,7 @@ int mnt_rule::cmp(rule_t const &rhs) const {
return cmp_vec_int(opt_flagsv, rhs_mnt.opt_flagsv);
}
static bool build_mnt_flags(char *buffer, int size, unsigned int flags,
static int build_mnt_flags(char *buffer, int size, unsigned int flags,
unsigned int opt_flags)
{
char *p = buffer;
@@ -690,8 +693,8 @@ static bool build_mnt_flags(char *buffer, int size, unsigned int flags,
/* all flags are optional */
len = snprintf(p, size, "%s", default_match_pattern);
if (len < 0 || len >= size)
return false;
return true;
return FALSE;
return TRUE;
}
for (i = 0; i <= 31; ++i) {
if ((opt_flags) & (1 << i))
@@ -702,7 +705,7 @@ static bool build_mnt_flags(char *buffer, int size, unsigned int flags,
continue;
if (len < 0 || len >= size)
return false;
return FALSE;
p += len;
size -= len;
}
@@ -713,15 +716,15 @@ static bool build_mnt_flags(char *buffer, int size, unsigned int flags,
* like the empty string
*/
if (size < 9)
return false;
return FALSE;
strcpy(p, "(\\xfe|)");
}
return true;
return TRUE;
}
static bool build_mnt_opts(std::string& buffer, struct value_list *opts)
static int build_mnt_opts(std::string& buffer, struct value_list *opts)
{
struct value_list *ent;
pattern_t ptype;
@@ -729,19 +732,19 @@ static bool build_mnt_opts(std::string& buffer, struct value_list *opts)
if (!opts) {
buffer.append(default_match_pattern);
return true;
return TRUE;
}
list_for_each(opts, ent) {
ptype = convert_aaregex_to_pcre(ent->value, 0, glob_default, buffer, &pos);
if (ptype == ePatternInvalid)
return false;
return FALSE;
if (ent->next)
buffer.append(",");
}
return true;
return TRUE;
}
void mnt_rule::warn_once(const char *name)

View File

@@ -179,6 +179,8 @@ struct var_string {
#define OPTION_STDOUT 4
#define OPTION_OFILE 5
#define BOOL int
extern int preprocess_only;
#define PATH_CHROOT_REL 0x1
@@ -211,6 +213,13 @@ do { \
errno = perror_error; \
} while (0)
#ifndef TRUE
#define TRUE (1)
#endif
#ifndef FALSE
#define FALSE (0)
#endif
#define MIN_PORT 0
#define MAX_PORT 65535
@@ -242,6 +251,17 @@ do { \
len; \
})
#define list_find_prev(LIST, ENTRY) \
({ \
typeof(ENTRY) tmp, prev = NULL; \
list_for_each((LIST), tmp) { \
if (tmp == (ENTRY)) \
break; \
prev = tmp; \
} \
prev; \
})
#define list_pop(LIST) \
({ \
typeof(LIST) _entry = (LIST); \
@@ -259,6 +279,12 @@ do { \
(LIST) = (ENTRY)->next; \
(ENTRY)->next = NULL; \
#define list_remove(LIST, ENTRY) \
do { \
typeof(ENTRY) prev = list_find_prev((LIST), (ENTRY)); \
list_remove_at((LIST), prev, (ENTRY)); \
} while (0)
#define DUP_STRING(orig, new, field, fail_target) \
do { \
@@ -397,10 +423,10 @@ extern const char *basedir;
#define glob_null 1
extern pattern_t convert_aaregex_to_pcre(const char *aare, int anchor, int glob,
std::string& pcre, int *first_re_pos);
extern bool build_list_val_expr(std::string& buffer, struct value_list *list);
extern bool convert_entry(std::string& buffer, char *entry);
extern int build_list_val_expr(std::string& buffer, struct value_list *list);
extern int convert_entry(std::string& buffer, char *entry);
extern int clear_and_convert_entry(std::string& buffer, char *entry);
extern bool convert_range(std::string& buffer, bignum start, bignum end);
extern int convert_range(std::string& buffer, bignum start, bignum end);
extern int process_regex(Profile *prof);
extern int post_process_entry(struct cod_entry *entry);
@@ -419,6 +445,7 @@ extern void free_var_string(struct var_string *var);
extern void warn_uppercase(void);
extern int is_blacklisted(const char *name, const char *path);
extern struct value_list *new_value_list(char *value);
extern struct value_list *dup_value_list(struct value_list *list);
extern void free_value_list(struct value_list *list);
extern void print_value_list(struct value_list *list);
extern struct cond_entry *new_cond_entry(char *name, int eq, struct value_list *list);

View File

@@ -142,10 +142,8 @@ static void process_entries(const void *nodep, VISIT value, int level unused)
}
if (dup) {
dup->alias_ignore = true;
/* The original entry->next is in dup->next, so we don't lose
* any of the original elements of the linked list. Also, by
* setting dup->alias_ignore, we trigger the check at the start
* of the loop, skipping the new entry we just inserted.
/* adds to the front of the list, list iteration
* will skip it
*/
entry->next = dup;

View File

@@ -202,7 +202,7 @@ static void start_include_position(const char *filename)
current_lineno = 1;
}
void push_include_stack(const char *filename)
void push_include_stack(char *filename)
{
struct include_stack_t *include = NULL;

View File

@@ -29,7 +29,7 @@ extern void parse_default_paths(void);
extern int do_include_preprocessing(char *profilename);
FILE *search_path(char *filename, char **fullpath, bool *skip);
extern void push_include_stack(const char *filename);
extern void push_include_stack(char *filename);
extern void pop_include_stack(void);
extern void reset_include_stack(const char *filename);

View File

@@ -245,7 +245,7 @@ static inline void sd_write_uint64(std::ostringstream &buf, u64 b)
static inline void sd_write_name(std::ostringstream &buf, const char *name)
{
PDEBUG("Writing name '%s'\n", name ? name : "(null)");
PDEBUG("Writing name '%s'\n", name);
if (name) {
sd_write8(buf, SD_NAME);
sd_write16(buf, strlen(name) + 1);

View File

@@ -1620,6 +1620,7 @@ int main(int argc, char *argv[])
progname = argv[0];
init_base_dir();
capabilities_init();
process_early_args(argc, argv);
process_config_file(config_file);

View File

@@ -35,7 +35,6 @@
#include <sys/apparmor_private.h>
#include <algorithm>
#include <unordered_map>
#include "capability.h"
#include "lib.h"
@@ -62,10 +61,6 @@ void *reallocarray(void *ptr, size_t nmemb, size_t size)
}
#endif
#ifndef NULL
#define NULL nullptr
#endif
int is_blacklisted(const char *name, const char *path)
{
int retval = _aa_is_blacklisted(name);
@@ -76,7 +71,12 @@ int is_blacklisted(const char *name, const char *path)
return !retval ? 0 : 1;
}
static const unordered_map<string, int> keyword_table = {
struct keyword_table {
const char *keyword;
unsigned int token;
};
static struct keyword_table keyword_table[] = {
/* network */
{"network", TOK_NETWORK},
{"unix", TOK_UNIX},
@@ -132,9 +132,11 @@ static const unordered_map<string, int> keyword_table = {
{"sqpoll", TOK_SQPOLL},
{"all", TOK_ALL},
{"priority", TOK_PRIORITY},
/* terminate */
{NULL, 0}
};
static const unordered_map<string, int> rlimit_table = {
static struct keyword_table rlimit_table[] = {
{"cpu", RLIMIT_CPU},
{"fsize", RLIMIT_FSIZE},
{"data", RLIMIT_DATA},
@@ -160,33 +162,37 @@ static const unordered_map<string, int> rlimit_table = {
#ifdef RLIMIT_RTTIME
{"rttime", RLIMIT_RTTIME},
#endif
/* terminate */
{NULL, 0}
};
/* for alpha matches, check for keywords */
static int get_table_token(const char *name unused, const unordered_map<string, int> &table,
const string &keyword)
static int get_table_token(const char *name unused, struct keyword_table *table,
const char *keyword)
{
auto token_entry = table.find(keyword);
if (token_entry == table.end()) {
PDEBUG("Unable to find %s %s\n", name, keyword.c_str());
return -1;
} else {
PDEBUG("Found %s %s\n", name, keyword.c_str());
return token_entry->second;
int i;
for (i = 0; table[i].keyword; i++) {
PDEBUG("Checking %s %s\n", name, table[i].keyword);
if (strcmp(keyword, table[i].keyword) == 0) {
PDEBUG("Found %s %s\n", name, table[i].keyword);
return table[i].token;
}
}
PDEBUG("Unable to find %s %s\n", name, keyword);
return -1;
}
/* for alpha matches, check for keywords */
int get_keyword_token(const char *keyword)
{
// Can't use string_view because that requires C++17
return get_table_token("keyword", keyword_table, string(keyword));
return get_table_token("keyword", keyword_table, keyword);
}
int get_rlimit(const char *name)
{
// Can't use string_view because that requires C++17
return get_table_token("rlimit", rlimit_table, string(name));
return get_table_token("rlimit", rlimit_table, name);
}
@@ -202,164 +208,55 @@ struct capability_table {
capability_flags flags;
};
/*
* Enum for the results of adding a capability, with values assigned to match
* the int values returned by the old capable_add_cap function:
*
* -1: error
* 0: no change - capability already in table
* 1: added flag to capability in table
* 2: added new capability
*/
enum add_cap_result {
ERROR = -1, // Was only used for OOM conditions
ALREADY_EXISTS = 0,
FLAG_ADDED = 1,
CAP_ADDED = 2
};
static struct capability_table base_capability_table[] = {
/* capabilities */
#include "cap_names.h"
};
static const size_t BASE_CAP_TABLE_SIZE = sizeof(base_capability_table)/sizeof(struct capability_table);
class capability_lookup {
vector<capability_table> cap_table;
// Use unordered_map to avoid pulling in two map implementations
// We may want to switch to boost::multiindex to avoid duplication
unordered_map<string, capability_table&> name_cap_map;
unordered_map<unsigned int, capability_table&> int_cap_map;
private:
void add_capability_table_entry_raw(capability_table entry) {
cap_table.push_back(entry);
capability_table &entry_ref = cap_table.back();
name_cap_map.emplace(string(entry_ref.name), entry_ref);
int_cap_map.emplace(entry_ref.cap, entry_ref);
}
public:
capability_lookup() :
cap_table(vector<capability_table>()),
name_cap_map(unordered_map<string, capability_table&>(BASE_CAP_TABLE_SIZE)),
int_cap_map(unordered_map<unsigned int, capability_table&>(BASE_CAP_TABLE_SIZE)) {
cap_table.reserve(BASE_CAP_TABLE_SIZE);
for (size_t i=0; i<BASE_CAP_TABLE_SIZE; i++) {
add_capability_table_entry_raw(base_capability_table[i]);
}
}
capability_table* find_cap_entry_by_name(string const & name) const {
auto map_entry = this->name_cap_map.find(name);
if (map_entry == this->name_cap_map.end()) {
return NULL;
} else {
PDEBUG("Found %s %s\n", name.c_str(), map_entry->second.name);
return &map_entry->second;
}
}
capability_table* find_cap_entry_by_num(unsigned int cap) const {
auto map_entry = this->int_cap_map.find(cap);
if (map_entry == this->int_cap_map.end()) {
return NULL;
} else {
PDEBUG("Found %d %d\n", cap, map_entry->second.cap);
return &map_entry->second;
}
}
int name_to_capability(string const &cap) const {
auto map_entry = this->name_cap_map.find(cap);
if (map_entry == this->name_cap_map.end()) {
PDEBUG("Unable to find %s %s\n", "capability", cap.c_str());
return -1;
} else {
return map_entry->second.cap;
}
}
const char *capability_to_name(unsigned int cap) const {
auto map_entry = this->int_cap_map.find(cap);
if (map_entry == this->int_cap_map.end()) {
return "invalid-capability";
} else {
return map_entry->second.name;
}
}
int capability_backmap(unsigned int cap) const {
auto map_entry = this->int_cap_map.find(cap);
if (map_entry == this->int_cap_map.end()) {
return NO_BACKMAP_CAP;
} else {
return map_entry->second.backmap;
}
}
bool capability_in_kernel(unsigned int cap) const {
auto map_entry = this->int_cap_map.find(cap);
if (map_entry == this->int_cap_map.end()) {
return false;
} else {
return map_entry->second.flags & CAPFLAG_KERNEL_FEATURE;
}
}
void __debug_capabilities(uint64_t capset, const char *name) const {
printf("%s:", name);
for (auto it = this->cap_table.cbegin(); it != this->cap_table.cend(); it++) {
if ((1ull << it->cap) & capset)
printf (" %s", it->name);
}
printf("\n");
}
add_cap_result capable_add_cap(string const & str, unsigned int cap,
capability_flags flag) {
struct capability_table *ent = this->find_cap_entry_by_name(str);
if (ent) {
if (ent->cap != cap) {
pwarn(WARN_UNEXPECTED, "feature capability '%s:%d' does not equal expected %d. Ignoring ...\n", str.c_str(), cap, ent->cap);
/* TODO: make warn to error config */
return add_cap_result::ALREADY_EXISTS;
}
if (ent->flags & flag)
return add_cap_result::ALREADY_EXISTS;
ent->flags = (capability_flags) (ent->flags | flag);
return add_cap_result::FLAG_ADDED;
} else {
struct capability_table new_entry;
new_entry.name = strdup(str.c_str());
if (!new_entry.name) {
yyerror(_("Out of memory"));
return add_cap_result::ERROR;
}
new_entry.cap = cap;
new_entry.backmap = 0;
new_entry.flags = flag;
try {
this->add_capability_table_entry_raw(new_entry);
} catch (const std::bad_alloc &_e) {
yyerror(_("Out of memory"));
return add_cap_result::ERROR;
}
// TODO: exception catching for causes other than OOM
return add_cap_result::CAP_ADDED;
}
}
void clear_cap_flag(capability_flags flags)
{
for (auto it = this->cap_table.begin(); it != this->cap_table.end(); it++) {
PDEBUG("Clearing capability flag for capability \"%s\"\n", it->name);
it->flags = (capability_flags) (it->flags & ~flags);
}
}
/* terminate */
{NULL, 0, 0, CAPFLAGS_CLEAR}
};
static capability_lookup cap_table;
static struct capability_table *cap_table;
static int cap_table_size;
void capabilities_init(void)
{
cap_table = (struct capability_table *) malloc(sizeof(base_capability_table));
if (!cap_table)
yyerror(_("Memory allocation error."));
memcpy(cap_table, base_capability_table, sizeof(base_capability_table));
cap_table_size = sizeof(base_capability_table)/sizeof(struct capability_table);
}
struct capability_table *find_cap_entry_by_name(const char *name)
{
int i;
for (i = 0; cap_table[i].name; i++) {
PDEBUG("Checking %s %s\n", name, cap_table[i].name);
if (strcmp(name, cap_table[i].name) == 0) {
PDEBUG("Found %s %s\n", name, cap_table[i].name);
return &cap_table[i];
}
}
return NULL;
}
struct capability_table *find_cap_entry_by_num(unsigned int cap)
{
int i;
for (i = 0; cap_table[i].name; i++) {
PDEBUG("Checking %d %d\n", cap, cap_table[i].cap);
if (cap == cap_table[i].cap) {
PDEBUG("Found %d %d\n", cap, cap_table[i].cap);
return &cap_table[i];
}
}
return NULL;
}
/* don't mark up str with \0 */
static const char *strn_token(const char *str, size_t &len)
@@ -397,6 +294,59 @@ bool strcomp (const char *lhs, const char *rhs)
return null_strcmp(lhs, rhs) < 0;
}
/*
* Returns: -1: error
* 0: no change - capability already in table
* 1: added flag to capability in table
* 2: added new capability
*/
static int capable_add_cap(const char *str, int len, unsigned int cap,
capability_flags flag)
{
/* extract name from str so we can treat as a string */
autofree char *name = strndup(str, len);
if (!name) {
yyerror(_("Out of memory"));
return -1;
}
struct capability_table *ent = find_cap_entry_by_name(name);
if (ent) {
if (ent->cap != cap) {
pwarn(WARN_UNEXPECTED, "feature capability '%s:%d' does not equal expected %d. Ignoring ...\n", name, cap, ent->cap);
/* TODO: make warn to error config */
return 0;
}
if (ent->flags & flag)
return 0; /* no change */
ent->flags = (capability_flags) (ent->flags | flag);
return 1; /* modified */
} else {
struct capability_table *tmp;
tmp = (struct capability_table *) reallocarray(cap_table, sizeof(struct capability_table), cap_table_size+1);
if (!tmp) {
yyerror(_("Out of memory"));
/* TODO: change away from yyerror */
return -1;
}
cap_table = tmp;
ent = &cap_table[cap_table_size - 1]; /* overwrite null */
ent->name = strndup(name, len);
if (!ent->name) {
/* TODO: change away from yyerror */
yyerror(_("Out of memory"));
return -1;
}
ent->cap = cap;
ent->flags = flag;
cap_table[cap_table_size].name = NULL; /* new null */
cap_table_size++;
}
return 2; /* added */
}
bool add_cap_feature_mask(struct aa_features *features, capability_flags flags)
{
autofree char *value = NULL;
@@ -413,8 +363,7 @@ bool add_cap_feature_mask(struct aa_features *features, capability_flags flags)
for (capstr = strn_token(value, len);
capstr;
capstr = strn_token(capstr + len, len)) {
string capstr_as_str = string(capstr, len);
if (cap_table.capable_add_cap(capstr_as_str, n, flags) < 0)
if (capable_add_cap(capstr, len, n, flags) < 0)
return false;
n++;
if (len > valuelen) {
@@ -430,32 +379,70 @@ bool add_cap_feature_mask(struct aa_features *features, capability_flags flags)
void clear_cap_flag(capability_flags flags)
{
cap_table.clear_cap_flag(flags);
int i;
for (i = 0; cap_table[i].name; i++) {
PDEBUG("Clearing capability flag for capability \"%s\"\n", cap_table[i].name);
cap_table[i].flags = (capability_flags) (cap_table[i].flags & ~flags);
}
}
int name_to_capability(const char *cap)
{
return cap_table.name_to_capability(string(cap));
struct capability_table *ent;
ent = find_cap_entry_by_name(cap);
if (ent)
return ent->cap;
PDEBUG("Unable to find %s %s\n", "capability", cap);
return -1;
}
const char *capability_to_name(unsigned int cap)
{
return cap_table.capability_to_name(cap);
struct capability_table *ent;
ent = find_cap_entry_by_num(cap);
if (ent)
return ent->name;
return "invalid-capability";
}
int capability_backmap(unsigned int cap)
{
return cap_table.capability_backmap(cap);
struct capability_table *ent;
ent = find_cap_entry_by_num(cap);
if (ent)
return ent->backmap;
return NO_BACKMAP_CAP;
}
bool capability_in_kernel(unsigned int cap)
{
return cap_table.capability_in_kernel(cap);
struct capability_table *ent;
ent = find_cap_entry_by_num(cap);
if (ent)
return ent->flags & CAPFLAG_KERNEL_FEATURE;
return false;
}
void __debug_capabilities(uint64_t capset, const char *name)
{
cap_table.__debug_capabilities(capset, name);
unsigned int i;
printf("%s:", name);
for (i = 0; cap_table[i].name; i++) {
if ((1ull << cap_table[i].cap) & capset)
printf (" %s", cap_table[i].name);
}
printf("\n");
}
char *processunquoted(const char *string, int len)
@@ -1213,6 +1200,37 @@ void free_value_list(struct value_list *list)
}
}
struct value_list *dup_value_list(struct value_list *list)
{
struct value_list *entry, *dup, *head = NULL;
char *value;
list_for_each(list, entry) {
value = NULL;
if (list->value) {
value = strdup(list->value);
if (!value)
goto fail2;
}
dup = new_value_list(value);
if (!dup)
goto fail;
if (head)
list_append(head, dup);
else
head = dup;
}
return head;
fail:
free(value);
fail2:
free_value_list(head);
return NULL;
}
void print_value_list(struct value_list *list)
{
struct value_list *entry;


@@ -50,7 +50,7 @@ enum error_type {
void filter_slashes(char *path)
{
char *sptr, *dptr;
bool seen_slash = false;
BOOL seen_slash = 0;
if (!path || (strlen(path) < 2))
return;
@@ -69,7 +69,7 @@ void filter_slashes(char *path)
++sptr;
} else {
*dptr++ = *sptr++;
seen_slash = true;
seen_slash = TRUE;
}
} else {
seen_slash = 0;
@@ -111,14 +111,14 @@ pattern_t convert_aaregex_to_pcre(const char *aare, int anchor, int glob,
#define MAX_ALT_DEPTH 50
*first_re_pos = 0;
int ret = 1;
int ret = TRUE;
/* flag to indicate input error */
enum error_type error;
const char *sptr;
pattern_t ptype;
bool bEscape = false; /* flag to indicate escape */
BOOL bEscape = 0; /* flag to indicate escape */
int ingrouping = 0; /* flag to indicate {} context */
int incharclass = 0; /* flag to indicate [ ] context */
int grouping_count[MAX_ALT_DEPTH] = {0};
@@ -150,7 +150,7 @@ pattern_t convert_aaregex_to_pcre(const char *aare, int anchor, int glob,
if (bEscape) {
pcre.append("\\\\");
} else {
bEscape = true;
bEscape = TRUE;
++sptr;
continue; /*skip turning bEscape off */
} /* bEscape */
@@ -393,7 +393,7 @@ pattern_t convert_aaregex_to_pcre(const char *aare, int anchor, int glob,
break;
} /* switch (*sptr) */
bEscape = false;
bEscape = FALSE;
++sptr;
} /* while error == e_no_error && *sptr) */
@@ -419,12 +419,12 @@ pattern_t convert_aaregex_to_pcre(const char *aare, int anchor, int glob,
PERROR(_("%s: Unable to parse input line '%s'\n"),
progname, aare);
ret = 0;
ret = FALSE;
goto out;
}
out:
if (ret == 0)
if (ret == FALSE)
ptype = ePatternInvalid;
if (parseopts.dump & DUMP_DFA_RULE_EXPR)
@@ -464,7 +464,7 @@ static void warn_once_xattr(const char *name)
common_warn_once(name, "xattr attachment conditional ignored", &warned_name);
}
static bool process_profile_name_xmatch(Profile *prof)
static int process_profile_name_xmatch(Profile *prof)
{
std::string tbuf;
pattern_t ptype;
@@ -479,7 +479,7 @@ static bool process_profile_name_xmatch(Profile *prof)
/* don't filter_slashes for profile names, do on attachment */
name = strdup(local_name(prof->name));
if (!name)
return false;
return FALSE;
}
filter_slashes(name);
ptype = convert_aaregex_to_pcre(name, 0, glob_default, tbuf,
@@ -491,7 +491,7 @@ static bool process_profile_name_xmatch(Profile *prof)
PERROR(_("%s: Invalid profile name '%s' - bad regular expression\n"), progname, name);
if (!prof->attachment)
free(name);
return false;
return FALSE;
}
if (!prof->attachment)
@@ -506,11 +506,11 @@ static bool process_profile_name_xmatch(Profile *prof)
/* build a dfa */
aare_rules *rules = new aare_rules();
if (!rules)
return false;
return FALSE;
if (!rules->add_rule(tbuf.c_str(), 0, RULE_ALLOW,
AA_MAY_EXEC, 0, parseopts)) {
delete rules;
return false;
return FALSE;
}
if (prof->altnames) {
struct alt_name *alt;
@@ -525,7 +525,7 @@ static bool process_profile_name_xmatch(Profile *prof)
RULE_ALLOW, AA_MAY_EXEC,
0, parseopts)) {
delete rules;
return false;
return FALSE;
}
}
}
@@ -567,7 +567,7 @@ static bool process_profile_name_xmatch(Profile *prof)
&len);
if (!rules->append_rule(tbuf.c_str(), true, true, parseopts)) {
delete rules;
return false;
return FALSE;
}
}
}
@@ -581,10 +581,10 @@ build:
prof->xmatch = rules->create_dfablob(&prof->xmatch_size, &prof->xmatch_len, prof->xmatch_perms_table, parseopts, false, false, false);
delete rules;
if (!prof->xmatch)
return false;
return FALSE;
}
return true;
return TRUE;
}
static int warn_change_profile = 1;
@@ -606,21 +606,21 @@ static bool is_change_profile_perms(perm32_t perms)
return perms & AA_CHANGE_PROFILE;
}
static bool process_dfa_entry(aare_rules *dfarules, struct cod_entry *entry)
static int process_dfa_entry(aare_rules *dfarules, struct cod_entry *entry)
{
std::string tbuf;
pattern_t ptype;
int pos;
if (!entry) /* shouldn't happen */
return false;
return TRUE;
if (!is_change_profile_perms(entry->perms))
filter_slashes(entry->name);
ptype = convert_aaregex_to_pcre(entry->name, 0, glob_default, tbuf, &pos);
if (ptype == ePatternInvalid)
return false;
return FALSE;
entry->pattern_type = ptype;
@@ -649,13 +649,13 @@ static bool process_dfa_entry(aare_rules *dfarules, struct cod_entry *entry)
entry->perms & ~(AA_LINK_BITS | AA_CHANGE_PROFILE),
entry->audit == AUDIT_FORCE ? entry->perms & ~(AA_LINK_BITS | AA_CHANGE_PROFILE) : 0,
parseopts))
return false;
return FALSE;
} else if (!is_change_profile_perms(entry->perms)) {
if (!dfarules->add_rule(tbuf.c_str(), entry->priority,
entry->rule_mode, entry->perms,
entry->audit == AUDIT_FORCE ? entry->perms : 0,
parseopts))
return false;
return FALSE;
}
if (entry->perms & (AA_LINK_BITS)) {
@@ -669,7 +669,7 @@ static bool process_dfa_entry(aare_rules *dfarules, struct cod_entry *entry)
filter_slashes(entry->link_name);
ptype = convert_aaregex_to_pcre(entry->link_name, 0, glob_default, lbuf, &pos);
if (ptype == ePatternInvalid)
return false;
return FALSE;
if (entry->subset)
perms |= LINK_TO_LINK_SUBSET(perms);
vec[1] = lbuf.c_str();
@@ -681,7 +681,7 @@ static bool process_dfa_entry(aare_rules *dfarules, struct cod_entry *entry)
entry->rule_mode, perms,
entry->audit == AUDIT_FORCE ? perms & AA_LINK_BITS : 0,
2, vec, parseopts, false))
return false;
return FALSE;
}
if (is_change_profile_perms(entry->perms)) {
const char *vec[3];
@@ -702,7 +702,7 @@ static bool process_dfa_entry(aare_rules *dfarules, struct cod_entry *entry)
if (entry->onexec) {
ptype = convert_aaregex_to_pcre(entry->onexec, 0, glob_default, xbuf, &pos);
if (ptype == ePatternInvalid)
return false;
return FALSE;
vec[0] = xbuf.c_str();
} else
/* allow change_profile for all execs */
@@ -713,14 +713,14 @@ static bool process_dfa_entry(aare_rules *dfarules, struct cod_entry *entry)
if (!parse_label(&stack, &ns, &name,
tbuf.c_str(), false)) {
return false;
return FALSE;
}
if (stack) {
fprintf(stderr,
_("The current kernel does not support stacking of named transitions: %s\n"),
tbuf.c_str());
return false;
return FALSE;
}
if (ns)
@@ -734,13 +734,13 @@ static bool process_dfa_entry(aare_rules *dfarules, struct cod_entry *entry)
if (!dfarules->add_rule_vec(entry->priority, entry->rule_mode,
AA_CHANGE_PROFILE | onexec_perms,
0, index - 1, &vec[1], parseopts, false))
return false;
return FALSE;
/* onexec rules - both rules are needed for onexec */
if (!dfarules->add_rule_vec(entry->priority, entry->rule_mode,
onexec_perms,
0, 1, vec, parseopts, false))
return false;
return FALSE;
/**
* pick up any exec bits, from the frontend parser, related to
@@ -750,19 +750,19 @@ static bool process_dfa_entry(aare_rules *dfarules, struct cod_entry *entry)
if (!dfarules->add_rule_vec(entry->priority, entry->rule_mode,
onexec_perms, 0, index, vec,
parseopts, false))
return false;
return FALSE;
}
return true;
return TRUE;
}
bool post_process_entries(Profile *prof)
int post_process_entries(Profile *prof)
{
int ret = true;
int ret = TRUE;
struct cod_entry *entry;
list_for_each(prof->entries, entry) {
if (!process_dfa_entry(prof->dfa.rules, entry))
ret = false;
ret = FALSE;
}
return ret;
@@ -815,7 +815,7 @@ out:
return error;
}
bool build_list_val_expr(std::string& buffer, struct value_list *list)
int build_list_val_expr(std::string& buffer, struct value_list *list)
{
struct value_list *ent;
pattern_t ptype;
@@ -823,7 +823,7 @@ bool build_list_val_expr(std::string& buffer, struct value_list *list)
if (!list) {
buffer.append(default_match_pattern);
return true;
return TRUE;
}
buffer.append("(");
@@ -840,12 +840,12 @@ bool build_list_val_expr(std::string& buffer, struct value_list *list)
}
buffer.append(")");
return true;
return TRUE;
fail:
return false;
return FALSE;
}
bool convert_entry(std::string& buffer, char *entry)
int convert_entry(std::string& buffer, char *entry)
{
pattern_t ptype;
int pos;
@@ -853,12 +853,12 @@ bool convert_entry(std::string& buffer, char *entry)
if (entry) {
ptype = convert_aaregex_to_pcre(entry, 0, glob_default, buffer, &pos);
if (ptype == ePatternInvalid)
return false;
return FALSE;
} else {
buffer.append(default_match_pattern);
}
return true;
return TRUE;
}
int clear_and_convert_entry(std::string& buffer, char *entry)
@@ -959,7 +959,7 @@ static std::string generate_regex_range(bignum start, bignum end)
return result.str();
}
bool convert_range(std::string& buffer, bignum start, bignum end)
int convert_range(std::string& buffer, bignum start, bignum end)
{
pattern_t ptype;
int pos;
@@ -969,24 +969,24 @@ bool convert_range(std::string& buffer, bignum start, bignum end)
if (!regex_range.empty()) {
ptype = convert_aaregex_to_pcre(regex_range.c_str(), 0, glob_default, buffer, &pos);
if (ptype == ePatternInvalid)
return false;
return FALSE;
} else {
buffer.append(default_match_pattern);
}
return true;
return TRUE;
}
bool post_process_policydb_ents(Profile *prof)
int post_process_policydb_ents(Profile *prof)
{
for (RuleList::iterator i = prof->rule_ents.begin(); i != prof->rule_ents.end(); i++) {
if ((*i)->skip())
continue;
if ((*i)->gen_policy_re(*prof) == RULE_ERROR)
return false;
return FALSE;
}
return true;
return TRUE;
}


@@ -79,7 +79,7 @@ struct var_string *split_out_var(const char *string)
{
struct var_string *n = NULL;
const char *sptr;
bool bEscape = false; /* flag to indicate escape */
BOOL bEscape = 0; /* flag to indicate escape */
if (!string) /* shouldn't happen */
return NULL;
@@ -89,11 +89,15 @@ struct var_string *split_out_var(const char *string)
while (!n && *sptr) {
switch (*sptr) {
case '\\':
bEscape = !bEscape;
if (bEscape) {
bEscape = FALSE;
} else {
bEscape = TRUE;
}
break;
case '@':
if (bEscape) {
bEscape = false;
bEscape = FALSE;
} else if (*(sptr + 1) == '{') {
const char *eptr = get_var_end(sptr + 2);
if (!eptr)
@@ -107,7 +111,8 @@ struct var_string *split_out_var(const char *string)
}
break;
default:
bEscape = false;
if (bEscape)
bEscape = FALSE;
}
sptr++;
}


@@ -704,7 +704,7 @@ rules: rules opt_prefix block
if (($2).priority != 0) {
yyerror(_("priority is not allowed on rule blocks"));
}
PDEBUG("matched: %s%s%s%sblock\n",
PDEBUG("matched: %s%s%sblock\n",
$2.audit == AUDIT_FORCE ? "audit " : "",
$2.rule_mode == RULE_DENY ? "deny " : "",
$2.rule_mode == RULE_PROMPT ? "prompt " : "",


@@ -1,14 +1,14 @@
# Translations for apparmor_parser
# Copyright (C) 2024 YEAR Canonical Ltd
# This file is distributed under the same license as the AppArmor package.
# John Johansen <john.johansen@canonical.com>, 2011.
# SOME DESCRIPTIVE TITLE.
# Copyright (C) YEAR Canonical Ltd
# This file is distributed under the same license as the PACKAGE package.
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: PACKAGE VERSION\n"
"Report-Msgid-Bugs-To: apparmor@lists.ubuntu.com\n"
"POT-Creation-Date: 2024-08-31 15:55-0700\n"
"POT-Creation-Date: 2025-02-18 07:32-0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
@@ -326,7 +326,7 @@ msgstr ""
#: parser_yacc.y:744 parser_yacc.y:1073 parser_yacc.y:1160 parser_yacc.y:1169
#: parser_yacc.y:1173 parser_yacc.y:1183 parser_yacc.y:1193 parser_yacc.y:1287
#: parser_yacc.y:1365 parser_yacc.y:1561 parser_yacc.y:1569 parser_yacc.y:1619
#: parser_yacc.y:1624 parser_yacc.y:1701 parser_yacc.y:1750 ../network.cc:899
#: parser_yacc.y:1624 parser_yacc.y:1701 parser_yacc.y:1750 ../network.cc:945
#: ../af_unix.cc:197 ../all_rule.cc:102 ../all_rule.cc:131
msgid "Memory allocation error."
msgstr ""
@@ -411,12 +411,12 @@ msgid "AppArmor parser error: %s\n"
msgstr ""
#: ../parser_merge.c:92 ../parser_merge.c:91 ../parser_merge.c:83
#: ../parser_merge.c:71 ../parser_merge.c:74
#: ../parser_merge.c:71 ../parser_merge.c:77
msgid "Couldn't merge entries. Out of Memory\n"
msgstr ""
#: ../parser_merge.c:111 ../parser_merge.c:113 ../parser_merge.c:105
#: ../parser_merge.c:93 ../parser_merge.c:97
#: ../parser_merge.c:93 ../parser_merge.c:100
#, c-format
msgid "profile %s: has merged rule %s with conflicting x modifiers\n"
msgstr ""
@@ -542,7 +542,7 @@ msgstr ""
#: parser_yacc.y:975 parser_yacc.y:985 parser_yacc.y:1057 parser_yacc.y:1067
#: parser_yacc.y:1145 parser_yacc.y:1155 parser_yacc.y:1234 parser_yacc.y:1244
#: ../network.cc:484
#: ../network.cc:515
msgid "Invalid network entry."
msgstr ""
@@ -830,16 +830,16 @@ msgstr ""
msgid "%s: Regex error: trailing '\\' escape character\n"
msgstr ""
#: ../parser_common.c:112 ../parser_common.c:134
#: ../parser_common.c:112 ../parser_common.c:139
#, c-format
msgid "%s from %s (%s%sline %d): %s"
msgstr ""
#: ../parser_common.c:113 ../parser_common.c:135
#: ../parser_common.c:113 ../parser_common.c:140
msgid "Warning converted to Error"
msgstr ""
#: ../parser_common.c:113 ../parser_common.c:135
#: ../parser_common.c:113 ../parser_common.c:140
msgid "Warning"
msgstr ""
@@ -1051,13 +1051,13 @@ msgstr ""
msgid "Internal: unexpected %s perms character '%c' in input"
msgstr ""
#: ../parser_misc.c:1098
#: ../parser_misc.c:1100
msgid ""
"Invalid perms, in deny rules 'x' must not be preceded by exec qualifier 'i', "
"'p', or 'u'"
msgstr ""
#: ../parser_misc.c:1102
#: ../parser_misc.c:1104
msgid "Invalid perms, 'x' must be preceded by exec qualifier 'i', 'p', or 'u'"
msgstr ""
@@ -1091,16 +1091,16 @@ msgstr ""
msgid "attach_disconnected_path value must begin with a /"
msgstr ""
#: ../mount.cc:897
#: ../mount.cc:903
msgid ""
"The use of source as mount point for propagation type flags is deprecated.\n"
msgstr ""
#: ../network.h:200
#: ../network.h:202
msgid "priority prefix not allowed on network rules"
msgstr ""
#: ../network.h:204
#: ../network.h:206
msgid "owner prefix not allowed on network rules"
msgstr ""


@@ -226,13 +226,13 @@ static bool add_proc_access(Profile *prof, const char *rule)
char *buffer = strdup("/proc/*/attr/apparmor/");
if (!buffer) {
PERROR("Memory allocation error\n");
return false;
return FALSE;
}
new_ent = new_entry(buffer, AA_MAY_READ, NULL);
if (!new_ent) {
free(buffer);
PERROR("Memory allocation error\n");
return false;
return FALSE;
}
add_entry_to_policy(prof, new_ent);
@@ -240,13 +240,13 @@ static bool add_proc_access(Profile *prof, const char *rule)
buffer = strdup("/sys/module/apparmor/parameters/enabled");
if (!buffer) {
PERROR("Memory allocation error\n");
return false;
return FALSE;
}
new_ent = new_entry(buffer, AA_MAY_READ, NULL);
if (!new_ent) {
free(buffer);
PERROR("Memory allocation error\n");
return false;
return FALSE;
}
add_entry_to_policy(prof, new_ent);
@@ -254,17 +254,17 @@ static bool add_proc_access(Profile *prof, const char *rule)
buffer = strdup(rule);
if (!buffer) {
PERROR("Memory allocation error\n");
return false;
return FALSE;
}
new_ent = new_entry(buffer, AA_MAY_WRITE, NULL);
if (!new_ent) {
free(buffer);
PERROR("Memory allocation error\n");
return false;
return FALSE;
}
add_entry_to_policy(prof, new_ent);
return true;
return TRUE;
}
#define CHANGEPROFILE_PATH "/proc/*/attr/{apparmor/,}{current,exec}"


@@ -363,7 +363,7 @@ public:
struct cond_entry_list xattrs;
/* char *sub_name; */ /* subdomain name or NULL */
/* bool default_deny; */
/* int default_deny; */ /* TRUE or FALSE */
bool local;
Profile *parent;


@@ -23,7 +23,7 @@
#include <iomanip>
#include <string>
#include <sstream>
#include <unordered_map>
#include <map>
#include "parser.h"
#include "profile.h"
@@ -35,7 +35,7 @@
#define MAXRT_SIG 32 /* Max RT above MINRT_SIG */
/* Signal names mapped to and internal ordering */
static unordered_map<string, int> signal_map = {
static struct signal_map { const char *name; int num; } signal_map[] = {
{"hup", 1},
{"int", 2},
{"quit", 3},
@@ -55,8 +55,7 @@ static unordered_map<string, int> signal_map = {
{"chld", 17},
{"cont", 18},
{"stop", 19},
{"stp", 20}, // parser's previous name for SIGTSTP
{"tstp", 20},
{"stp", 20},
{"ttin", 21},
{"ttou", 22},
{"urg", 23},
@@ -65,12 +64,14 @@ static unordered_map<string, int> signal_map = {
{"vtalrm", 26},
{"prof", 27},
{"winch", 28},
{"io", 29}, // SIGIO == SIGPOLL
{"poll", 29},
{"io", 29},
{"pwr", 30},
{"sys", 31},
{"emt", 32},
{"exists", 35},
/* terminate */
{NULL, 0}
};
/* this table is ordered post sig_map[sig] mapping */
@@ -95,7 +96,7 @@ static const char *const sig_names[MAXMAPPED_SIG + 1] = {
"chld",
"cont",
"stop",
"tstp",
"stp",
"ttin",
"ttou",
"urg",
@@ -104,7 +105,7 @@ static const char *const sig_names[MAXMAPPED_SIG + 1] = {
"vtalrm",
"prof",
"winch",
"io", // SIGIO == SIGPOLL
"io",
"pwr",
"sys",
"emt",
@@ -129,14 +130,12 @@ int find_signal_mapping(const char *sig)
return -1;
return MINRT_SIG + n;
} else {
// Can't use string_view because that requires C++17
auto sigmap = signal_map.find(string(sig));
if (sigmap != signal_map.end()) {
return sigmap->second;
} else {
return -1;
for (int i = 0; signal_map[i].name; i++) {
if (strcmp(sig, signal_map[i].name) == 0)
return signal_map[i].num;
}
}
return -1;
}
void signal_rule::extract_sigs(struct value_list **list)


@@ -6,7 +6,7 @@ PARSER_BIN=apparmor_parser
PARSER=$(PARSER_DIR)/$(PARSER_BIN)
# parser.conf to use in tests. Note that some test scripts have the parser options hardcoded, so passing PARSER_ARGS=... is not enough to override it.
PARSER_ARGS=--config-file=./parser.conf
PROVE_ARG=-f --directives -j2
PROVE_ARG=-f --directives
ifeq ($(VERBOSE),1)
PROVE_ARG+=-v
@@ -37,11 +37,6 @@ error_output: $(PARSER)
parser_sanity: $(PARSER) gen_xtrans gen_dbus
$(Q)LANG=C APPARMOR_PARSER="$(PARSER)" ${PROVE} ${PROVE_ARG} ${TESTS}
# use this target for faster manual testing if you don't want/need to test all the profiles generated by gen-*.py
parser_sanity-no-gen: clean $(PARSER)
@echo WARNING: not creating the profiles using the gen-*.py scripts
$(Q)LANG=C APPARMOR_PARSER="$(PARSER)" ${PROVE} ${PROVE_ARG} ${TESTS}
caching: $(PARSER)
LANG=C ./caching.py -p "$(PARSER)" $(PYTEST_ARG)


@@ -1,8 +0,0 @@
#
#=DESCRIPTION network port range conditional test - missing end of range
#=EXRESULT FAIL
#
/usr/bin/foo {
network port=22-,
}


@@ -1,8 +0,0 @@
#
#=DESCRIPTION network port range conditional test - missing end of range
#=EXRESULT FAIL
#
/usr/bin/foo {
network peer=(port=2222-),
}


@@ -1,8 +0,0 @@
#
#=DESCRIPTION network port range conditional test - spaces in range not allowed
#=EXRESULT FAIL
#
/usr/bin/foo {
network port=22 - 443,
}


@@ -1,8 +0,0 @@
#
#=DESCRIPTION network port range conditional test - spaces in range not allowed
#=EXRESULT FAIL
#
/usr/bin/foo {
network peer=(port=22 - 443),
}


@@ -1,8 +0,0 @@
#
#=DESCRIPTION network port range conditional test - invalid "--"
#=EXRESULT FAIL
#
/usr/bin/foo {
network port=22--443,
}


@@ -1,8 +0,0 @@
#
#=DESCRIPTION network port range conditional test - invalid "--"
#=EXRESULT FAIL
#
/usr/bin/foo {
network peer=(port=22--443),
}


@@ -1,8 +0,0 @@
#
#=DESCRIPTION network port range conditional test - 3 items in range
#=EXRESULT FAIL
#
/usr/bin/foo {
network port=22-443-1024,
}


@@ -1,8 +0,0 @@
#
#=DESCRIPTION network port range conditional test - 3 items in range
#=EXRESULT FAIL
#
/usr/bin/foo {
network peer=(port=22-443-1024),
}


@@ -1,8 +0,0 @@
#
#=DESCRIPTION network port range conditional test - additional spaces
#=EXRESULT PASS
#
/usr/bin/foo {
network port = 22-443 ,
}


@@ -1,8 +0,0 @@
#
#=DESCRIPTION network port range conditional test - additional spaces
#=EXRESULT PASS
#
/usr/bin/foo {
network peer=( port = 22-443 ),
}


@@ -98,11 +98,12 @@ class AATestTemplate(unittest.TestCase, metaclass=AANoCleanupMetaClass):
except OSError as e:
return 127, str(e), ''
timeout_communicate = TimeoutFunction(sp.communicate, timeout)
out, outerr = (None, None)
try:
out, outerr = sp.communicate(input, timeout)
out, outerr = timeout_communicate(input)
rc = sp.returncode
except subprocess.TimeoutExpired:
except TimeoutFunctionException:
sp.terminate()
outerr = 'test timed out, killed'
rc = TIMEOUT_ERROR_CODE
@@ -116,6 +117,31 @@ class AATestTemplate(unittest.TestCase, metaclass=AANoCleanupMetaClass):
return rc, out, outerr
# Timeout handler using alarm() from John P. Speno's Pythonic Avocado
class TimeoutFunctionException(Exception):
"""Exception to raise on a timeout"""
class TimeoutFunction:
def __init__(self, function, timeout):
self.timeout = timeout
self.function = function
def handle_timeout(self, signum, frame):
raise TimeoutFunctionException()
def __call__(self, *args, **kwargs):
old = signal.signal(signal.SIGALRM, self.handle_timeout)
signal.alarm(self.timeout)
try:
result = self.function(*args, **kwargs)
finally:
signal.signal(signal.SIGALRM, old)
signal.alarm(0)
return result
def filesystem_time_resolution():
"""detect whether the filesystem stores subsecond timestamps"""


@@ -154,12 +154,11 @@ check-logprof: test-dependencies
.PHONY: check-abstractions.d
check-abstractions.d:
@echo "*** Checking if all abstractions (with a few exceptions) contain 'include if exists <abstractions/*.d>' and 'abi <abi/4.0>,'"
@echo "*** Checking if all abstractions (with a few exceptions) contain 'include if exists <abstractions/*.d>'"
$(Q)for file in $$(find ${ABSTRACTIONS_SOURCE} ${EXTRAS_ABSTRACTIONS_SOURCE} -maxdepth 1 -type f) ; do \
case "$${file}" in */ubuntu-browsers | */ubuntu-helpers) continue ;; esac ; \
include="include if exists <abstractions/$$(basename $${file}).d>" ; \
grep -q "^ $${include}\$$" $${file} || { echo "$${file} does not contain '$${include}'"; exit 1; } ; \
grep -q "^ *abi <abi/4.0>," $${file} || { echo "$${file} does not contain 'abi <abi/4.0>,'"; exit 1; } ; \
done
.PHONY: check-tunables.d
@@ -173,10 +172,9 @@ check-tunables.d:
.PHONY: check-local
check-local:
@echo "*** Checking if all profiles contain 'include if exists <local/*>' and 'abi <abi/4.0>,'"
@echo "*** Checking if all profiles contain 'include if exists <local/*>'"
$(Q)for file in $$(find ${PROFILES_SOURCE} ${EXTRAS_SOURCE} -maxdepth 1 -type f) ; do \
case "$${file}" in */README) continue ;; esac ; \
include="include if exists <local/$$(basename $${file})>" ; \
grep -q "^ *$${include}\$$" $${file} || { echo "$${file} does not contain '$${include}'"; exit 1; } ; \
grep -q "^ *abi <abi/4.0>," $${file} || { echo "$${file} does not contain 'abi <abi/4.0>,'"; exit 1; } ; \
done
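The shell loops above grep each profile for its anchored `include if exists <local/NAME>` line. A rough Python equivalent of that check (the function name and the sample profile texts are mine; the regex mirrors the Makefile's `"^ *$${include}\$$"` anchor):

```python
import re

def has_local_include(profile_name, text):
    """Mirror the Makefile's grep: the profile must carry its own local include."""
    pattern = r'^ *include if exists <local/%s>$' % re.escape(profile_name)
    return re.search(pattern, text, re.MULTILINE) is not None

good = 'profile foo {\n  include if exists <local/usr.bin.foo>\n}\n'
bad = 'profile foo {\n}\n'
print(has_local_include('usr.bin.foo', good))  # True
print(has_local_include('usr.bin.foo', bad))   # False
```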

@@ -0,0 +1,22 @@
# ------------------------------------------------------------------
#
# Copyright (C) 2021 Mikhail Morfikov
# Copyright (C) 2021-2025 Alexandre Pujol <alexandre@pujol.io>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of version 2 of the GNU General Public
# License published by the Free Software Foundation.
#
# ------------------------------------------------------------------
abi <abi/4.0>,
include <abstractions/devices-usb-read>
/dev/bus/usb/@{int}/@{int} wk,
@{sys}/devices/**/usb@{int}/{,**} w,
include if exists <abstractions/devices-usb.d>
# vim:syntax=apparmor


@@ -0,0 +1,35 @@
# ------------------------------------------------------------------
#
# Copyright (C) 2021 Mikhail Morfikov
# Copyright (C) 2021-2025 Alexandre Pujol <alexandre@pujol.io>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of version 2 of the GNU General Public
# License published by the Free Software Foundation.
#
# ------------------------------------------------------------------
abi <abi/4.0>,
/dev/ r,
/dev/bus/usb/ r,
/dev/bus/usb/@{int}/ r,
/dev/bus/usb/@{int}/@{int} r,
@{sys}/class/ r,
@{sys}/class/usbmisc/ r,
@{sys}/bus/ r,
@{sys}/bus/usb/ r,
@{sys}/bus/usb/devices/{,**} r,
@{sys}/devices/**/usb@{int}/{,**} r,
# Udev data about usb devices (~equal to content of lsusb -v)
@{run}/udev/data/+usb:* r,
@{run}/udev/data/c16[6,7]:@{int} r, # USB modems
@{run}/udev/data/c18[0,8,9]:@{int} r, # USB devices & USB serial converters
include if exists <abstractions/devices-usb-read.d>
# vim:syntax=apparmor


@@ -10,8 +10,6 @@
#
# ------------------------------------------------------------------
abi <abi/4.0>,
# Note: executing groff and nroff themselves is not included in this abstraction
# so that you can choose to ix, Px or Cx them in your profile


@@ -42,6 +42,9 @@
# have open
@{run}/nscd/db* mix,
# make libnss-libvirt name resolution work.
/var/lib/libvirt/dnsmasq/* r,
# make libnss-libvirt name resolution work.
/var/lib/libvirt/dnsmasq/ r,
/var/lib/libvirt/dnsmasq/*.status r,


@@ -13,19 +13,19 @@
abi <abi/4.0>,
/{usr/,}bin/ r,
/{usr/,}bin/python{2.[4-7],3,3.[0-9],3.[1-9][0-9]} r,
/{usr/,}bin/python{2.[4-7],3,3.[0-9],3.1[0-9]} r,
/usr/{local/,}lib{,32,64}/python{2.[4-7],3,3.[0-9],3.[1-9][0-9]}/**.{pyc,so,so.*[0-9]} mr,
/usr/{local/,}lib{,32,64}/python{2.[4-7],3,3.[0-9],3.[1-9][0-9]}/**.{egg,py,pth} r,
/usr/{local/,}lib{,32,64}/python{2.[4-7],3,3.[0-9],3.[1-9][0-9]}/{site,dist}-packages/ r,
/usr/{local/,}lib{,32,64}/python{2.[4-7],3,3.[0-9],3.[1-9][0-9]}/{site,dist}-packages/**/ r,
/usr/{local/,}lib{,32,64}/python{2.[4-7],3,3.[0-9],3.[1-9][0-9]}/{site,dist}-packages/*.dist-info/{METADATA,namespace_packages.txt} r,
/usr/{local/,}lib{,32,64}/python{2.[4-7],3,3.[0-9],3.[1-9][0-9]}/{site,dist}-packages/*.VERSION r,
/usr/{local/,}lib{,32,64}/python{2.[4-7],3,3.[0-9],3.[1-9][0-9]}/{site,dist}-packages/*.egg-info/PKG-INFO r,
/usr/{local/,}lib{,32,64}/python{3.[0-9],3.[1-9][0-9]}/lib-dynload/*.so mr,
/usr/{local/,}lib{,32,64}/python{2.[4-7],3,3.[0-9],3.1[0-9]}/**.{pyc,so,so.*[0-9]} mr,
/usr/{local/,}lib{,32,64}/python{2.[4-7],3,3.[0-9],3.1[0-9]}/**.{egg,py,pth} r,
/usr/{local/,}lib{,32,64}/python{2.[4-7],3,3.[0-9],3.1[0-9]}/{site,dist}-packages/ r,
/usr/{local/,}lib{,32,64}/python{2.[4-7],3,3.[0-9],3.1[0-9]}/{site,dist}-packages/**/ r,
/usr/{local/,}lib{,32,64}/python{2.[4-7],3,3.[0-9],3.1[0-9]}/{site,dist}-packages/*.dist-info/{METADATA,namespace_packages.txt} r,
/usr/{local/,}lib{,32,64}/python{2.[4-7],3,3.[0-9],3.1[0-9]}/{site,dist}-packages/*.VERSION r,
/usr/{local/,}lib{,32,64}/python{2.[4-7],3,3.[0-9],3.1[0-9]}/{site,dist}-packages/*.egg-info/PKG-INFO r,
/usr/{local/,}lib{,32,64}/python3.{1,}[0-9]/lib-dynload/*.so mr,
# Site-wide configuration
/etc/python{2.[4-7],3.[0-9],3.[1-9][0-9]}/** r,
/etc/python{2.[4-7],3.[0-9],3.1[0-9]}/** r,
# shared python paths
/usr/share/{pyshared,pycentral,python-support}/** r,
@@ -38,12 +38,12 @@
/usr/lib/wx/python/*.pth r,
# python build configuration and headers
/usr/include/python{2.[4-7],3.[0-9],3.[1-9][0-9]}*/pyconfig.h r,
/usr/include/python{2.[4-7],3.[0-9],3.1[0-9]}*/pyconfig.h r,
owner @{HOME}/.local/lib/python{2.[4-7],3,3.[0-9],3.[1-9][0-9]}/**.{pyc,so} mr,
owner @{HOME}/.local/lib/python{2.[4-7],3,3.[0-9],3.[1-9][0-9]}/**.{egg,py,pth} r,
owner @{HOME}/.local/lib/python{2.[4-7],3,3.[0-9],3.[1-9][0-9]}/{site,dist}-packages/ r,
owner @{HOME}/.local/lib/python{2.[4-7],3,3.[0-9],3.[1-9][0-9]}/{site,dist}-packages/**/ r,
owner @{HOME}/.local/lib/python{2.[4-7],3,3.[0-9],3.1[0-9]}/**.{pyc,so} mr,
owner @{HOME}/.local/lib/python{2.[4-7],3,3.[0-9],3.1[0-9]}/**.{egg,py,pth} r,
owner @{HOME}/.local/lib/python{2.[4-7],3,3.[0-9],3.1[0-9]}/{site,dist}-packages/ r,
owner @{HOME}/.local/lib/python{2.[4-7],3,3.[0-9],3.1[0-9]}/{site,dist}-packages/**/ r,
# Starting with Python 3.8, you can use the PYTHONPYCACHEPREFIX environment
# variable to define a cache directory for Python.
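The change above tightens the minor-version component from `3.[1-9][0-9]` (3.10 through 3.99) to `3.1[0-9]` (3.10 through 3.19 only). These character classes translate one-for-one into regexes, which makes the effect easy to verify (the regex translation is mine, not the AppArmor matcher; both alternations also carry a separate `3.[0-9]` branch for 3.0-3.9):

```python
import re

old = re.compile(r'^3\.[1-9][0-9]$')  # matches 3.10 .. 3.99
new = re.compile(r'^3\.1[0-9]$')      # matches 3.10 .. 3.19 only

for ver in ('3.10', '3.19', '3.20', '3.99'):
    print(ver, bool(old.match(ver)), bool(new.match(ver)))
```

So a hypothetical Python 3.20 path would stop matching under the new pattern, while 3.10 through 3.19 are unaffected.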


@@ -1,5 +1,3 @@
abi <abi/4.0>,
profile snap_browsers {
include if exists <abstractions/snap_browsers.d>
include <abstractions/base>


@@ -2,8 +2,6 @@
# LOGPROF-SUGGEST: no
# Author: Daniel Richard G. <skunk@iSKUNK.ORG>
abi <abi/4.0>,
include <abstractions/base>
include <abstractions/freedesktop.org>
include <abstractions/nameservice>


@@ -1,37 +0,0 @@
#------------------------------------------------------------------
# Copyright (C) 2024 Canonical Ltd.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of version 2 of the GNU General Public
# License published by the Free Software Foundation.
#------------------------------------------------------------------
# vim: ft=apparmor
abi <abi/4.0>,
include <tunables/global>
profile tar /usr/bin/tar {
include <abstractions/base>
# used to extract user files as root
capability chown,
# used to compress user files as root
capability dac_override,
capability dac_read_search,
file rwl /**,
# tar can be made to filter archives through an arbitrary program
/{usr{/local,},}/{bin,sbin}/* ix,
/opt/** ix,
# tar can compress/extract files over rsh/ssh
network stream ip=127.0.0.1,
network stream ip=::1,
# Site-specific additions and overrides. See local/README for details.
include if exists <local/tar>
}


@@ -17,6 +17,7 @@ include <tunables/multiarch>
include <tunables/proc>
include <tunables/alias>
include <tunables/kernelvars>
include <tunables/system>
include <tunables/xdg-user-dirs>
include <tunables/share>
include <tunables/etc>


@@ -0,0 +1,99 @@
# ------------------------------------------------------------------
#
# Copyright (C) 2025 Alexandre Pujol <alexandre@pujol.io>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of version 2 of the GNU General Public
# License published by the Free Software Foundation.
#
# ------------------------------------------------------------------
# Any digit
@{d}=[0-9]
# Any letter
@{l}=[a-zA-Z]
# Single alphanumeric character
@{c}=[0-9a-zA-Z]
# Word character: matches any letter, digit or underscore.
@{w}=[a-zA-Z0-9_]
# Single hexadecimal character
@{h}=[0-9a-fA-F]
# Integer up to 10 digits (0-9999999999)
@{int}=@{d}{@{d},}{@{d},}{@{d},}{@{d},}{@{d},}{@{d},}{@{d},}{@{d},}{@{d},}
# hexadecimal, alphanumeric and word up to 64 characters
@{hex}=@{h}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}{@{h},}
@{rand}=@{c}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}{@{c},}
@{word}=@{w}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}{@{w},}
# Unsigned integer over 8 bits (0...255)
@{u8}=[0-9]{[0-9],} 1[0-9][0-9] 2[0-4][0-9] 25[0-5]
# Unsigned integer over 16 bits (0...65,535 5 digits)
@{u16}={@{d},[1-9]@{d},[1-9]@{d}@{d},[1-9]@{d}@{d}@{d},[1-6]@{d}@{d}@{d}@{d}}
# Unsigned integer over 32 bits (0...4,294,967,295 10 digits)
@{u32}={@{d},[1-9]@{d},[1-9]@{d}@{d},[1-9]@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d}@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d},[1-4]@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}}
# Unsigned integer over 64 bits (0...18,446,744,073,709,551,615 20 digits).
@{u64}={@{d},[1-9]@{d},[1-9]@{d}@{d},[1-9]@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d}@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d},[1-9]@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d},1@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}@{d}}
# Any x digits characters
@{int2}=@{d}@{d}
@{int4}=@{int2}@{int2}
@{int6}=@{int4}@{int2}
@{int8}=@{int4}@{int4}
@{int9}=@{int8}@{d}
@{int10}=@{int8}@{int2}
@{int12}=@{int8}@{int4}
@{int15}=@{int8}@{int4}@{int2}@{d}
@{int16}=@{int8}@{int8}
@{int32}=@{int16}@{int16}
@{int64}=@{int32}@{int32}
# Any x hexadecimal characters
@{hex2}=@{h}@{h}
@{hex4}=@{hex2}@{hex2}
@{hex6}=@{hex4}@{hex2}
@{hex8}=@{hex4}@{hex4}
@{hex9}=@{hex8}@{h}
@{hex10}=@{hex8}@{hex2}
@{hex12}=@{hex8}@{hex4}
@{hex15}=@{hex8}@{hex4}@{hex2}@{h}
@{hex16}=@{hex8}@{hex8}
@{hex32}=@{hex16}@{hex16}
@{hex38}=@{hex32}@{hex6}
@{hex64}=@{hex32}@{hex32}
# Any x alphanumeric characters
@{rand2}=@{c}@{c}
@{rand4}=@{rand2}@{rand2}
@{rand6}=@{rand4}@{rand2}
@{rand8}=@{rand4}@{rand4}
@{rand9}=@{rand8}@{c}
@{rand10}=@{rand8}@{rand2}
@{rand12}=@{rand8}@{rand4}
@{rand15}=@{rand8}@{rand4}@{rand2}@{c}
@{rand16}=@{rand8}@{rand8}
@{rand32}=@{rand16}@{rand16}
@{rand64}=@{rand32}@{rand32}
# Any x word characters
@{word2}=@{w}@{w}
@{word4}=@{word2}@{word2}
@{word6}=@{word4}@{word2}
@{word8}=@{word4}@{word4}
@{word9}=@{word8}@{w}
@{word10}=@{word8}@{word2}
@{word12}=@{word8}@{word4}
@{word15}=@{word8}@{word4}@{word2}@{w}
@{word16}=@{word8}@{word8}
@{word32}=@{word16}@{word16}
@{word64}=@{word32}@{word32}
include if exists <tunables/system.d>
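The `@{u8}` and `@{u16}` alternations above can be cross-checked against their documented ranges by rendering them as regexes (the translation and bounds check are mine; note that the `[1-6]@{d}@{d}@{d}@{d}` branch deliberately over-approximates beyond 65535, up to 69999):

```python
import re

# Regex renderings of the AppArmor alternations (translation is illustrative)
u8 = re.compile(r'^(?:[0-9][0-9]?|1[0-9][0-9]|2[0-4][0-9]|25[0-5])$')
u16 = re.compile(r'^(?:[0-9]|[1-9][0-9]|[1-9][0-9]{2}|[1-9][0-9]{3}|[1-6][0-9]{4})$')

# u8 covers exactly 0..255
assert all(u8.match(str(n)) for n in range(256))
assert not u8.match('256')

# u16 covers 0..65535, over-approximating up to 69999 as the pattern allows
assert u16.match('65535') and not u16.match('70000')
print('patterns behave as documented')
```

The leading `[1-9]` branches also keep numbers with leading zeros (like `007`) from matching, which is why the alternation is spelled out per digit count instead of using a plain repetition.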


@@ -9,8 +9,6 @@
# ------------------------------------------------------------------
# vim: ft=apparmor
abi <abi/4.0>,
include <tunables/global>
profile dovecot-director /usr/lib*/dovecot/director flags=(attach_disconnected) {


@@ -9,8 +9,6 @@
# ------------------------------------------------------------------
# vim: ft=apparmor
abi <abi/4.0>,
include <tunables/global>
profile dovecot-doveadm-server /usr/lib*/dovecot/doveadm-server flags=(attach_disconnected) {


@@ -12,8 +12,6 @@
# vim: ft=apparmor
# for https://wiki.dovecot.org/Replication
abi <abi/4.0>,
include <tunables/dovecot>
include <tunables/global>


@@ -11,17 +11,19 @@
#
# disabled by default as it can break some use cases on a system that
# doesn't have, or has disabled, user namespace restrictions for unconfined
# use aa-enforce to enable it
abi <abi/4.0>,
include <tunables/global>
profile unshare /usr/bin/unshare flags=(attach_disconnected) {
# not allow all, to allow for cix transition
# and to limit executable mapping to just unshare
profile unshare /usr/bin/unshare flags=(attach_disconnected mediate_deleted) {
# not allow all, to allow for pix stack on systems that don't support
# rule priority.
#
# sadly we have to allow 'm' everywhere to allow children to work under
# profile stacking atm.
allow capability,
allow file rwlk /{**,},
allow file rwmlk /{**,},
allow network,
allow unix,
allow ptrace,
@@ -33,33 +35,41 @@ profile unshare /usr/bin/unshare flags=(attach_disconnected) {
allow umount,
allow pivot_root,
allow dbus,
audit allow cx /** -> unpriv,
allow file m /usr/lib/@{multiarch}/libc.so.6,
allow file m /usr/bin/unshare,
# This will stack a target profile against unpriv_unshare
# Most of the comments for the pix transition in bwrap-userns-restrict
# also apply here, with the exception of unshare not using no-new-privs
# Thus, we only need a two-layer stack instead of a three-layer stack
audit allow pix /** -> &unpriv_unshare,
# the local include should not be used without understanding the userns
# restriction.
# Site-specific additions and overrides. See local/README for details.
include if exists <local/unshare-userns-restrict>
profile unpriv flags=(attach_disconnected) {
# not allow all, to allow for pix stack
allow file rwlkm /{**,},
allow network,
allow unix,
allow ptrace,
allow signal,
allow mqueue,
allow io_uring,
allow userns,
allow mount,
allow umount,
allow pivot_root,
allow dbus,
allow pix /** -> &unshare//unpriv,
audit deny capability,
}
}
profile unpriv_unshare flags=(attach_disconnected mediate_deleted) {
# not allow all, to allow for pix stack
allow file rwlkm /{**,},
allow network,
allow unix,
allow ptrace,
allow signal,
allow mqueue,
allow io_uring,
allow userns,
allow mount,
allow umount,
allow pivot_root,
allow dbus,
# Maintain the stack against itself for further transitions
# If done recursively the stack will remove any duplicate
allow pix /** -> &unpriv_unshare,
audit deny capability,
# the local include should not be used without understanding the userns
# restriction.
# Site-specific additions and overrides. See local/README for details.
include if exists <local/unpriv_unshare>
}


@@ -8,8 +8,6 @@
#
# ------------------------------------------------------------------
abi <abi/4.0>,
include <tunables/global>
profile pyzorsocket /usr/bin/pyzorsocket {


@@ -8,8 +8,6 @@
#
# ------------------------------------------------------------------
abi <abi/4.0>,
include <tunables/global>
profile razorsocket /usr/bin/razorsocket {


@@ -8,8 +8,6 @@
#
# ------------------------------------------------------------------
abi <abi/4.0>,
include <tunables/global>
profile clamd /usr/sbin/clamd {


@@ -8,8 +8,6 @@
#
# ------------------------------------------------------------------
abi <abi/4.0>,
include <tunables/global>
profile haproxy /usr/sbin/haproxy {


@@ -66,7 +66,6 @@ backends:
- ubuntu-cloud-24.04:
username: ubuntu
password: ubuntu
workers: 4
manual: true
- ubuntu-cloud-24.10:
username: ubuntu


@@ -1,13 +0,0 @@
summary: smoke test for the tar profile
execute: |
# tar works (this is a very basic test).
# create a text file, archive it and delete the original file
echo "test" > file.txt
tar -czf archive.tar file.txt
rm file.txt
# extract archive, assert content is correct
tar -xzf archive.tar
test "$(cat file.txt)" = "test"
# The profile is attached based on the program path.
"$SPREAD_PATH"/tests/bin/actual-profile-of tar | MATCH 'tar \(enforce\)'


@@ -64,7 +64,7 @@ mount_cleanup() {
}
do_onexit="mount_cleanup"
dd if=/dev/zero of=${mount_file} bs=1024 count=512 2> /dev/null
fallocate -l 512K ${mount_file}
/sbin/mkfs -t${fstype} -F ${mount_file} > /dev/null 2> /dev/null
/bin/mkdir ${mount_point}
/bin/mkdir ${mount_point2}
@@ -77,8 +77,8 @@ if [ ! -b /dev/loop0 ] ; then
fi
# find the next free loop device and mount it
loop_device=$(losetup -f) || fatalerror 'Unable to find a free loop device'
/sbin/losetup "$loop_device" ${mount_file} > /dev/null 2> /dev/null
/sbin/losetup -f ${mount_file} || fatalerror 'Unable to set up a loop device'
loop_device="$(/sbin/losetup -n -O NAME -l -j ${mount_file})"
options=(
# default and non-default options


@@ -83,7 +83,7 @@ environment:
# test is expected to fail.
#
# Error: unix_fd_server passed. Test 'ATTACH_DISCONNECTED (attach_disconnected.path rule at /)' was expected to 'fail'
XFAIL/attach_disconnected: opensuse-cloud-tumbleweed debian-cloud-12 debian-cloud-13 ubuntu-cloud-22.04 ubuntu-cloud-24.04 ubuntu-cloud-24.10
XFAIL/attach_disconnected: opensuse-cloud-tumbleweed debian-cloud-12 debian-cloud-13 ubuntu-cloud-22.04
# Error: unix_fd_server failed. Test 'fd passing; unconfined client' was expected to 'pass'. Reason for failure 'FAIL - bind failed: Permission denied'
# Error: unix_fd_server failed. Test 'fd passing; confined client w/ rw' was expected to 'pass'. Reason for failure 'FAIL - bind failed: Permission denied'
XFAIL/deleted: opensuse-cloud-tumbleweed debian-cloud-12 debian-cloud-13
@@ -102,21 +102,6 @@ environment:
# Error: unix_socket passed. Test 'AF_UNIX pathname socket (seqpacket); confined server w/ a missing af_unix access (create)' was expected to 'fail'
# Error: unix_socket failed. Test 'AF_UNIX pathname socket (seqpacket); confined client w/ access (rw)' was expected to 'pass'. Reason for failure 'FAIL - setsockopt (SO_RCVTIMEO): Permission denied'
XFAIL/unix_socket_pathname: opensuse-cloud-tumbleweed debian-cloud-12 debian-cloud-13
# using ptrace v6 tests ...
# Error: ptrace failed. Test 'test allow all' was expected to 'pass'. Reason for failure 'FAIL: child exec failed - : Permission denied'
# Error: ptrace failed. Test 'test allow all -c' was expected to 'pass'. Reason for failure 'FAIL: child exec failed - : Permission denied'
# Error: ptrace failed. Test 'test allow all -h' was expected to 'pass'. Reason for failure 'FAIL: child exec failed - : Permission denied'
# Error: ptrace failed. Test 'test allow all -hc' was expected to 'pass'. Reason for failure 'FAIL: child exec failed - : Permission denied'
# Error: ptrace failed. Test 'test allow all -h prog' was expected to 'pass'. Reason for failure 'FAIL: child exec failed - : Permission denied'
# Error: ptrace failed. Test 'test allow all -hc prog' was expected to 'pass'. Reason for failure 'FAIL: child exec failed - : Permission denied'
XFAIL/ptrace: debian-cloud-12 debian-cloud-13 ubuntu-cloud-22.04 ubuntu-cloud-24.04 ubuntu-cloud-24.10 opensuse-cloud-tumbleweed
# Error: posix_mq_rcv failed. Test 'POSIX MQUEUE (confined root - allow all)' was expected to 'pass'. Reason for failure 'FAIL 0 - execlp /tmp/apparmor/tests/regression/apparmor/posix_mq_snd /queuename- Permission denied'
# Error: posix_mq_rcv failed. Test 'POSIX MQUEUE (confined root - allow all : mq_notify)' was expected to 'pass'. Reason for failure 'FAIL 0 - execlp /tmp/apparmor/tests/regression/apparmor/posix_mq_snd /queuename 4- Permission denied'
# Error: posix_mq_rcv failed. Test 'POSIX MQUEUE (confined root - allow all : select)' was expected to 'pass'. Reason for failure 'FAIL 0 - execlp /tmp/apparmor/tests/regression/apparmor/posix_mq_snd /queuename- Permission denied'
# Error: posix_mq_rcv failed. Test 'POSIX MQUEUE (confined root - allow all : poll)' was expected to 'pass'. Reason for failure 'FAIL 0 - execlp /tmp/apparmor/tests/regression/apparmor/posix_mq_snd /queuename- Permission denied'
# Error: posix_mq_rcv failed. Test 'POSIX MQUEUE (confined root - allow all : epoll)' was expected to 'pass'. Reason for failure 'FAIL 0 - execlp /tmp/apparmor/tests/regression/apparmor/posix_mq_snd /queuename- Permission denied'
XFAIL/mqueue: debian-cloud-12 debian-cloud-13 ubuntu-cloud-22.04 ubuntu-cloud-24.04 ubuntu-cloud-24.10 opensuse-cloud-tumbleweed
XFAIL/posix_ipc: ubuntu-cloud-22.04 ubuntu-cloud-24.04 ubuntu-cloud-24.10
artifacts:
- bash.log
- bash.err
@@ -138,14 +123,13 @@ execute: |
echo "Test execution logs are in the files bash.{log,err,trace} and are collected as artifacts"
echo "Bash errors are listed below:"
cat bash.err
echo "Tail of the trace is:"
tail bash.trace
exit 1
else
for xfail in ${XFAIL:-}; do
if [ "$SPREAD_SYSTEM" = "$xfail" ]; then
echo "Test $SPREAD_VARIANT has unexpectedly passed"
echo "Test execution logs are in the files bash.{log,err,trace} and are collected as artifacts"
echo "Bash errors are listed below:"
cat bash.err
exit 1
fi
done
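The script above treats a failure on an `XFAIL`-listed system as expected, and a pass on such a system as an error in its own right. That decision table, sketched in Python (the function and verdict names are mine):

```python
def judge(passed, system, xfail_systems):
    """Verdict for one spread run, mirroring the shell snippet's XFAIL handling."""
    if not passed:
        return 'expected-failure' if system in xfail_systems else 'failure'
    return 'unexpected-pass' if system in xfail_systems else 'pass'

xfail = {'opensuse-cloud-tumbleweed', 'debian-cloud-12', 'debian-cloud-13'}
print(judge(False, 'debian-cloud-12', xfail))   # expected-failure
print(judge(True, 'debian-cloud-12', xfail))    # unexpected-pass
print(judge(True, 'ubuntu-cloud-24.04', xfail)) # pass
```

Surfacing `unexpected-pass` as an error is what forces the XFAIL lists above to be pruned (as this merge request does) once a platform starts passing.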


@@ -39,10 +39,10 @@ all: docs
docs: ${MANPAGES} ${HTMLMANPAGES}
# need some better way of determining this
DESTDIR=/
BINDIR=${DESTDIR}/usr/sbin
CONFDIR=${DESTDIR}/etc/apparmor
PYPREFIX=/usr
DESTDIR?=/
BINDIR?=${DESTDIR}/usr/sbin
CONFDIR?=${DESTDIR}/etc/apparmor
PYPREFIX?=/usr
po/${NAME}.pot: ${TOOLS} ${PYMODULES}


@@ -139,7 +139,7 @@ ratelimit_saved = sysctl_read(ratelimit_sysctl)
try:
sysctl_write(ratelimit_sysctl, 0)
except PermissionError: # will fail in lxd
except OSError: # will fail in lxd
warn("Can't set printk_ratelimit, some events might be lost")
atexit.register(restore_ratelimit)


@@ -1,7 +1,7 @@
#! /usr/bin/python3
# ----------------------------------------------------------------------
# Copyright (C) 2013 Kshitij Gupta <kgupta8592@gmail.com>
# Copyright (C) 2014-2024 Christian Boltz <apparmor@cboltz.de>
# Copyright (C) 2014-2018 Christian Boltz <apparmor@cboltz.de>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of version 2 of the GNU General Public
@@ -17,6 +17,7 @@ import argparse
import apparmor.aa
import apparmor.cleanprofile as cleanprofile
import apparmor.severity
import apparmor.ui as aaui
from apparmor.fail import enable_aa_exception_handler
from apparmor.translations import init_translation
@@ -113,11 +114,12 @@ class Merge(object):
def ask_merge_questions(self):
other = self.base
log_dict = {'merge': other.active_profiles.get_all_profiles()}
log_dict = {'merge': apparmor.aa.split_to_merged(other.aa)}
apparmor.aa.loadincludes()
apparmor.aa.load_sev_db()
if not apparmor.aa.sev_db:
apparmor.aa.sev_db = apparmor.severity.Severity(apparmor.aa.CONFDIR + '/severity.db', _('unknown'))
# ask about preamble rules
apparmor.aa.ask_rule_questions(


@@ -37,11 +37,9 @@ import os
import pwd
import re
import sys
import tempfile
import time
import subprocess
from collections import defaultdict
import notify2
import psutil
@@ -56,7 +54,7 @@ from apparmor.fail import enable_aa_exception_handler
from apparmor.notify import get_last_login_timestamp
from apparmor.translations import init_translation
from apparmor.logparser import ReadLog
from apparmor.gui import UsernsGUI, ErrorGUI, ShowMoreGUI, ShowMoreGUIAggregated, set_interface_theme
from apparmor.gui import UsernsGUI, ErrorGUI, ShowMoreGUI, set_interface_theme
from apparmor.rule.file import FileRule
from dbus import DBusException
@@ -123,7 +121,7 @@ def is_event_in_filter(event, filters):
return True
def daemonize():
def notify_about_new_entries(logfile, filters, wait=0):
"""Run the notification daemon in the background."""
# Kill other instances of aa-notify if already running
for process in psutil.process_iter():
@@ -146,9 +144,19 @@ def daemonize():
except DBusException:
sys.exit(_('Cannot initialize notify2. Please check that your terminal can use a graphical interface'))
thread = threading.Thread(target=start_glib_loop)
thread.daemon = True
thread.start()
try:
thread = threading.Thread(target=start_glib_loop)
thread.daemon = True
thread.start()
for event in follow_apparmor_events(logfile, wait):
if not is_event_in_filter(event, filters):
continue
debug_logger.info(format_event(event, logfile))
yield event, format_event(event, logfile)
except PermissionError:
sys.exit(_("ERROR: Cannot read {}. Please check permissions.").format(logfile))
else:
print(_('Notification emitter started in the background'))
# pids = (os.getpid(), newpid)
@@ -156,17 +164,6 @@ def daemonize():
os._exit(0) # Exit child without calling exit handlers etc
def notify_about_new_entries(logfile, filters, wait=0):
try:
for event in follow_apparmor_events(logfile, wait):
if not is_event_in_filter(event, filters):
continue
debug_logger.info(format_event(event, logfile))
yield event, format_event(event, logfile)
except PermissionError:
sys.exit(_("ERROR: Cannot read {}. Please check permissions.").format(logfile))
def show_entries_since_epoch(logfile, epoch_since, filters):
"""Show AppArmor notifications since given timestamp."""
count = 0
@@ -295,22 +292,6 @@ def reopen_logfile_if_needed(logfile, logdata, log_inode, log_size):
return (logdata, log_inode, log_size)
def get_apparmor_events_return(logfile, since=0):
"""Read audit events from log source and return all relevant events."""
out = []
# Get logdata from file
# @TODO Implement more log sources in addition to just the logfile
try:
with open_file_read(logfile) as logdata:
for event in parse_logdata(logdata):
if event.epoch > since:
out.append(event)
return out
except PermissionError:
sys.exit(_("ERROR: Cannot read {}. Please check permissions.".format(logfile)))
def get_apparmor_events(logfile, since=0):
"""Read audit events from log source and yield all relevant events."""
@@ -397,10 +378,6 @@ def drop_privileges():
os.setegid(int(next_gid))
os.seteuid(int(next_uid))
# sudo does not preserve DBUS address, so we need to guess it based on UID
if 'DBUS_SESSION_BUS_ADDRESS' not in os.environ:
os.environ['DBUS_SESSION_BUS_ADDRESS'] = 'unix:path=/run/user/{}/bus'.format(os.geteuid())
def raise_privileges():
"""Raise privileges of process.
@@ -500,95 +477,77 @@ def ask_for_user_ns_denied(path, name, interactive=True):
debug_logger.debug('No action from the user for {}'.format(path))
def can_leverage_userns_event(ev):
def prompt_userns(ev, special_profiles):
"""If the user namespace creation denial was generated by an unconfined binary, displays a graphical notification.
Creates a new profile to allow userns if the user wants it. Returns whether a notification was displayed to the user
"""
if not is_special_profile_userns(ev, special_profiles):
return False
if ev['execpath'] is None:
return 'error_cannot_find_path'
UsernsGUI.show_error_cannot_find_execpath(ev['comm'], os.path.dirname(os.path.abspath(__file__)) + '/default_unconfined.template')
return True
aa.update_profiles()
if aa.get_profile_filename_from_profile_name(ev['comm']):
return 'error_userns_profile_exists'
return 'ok'
def prompt_userns(ev):
"""If the user namespace creation denial was generated by an unconfined binary, displays a graphical notification.
Creates a new profile to allow userns if the user wants it. Returns whether a notification was displayed to the user
"""
userns_event_usable = can_leverage_userns_event(ev)
if userns_event_usable == 'error_cannot_find_path':
UsernsGUI.show_error_cannot_find_execpath(ev['comm'], os.path.dirname(os.path.abspath(__file__)) + '/default_unconfined.template')
elif userns_event_usable == 'error_userns_profile_exists':
# There is already a profile with this name: we show an error to the user.
# We could use the full path as profile name like for the old profiles if we want to handle this case
# but if execpath is not supported by the kernel it could also mean that we inferred a bad path
# So we do nothing beyond showing this error.
ErrorGUI(
_('Application {profile} tried to create a user namespace, but a profile already exists with this name.\n'
'This is likely because there are several binaries named {profile}, thus the path inferred by AppArmor ({inferred_path}) is not correct.\n'
'You should review your profiles (in {profile_dir}).').format(profile=ev['comm'], inferred_path=ev['execpath'], profile_dir=aa.profile_dir),
'This is likely because there are several binaries named {profile}, thus the path inferred by AppArmor ({inferred_path}) is not correct.\n'
'You should review your profiles (in {profile_dir}).').format(profile=ev['comm'], inferred_path=ev['execpath'], profile_dir=aa.profile_dir),
False).show()
elif userns_event_usable == 'ok':
ask_for_user_ns_denied(ev['execpath'], ev['comm'])
return True
ask_for_user_ns_denied(ev['execpath'], ev['comm'])
def get_more_info_about_event(rl, ev, special_profiles, header='', get_clean_rule=False):
out = header
clean_rule = None
for key, value in ev.items():
if value:
out += '\t{} = {}\n'.format(_(key), value)
out += _('\nThe software that declined this operation is {}\n').format(ev['profile'])
rule = rl.create_rule_from_ev(ev)
if rule:
if type(rule) is FileRule and rule.exec_perms == FileRule.ANY_EXEC:
rule.exec_perms = 'Pix'
aa.update_profiles()
if customized_message['userns']['cond'](ev, special_profiles):
profile_path = None
out += _('You may allow it through a dedicated unconfined profile for {}.').format(ev['comm'])
userns_event_usable = can_leverage_userns_event(ev)
if userns_event_usable == 'error_cannot_find_path':
clean_rule = _('# You may allow it through a dedicated unconfined profile for {0}. However, apparmor cannot find {0}. If you want to allow it, please create a profile for it manually.').format(ev['comm'])
elif userns_event_usable == 'error_userns_profile_exists':
clean_rule = _('# You may allow it through a dedicated unconfined profile for {} ({}). However, a profile already exists with this name. If you want to allow it, please create a profile for it manually.').format(ev['comm'], ev['execpath'])
elif userns_event_usable == 'ok':
clean_rule = _('# You may allow it through a dedicated unconfined profile for {} ({})').format(ev['comm'], ev['execpath'])
else:
profile_path = aa.get_profile_filename_from_profile_name(ev['profile'])
clean_rule = rule.get_clean()
if profile_path:
out += _('If you want to allow this operation you can add the line below in profile {}\n').format(profile_path)
out += clean_rule
else:
out += _('However {profile} is not in {profile_dir}\nIt is likely that the profile was not stored in {profile_dir} or was removed.\n').format(profile=ev['profile'], profile_dir=aa.profile_dir)
else: # Should not happen
out += _('ERROR: Could not create rule from event.')
profile_path = None
if get_clean_rule:
return out, profile_path, clean_rule
else:
return out, profile_path
return True
# TODO reuse more code from aa-logprof in callbacks
def cb_more_info(notification, action, _args):
(ev, rl, special_profiles) = _args
args.wait = args.min_wait
(raw_ev, rl, special_profiles) = _args
notification.close()
out, profile_path, clean_rule = get_more_info_about_event(rl, ev, special_profiles, _('Operation denied by AppArmor\n\n'), get_clean_rule=True)
parsed_event = rl.parse_record(raw_ev)
out = _('Operation denied by AppArmor\n\n')
ans = ShowMoreGUI(profile_path, out, clean_rule, ev['profile'], profile_path is not None).show()
for key, value in parsed_event.items():
if value:
out += '\t{} = {}\n'.format(_(key), value)
out += _('\nThe software that declined this operation is {}\n').format(parsed_event['profile'])
rule = rl.create_rule_from_ev(parsed_event)
# Exec events are created with the default FileRule.ANY_EXEC. We use Pix for actual rules
if type(rule) is FileRule and rule.exec_perms == FileRule.ANY_EXEC:
rule.exec_perms = 'Pix'
if rule:
aa.update_profiles()
if customized_message['userns']['cond'](parsed_event, special_profiles):
profile_path = None
out += _('You may allow it through a dedicated unconfined profile for {}.').format(parsed_event['comm'])
else:
profile_path = aa.get_profile_filename_from_profile_name(parsed_event['profile'])
if profile_path:
out += _('If you want to allow this operation you can add the line below in profile {}\n').format(profile_path)
out += rule.get_clean()
else:
out += _('However {profile} is not in {profile_dir}\nIt is likely that the profile was not stored in {profile_dir} or was removed.\n').format(profile=parsed_event['profile'], profile_dir=aa.profile_dir)
else: # Should not happen
out += _('ERROR: Could not create rule from event.')
return
ans = ShowMoreGUI(profile_path, out, rule.get_clean(), parsed_event['profile'], profile_path is not None).show()
if ans == 'add_rule':
add_to_profile(clean_rule, ev['profile'])
add_to_profile(rule.get_clean(), parsed_event['profile'])
elif ans in {'allow', 'deny'}:
create_userns_profile(ev['comm'], ev['execpath'], ans)
create_userns_profile(parsed_event['comm'], parsed_event['execpath'], ans)
def add_to_profile(rule, profile_name):
@@ -614,69 +573,12 @@ def add_to_profile(rule, profile_name):
ErrorGUI(_('Failed to add rule {rule} to {profile}\nError code = {retcode}').format(rule=rule, profile=profile_name, retcode=e.returncode), False).show()
def create_from_file(file_path):
update_profile_path = update_profile.__file__
command = ['pkexec', '--keep-cwd', update_profile_path, 'from_file', file_path]
try:
subprocess.run(command, check=True)
except subprocess.CalledProcessError as e:
if e.returncode != 126: # return code 126 means the user cancelled the request
ErrorGUI(_('Failed to add some rules'), False).show()
def allow_all(clean_rules):
local_template_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'default_unconfined.template')
if os.path.exists(local_template_path): # We are using local aa-notify -> we use local template
template_path = local_template_path
else:
template_path = aa.CONFDIR + '/default_unconfined.template'
tmp = tempfile.NamedTemporaryFile()
with open(tmp.name, mode='w') as f:
profile = None
for line in clean_rules.splitlines():
if line == '':
continue
elif line[0] == '#':
profile = None
pass
elif line[0] != '\t':
profile = line[8:-1] # 8:-1 is to remove 'profile ' and ':'
else:
if line[1] == '#': # Add to userns
if line[-1] == '.': # '.' <==> There is an error: we cannot add the profile automatically
continue
profile_name = line.split()[-2] # line always finishes by <profile_name>
bin_path = line.split()[-1][1:-1] # 1:-1 to remove the parentheses
profile_path = aa.get_profile_filename_from_profile_name(profile_name, True)
if not profile_path:
ErrorGUI(_('Cannot get profile path for {}.').format(profile_name), False).show()
continue
f.write('create_userns\t{}\t{}\t{}\t{}\t{}\n'.format(template_path, profile_name, bin_path, profile_path, 'allow'))
else:
if profile is not None:
f.write('add_rule\t{}\t{}\n'.format(line[1:], profile))
else:
print(_("Rule {} cannot be added automatically").format(line[1:]), file=sys.stdout)
create_from_file(tmp.name)
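allow_all() above classifies each line of the clean_rules text: an unindented `profile <name>:` line selects the target profile, tab-indented lines are rules for it, and `#`-prefixed lines carry comments or userns stubs. A minimal standalone sketch of that line classifier, using a hypothetical `parse_clean_rules` helper that skips the userns special-casing:

```python
def parse_clean_rules(text):
    # Map each 'profile <name>:' header to its tab-indented rule lines;
    # '#'-prefixed lines end the current profile block.
    rules, profile = {}, None
    for line in text.splitlines():
        if not line:
            continue
        if line.startswith('#'):
            profile = None
        elif not line.startswith('\t'):
            profile = line[len('profile '):-1]  # strip 'profile ' and ':'
            rules[profile] = []
        elif profile is not None:
            rules[profile].append(line.lstrip('\t'))
    return rules

sample = 'profile firefox:\n\t/etc/passwd r,\n\tcapability net_admin,\n'
parsed = parse_clean_rules(sample)
```

The real function additionally routes tab-indented `#` lines through the userns profile-creation path; this sketch only covers the plain rule case.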
# TODO reuse more code from aa-logprof in callbacks
def cb_more_info_aggregated(notification, action, _args):
(to_display, aggregated, clean_rules) = _args
args.wait = args.min_wait
res = ShowMoreGUIAggregated(to_display, aggregated, clean_rules).show()
if res == 'allow_all':
allow_all(clean_rules)
def cb_add_to_profile(notification, action, _args):
(ev, rl, special_profiles) = _args
args.wait = args.min_wait
(raw_ev, rl, special_profiles) = _args
notification.close()
parsed_event = rl.parse_record(raw_ev)
rule = rl.create_rule_from_ev(ev)
rule = rl.create_rule_from_ev(parsed_event)
# Exec events are created with the default FileRule.ANY_EXEC. We use Pix for actual rules
if type(rule) is FileRule and rule.exec_perms == FileRule.ANY_EXEC:
@@ -688,10 +590,10 @@ def cb_add_to_profile(notification, action, _args):
aa.update_profiles()
if customized_message['userns']['cond'](ev, special_profiles):
ask_for_user_ns_denied(ev['execpath'], ev['comm'], False)
if customized_message['userns']['cond'](parsed_event, special_profiles):
ask_for_user_ns_denied(parsed_event['execpath'], parsed_event['comm'], False)
else:
add_to_profile(rule.get_clean(), ev['profile'])
add_to_profile(rule.get_clean(), parsed_event['profile'])
customized_message = {
@@ -714,83 +616,6 @@ def start_glib_loop():
loop.run()
def aggregate_event(agg, ev, keys_to_aggregate):
profile = ev['profile']
agg[profile]['count'] += 1
agg[profile]['events'].append(ev)
for key in keys_to_aggregate:
if key in ev:
value = ev[key]
agg[profile]['values'][key][value] += 1
return agg
def get_aggregated(rl, agg, max_nb_profiles, keys_to_aggregate, special_profiles):
notification = ''
summary = ''
more_info = ''
clean_rules = ''
summary = _('Notifications were raised for profiles: {}\n').format(', '.join(list(agg.keys())))
sorted_profiles = sorted(agg.items(), key=lambda item: item[1]['count'], reverse=True)
for profile, data in sorted_profiles:
profile_notif = _('profile: {} — {} events\n').format(profile, data['count'])
notification += profile_notif
summary += profile_notif
if len(agg) <= max_nb_profiles:
for key in keys_to_aggregate:
if key in data['values']:
total_key_events = sum(data['values'][key].values())
sorted_values = sorted(data['values'][key].items(), key=lambda item: item[1], reverse=True)
for value, count in sorted_values:
percent = (count / total_key_events) * 100
if percent >= 20: # We exclude rare cases for clarity. 20% is arbitrary
summary += _('\t{} was {} {:.1f}% of the time\n').format(key, value, percent)
summary += '\n'
more_info += _('profile {}, {} events\n').format(profile, data['count'])
rules_for_profiles = set()
found_profile = True
for i, ev in enumerate(data['events']):
more_info_rule, profile_path, clean_rule = get_more_info_about_event(rl, ev, special_profiles, _(' - Event {} -\n').format(i + 1), get_clean_rule=True)
if i != 0:
more_info += '\n\n'
if not profile_path:
found_profile = False
more_info += more_info_rule
if clean_rule:
rules_for_profiles.add(clean_rule)
if rules_for_profiles != set():
if profile not in special_profiles:
if found_profile:
clean_rules += _('profile {}:').format(profile)
else:
clean_rules += _('# Unknown profile {}').format(profile)
else:
clean_rules += _('# unprivileged userns denials ({}):').format(profile)
clean_rules += '\n\t' + '\n\t'.join(rules_for_profiles) + '\n'
return notification, summary, more_info, clean_rules
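get_aggregated() above reads from nested defaultdict counters built one bundle per profile; a minimal standalone sketch of that counting pattern (hypothetical `aggregate_events` helper, simplified from aggregate_event()):

```python
from collections import defaultdict

def aggregate_events(events, keys=('operation', 'class', 'name')):
    # One counter bundle per profile: total event count plus, for each
    # key of interest, how often every value of that key was seen.
    agg = defaultdict(lambda: {'count': 0,
                               'values': defaultdict(lambda: defaultdict(int))})
    for ev in events:
        entry = agg[ev['profile']]
        entry['count'] += 1
        for key in keys:
            if ev.get(key):
                entry['values'][key][ev[key]] += 1
    return agg

events = [
    {'profile': 'firefox', 'operation': 'open', 'name': '/etc/shadow'},
    {'profile': 'firefox', 'operation': 'open', 'name': '/etc/passwd'},
    {'profile': 'cupsd', 'operation': 'capable'},
]
agg = aggregate_events(events)
```

The per-value tallies are what the summary step turns into "key was value X% of the time" lines.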
def display_notification(ev, rl, message, userns_special_profiles):
message = customize_notification_message(ev, message, userns_special_profiles)
n = notify2.Notification(_('AppArmor security notice'), message, 'gtk-dialog-warning')
if can_allow_rule(ev, userns_special_profiles):
n.add_action('clicked', 'Allow', cb_add_to_profile, (ev, rl, userns_special_profiles))
n.add_action('more_clicked', 'Show More', cb_more_info, (ev, rl, userns_special_profiles))
n.show()
def display_aggregated_notification(rl, aggregated, maximum_number_notification_profiles, keys_to_aggregate, special_profiles):
notification, summary, more_info, clean_rules = get_aggregated(rl, aggregated, maximum_number_notification_profiles, keys_to_aggregate, special_profiles)
n = notify2.Notification(_('AppArmor security notice'), notification, 'gtk-dialog-warning')
n.add_action('more_aggregated_clicked', 'Show More Info', cb_more_info_aggregated, (summary, more_info, clean_rules))
n.show()
def main():
"""Run aa-notify.
@@ -827,7 +652,6 @@ def main():
parser.add_argument('-v', '--verbose', action='store_true', help=_('show messages with stats'))
parser.add_argument('-u', '--user', type=str, help=_('user to drop privileges to when not using sudo'))
parser.add_argument('-w', '--wait', type=int, metavar=('NUM'), help=_('wait NUM seconds before displaying notifications (with -p)'))
parser.add_argument('-m', '--merge-notifications', action='store_true', help=_('Merge notification for improved readability (with -p)'))
parser.add_argument('--prompt-filter', type=str, metavar=('PF'), help=_('kind of operations which display a popup prompt'))
parser.add_argument('--debug', action='store_true', help=_('debug mode'))
parser.add_argument('--configdir', type=str, help=argparse.SUPPRESS)
@@ -912,8 +736,6 @@ def main():
- ignore_denied_capability
- interface_theme
- prompt_filter
- maximum_number_notification_profiles
- keys_to_aggregate
- filter.profile,
- filter.operation,
- filter.name,
@@ -934,8 +756,6 @@ def main():
'show_notifications',
'message_body',
'message_footer',
'maximum_number_notification_profiles',
'keys_to_aggregate',
'filter.profile',
'filter.operation',
'filter.name',
@@ -1032,16 +852,6 @@ def main():
if unsupported:
sys.exit(_('ERROR: using an unsupported prompt filter: {}\nSupported values: {}').format(', '.join(unsupported), ', '.join(supported_prompt_filter)))
if 'maximum_number_notification_profiles' in config['']:
maximum_number_notification_profiles = int(config['']['maximum_number_notification_profiles'].strip())
else:
maximum_number_notification_profiles = 2
if 'keys_to_aggregate' in config['']:
keys_to_aggregate = config['']['keys_to_aggregate'].strip().split(',')
else:
keys_to_aggregate = {'operation', 'class', 'name', 'denied', 'target'}
if args.file:
logfile = args.file
elif os.path.isfile('/var/run/auditd.pid') and os.path.isfile('/var/log/audit/audit.log'):
@@ -1078,76 +888,49 @@ def main():
# Initialize the list of profiles for can_allow_rule
aa.read_profiles()
drop_privileges()
daemonize()
raise_privileges()
# At this point this script needs to be able to read 'logfile' but once
# the for loop starts, privileges can be dropped since the file descriptor
# has been opened and access granted. Further reads of the file will not
# trigger any new permission checks.
# @TODO Plan to catch PermissionError here or..?
for (event, message) in notify_about_new_entries(logfile, filters, args.wait):
ev = rl.parse_record(event)
if args.merge_notifications:
if not args.wait or args.wait == 0:
# args.wait now uses an exponential backoff.
# If there are several notifications in a time period, the time period doubles to avoid flooding.
# If there are no notifications in a time period, the time period is divided by two.
args.wait = 5
args.min_wait = args.wait
args.max_wait = args.wait * 2**5 # Arbitrary power of two (2 minutes 40 if args.wait is 5 seconds)
# @TODO redo special behaviours with a more regular function
# We ignore capability denials for binaries in ignore_denied_capability
if ev['operation'] == 'capable' and ev['comm'] in ignore_denied_capability:
continue
old_time = int(time.time())
while True:
raw_evs = get_apparmor_events_return(logfile, old_time)
drop_privileges()
if len(raw_evs) == 1: # Single event: we handle it without aggregation
raw_ev = raw_evs[0]
ev = rl.parse_record(raw_ev)
display_notification(ev, rl, format_event(raw_ev, logfile), userns_special_profiles)
elif len(raw_evs) > 1:
if args.wait < args.max_wait:
args.wait *= 2
aggregated = defaultdict(lambda: {'count': 0, 'values': defaultdict(lambda: defaultdict(int)), 'events': []})
for raw_ev in raw_evs:
ev = rl.parse_record(raw_ev)
aggregate_event(aggregated, ev, keys_to_aggregate)
display_aggregated_notification(rl, aggregated, maximum_number_notification_profiles, keys_to_aggregate, userns_special_profiles)
else:
if args.wait > args.min_wait:
args.wait /= 2
old_time = int(time.time())
# When notification is sent, raise privileges back to root if the
# original effective user id was zero (to be able to read AppArmor logs)
raise_privileges()
time.sleep(args.wait)
else:
args.min_wait = args.wait
# At this point this script needs to be able to read 'logfile' but once
# the for loop starts, privileges can be dropped since the file descriptor
# has been opened and access granted. Further reads of the file will not
# trigger any new permission checks.
# @TODO Plan to catch PermissionError here or..?
for (event, message) in notify_about_new_entries(logfile, filters, args.wait):
ev = rl.parse_record(event)
# @TODO redo special behaviours with a more regular function
# We ignore capability denials for binaries in ignore_denied_capability
if ev['operation'] == 'capable' and ev['comm'] in ignore_denied_capability:
continue
# Special behavior for userns:
if args.prompt_filter and 'userns' in args.prompt_filter and customized_message['userns']['cond'](ev, userns_special_profiles):
prompt_userns(ev)
# Special behavior for userns:
if args.prompt_filter and 'userns' in args.prompt_filter and customized_message['userns']['cond'](ev, userns_special_profiles):
if prompt_userns(ev, userns_special_profiles):
continue # Notification already displayed for this event, we go to the next one.
# Notifications should not be run as root, since root probably is
# the wrong desktop user and not the one getting the notifications.
drop_privileges()
display_notification(ev, rl, message, userns_special_profiles)
# sudo does not preserve DBUS address, so we need to guess it based on UID
if 'DBUS_SESSION_BUS_ADDRESS' not in os.environ:
os.environ['DBUS_SESSION_BUS_ADDRESS'] = 'unix:path=/run/user/{}/bus'.format(os.geteuid())
# When notification is sent, raise privileges back to root if the
# original effective user id was zero (to be able to read AppArmor logs)
raise_privileges()
message = customize_notification_message(ev, message, userns_special_profiles)
n = notify2.Notification(
_('AppArmor security notice'),
message,
'gtk-dialog-warning'
)
if can_allow_rule(ev, userns_special_profiles):
n.add_action('clicked', 'Allow', cb_add_to_profile, (event, rl, userns_special_profiles))
n.add_action('more_clicked', 'Show More', cb_more_info, (event, rl, userns_special_profiles))
n.show()
# When notification is sent, raise privileges back to root if the
# original effective user id was zero (to be able to read AppArmor logs)
raise_privileges()
elif args.since_last:
show_entries_since_last_login(logfile, filters)
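The merged-notification polling loop above grows and shrinks args.wait exponentially: the interval doubles after a burst of events (capped at max_wait) and is halved after a quiet period (floored at min_wait). A minimal sketch of that policy as a hypothetical `next_wait` helper, with the burst condition simplified to a boolean:

```python
def next_wait(wait, burst, min_wait=5, max_wait=5 * 2**5):
    # Double the interval after a burst of events (to avoid flooding
    # the desktop with notifications), halve it after a quiet period,
    # clamped to [min_wait, max_wait].
    if burst:
        return min(wait * 2, max_wait)
    return max(wait / 2, min_wait)

wait = 5
wait = next_wait(wait, burst=True)  # a burst was observed
```

With the defaults the interval ranges from 5 seconds up to 160 seconds (2 minutes 40), matching the arbitrary power-of-two cap noted in the code.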

View File

@@ -3,7 +3,7 @@ Type=Application
Name=AppArmor Notify
Comment=Receive on screen notifications of AppArmor denials
TryExec=/usr/bin/aa-notify
Exec=/usr/bin/aa-notify --poll --merge-notifications --since-days 1 --wait 5
Exec=/usr/bin/aa-notify -p -s 1 -w 60
StartupNotify=false
NoDisplay=true
X-Ubuntu-Gettext-Domain=aa-notify

View File

@@ -1,6 +1,6 @@
# ----------------------------------------------------------------------
# Copyright (C) 2013 Kshitij Gupta <kgupta8592@gmail.com>
# Copyright (C) 2014-2024 Christian Boltz <apparmor@cboltz.de>
# Copyright (C) 2014-2021 Christian Boltz <apparmor@cboltz.de>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of version 2 of the GNU General Public
@@ -69,7 +69,6 @@ use_abstractions = True
include = dict()
active_profiles = ProfileList()
original_profiles = ProfileList()
extra_profiles = ProfileList()
# To store the globs entered by users so they can be provided again
@@ -79,6 +78,9 @@ user_globs = {}
# let ask_addhat() remember answers for already-seen change_hat events
transitions = {}
aa = {} # Profiles originally in sd, replace by aa
original_aa = hasher()
changed = dict()
created = []
helpers = dict() # Preserve this between passes # was our
@@ -90,11 +92,12 @@ def reset_aa():
Used by aa-mergeprof and some tests.
"""
global include, active_profiles, original_profiles
global aa, include, active_profiles, original_aa
aa = {}
include = dict()
active_profiles = ProfileList()
original_profiles = ProfileList()
original_aa = hasher()
def on_exit():
@@ -414,7 +417,6 @@ def create_new_profile(localfile, is_stub=False):
full_hat = combine_profname((localfile, hat))
if not local_profile.get(full_hat, False):
local_profile[full_hat] = ProfileStorage('NEW', hat, 'create_new_profile() required_hats')
local_profile[full_hat]['parent'] = localfile
local_profile[full_hat]['is_hat'] = True
local_profile[full_hat]['flags'] = 'complain'
@@ -426,10 +428,30 @@ def create_new_profile(localfile, is_stub=False):
return local_profile
def delete_profile(local_prof):
"""Deletes the specified file from the disk and remove it from our list"""
profile_file = get_profile_filename_from_profile_name(local_prof, True)
if os.path.isfile(profile_file):
os.remove(profile_file)
if aa.get(local_prof, False):
aa.pop(local_prof)
# prof_unload(local_prof)
def confirm_and_abort():
ans = aaui.UI_YesNo(_('Are you sure you want to abandon this set of profile changes and exit?'), 'n')
if ans == 'y':
aaui.UI_Info(_('Abandoning all changes.'))
for prof in created:
delete_profile(prof)
sys.exit(0)
def get_profile(prof_name):
"""search for inactive/extra profile, and ask if it should be used"""
if not extra_profiles.profile_exists(prof_name):
if not extra_profiles.profiles.get(prof_name, False):
return None # no inactive profile found
# TODO: search based on the attachment, not (only?) based on the profile name
@@ -502,14 +524,14 @@ def autodep(bin_name, pname=''):
file = get_profile_filename_from_profile_name(pname, True)
profile_data[pname]['filename'] = file # change filename from extra_profile_dir to /etc/apparmor.d/
for p in profile_data.keys():
original_profiles.add_profile(file, p, profile_data[p]['attachment'], deepcopy(profile_data[p]))
attach_profile_data(aa, profile_data)
attach_profile_data(original_aa, profile_data)
attachment = profile_data[pname]['attachment']
if not attachment and pname.startswith('/'):
attachment = pname # use name as name and attachment
active_profiles.add_profile(file, pname, attachment, profile_data[pname])
active_profiles.add_profile(file, pname, attachment)
if os.path.isfile(profile_dir + '/abi/4.0'):
active_profiles.add_abi(file, AbiRule('abi/4.0', False, True))
@@ -559,8 +581,6 @@ def change_profile_flags(prof_filename, program, flag, set_flag):
for lineno, line in enumerate(f_in):
if RE_PROFILE_START.search(line):
depth += 1
# TODO: hand over profile and hat (= parent profile)
# (and find out why it breaks test-aa.py with several "a child profile inside another child profile is not allowed" errors when doing so)
(profile, hat, prof_storage) = ProfileStorage.parse(line, prof_filename, lineno, '', '')
old_flags = prof_storage['flags']
newflags = ', '.join(add_or_remove_flag(old_flags, flag, set_flag))
@@ -581,7 +601,6 @@ def change_profile_flags(prof_filename, program, flag, set_flag):
line = '%s\n' % line[0]
elif RE_PROFILE_HAT_DEF.search(line):
depth += 1
# TODO: hand over profile and hat (= parent profile)
(profile, hat, prof_storage) = ProfileStorage.parse(line, prof_filename, lineno, '', '')
old_flags = prof_storage['flags']
newflags = ', '.join(add_or_remove_flag(old_flags, flag, set_flag))
@@ -591,7 +610,6 @@ def change_profile_flags(prof_filename, program, flag, set_flag):
line = '%s\n' % line[0]
elif RE_PROFILE_END.search(line):
depth -= 1
# TODO: restore 'profile' and 'hat' to previous values (not really needed/used for aa-complain etc., but can't hurt)
f_out.write(line)
os.rename(temp_file.name, prof_filename)
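change_profile_flags() above uses the standard safe-rewrite pattern: write the modified profile to a temporary file, then rename() it over the original so a crash mid-write never leaves a truncated profile. A minimal sketch of that pattern (hypothetical `atomic_rewrite` helper, not the apparmor.aa API):

```python
import os
import tempfile

def atomic_rewrite(path, transform):
    # Write transformed lines to a temp file in the same directory,
    # then rename() it over the original: on POSIX the rename is
    # atomic, so readers see either the old or the new file, never
    # a partial write.
    directory = os.path.dirname(os.path.abspath(path))
    with open(path) as f_in, tempfile.NamedTemporaryFile(
            'w', dir=directory, delete=False) as f_out:
        for line in f_in:
            f_out.write(transform(line))
        tmp_name = f_out.name
    os.rename(tmp_name, path)

# demo on a throwaway file (assumption: a writable temp directory)
demo = tempfile.NamedTemporaryFile('w', delete=False)
demo.write('flags=(complain)\n')
demo.close()
atomic_rewrite(demo.name, str.upper)
result = open(demo.name).read()
```

Keeping the temp file in the same directory matters: os.rename() across filesystems would fail, and the atomicity guarantee only holds within one filesystem.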
@@ -603,6 +621,23 @@ def change_profile_flags(prof_filename, program, flag, set_flag):
raise AppArmorException("%(file)s doesn't contain a valid profile for %(profile)s (syntax error?)" % {'file': prof_filename, 'profile': program})
def profile_exists(program):
"""Returns True if profile exists, False otherwise"""
# Check cache of profiles
if active_profiles.filename_from_attachment(program):
return True
# Check the disk for profile
prof_path = get_profile_filename_from_attachment(program, True)
# print(prof_path)
if os.path.isfile(prof_path):
# Add to cache of profile
raise AppArmorBug('Reached strange condition in profile_exists(), please open a bugreport!')
# active_profiles[program] = prof_path
# return True
return False
def build_x_functions(default, options, exec_toggle):
ret_list = []
fallback_toggle = False
@@ -657,7 +692,7 @@ def ask_addhat(hashlog):
for full_hat in hashlog[aamode][profile]['change_hat']:
hat = full_hat.split('//')[-1]
if active_profiles.profile_exists(full_hat):
if aa[profile].get(hat, False):
continue # no need to ask if the hat already exists
default_hat = None
@@ -694,26 +729,18 @@ def ask_addhat(hashlog):
transitions[context] = ans
filename = active_profiles.filename_from_profile_name(profile) # filename of parent profile, will be used for new hats
if ans == 'CMD_ADDHAT':
hat_obj = ProfileStorage(profile, hat, 'ask_addhat addhat')
hat_obj['parent'] = profile
hat_obj['flags'] = active_profiles[profile]['flags']
new_full_hat = combine_profname([profile, hat])
active_profiles.add_profile(filename, new_full_hat, hat, hat_obj)
hashlog[aamode][full_hat]['final_name'] = new_full_hat
aa[profile][hat] = ProfileStorage(profile, hat, 'ask_addhat addhat')
aa[profile][hat]['flags'] = aa[profile][profile]['flags']
hashlog[aamode][full_hat]['final_name'] = '%s//%s' % (profile, hat)
changed[profile] = True
elif ans == 'CMD_USEDEFAULT':
hat = default_hat
new_full_hat = combine_profname([profile, hat])
hashlog[aamode][full_hat]['final_name'] = new_full_hat
if not active_profiles.profile_exists(full_hat):
hashlog[aamode][full_hat]['final_name'] = '%s//%s' % (profile, default_hat)
if not aa[profile].get(hat, False):
# create default hat if it doesn't exist yet
hat_obj = ProfileStorage(profile, hat, 'ask_addhat default hat')
hat_obj['parent'] = profile
hat_obj['flags'] = active_profiles[profile]['flags']
active_profiles.add_profile(filename, new_full_hat, hat, hat_obj)
aa[profile][hat] = ProfileStorage(profile, hat, 'ask_addhat default hat')
aa[profile][hat]['flags'] = aa[profile][profile]['flags']
changed[profile] = True
elif ans == 'CMD_DENY':
# As unknown hat is denied no entry for it should be made
@@ -721,7 +748,7 @@ def ask_addhat(hashlog):
continue
def ask_exec(hashlog, default_ans=''):
def ask_exec(hashlog):
"""ask the user about exec events (requests to execute another program) and which exec mode to use"""
for aamode in hashlog:
@@ -736,14 +763,14 @@ def ask_exec(hashlog, default_ans=''):
raise AppArmorBug(
'exec permissions requested for directory %s (profile %s). This should not happen - please open a bugreport!' % (exec_target, full_profile))
if not active_profiles.profile_exists(profile):
if not aa.get(profile):
continue # ignore log entries for non-existing profiles
if not active_profiles.profile_exists(full_profile):
if not aa[profile].get(hat):
continue # ignore log entries for non-existing hats
exec_event = FileRule(exec_target, None, FileRule.ANY_EXEC, FileRule.ALL, owner=False, log_event=True)
if is_known_rule(active_profiles[full_profile], 'file', exec_event):
if is_known_rule(aa[profile][hat], 'file', exec_event):
continue
# nx is not used in profiles but in log files.
@@ -809,10 +836,7 @@ def ask_exec(hashlog, default_ans=''):
# ask user about the exec mode to use
ans = ''
while ans not in ('CMD_ix', 'CMD_px', 'CMD_cx', 'CMD_nx', 'CMD_pix', 'CMD_cix', 'CMD_nix', 'CMD_ux', 'CMD_DENY'): # add '(I)gnore'? (hotkey conflict with '(i)x'!)
if default_ans:
ans = default_ans
else:
ans = q.promptUser()[0]
ans = q.promptUser()[0]
if ans.startswith('CMD_EXEC_IX_'):
exec_toggle = not exec_toggle
@@ -847,22 +871,20 @@ def ask_exec(hashlog, default_ans=''):
elif ans in ('CMD_px', 'CMD_cx', 'CMD_pix', 'CMD_cix'):
exec_mode = ans.replace('CMD_', '')
px_msg = _(
"Should AppArmor enable secure-execution mode\n"
"when switching profiles?\n"
"Should AppArmor sanitise the environment when\n"
"switching profiles?\n"
"\n"
"Doing so is more secure, but some applications\n"
"depend on the presence of LD_PRELOAD or\n"
"LD_LIBRARY_PATH, which would be sanitized by\n"
"enabling secure-execution mode.")
"Sanitising environment is more secure,\n"
"but some applications depend on the presence\n"
"of LD_PRELOAD or LD_LIBRARY_PATH.")
if parent_uses_ld_xxx:
px_msg = _(
"Should AppArmor enable secure-execution mode\n"
"when switching profiles?\n"
"Should AppArmor sanitise the environment when\n"
"switching profiles?\n"
"\n"
"Doing so is more secure,\n"
"Sanitising environment is more secure,\n"
"but this application appears to be using LD_PRELOAD\n"
"or LD_LIBRARY_PATH, and sanitising those environment\n"
"variables by enabling secure-execution mode\n"
"or LD_LIBRARY_PATH and sanitising the environment\n"
"could cause functionality problems.")
ynans = aaui.UI_YesNo(px_msg, 'y')
@@ -896,7 +918,7 @@ def ask_exec(hashlog, default_ans=''):
file_perm = 'mr'
else:
if ans == 'CMD_DENY':
active_profiles[full_profile]['file'].add(FileRule(exec_target, None, 'x', FileRule.ALL, owner=False, log_event=True, deny=True))
aa[profile][hat]['file'].add(FileRule(exec_target, None, 'x', FileRule.ALL, owner=False, log_event=True, deny=True))
changed[profile] = True
if target_profile and hashlog[aamode].get(target_profile):
hashlog[aamode][target_profile]['final_name'] = ''
@@ -909,7 +931,7 @@ def ask_exec(hashlog, default_ans=''):
else:
rule_to_name = FileRule.ALL
active_profiles[full_profile]['file'].add(FileRule(exec_target, file_perm, exec_mode, rule_to_name, owner=False, log_event=True))
aa[profile][hat]['file'].add(FileRule(exec_target, file_perm, exec_mode, rule_to_name, owner=False, log_event=True))
changed[profile] = True
@@ -920,16 +942,16 @@ def ask_exec(hashlog, default_ans=''):
exec_target_rule = FileRule(exec_target, 'r', None, FileRule.ALL, owner=False)
interpreter_rule = FileRule(interpreter_path, None, 'ix', FileRule.ALL, owner=False)
if not is_known_rule(active_profiles[full_profile], 'file', exec_target_rule):
active_profiles[full_profile]['file'].add(exec_target_rule)
if not is_known_rule(active_profiles[full_profile], 'file', interpreter_rule):
active_profiles[full_profile]['file'].add(interpreter_rule)
if not is_known_rule(aa[profile][hat], 'file', exec_target_rule):
aa[profile][hat]['file'].add(exec_target_rule)
if not is_known_rule(aa[profile][hat], 'file', interpreter_rule):
aa[profile][hat]['file'].add(interpreter_rule)
if abstraction:
abstraction_rule = IncludeRule(abstraction, False, True)
if not active_profiles[full_profile]['inc_ie'].is_covered(abstraction_rule):
active_profiles[full_profile]['inc_ie'].add(abstraction_rule)
if not aa[profile][hat]['inc_ie'].is_covered(abstraction_rule):
aa[profile][hat]['inc_ie'].add(abstraction_rule)
# Update tracking info based on kind of change
@@ -969,21 +991,19 @@ def ask_exec(hashlog, default_ans=''):
if to_name:
exec_target = to_name
full_exec_target = combine_profname([profile, exec_target])
if not active_profiles.profile_exists(full_exec_target):
if not aa[profile].get(exec_target, False):
ynans = 'y'
if 'i' in exec_mode:
ynans = aaui.UI_YesNo(_('A profile for %s does not exist.\nDo you want to create one?') % exec_target, 'n')
if ynans == 'y':
if not active_profiles.profile_exists(full_exec_target):
stub_profile = create_new_profile(exec_target, True)
for p in stub_profile:
active_profiles.add_profile(prof_filename, p, stub_profile[p]['attachment'], stub_profile[p])
if not aa[profile].get(exec_target, False):
stub_profile = merged_to_split(create_new_profile(exec_target, True))
aa[profile][exec_target] = stub_profile[exec_target][exec_target]
if profile != exec_target:
active_profiles[full_exec_target]['flags'] = active_profiles[profile]['flags']
aa[profile][exec_target]['flags'] = aa[profile][profile]['flags']
active_profiles[full_exec_target]['flags'] = 'complain'
aa[profile][exec_target]['flags'] = 'complain'
if target_profile and hashlog[aamode].get(target_profile):
hashlog[aamode][target_profile]['final_name'] = '%s//%s' % (profile, exec_target)
@@ -1036,8 +1056,8 @@ def ask_the_questions(log_dict):
else:
sev_db.set_variables({})
if active_profiles.profile_exists(profile): # only continue/ask if the parent profile exists # XXX check direct parent or top-level? Also, get rid of using "profile" here!
if not active_profiles.profile_exists(full_profile):
if aa.get(profile): # only continue/ask if the parent profile exists
if not aa[profile].get(hat, {}).get('file'):
if aamode != 'merge':
# Ignore log events for a non-existing profile or child profile. Such events can occur
# after deleting a profile or hat manually, or when processing a foreign log.
@@ -1070,21 +1090,18 @@ def ask_the_questions(log_dict):
continue # don't ask about individual rules if the user doesn't want the additional subprofile/hat
if log_dict[aamode][full_profile]['is_hat']:
prof_obj = ProfileStorage(profile, hat, 'mergeprof ask_the_questions() - missing hat')
prof_obj['is_hat'] = True
aa[profile][hat] = ProfileStorage(profile, hat, 'mergeprof ask_the_questions() - missing hat')
aa[profile][hat]['is_hat'] = True
else:
prof_obj = ProfileStorage(profile, hat, 'mergeprof ask_the_questions() - missing subprofile')
prof_obj['is_hat'] = False
prof_obj['parent'] = profile
active_profiles.add_profile(prof_filename, full_profile, hat, prof_obj)
aa[profile][hat] = ProfileStorage(profile, hat, 'mergeprof ask_the_questions() - missing subprofile')
aa[profile][hat]['is_hat'] = False
# check for and ask about conflicting exec modes
ask_conflict_mode(active_profiles[full_profile], log_dict[aamode][full_profile])
ask_conflict_mode(aa[profile][hat], log_dict[aamode][full_profile])
prof_changed, end_profiling = ask_rule_questions(
log_dict[aamode][full_profile], full_profile,
active_profiles[full_profile], ruletypes)
log_dict[aamode][full_profile], combine_name(profile, hat),
aa[profile][hat], ruletypes)
if prof_changed:
changed[profile] = True
if end_profiling:
@@ -1097,7 +1114,7 @@ def ask_rule_questions(prof_events, profile_name, the_profile, r_types):
parameter typical value
prof_events log_dict[aamode][full_profile]
profile_name profile name (possible profile//hat)
the_profile active_profiles[full_profile] -- will be modified
the_profile aa[profile][hat] -- will be modified
r_types ruletypes
returns:
@@ -1464,11 +1481,14 @@ def set_logfile(filename):
def do_logprof_pass(logmark='', out_dir=None):
# set up variables for this pass
global active_profiles
global sev_db
# aa = hasher()
# changed = dict()
aaui.UI_Info(_('Reading log entries from %s.') % logfile)
load_sev_db()
if not sev_db:
sev_db = apparmor.severity.Severity(CONFDIR + '/severity.db', _('unknown'))
# print(pid)
# print(active_profiles)
@@ -1488,7 +1508,7 @@ def do_logprof_pass(logmark='', out_dir=None):
def save_profiles(is_mergeprof=False, out_dir=None):
# Ensure the changed profiles are actual active profiles
for prof_name in changed.keys():
if not active_profiles.profile_exists(prof_name):
if not aa.get(prof_name, False):
print("*** save_profiles(): removing %s" % prof_name)
print('*** This should not happen. Please open a bugreport!')
changed.pop(prof_name)
@@ -1525,19 +1545,19 @@ def save_profiles(is_mergeprof=False, out_dir=None):
elif ans == 'CMD_VIEW_CHANGES':
oldprofile = None
if active_profiles[profile_name].get('filename', False):
oldprofile = active_profiles[profile_name]['filename']
if aa[profile_name][profile_name].get('filename', False):
oldprofile = aa[profile_name][profile_name]['filename']
else:
oldprofile = get_profile_filename_from_attachment(profile_name, True)
serialize_options = {'METADATA': True}
newprofile = serialize_profile(active_profiles, profile_name, serialize_options)
newprofile = serialize_profile(split_to_merged(aa), profile_name, serialize_options)
aaui.UI_Changes(oldprofile, newprofile, comments=True)
elif ans == 'CMD_VIEW_CHANGES_CLEAN':
oldprofile = serialize_profile(original_profiles, profile_name, {})
newprofile = serialize_profile(active_profiles, profile_name, {})
oldprofile = serialize_profile(split_to_merged(original_aa), profile_name, {})
newprofile = serialize_profile(split_to_merged(aa), profile_name, {})
aaui.UI_Changes(oldprofile, newprofile)
@@ -1570,9 +1590,9 @@ def collapse_log(hashlog, ignore_null_profiles=True):
profile, hat = split_name(final_name) # XXX limited to two levels to avoid an Exception on nested child profiles or nested null-*
# TODO: support nested child profiles
# used to avoid calling is_known_rule() on events for a non-existing profile
# used to avoid to accidentally initialize aa[profile][hat] or calling is_known_rule() on events for a non-existing profile
hat_exists = False
if active_profiles.profile_exists(profile) and active_profiles.profile_exists(final_name): # we need to check for the target profile here
if aa.get(profile) and aa[profile].get(hat):
hat_exists = True
if not log_dict[aamode].get(final_name):
@@ -1581,7 +1601,7 @@ def collapse_log(hashlog, ignore_null_profiles=True):
for ev_type, ev_class in ReadLog.ruletypes.items():
for rule in ev_class.from_hashlog(hashlog[aamode][full_profile][ev_type]):
if not hat_exists or not is_known_rule(active_profiles[full_profile], ev_type, rule):
if not hat_exists or not is_known_rule(aa[profile][hat], ev_type, rule):
log_dict[aamode][final_name][ev_type].add(rule)
return log_dict
@@ -1601,8 +1621,9 @@ def read_profiles(ui_msg=False, skip_profiles=()):
#
# The skip_profiles parameter should only be specified by tests.
global original_profiles
original_profiles = ProfileList()
global aa, original_aa
aa = {}
original_aa = hasher()
if ui_msg:
aaui.UI_Info(_('Updating AppArmor profiles in %s.') % profile_dir)
@@ -1656,7 +1677,7 @@ def read_inactive_profiles(skip_profiles=()):
read_profile(full_file, False)
def read_profile(file, is_active_profile, read_error_fatal=False):
def read_profile(file, active_profile, read_error_fatal=False):
data = None
try:
with open_file_read(file) as f_in:
@@ -1674,17 +1695,30 @@ def read_profile(file, is_active_profile, read_error_fatal=False):
if not profile_data:
return
for profile in profile_data:
attachment = profile_data[profile]['attachment']
filename = profile_data[profile]['filename']
if active_profile:
attach_profile_data(aa, profile_data)
attach_profile_data(original_aa, profile_data)
if not attachment and profile.startswith('/'):
attachment = profile # use profile as name and attachment
for profile in profile_data:
if '//' in profile:
continue # TODO: handle hats/child profiles independent of main profiles
attachment = profile_data[profile]['attachment']
filename = profile_data[profile]['filename']
if not attachment and profile.startswith('/'):
attachment = profile # use profile as name and attachment
active_profiles.add_profile(filename, profile, attachment)
else:
for profile in profile_data:
attachment = profile_data[profile]['attachment']
filename = profile_data[profile]['filename']
if not attachment and profile.startswith('/'):
attachment = profile # use profile as name and attachment
if is_active_profile:
active_profiles.add_profile(filename, profile, attachment, profile_data[profile])
original_profiles.add_profile(filename, profile, attachment, deepcopy(profile_data[profile]))
else:
extra_profiles.add_profile(filename, profile, attachment, profile_data[profile])
@@ -1862,7 +1896,6 @@ def parse_profile_data(data, file, do_include, in_preamble):
profname = combine_profname((parsed_prof, hat))
if not profile_data.get(profname, False):
profile_data[profname] = ProfileStorage(parsed_prof, hat, 'parse_profile_data() required_hats')
profile_data[profname]['parent'] = parsed_prof
profile_data[profname]['is_hat'] = True
# End of file reached but we're stuck in a profile
@@ -1932,6 +1965,23 @@ def merged_to_split(profile_data):
return compat
def split_to_merged(profile_data):
"""(temporary) helper function to convert a traditional compat['foo']['bar'] to a profile['foo//bar'] list"""
merged = {}
for profile in profile_data:
for hat in profile_data[profile]:
if profile == hat:
merged_name = profile
else:
merged_name = combine_profname((profile, hat))
merged[merged_name] = profile_data[profile][hat]
return merged
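The temporary `split_to_merged()` helper above flattens the legacy `aa[profile][hat]` layout into the merged `profile//hat` naming used by `ProfileList`. A minimal standalone sketch of that conversion (with a stand-in for apparmor's `combine_profname()`, which joins name parts with `//`):

```python
# Illustrative sketch, not the real apparmor.aa module.

def combine_profname(parts):
    # stand-in for apparmor's combine_profname(): join parts with '//'
    return '//'.join(parts)

def split_to_merged(profile_data):
    """Flatten legacy profile_data[profile][hat] into 'profile//hat' keys."""
    merged = {}
    for profile in profile_data:
        for hat in profile_data[profile]:
            if profile == hat:
                merged_name = profile  # main profile keeps its plain name
            else:
                merged_name = combine_profname((profile, hat))
            merged[merged_name] = profile_data[profile][hat]
    return merged

legacy = {'/bin/foo': {'/bin/foo': 'main rules', 'hat1': 'hat rules'}}
print(split_to_merged(legacy))
# {'/bin/foo': 'main rules', '/bin/foo//hat1': 'hat rules'}
```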
def write_piece(profile_data, depth, name, nhat):
pre = ' ' * depth
data = []
@@ -1980,8 +2030,7 @@ def write_piece(profile_data, depth, name, nhat):
def serialize_profile(profile_data, name, options):
''' combine the preamble and profiles in a file to a string (to be written to the profile file) '''
string = ''
data = []
if not isinstance(options, dict):
@@ -1990,11 +2039,12 @@ def serialize_profile(profile_data, name, options):
include_metadata = options.get('METADATA', False)
if include_metadata:
data.extend(['# Last Modified: %s' % time.asctime()])
string = '# Last Modified: %s\n' % time.asctime()
# if profile_data[name].get('initial_comment', False):
# comment = profile_data[name]['initial_comment']
# data.append(comment)
# comment.replace('\\n', '\n')
# string += comment + '\n'
if options.get('is_attachment'):
prof_filename = get_profile_filename_from_attachment(name, True)
@@ -2005,29 +2055,23 @@ def serialize_profile(profile_data, name, options):
# Here should be all the profiles from the files, written right after the global/common stuff
for prof in sorted(active_profiles.profiles_in_file(prof_filename)):
if active_profiles.profiles[prof]['parent']:
continue # child profile or hat, already part of its parent profile
# aa-logprof asks to save each file separately. Therefore only update the given profile, and keep the original version of other profiles in the file
if prof != name:
if original_profiles.profile_exists(prof) and original_profiles[prof].get('initial_comment'):
comment = original_profiles[prof]['initial_comment']
data.extend([comment, ''])
data.extend(write_piece(original_profiles.get_profile_and_childs(prof), 0, prof, prof))
if original_aa[prof][prof].get('initial_comment', False):
comment = original_aa[prof][prof]['initial_comment']
comment.replace('\\n', '\n')
data.append(comment + '\n')
data.extend(write_piece(split_to_merged(original_aa), 0, prof, prof))
else:
if profile_data[name].get('initial_comment', False):
comment = profile_data[name]['initial_comment']
data.extend([comment, ''])
comment.replace('\\n', '\n')
data.append(comment + '\n')
# write_piece() expects a dict, not a ProfileList - TODO: change write_piece()?
if type(profile_data) is dict:
data.extend(write_piece(profile_data, 0, name, name))
else:
data.extend(write_piece(profile_data.get_profile_and_childs(name), 0, name, name))
data.extend(write_piece(profile_data, 0, name, name))
return '\n'.join(data) + '\n'
string += '\n'.join(data)
return string + '\n'
def write_profile_ui_feedback(profile, is_attachment=False, out_dir=None):
@@ -2036,15 +2080,15 @@ def write_profile_ui_feedback(profile, is_attachment=False, out_dir=None):
def write_profile(profile, is_attachment=False, out_dir=None):
if active_profiles[profile]['filename']:
prof_filename = active_profiles[profile]['filename']
if aa[profile][profile].get('filename', False):
prof_filename = aa[profile][profile]['filename']
elif is_attachment:
prof_filename = get_profile_filename_from_attachment(profile, True)
else:
prof_filename = get_profile_filename_from_profile_name(profile, True)
serialize_options = {'METADATA': True, 'is_attachment': is_attachment}
profile_string = serialize_profile(active_profiles, profile, serialize_options)
profile_string = serialize_profile(split_to_merged(aa), profile, serialize_options)
try:
with NamedTemporaryFile('w', suffix='~', delete=False, dir=out_dir or profile_dir) as newprof:
@@ -2069,9 +2113,7 @@ def write_profile(profile, is_attachment=False, out_dir=None):
else:
debug_logger.info("Unchanged profile written: %s (not listed in 'changed' list)", profile)
for full_profile in active_profiles.get_profile_and_childs(profile):
if profile == full_profile or active_profiles[full_profile]['parent']: # copy main profile and childs, but skip external hats
original_profiles.replace_profile(full_profile, deepcopy(active_profiles[full_profile]))
original_aa[profile] = deepcopy(aa[profile])
def include_list_recursive(profile, in_preamble=False):
@@ -2101,8 +2143,10 @@ def include_list_recursive(profile, in_preamble=False):
def is_known_rule(profile, rule_type, rule_obj):
if profile[rule_type].is_covered(rule_obj, False):
return True
# XXX get rid of get() checks after we have a proper function to initialize a profile
if profile.get(rule_type, False):
if profile[rule_type].is_covered(rule_obj, False):
return True
includelist = include_list_recursive(profile)
@@ -2371,10 +2415,3 @@ def init_aa(confdir=None, profiledir=None):
parser = conf.find_first_file(cfg['settings'].get('parser')) or '/sbin/apparmor_parser'
if not os.path.isfile(parser) or not os.access(parser, os.EX_OK):
raise AppArmorException("Can't find apparmor_parser at %s" % (parser))
def load_sev_db():
global sev_db
if not sev_db:
sev_db = apparmor.severity.Severity(CONFDIR + '/severity.db', _('unknown'))
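The `load_sev_db()` helper added above follows a lazy-initialisation pattern: the severity database is built once on first use and reused on later calls. A minimal sketch of that pattern, with the real `apparmor.severity.Severity` and `CONFDIR` replaced by stand-ins:

```python
# Sketch of the lazy module-level singleton used by load_sev_db().

_sev_db = None  # module-level cache, like aa.py's global sev_db

def load_sev_db(loader):
    """Initialise the severity DB exactly once; later calls reuse it."""
    global _sev_db
    if _sev_db is None:
        _sev_db = loader()
    return _sev_db

calls = []
def fake_loader():
    # stand-in for apparmor.severity.Severity(CONFDIR + '/severity.db', ...)
    calls.append(1)
    return {'severity': 'db'}

db1 = load_sev_db(fake_loader)
db2 = load_sev_db(fake_loader)
assert db1 is db2 and len(calls) == 1  # loaded only once
```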


@@ -1,6 +1,6 @@
# ----------------------------------------------------------------------
# Copyright (C) 2013 Kshitij Gupta <kgupta8592@gmail.com>
# Copyright (C) 2014-2024 Christian Boltz <apparmor@cboltz.de>
# Copyright (C) 2014-2015 Christian Boltz <apparmor@cboltz.de>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of version 2 of the GNU General Public
@@ -18,6 +18,7 @@ import apparmor.aa as apparmor
class Prof:
def __init__(self, filename):
apparmor.init_aa()
self.aa = apparmor.aa
self.active_profiles = apparmor.active_profiles
self.include = apparmor.include
self.filename = filename
@@ -35,7 +36,7 @@ class CleanProf:
deleted += self.other.active_profiles.delete_preamble_duplicates(self.other.filename)
for profile in self.profile.active_profiles.get_all_profiles():
for profile in self.profile.aa.keys():
deleted += self.remove_duplicate_rules(profile)
return deleted
@@ -49,22 +50,22 @@ class CleanProf:
deleted += self.profile.active_profiles.delete_preamble_duplicates(self.profile.filename)
# Process every hat in the profile individually
for full_profile in sorted(self.profile.active_profiles.get_profile_and_childs(program)):
includes = self.profile.active_profiles[full_profile]['inc_ie'].get_all_full_paths(apparmor.profile_dir)
for hat in sorted(self.profile.aa[program].keys()):
includes = self.profile.aa[program][hat]['inc_ie'].get_all_full_paths(apparmor.profile_dir)
# Clean up superfluous rules from includes in the other profile
for inc in includes:
if not self.profile.include.get(inc, {}).get(inc, False):
apparmor.load_include(inc)
if self.other.active_profiles.profile_exists(full_profile):
deleted += apparmor.delete_all_duplicates(self.other.active_profiles[full_profile], inc, apparmor.ruletypes)
if self.other.aa[program].get(hat): # carefully avoid to accidentally initialize self.other.aa[program][hat]
deleted += apparmor.delete_all_duplicates(self.other.aa[program][hat], inc, apparmor.ruletypes)
# Clean duplicate rules in other profile
for ruletype in apparmor.ruletypes:
if not self.same_file:
if self.other.active_profiles.profile_exists(full_profile):
deleted += self.other.active_profiles[full_profile][ruletype].delete_duplicates(self.profile.active_profiles[full_profile][ruletype])
if self.other.aa[program].get(hat): # carefully avoid to accidentally initialize self.other.aa[program][hat]
deleted += self.other.aa[program][hat][ruletype].delete_duplicates(self.profile.aa[program][hat][ruletype])
else:
deleted += self.other.active_profiles[full_profile][ruletype].delete_duplicates(None)
deleted += self.other.aa[program][hat][ruletype].delete_duplicates(None)
return deleted


@@ -40,8 +40,8 @@ class GUI:
self.label_frame = ttk.Frame(self.master, padding=(20, 10))
self.label_frame.pack()
self.button_frame = ttk.Frame(self.master, padding=(10, 10))
self.button_frame.pack(fill='x', expand=True)
self.button_frame = ttk.Frame(self.master, padding=(0, 10))
self.button_frame.pack()
def show(self):
self.master.mainloop()
@@ -86,75 +86,6 @@ class ShowMoreGUI(GUI):
self.do_nothing_button.pack(side=tk.LEFT, fill=tk.BOTH, expand=True, padx=5, pady=5)
class ShowMoreGUIAggregated(GUI):
def __init__(self, summary, detailed_text, clean_rules):
self.summary = summary
self.detailed_text = detailed_text
self.clean_rules = clean_rules
self.states = {
'summary': {
'msg': self.summary,
'btn_left': _('Show more details'),
'btn_right': _('Show rules only')
},
'detailed': {
'msg': self.detailed_text,
'btn_left': _('Show summary'),
'btn_right': _('Show rules only')
},
'rules_only': {
'msg': self.clean_rules,
'btn_left': _('Show more details'),
'btn_right': _('Show summary')
}
}
self.state = 'rules_only'
super().__init__()
self.master.title(_('AppArmor - More info'))
self.text_display = tk.Text(self.label_frame, height=40, width=100, wrap='word')
if ttkthemes:
self.text_display.configure(background=self.bg_color, foreground=self.fg_color)
self.text_display.insert('1.0', self.states[self.state]['msg'])
self.text_display['state'] = 'disabled'
self.scrollbar = ttk.Scrollbar(self.label_frame, command=self.text_display.yview)
self.text_display['yscrollcommand'] = self.scrollbar.set
self.scrollbar.pack(side='right', fill='y')
self.text_display.pack(side='left', fill='both', expand=True)
self.btn_left = ttk.Button(self.button_frame, text=self.states[self.state]['btn_left'], width=1, command=lambda: self.change_view('btn_left'))
self.btn_left.grid(row=0, column=0, padx=5, pady=5, sticky="ew")
self.btn_right = ttk.Button(self.button_frame, text=self.states[self.state]['btn_right'], width=1, command=lambda: self.change_view('btn_right'))
self.btn_right.grid(row=0, column=1, padx=5, pady=5, sticky="ew")
self.btn_allow_all = ttk.Button(self.button_frame, text="Allow All", width=1, command=lambda: self.set_result('allow_all'))
self.btn_allow_all.grid(row=0, column=2, padx=5, pady=5, sticky="ew")
for i in range(3):
self.button_frame.grid_columnconfigure(i, weight=1)
def change_view(self, action):
if action == 'btn_left':
self.state = 'detailed' if self.state != 'detailed' else 'summary'
elif action == 'btn_right':
self.state = 'rules_only' if self.state != 'rules_only' else 'summary'
self.btn_left['text'] = self.states[self.state]['btn_left']
self.btn_right['text'] = self.states[self.state]['btn_right']
self.text_display['state'] = 'normal'
self.text_display.delete('1.0', 'end')
self.text_display.insert('1.0', self.states[self.state]['msg'])
self.text_display['state'] = 'disabled'
class UsernsGUI(GUI):
def __init__(self, name, path):
self.name = name


@@ -1,5 +1,5 @@
# ----------------------------------------------------------------------
# Copyright (C) 2018-2024 Christian Boltz <apparmor@cboltz.de>
# Copyright (C) 2018-2020 Christian Boltz <apparmor@cboltz.de>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of version 2 of the GNU General Public
@@ -52,12 +52,6 @@ class ProfileList:
name = type(self).__name__
return '\n<%s>\n%s\n</%s>\n' % (name, '\n'.join(self.files), name)
def __getitem__(self, key):
if key in self.profiles:
return self.profiles[key]
else:
raise AppArmorBug('attempt to read unknown profile %s' % key)
def init_file(self, filename):
if self.files.get(filename):
return # don't re-initialize / overwrite existing data
@@ -69,7 +63,7 @@ class ProfileList:
for rule in preamble_ruletypes:
self.files[filename][rule] = preamble_ruletypes[rule]['ruleset']()
def add_profile(self, filename, profile_name, attachment, prof_storage):
def add_profile(self, filename, profile_name, attachment, prof_storage=None):
"""Add the given profile and attachment to the list"""
if not filename:
@@ -78,7 +72,7 @@ class ProfileList:
if not profile_name and not attachment:
raise AppArmorBug('Neither profile name or attachment given')
if type(prof_storage) is not ProfileStorage:
if type(prof_storage) is not ProfileStorage and prof_storage is not None:
raise AppArmorBug('Invalid profile type: %s' % type(prof_storage))
if profile_name in self.profile_names:
@@ -107,21 +101,6 @@ class ProfileList:
self.files[filename]['profiles'].append(attachment)
self.profiles[attachment] = prof_storage
def replace_profile(self, profile_name, prof_storage):
"""Replace the given profile in the profile list"""
if profile_name not in self.profiles:
raise AppArmorBug('Attempt to replace non-existing profile %s' % profile_name)
if type(prof_storage) is not ProfileStorage:
raise AppArmorBug('Invalid profile type: %s' % type(prof_storage))
# we might lift this restriction later, but for now, forbid changing the attachment instead of updating self.attachments
if self.profiles[profile_name]['attachment'] != prof_storage['attachment']:
raise AppArmorBug('Attempt to change attachment while replacing profile %s - original: %s, new: %s' % (profile_name, self.profiles[profile_name]['attachment'], prof_storage['attachment']))
self.profiles[profile_name] = prof_storage
def add_rule(self, filename, ruletype, rule):
"""Store the given rule for the given profile filename preamble"""
@@ -189,9 +168,6 @@ class ProfileList:
return deleted
def get_all_profiles(self):
return self.profiles
def get_profile_and_childs(self, profile_name):
found = {}
for prof in self.profiles:
@@ -307,9 +283,6 @@ class ProfileList:
return merged_variables
def profile_exists(self, profile_name):
return profile_name in self.profiles
def profiles_in_file(self, filename):
"""Return list of profiles in the given file"""
if not self.files.get(filename):


@@ -1,6 +1,6 @@
# ----------------------------------------------------------------------
# Copyright (C) 2013 Kshitij Gupta <kgupta8592@gmail.com>
# Copyright (C) 2014-2024 Christian Boltz <apparmor@cboltz.de>
# Copyright (C) 2014-2021 Christian Boltz <apparmor@cboltz.de>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of version 2 of the GNU General Public
@@ -77,7 +77,6 @@ class ProfileStorage:
data['filename'] = ''
data['logprof_suggest'] = '' # set in abstractions that should be suggested by aa-logprof
data['parent'] = '' # parent profile, or '' for top-level profiles and external hats
data['name'] = ''
data['attachment'] = ''
data['xattrs'] = ''
@@ -222,13 +221,11 @@ class ProfileStorage:
_('%(profile)s profile in %(file)s contains syntax errors in line %(line)s: a child profile inside another child profile is not allowed.')
% {'profile': profile, 'file': file, 'line': lineno + 1})
parent = profile
hat = matches['profile']
prof_or_hat_name = hat
pps_set_hat_external = False
else: # stand-alone profile
parent = ''
profile = matches['profile']
prof_or_hat_name = profile
if len(profile.split('//')) > 2:
@@ -244,7 +241,6 @@ class ProfileStorage:
prof_storage = cls(profile, hat, cls.__name__ + '.parse()')
prof_storage['parent'] = parent
prof_storage['name'] = prof_or_hat_name
prof_storage['filename'] = file
prof_storage['external'] = pps_set_hat_external


@@ -1,6 +1,6 @@
# ----------------------------------------------------------------------
# Copyright (C) 2013 Kshitij Gupta <kgupta8592@gmail.com>
# Copyright (C) 2015-2024 Christian Boltz <apparmor@cboltz.de>
# Copyright (C) 2015-2023 Christian Boltz <apparmor@cboltz.de>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of version 2 of the GNU General Public
@@ -64,7 +64,7 @@ class aa_tools:
prof_filename = apparmor.get_profile_filename_from_attachment(fq_path, True)
else:
which_ = which(p)
if self.name == 'cleanprof' and apparmor.active_profiles.profile_exists(p):
if self.name == 'cleanprof' and p in apparmor.aa:
program = p # not really correct, but works
profile = p
prof_filename = apparmor.get_profile_filename_from_profile_name(profile)
@@ -104,14 +104,14 @@ class aa_tools:
if program is None:
program = profile
if not program or not (os.path.exists(program) or apparmor.active_profiles.profile_exists(profile)):
if not program or not (os.path.exists(program) or profile in apparmor.aa):
if program and not program.startswith('/'):
program = aaui.UI_GetString(_('The given program cannot be found, please try with the fully qualified path name of the program: '), '')
else:
aaui.UI_Info(_("%s does not exist, please double-check the path.") % program)
sys.exit(1)
if program and apparmor.active_profiles.profile_exists(profile):
if program and profile in apparmor.aa:
self.clean_profile(program, profile, prof_filename)
else:
@@ -207,8 +207,8 @@ class aa_tools:
apparmor.write_profile_ui_feedback(profile)
self.reload_profile(prof_filename)
elif ans == 'CMD_VIEW_CHANGES':
# oldprofile = apparmor.serialize_profile(apparmor.original_profiles, profile, {})
newprofile = apparmor.serialize_profile(apparmor.active_profiles, profile, {}) # , {'is_attachment': True})
# oldprofile = apparmor.serialize_profile(apparmor.split_to_merged(apparmor.original_aa), profile, {})
newprofile = apparmor.serialize_profile(apparmor.split_to_merged(apparmor.aa), profile, {}) # , {'is_attachment': True})
aaui.UI_Changes(prof_filename, newprofile, comments=True)
def unload_profile(self, prof_filename):


@@ -29,15 +29,15 @@ def create_userns(template_path, name, bin_path, profile_path, decision):
def add_to_profile(rule, profile_name):
aa.init_aa()
aa.update_profiles()
aa.read_profiles()
rule_type, rule_class = ReadLog('', '', '').get_rule_type(rule)
rule_obj = rule_class.create_instance(rule)
if not aa.active_profiles.profile_exists(profile_name):
if profile_name not in aa.aa or profile_name not in aa.aa[profile_name]:
exit(_('Cannot find {} in profiles').format(profile_name))
aa.active_profiles[profile_name][rule_type].add(rule_obj, cleanup=True)
aa.aa[profile_name][profile_name][rule_type].add(rule_obj, cleanup=True)
# Save changes
aa.write_profile_ui_feedback(profile_name)
@@ -48,50 +48,31 @@ def usage(is_help):
print('This tool is a low level tool - do not use it directly')
print('{} create_userns <template_path> <name> <bin_path> <profile_path> <decision>'.format(sys.argv[0]))
print('{} add_rule <rule> <profile_name>'.format(sys.argv[0]))
print('{} from_file <file>'.format(sys.argv[0]))
if is_help:
exit(0)
else:
exit(1)
def create_from_file(file_path):
with open(file_path) as file:
for line in file:
args = line[:-1].split('\t')
if len(args) > 1:
command = args[0]
else:
command = None # Handle the case where no command is provided
do_command(command, args)
def do_command(command, args):
if command == 'from_file':
if not len(args) == 2:
usage(False)
create_from_file(args[1])
elif command == 'create_userns':
if not len(args) == 6:
usage(False)
create_userns(args[1], args[2], args[3], args[4], args[5])
elif command == 'add_rule':
if not len(args) == 3:
usage(False)
add_to_profile(args[1], args[2])
elif command == 'help':
usage(True)
else:
usage(False)
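The refactor above splits argument handling into `do_command()` so that `from_file` can replay tab-separated batch lines through the same dispatcher as `sys.argv`. A hedged sketch of that shape (handlers and arities here are stand-ins, not the real tool's commands):

```python
# Illustrative dispatcher sketch; SystemExit(1) stands in for usage(False).

def do_command(command, args, handlers):
    handler, argc = handlers.get(command, (None, None))
    if handler is None or len(args) != argc:
        raise SystemExit(1)
    return handler(*args[1:])

def create_from_file(lines, handlers):
    """Replay tab-separated command lines through do_command()."""
    results = []
    for line in lines:
        args = line.rstrip('\n').split('\t')
        command = args[0] if len(args) > 1 else None
        results.append(do_command(command, args, handlers))
    return results

handlers = {'add_rule': (lambda rule, prof: (rule, prof), 3)}
print(create_from_file(['add_rule\tcapability chown,\t/bin/foo\n'], handlers))
# [('capability chown,', '/bin/foo')]
```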
def main():
if len(sys.argv) > 1:
command = sys.argv[1]
else:
command = None # Handle the case where no command is provided
do_command(command, sys.argv[1:])
if command == 'create_userns':
if not len(sys.argv) == 7:
usage(False)
create_userns(sys.argv[2], sys.argv[3], sys.argv[4], sys.argv[5], sys.argv[6])
elif command == 'add_rule':
if not len(sys.argv) == 4:
usage(False)
add_to_profile(sys.argv[2], sys.argv[3])
elif command == 'help':
usage(True)
else:
usage(False)
if __name__ == '__main__':


@@ -0,0 +1,424 @@
;;; apparmor-mode.el --- Major mode for editing AppArmor policy files -*- lexical-binding: t; -*-
;; Copyright (c) 2018 Alex Murray
;; Author: Alex Murray <murray.alex@gmail.com>
;; Maintainer: Alex Murray <murray.alex@gmail.com>
;; URL: https://gitlab.com/apparmor/apparmor
;; Version: 0.8.2
;; Package-Requires: ((emacs "26.1"))
;; This file is not part of GNU Emacs.
;; This program is free software: you can redistribute it and/or modify
;; it under the terms of the GNU General Public License as published by
;; the Free Software Foundation, either version 3 of the License, or
;; (at your option) any later version.
;; This program is distributed in the hope that it will be useful,
;; but WITHOUT ANY WARRANTY; without even the implied warranty of
;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
;; GNU General Public License for more details.
;; You should have received a copy of the GNU General Public License
;; along with this program. If not, see <https://www.gnu.org/licenses/>.
;;; Commentary:
;; The following documentation sources were used:
;; https://gitlab.com/apparmor/apparmor/wikis/QuickProfileLanguage
;; https://gitlab.com/apparmor/apparmor/wikis/ProfileLanguage
;; TODO:
;; - do smarter completion syntactically via regexps
;; - decide if to use entire line regexp for statements or
;; - not (ie just a subset?) if we use regexps above then
;; - should probably keep full regexps here so can reuse
;; - expand highlighting of mount rules (options=...) similar to dbus
;; - add tests via ert etc
;;;; Setup
;; (require 'apparmor-mode)
;;; Code:
;; for flycheck integration
(declare-function flycheck-define-command-checker "ext:flycheck.el"
(symbol docstring &rest properties))
(declare-function flycheck-valid-checker-p "ext:flycheck.el")
(defvar flycheck-checkers)
(defgroup apparmor nil
"Major mode for editing AppArmor policies."
:group 'tools)
(defcustom apparmor-mode-indent-offset 2
"Indentation offset in `apparmor-mode' buffers."
:type 'integer
:group 'apparmor)
(defvar apparmor-mode-keywords '("all" "audit" "capability" "chmod" "delegate"
"dbus" "deny" "file" "flags" "io_uring" "include"
"include if exists" "link" "mount" "mqueue"
"network" "on" "owner" "pivot_root" "profile"
"quiet" "remount" "rlimit" "safe" "subset" "to"
"umount" "unsafe" "userns"))
(defvar apparmor-mode-profile-flags '("enforce" "complain" "debug" "kill"
"chroot_relative" "namespace_relative"
"attach_disconnected" "no_attach_disconnected"
"chroot_attach" "chroot_no_attach"
"unconfined"))
(defvar apparmor-mode-capabilities '("audit_control" "audit_write" "chown"
"dac_override" "dac_read_search" "fowner"
"fsetid" "ipc_lock" "ipc_owner" "kill"
"lease" "linux_immutable" "mac_admin"
"mac_override" "mknod" "net_admin"
"net_bind_service" "net_broadcast"
"net_raw" "setfcap" "setgid" "setpcap"
"setuid" "syslog" "sys_admin" "sys_boot"
"sys_chroot" "sys_module" "sys_nice"
"sys_pacct" "sys_ptrace" "sys_rawio"
"sys_resource" "sys_time"
"sys_tty_config"))
(defvar apparmor-mode-network-permissions '("create" "accept" "bind" "connect"
"listen" "read" "write" "send"
"receive" "getsockname" "getpeername"
"getsockopt" "setsockopt" "fcntl"
"ioctl" "shutdown" "getpeersec"
"sqpoll" "override_creds"))
(defvar apparmor-mode-network-domains '("inet" "ax25" "ipx" "appletalk" "netrom"
"bridge" "atmpvc" "x25" "inet6" "rose"
"netbeui" "security" "key" "packet"
"ash" "econet" "atmsvc" "sna" "irda"
"pppox" "wanpipe" "bluetooth" "unix"))
(defvar apparmor-mode-network-types '("stream" "dgram" "seqpacket" "raw" "rdm"
"packet" "dccp"))
;; TODO: this is not complete since it is not fully documented
(defvar apparmor-mode-network-protocols '("tcp" "udp" "icmp"))
(defvar apparmor-mode-dbus-permissions '("r" "w" "rw" "send" "receive"
"acquire" "bind" "read" "write"))
(defvar apparmor-mode-rlimit-types '("fsize" "data" "stack" "core" "rss" "as"
"memlock" "msgqueue" "nofile" "locks"
"sigpending" "nproc" "rtprio" "cpu"
"nice"))
(defvar apparmor-mode-abi-regexp "^\\s-*\\(#?abi\\)\\s-+\\([<\"][[:graph:]]+[\">]\\)")
(defvar apparmor-mode-include-regexp "^\\s-*\\(#?include\\( if exists\\)?\\)\\s-+\\([<\"][[:graph:]]+[\">]\\)")
(defvar apparmor-mode-capability-regexp (concat "^\\s-*\\(capability\\)\\s-+\\("
(regexp-opt apparmor-mode-capabilities)
"\\s-*\\)*"))
(defvar apparmor-mode-variable-name-regexp "@{[[:alpha:]_]+}")
(defvar apparmor-mode-variable-regexp
(concat "^\\s-*\\(" apparmor-mode-variable-name-regexp "\\)\\s-*\\(\\+?=\\)\\s-*\\([[:graph:]]+\\)\\(\\s-+\\([[:graph:]]+\\)\\)?\\s-*\\(#.*\\)?$"))
(defvar apparmor-mode-profile-name-regexp "[[:alnum:]]+")
(defvar apparmor-mode-profile-attachment-regexp "[][[:alnum:]*@/_{},-.?]+")
(defvar apparmor-mode-profile-flags-regexp
(concat "\\(flags\\)=(\\(" (regexp-opt apparmor-mode-profile-flags) "\\s-*\\)*)") )
(defvar apparmor-mode-profile-regexp
(concat "^\\s-*\\(\\(profile\\)\\s-+\\(\\(" apparmor-mode-profile-name-regexp "\\)\\s-+\\)?\\)?\\(\\^?" apparmor-mode-profile-attachment-regexp "\\)\\(\\s-+" apparmor-mode-profile-flags-regexp "\\)?\\s-+{\\s-*$"))
(defvar apparmor-mode-file-rule-permissions-regexp "[CPUaciklmpruwx]+")
(defvar apparmor-mode-file-rule-permissions-prefix-regexp
(concat "^\\s-*\\(\\(audit\\|owner\\|deny\\)\\s-+\\)*\\(file\\s-+\\)?"
"\\(" apparmor-mode-file-rule-permissions-regexp "\\)\\s-+"
"\\(" apparmor-mode-profile-attachment-regexp "\\)\\s-*"
"\\(->\\s-+\\(" apparmor-mode-profile-attachment-regexp "\\)\\)?\\s-*"
","))
(defvar apparmor-mode-file-rule-permissions-suffix-regexp
(concat "^\\s-*\\(\\(audit\\|owner\\|deny\\)\\s-+\\)*\\(file\\s-+\\)?"
"\\(" apparmor-mode-profile-attachment-regexp "\\)\\s-+"
"\\(" apparmor-mode-file-rule-permissions-regexp "\\)\\s-*"
"\\(->\\s-+\\(" apparmor-mode-profile-attachment-regexp "\\)\\)?\\s-*"
","))
(defvar apparmor-mode-network-rule-regexp
(concat
"^\\s-*\\(\\(audit\\|quiet\\|deny\\)\\s-+\\)*network\\s-*"
"\\(" (regexp-opt apparmor-mode-network-permissions 'words) "\\)?\\s-*"
"\\(" (regexp-opt apparmor-mode-network-domains 'words) "\\)?\\s-*"
"\\(" (regexp-opt apparmor-mode-network-types 'words) "\\)?\\s-*"
"\\(" (regexp-opt apparmor-mode-network-protocols 'words) "\\)?\\s-*"
;; TODO: address expression
"\\(delegate to\\s-+\\(" apparmor-mode-profile-attachment-regexp "\\)\\)?\\s-*"
","))
(defvar apparmor-mode-dbus-rule-regexp
(concat
"^\\s-*\\(\\(audit\\|deny\\)\\s-+\\)?dbus\\s-*"
"\\(\\(bus\\)=\\(system\\|session\\)\\)?\\s-*"
"\\(\\(dest\\)=\\([[:alpha:].]+\\)\\)?\\s-*"
"\\(\\(path\\)=\\([[:alpha:]/]+\\)\\)?\\s-*"
"\\(\\(interface\\)=\\([[:alpha:].]+\\)\\)?\\s-*"
"\\(\\(method\\)=\\([[:alpha:]_]+\\)\\)?\\s-*"
;; permissions - either a single permission or multiple permissions in
;; parentheses with commas and whitespace separating
"\\("
(regexp-opt apparmor-mode-dbus-permissions 'words)
"\\|"
"("
(regexp-opt apparmor-mode-dbus-permissions 'words)
"\\("
(regexp-opt apparmor-mode-dbus-permissions 'words) ",\\s-+"
"\\)"
"\\)?\\s-*"
","))
(defvar apparmor-mode-font-lock-defaults
`(((,(regexp-opt apparmor-mode-keywords 'symbols) . font-lock-keyword-face)
(,(regexp-opt apparmor-mode-rlimit-types 'symbols) . font-lock-type-face)
;; comma at end-of-line
(",\\s-*$" . 'font-lock-builtin-face)
;; TODO be more specific about where these are valid
("->" . 'font-lock-builtin-face)
("[=\\+()]" . 'font-lock-builtin-face)
("+=" . 'font-lock-builtin-face)
("<=" . 'font-lock-builtin-face) ; rlimit
;; abi
(,apparmor-mode-abi-regexp
(1 font-lock-preprocessor-face t)
(2 font-lock-string-face t))
;; includes
(,apparmor-mode-include-regexp
(1 font-lock-preprocessor-face t)
(3 font-lock-string-face t))
;; variables
(,apparmor-mode-variable-name-regexp 0 font-lock-variable-name-face t)
;; profiles
(,apparmor-mode-profile-regexp
(4 font-lock-function-name-face t nil)
(5 font-lock-variable-name-face t))
;; capabilities
(,apparmor-mode-capability-regexp 2 font-lock-type-face t)
;; file rules
(,apparmor-mode-file-rule-permissions-prefix-regexp
(3 font-lock-keyword-face nil t) ; class
(4 font-lock-constant-face t) ; permissions
(7 font-lock-function-name-face nil t)) ;profile
(,apparmor-mode-file-rule-permissions-suffix-regexp
(3 font-lock-keyword-face nil t) ; class
(5 font-lock-constant-face t) ; permissions
(7 font-lock-function-name-face nil t)) ;profile
;; network rules
(,apparmor-mode-network-rule-regexp
(3 font-lock-constant-face t) ;permissions
(4 font-lock-function-name-face t) ;domain
(5 font-lock-variable-name-face t) ;type
(6 font-lock-type-face t)) ; protocol
;; dbus rules
(,apparmor-mode-dbus-rule-regexp
(4 font-lock-variable-name-face t) ;bus
(5 font-lock-constant-face t) ;system/session
(7 font-lock-variable-name-face t) ;dest
(10 font-lock-variable-name-face t)
(13 font-lock-variable-name-face t)
(16 font-lock-variable-name-face t)))))
(defvar apparmor-mode-syntax-table
(let ((table (make-syntax-table)))
;; # is comment start
(modify-syntax-entry ?# "<" table)
;; newline finishes comment line
(modify-syntax-entry ?\n ">" table)
;; / and + are used in path names, which we want to treat as entire words
(modify-syntax-entry ?/ "w" table)
(modify-syntax-entry ?+ "w" table)
table))
(defun apparmor-mode-complete-include (prefix &optional local)
"Return list of completions of include for PREFIX which could be LOCAL."
(let* ((file-name (file-name-base prefix))
(parent (file-name-directory prefix))
(directory (concat (if local default-directory "/etc/apparmor.d") "/"
parent)))
;; need to prepend all of directory part of prefix
(mapcar (lambda (f) (concat parent f))
(file-name-all-completions file-name directory))))
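;; For example, completing the prefix "abstractions/ba" against
;; /etc/apparmor.d would typically offer candidates such as
;; "abstractions/base".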
;; TODO - make this a lot smarter than just keywords - complete paths from
;; the system when the prefix looks like a path, do sub-completion based on
;; the current line's keyword, etc. - i.e. match against the
;; syntax-highlighting regexps and use those to complete further
(defun apparmor-mode-completion-at-point ()
"`completion-at-point' function for `apparmor-mode'."
(let ((prefix (or (thing-at-point 'word t) ""))
(bounds (bounds-of-thing-at-point 'word))
(bol (save-excursion (beginning-of-line) (point)))
(candidates nil))
(setq candidates
(cond ((looking-back "#?include\\s-+\\([<\"]\\)[[:graph:]]*" bol)
(apparmor-mode-complete-include
prefix (string= (match-string 1) "\"")))
(t apparmor-mode-keywords)))
(list (car bounds) ; start
(cdr bounds) ; end
candidates
:company-docsig #'identity)))
(defun apparmor-mode-indent-line ()
"Indent current line in `apparmor-mode'."
(interactive)
(if (bolp)
(apparmor-mode--indent-line)
(save-excursion
(apparmor-mode--indent-line))))
(defun apparmor-mode--indent-line ()
"Indent current line in `apparmor-mode'."
(beginning-of-line)
(cond
((bobp)
;; simple case indent to 0
(indent-line-to 0))
((looking-at "^\\s-*}\\s-*$")
;; block closing, deindent relative to previous line
(indent-line-to (save-excursion
(forward-line -1)
(max 0 (- (current-indentation) apparmor-mode-indent-offset)))))
;; other cases need to look at previous lines
(t
(indent-line-to (save-excursion
(forward-line -1)
;; keep going backwards until we have a line with actual
;; content since blank lines don't count
(while (and (looking-at "^\\s-*$")
(not (bobp)))
(forward-line -1))
(cond
((looking-at "\\(^.*{[^}]*$\\)")
;; previous line opened a block, indent to that line
(+ (current-indentation) apparmor-mode-indent-offset))
(t
;; default case, indent the same as previous line
(current-indentation))))))))
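;; For example, with an indent offset of 2, a profile like
;;   profile foo {
;;     /usr/bin/foo r,
;;   }
;; gets its body indented one offset past the opening line, and the closing
;; brace deindented back to match the opening line.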
;;;###autoload
(define-derived-mode apparmor-mode prog-mode "AppArmor"
"Major mode for editing AppArmor profiles."
:syntax-table apparmor-mode-syntax-table
(setq font-lock-defaults apparmor-mode-font-lock-defaults)
(setq-local indent-line-function #'apparmor-mode-indent-line)
(add-to-list 'completion-at-point-functions #'apparmor-mode-completion-at-point)
(setq imenu-generic-expression `(("Profiles" ,apparmor-mode-profile-regexp 5)))
(setq comment-start "#")
(setq comment-end "")
(when (require 'flycheck nil t)
(unless (flycheck-valid-checker-p 'apparmor)
(flycheck-define-command-checker 'apparmor
"A checker using apparmor_parser. "
:command '("apparmor_parser"
"-Q" ;; skip kernel load
"-K" ;; skip cache
source)
:error-patterns '((error line-start "AppArmor parser error at line "
line ": " (message)
line-end)
(error line-start "AppArmor parser error for "
(one-or-more not-newline)
" in profile " (file-name)
" at line " line ": " (message)
line-end))
:modes '(apparmor-mode)))
(add-to-list 'flycheck-checkers 'apparmor t)))
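;; apparmor_parser reports errors in the two shapes matched above, e.g.:
;;   AppArmor parser error at line 3: <message>
;; which flycheck turns into in-buffer diagnostics at the reported line.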
;; flymake integration
(defvar-local apparmor-mode--flymake-proc nil)
(defun apparmor-mode-flymake (report-fn &rest _args)
"`flymake' backend function for `apparmor-mode' to report errors via REPORT-FN."
;; disable if apparmor_parser is not available
(unless (executable-find "apparmor_parser")
(error "Cannot find apparmor_parser"))
;; kill any existing running instance
(when (process-live-p apparmor-mode--flymake-proc)
(kill-process apparmor-mode--flymake-proc))
(let ((source (current-buffer))
(contents (buffer-substring (point-min) (point-max))))
;; when the current buffer is an abstraction then fake a profile around it so
;; we can check it
(when (and (buffer-file-name)
(string-match-p ".*/abstractions/.*" (buffer-file-name)))
(setq contents (format "profile %s { %s }" (buffer-name) contents)))
(save-restriction
(widen)
;; Reset the `apparmor-mode--flymake-proc' process to a new process
;; calling check-syntax.
(setq
apparmor-mode--flymake-proc
(make-process
:name "apparmor-mode-flymake" :noquery t :connection-type 'pipe
;; Make output go to a temporary buffer.
:buffer (generate-new-buffer " *apparmor-mode-flymake*")
;; TODO: specify the base directory so that includes resolve correctly
;; rather than using the system ones
:command '("apparmor_parser" "-Q" "-K" "/dev/stdin")
:sentinel
(lambda (proc _event)
(when (memq (process-status proc) '(exit signal))
(unwind-protect
;; Only proceed if `proc' is the same as
;; `apparmor-mode--flymake-proc', which indicates that
;; `proc' is not an obsolete process.
;;
(if (with-current-buffer source (eq proc apparmor-mode--flymake-proc))
(with-current-buffer (process-buffer proc)
(goto-char (point-min))
;; Parse the output buffer for diagnostics'
;; messages and locations, collect them in a list
;; of objects, and call `report-fn'.
;;
(cl-loop
while (search-forward-regexp
"^\\(AppArmor parser error \\(?:for /dev/stdin in profile .*\\)?at line \\)\\([0-9]+\\): \\(.*\\)$"
nil t)
for msg = (match-string 3)
for (beg . end) = (flymake-diag-region
source
(string-to-number (match-string 2)))
for type = :error
collect (flymake-make-diagnostic source beg end type msg)
into diags
finally (funcall report-fn diags)))
(flymake-log :warning "Cancelling obsolete check %s" proc))
;; Cleanup the temporary buffer used to hold the
;; check's output.
(kill-buffer (process-buffer proc)))))))
(process-send-string apparmor-mode--flymake-proc contents)
(process-send-eof apparmor-mode--flymake-proc))))
;;;###autoload
(defun apparmor-mode-setup-flymake-backend ()
"Setup the `flymake' backend for `apparmor-mode'."
(add-hook 'flymake-diagnostic-functions 'apparmor-mode-flymake nil t))
;;;###autoload
(add-hook 'apparmor-mode-hook 'apparmor-mode-setup-flymake-backend)
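;; Note that the backend only runs once `flymake-mode' is enabled in the
;; buffer, e.g. via (add-hook 'apparmor-mode-hook #'flymake-mode) in the
;; user's init file.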
;;;###autoload
(add-to-list 'auto-mode-alist '("\\`/etc/apparmor\\.d/" . apparmor-mode))
;;;###autoload
(add-to-list 'auto-mode-alist '("\\`/var/lib/snapd/apparmor/profiles/" . apparmor-mode))
(provide 'apparmor-mode)
;;; apparmor-mode.el ends here
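The flymake error pattern above is easy to get subtly wrong; as a quick sanity check, a rough Python translation of it can be exercised against a sample error line (the sample message text is illustrative, not captured parser output):

```python
import re

# Rough Python translation of the flymake error regexp used in
# `apparmor-mode-flymake'; group 1 is the line number, group 2 the message.
pattern = re.compile(
    r"^AppArmor parser error (?:for /dev/stdin in profile .*)?"
    r"at line ([0-9]+): (.*)$"
)

# Illustrative sample of a parser error line.
sample = "AppArmor parser error at line 3: Found unexpected character: '}'"
match = pattern.match(sample)
assert match is not None
line, message = match.groups()
print(line, message)
```

Both error shapes (with and without the `for /dev/stdin in profile ...` prefix) should match and yield the line number and message the backend reports.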


@@ -4,7 +4,7 @@
"http://www.freedesktop.org/standards/PolicyKit/1/policyconfig.dtd">
<policyconfig>
<action id="com.ubuntu.pkexec.aa-notify.modify_profile">
<action id="net.apparmor.pkexec.aa-notify.modify_profile">
<description>AppArmor: modifying security profile</description>
<message>To modify an AppArmor security profile, you need to authenticate.</message>
<defaults>
@@ -15,7 +15,7 @@
<annotate key="org.freedesktop.policykit.exec.path">{LIB_PATH}apparmor/update_profile.py</annotate>
<annotate key="org.freedesktop.policykit.exec.argv1">add_rule</annotate>
</action>
<action id="com.ubuntu.pkexec.aa-notify.create_userns">
<action id="net.apparmor.pkexec.aa-notify.create_userns">
<description>AppArmor: adding userns profile</description>
<message>To allow a program to use unprivileged user namespaces, you need to authenticate.</message>
<defaults>
@@ -26,16 +26,5 @@
<annotate key="org.freedesktop.policykit.exec.path">{LIB_PATH}apparmor/update_profile.py</annotate>
<annotate key="org.freedesktop.policykit.exec.argv1">create_userns</annotate>
</action>
<action id="com.ubuntu.pkexec.aa-notify.from_file">
<description>AppArmor: Modifying profile from file</description>
<message>To modify an AppArmor security profile from file, you need to authenticate.</message>
<defaults>
<allow_any>auth_admin</allow_any>
<allow_inactive>auth_admin</allow_inactive>
<allow_active>auth_admin</allow_active>
</defaults>
<annotate key="org.freedesktop.policykit.exec.path">{LIB_PATH}apparmor/update_profile.py</annotate>
<annotate key="org.freedesktop.policykit.exec.argv1">from_file</annotate>
</action>
</policyconfig>


@@ -23,12 +23,6 @@ ignore_denied_capability="sudo,su"
# OPTIONAL - kind of operations which display a popup prompt.
# prompt_filter="userns"
# OPTIONAL - Maximum number of profile that can send notification before they are merged
# maximum_number_notification_profiles=2
# OPTIONAL - Keys to aggregate when merging events
# keys_to_aggregate="operation,class,name,denied,target"
# OPTIONAL - restrict using aa-notify to users in the given group
# (if not set, everybody who has permissions to read the logfile can use it)
# use_group="admin"


@@ -1,8 +1,10 @@
GENERATING TRANSLATION MESSAGES
To generate the .pot file, run the following command in the po directory:
To generate the messages.pot file:
pygettext3 -o apparmor-utils.pot ../apparmor/*.py $(find .. -executable -name 'aa-*')
Run the following command in Translate.
python pygettext.py ../apparmor/*.py ../Tools/aa*
It will generate the apparmor-utils.pot file in the po directory.
It will generate the messages.pot file in the Translate directory.
You might need to provide the full path to pygettext.py from your python installation. It will typically be in the /path/to/python/libs/Tools/i18n/pygettext.py

File diff suppressed because it is too large

Some files were not shown because too many files have changed in this diff