mirror of https://gitlab.com/apparmor/apparmor synced 2025-08-22 01:57:43 +00:00

35 Commits

Author SHA1 Message Date
John Johansen
501e87a3f2 parser: Cleanup parser control flags, so they display as expected to user
Instead of having multiple tables, move all of the optimization and dump
flags into a common table, since we have room after the split of the
optimization and dump flags.

We can, if needed, switch the flag entry size to a long in the future.
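
A rough sketch of what such a single table could look like (the entry layout
and flag names are illustrative assumptions, not the parser's actual
identifiers):

  // one table covering both optimization and dump flags (illustrative)
  enum flag_kind { FLAG_DUMP, FLAG_CONTROL };

  struct flag_entry {
      const char *name;     // name shown to the user
      flag_kind kind;       // dump vs. optimization/control flag
      unsigned int bit;     // entry could become a long if more bits are needed
      const char *desc;     // help text
  };

  static const flag_entry flag_table[] = {
      { "expr-tree", FLAG_DUMP,    1u << 0, "dump the expression tree" },
      { "equiv",     FLAG_CONTROL, 1u << 1, "use equivalence classes" },
      // ... remaining dump and optimization flags in the same table
  };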

Signed-off-by: John Johansen <john.johansen@canonical.com>
2023-07-08 19:58:59 -07:00
John Johansen
e84e481263 parser: cleanup and rework optimization and dump flag handling
In preparation for more flags (not all of them backend dfa based), rework
the optimization and dump flag handling, which up to this point has been
built exclusively around the dfa.

- split dfa control and dump flags into separate fields. This gives more
  room for new flags in the existing DFA set
- rename DFA_DUMP and DFA_CONTROL to DUMP_DFA and CONTROL_DFA, as this
  provides more uniform naming for non-dfa flags
- group dump and control flags into a structure so they can be passed
  together (a rough sketch follows below)
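
A rough sketch of the grouped flags described above (the struct and field
names are assumptions for illustration, not verified parser identifiers):

  #include <cstdint>

  // dump and control flags split into separate fields, passed as one value
  struct optflags {
      uint32_t dump;      // DUMP_* style bits
      uint32_t control;   // CONTROL_* style bits
  };

  // hypothetical bit values, for illustration only
  static const uint32_t DUMP_DFA_TREE        = 1u << 0;
  static const uint32_t CONTROL_DFA_MINIMIZE = 1u << 1;

  static bool want_dump(const optflags &opts, uint32_t bit)
  {
      return (opts.dump & bit) != 0;
  }

  static bool want_control(const optflags &opts, uint32_t bit)
  {
      return (opts.control & bit) != 0;
  }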

Signed-off-by: John Johansen <john.johansen@canonical.com>
2023-07-07 17:47:41 -07:00
Alfonso Sánchez-Beato
5aab543a3b parser: replace dynamic_cast with is_type method
The dynamic_cast operator is slow, as it needs to look at RTTI
information and even does some string comparisons, especially in deep
hierarchies like the one for Node. Profiling with callgrind showed
that dynamic_cast can eat a huge portion of the running time, as it
accounts for most of the time spent in the simplify_tree()
function. For some complex profiles, the number of calls to
dynamic_cast can be in the range of millions.

This commit replaces the use of dynamic_cast in the Node hierarchy
with a method called is_type(), which returns true if the pointer can
be cast to the specified type. It works by looking at a Node object
field that is an integer with a bit set for each type up the
hierarchy. Therefore, dynamic_cast is replaced by a simple bitwise
operation.

This change can reduce the compilation times for some profiles by more
than 50%, especially on the arm/arm64 arch. This opens the door to maybe
avoiding "-O no-expr-simplify" in the snapd daemon, as that option would
now make the compilation slower in almost all cases.
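
A minimal sketch of the bitmask idea, using an illustrative subset of type
constants rather than the real Node hierarchy:

  #include <cstdint>

  // Each class ORs its own bit into node_type, so a pointer's type (or any
  // base along its branch of the hierarchy) is tested with one bitwise AND
  // instead of a dynamic_cast.
  enum node_type_t : uint32_t {
      NODE_TYPE_NODE = 1u << 0,
      NODE_TYPE_LEAF = 1u << 1,
      NODE_TYPE_CHAR = 1u << 2,    // illustrative subset
  };

  class Node {
  public:
      explicit Node(uint32_t t) : node_type(NODE_TYPE_NODE | t) {}
      virtual ~Node() = default;
      bool is_type(uint32_t t) const { return (node_type & t) == t; }
  private:
      uint32_t node_type;
  };

  class LeafNode : public Node {
  public:
      explicit LeafNode(uint32_t t = 0) : Node(NODE_TYPE_LEAF | t) {}
  };

  class CharNode : public LeafNode {
  public:
      CharNode() : LeafNode(NODE_TYPE_CHAR) {}
  };

  // usage: if (n->is_type(NODE_TYPE_CHAR)) { /* safe to static_cast */ }

Once is_type() has confirmed the type, a plain static_cast can take the place
of the old dynamic_cast.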

This is the example profile used in some of my tests; with this change
the run time is around 1/3 of what it was before on an x86 laptop:

profile "test" (attach_disconnected,mediate_deleted) {
dbus send
    bus={fcitx,session}
    path=/inputcontext_[0-9]*
    interface=org.fcitx.Fcitx.InputContext
    member="{Close,Destroy,Enable}IC"
    peer=(label=unconfined),
dbus send
    bus={fcitx,session}
    path=/inputcontext_[0-9]*
    interface=org.fcitx.Fcitx.InputContext
    member=Reset
    peer=(label=unconfined),
dbus receive
    bus=fcitx
    peer=(label=unconfined),
dbus receive
    bus=session
    interface=org.fcitx.Fcitx.*
    peer=(label=unconfined),
dbus send
    bus={fcitx,session}
    path=/inputcontext_[0-9]*
    interface=org.fcitx.Fcitx.InputContext
    member="Focus{In,Out}"
    peer=(label=unconfined),
dbus send
    bus={fcitx,session}
    path=/inputcontext_[0-9]*
    interface=org.fcitx.Fcitx.InputContext
    member="{CommitPreedit,Set*}"
    peer=(label=unconfined),
dbus send
    bus={fcitx,session}
    path=/inputcontext_[0-9]*
    interface=org.fcitx.Fcitx.InputContext
    member="{MouseEvent,ProcessKeyEvent}"
    peer=(label=unconfined),
dbus send
    bus={fcitx,session}
    path=/inputcontext_[0-9]*
    interface=org.freedesktop.DBus.Properties
    member=GetAll
    peer=(label=unconfined),
dbus (send)
    bus=session
    path=/org/a11y/bus
    interface=org.a11y.Bus
    member=GetAddress
    peer=(label=unconfined),
dbus (send)
    bus=session
    path=/org/a11y/bus
    interface=org.freedesktop.DBus.Properties
    member=Get{,All}
    peer=(label=unconfined),
dbus (receive, send)
    bus=accessibility
    path=/org/a11y/atspi/**
    peer=(label=unconfined),
dbus (send)
    bus=system
    path=/org/freedesktop/Accounts
    interface=org.freedesktop.DBus.Introspectable
    member=Introspect
    peer=(label=unconfined),
dbus (send)
    bus=system
    path=/org/freedesktop/Accounts
    interface=org.freedesktop.Accounts
    member=FindUserById
    peer=(label=unconfined),
dbus (receive, send)
    bus=system
    path=/org/freedesktop/Accounts/User[0-9]*
    interface=org.freedesktop.DBus.Properties
    member={Get,PropertiesChanged}
    peer=(label=unconfined),
dbus (send)
    bus=session
    interface=org.gtk.Actions
    member=Changed
    peer=(name=org.freedesktop.DBus, label=unconfined),
dbus (receive)
    bus=session
    interface=org.gtk.Actions
    member={Activate,DescribeAll,SetState}
    peer=(label=unconfined),
dbus (receive)
    bus=session
    interface=org.gtk.Menus
    member={Start,End}
    peer=(label=unconfined),
dbus (send)
    bus=session
    interface=org.gtk.Menus
    member=Changed
    peer=(name=org.freedesktop.DBus, label=unconfined),
dbus (send)
    bus=session
    path="/com/ubuntu/MenuRegistrar"
    interface="com.ubuntu.MenuRegistrar"
    member="{Register,Unregister}{App,Surface}Menu"
    peer=(label=unconfined),
}
2021-02-16 10:23:10 +01:00
Steve Beattie
8782f53593
parser: spelling fixes in aare_rules.c
Adjust function and variable names to spell separator correctly. Kept
as a distinct change in case someone wants to cherry-pick other fixes.

Signed-off-by: Steve Beattie <steve.beattie@canonical.com>
Acked-by: Christian Boltz <apparmor@cboltz.de>
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/687
2020-12-01 12:47:26 -08:00
John Johansen
c9d01a325d parser: don't apply exec mapping computations to the policydb
v8 network permissions extend into the range used by exec mapping,
so it is important not to blindly do exec mapping on both the
file dfa and the policydb dfa any more.

Track what type of dfa, and which permissions, we are building so
we can properly apply exec mapping only when building the
file dfa.
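
A small sketch of the idea, with hypothetical names standing in for the
parser's actual types and helpers:

  #include <cstdint>

  enum class dfa_kind { file, policydb };

  static uint32_t map_exec_perms(uint32_t perms)
  {
      // stand-in for the real exec mapping computation
      return perms;
  }

  static uint32_t finalize_perms(dfa_kind kind, uint32_t perms)
  {
      if (kind == dfa_kind::file)
          return map_exec_perms(perms);   // only the file dfa gets exec mapping
      return perms;   // policydb perms (including v8 network) pass through untouched
  }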

MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/521
Signed-off-by: John Johansen <john.johansen@canonical.com>
2020-09-29 03:34:47 -07:00
Eric Chiang
4116f847df libapparmor_re: fix resource leaks detected by coverity.com
Fixes two resource leaks. https://scan.coverity.com/projects/apparmor

I don't actually know how to link to the individual reports, but the
first one comes from an early return; the second comes from an iterator
potentially being empty.
2020-01-02 18:09:40 -08:00
John Johansen
444b8e3836 parser: change xattr encoding and allow append_rule to embed permissions
The current encoding makes every xattr optional and uses this to
propagate the permission from the tail to the individual rule match
points.

This, however, is wrong. Instead, change the encoding so that an xattr
(unless optional) must be matched before moving on to the next xattr
match.

The permission is carried at the end of each rule portion: file match,
xattr 1, xattr 2, ...

Signed-off-by: John Johansen <john.johansen@canonical.com>
2019-11-26 21:32:08 -08:00
John Johansen
2992e6973f parser: convert xmatch to use out of band transitions
xattrs can contain NULL characters in their values, which means we
cannot use regular NULL transitions to separate values. To fix this,
use out of band transitions instead.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2019-11-26 21:32:08 -08:00
John Johansen
16b67ddbd6 add ability to use out of band transitions
Currently the NULL character is used as an out of band transition
for string/path elements. This works for them because the NULL character
is not valid in that data. However, this does not work for binary
data, which can contain NULL characters.

So far we have only dealt with fixed-length fields of binary data,
making a NULL separator unnecessary.

However, binary data such as the xattr match and mount data fields are
variable length and can contain NULL characters. To deal with this,
add the ability to specify out of band transitions that can only
be triggered by code, not by input data.

The out of band transition can be used to separate variable length
data fields just as the NULL transition has been used to separate
variable length strings.

In the compressed hfa, out of band transitions are expressed as a
negative offset from the state's base. This leaves us room to expand
the character match range in the future if desired, and on average
makes the range between the out of band transition and the input
transitions smaller than it would be if the out of band transition
were stored after the valid input transitions.
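
A conceptual sketch of such a lookup, assuming a simplified comb-compressed
layout (the field names and the trap-state convention are illustrative, not
the actual chfa format):

  #include <cstdint>
  #include <vector>

  struct chfa_tables {
      std::vector<uint32_t> next;    // target state for each packed slot
      std::vector<uint32_t> check;   // owning state for each packed slot
      std::vector<uint32_t> base;    // per-state base index into next/check
  };

  // Follow the single out of band transition from @state, or return 0
  // (treated here as a non-matching state) if none is defined.
  static uint32_t oob_next(const chfa_tables &t, uint32_t state)
  {
      uint32_t b = t.base[state];
      if (b == 0)
          return 0;                  // no slot below the base
      uint32_t slot = b - 1;         // negative offset from the state's base
      if (t.check[slot] == state)
          return t.next[slot];
      return 0;
  }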

Out of band transitions in the dfa will not break old kernels
that don't know about them, but those kernels won't be able to trigger
the out of band transition match, so out of band transitions should not
be used unless the kernel indicates that it supports them.

It should be noted that this patch only adds support for a single
out of band transition. If multiple out of band transitions are
required, it is trivial to extend:
- add a tag indicating support in the kernel
- add an oob max range field to the dfa header so the kernel knows
  what the max range that needs verifying is
- extend the oob generation fns to generate oob transitions based on a
  value instead of a fixed -1

Signed-off-by: John Johansen <john.johansen@canonical.com>
2019-11-26 21:32:08 -08:00
John Johansen
72f93d9aba parser: rename uchar to transchar
Signed-off-by: John Johansen <john.johansen@canonical.com>
2019-11-26 21:32:08 -08:00
Eric Chiang
a42fd8c6f4 parser: add support for matching based on extended file attributes
Add userland support for matching based on extended file attributes.
This leverages DFA based matching already in the kernel:

https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=8e51f908
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=73f488cd

Matching is exposed via flags on the profile:

  /usr/bin/* xattrs=(user.foo=bar user.bar=**) {
      # ...
  }

Profiles list the set of extended attributes that a file MUST have, and
a regex to match the value of each of those extended attributes. Additional
extended attributes on the file don't affect the match.
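
For context, a small sketch of how a binary could be given an xattr that such
a flag would match, using the standard setxattr(2) call (the target path and
value are hypothetical):

  #include <sys/xattr.h>
  #include <cstring>
  #include <cstdio>

  int main()
  {
      const char *path = "/usr/bin/some-tool";   // hypothetical target
      const char *value = "bar";                 // would satisfy user.foo=bar above

      if (setxattr(path, "user.foo", value, std::strlen(value), 0) != 0)
          std::perror("setxattr");
      return 0;
  }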

Signed-off-by: Eric Chiang <ericchiang@google.com>
2019-03-14 10:47:54 -07:00
Eric Chiang
cc09794fbd parser: determine xmatch priority based on smallest DFA match
The length of an xmatch is used to prioritize multiple profiles that
match the same path, with the intent that the more specific match wins.
Currently, the length of an xmatch is computed from the position of the
first regex character.

While trying to work around issues with no_new_privs by combining
profiles, we noticed that the xmatch length computation doesn't work as
expected for multiple regexes. Consider the following two profiles:

    profile all /** { }
    profile bins /{,usr/,usr/local/}bin/** { }

xmatch_len is currently computed as "1" for both profiles, even though
"bins" is clearly more specific.

When determining the length of a regex, compute the smallest possible
match and use that for xmatch priority instead of the position of the
first regex character.
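
A sketch of one way to compute that smallest possible match: a breadth-first
search for the shortest accepting path over a simplified DFA representation
(the types are illustrative, not the parser's):

  #include <cstdint>
  #include <queue>
  #include <vector>

  struct Dfa {
      // trans[s] lists the successor states of s (one edge per consumed char)
      std::vector<std::vector<uint32_t>> trans;
      std::vector<bool> accepting;
  };

  // Fewest input characters needed to reach an accepting state from start,
  // or -1 if no accepting state is reachable.
  static int shortest_match_len(const Dfa &dfa, uint32_t start)
  {
      std::vector<int> dist(dfa.trans.size(), -1);
      std::queue<uint32_t> q;
      dist[start] = 0;
      q.push(start);
      while (!q.empty()) {
          uint32_t s = q.front(); q.pop();
          if (dfa.accepting[s])
              return dist[s];
          for (uint32_t t : dfa.trans[s])
              if (dist[t] < 0) {
                  dist[t] = dist[s] + 1;
                  q.push(t);
              }
      }
      return -1;
  }

Computed this way, the "bins" attachment has a longer shortest match than
/**, so the more specific profile correctly wins.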
2019-02-08 13:51:02 -08:00
Steve Beattie
768f11b497 parser: revert changes from commit rev 3248
The changes to the parser made in commit rev 3248 were accidental and
not intended to be committed.
2015-10-14 13:49:26 -07:00
John Johansen
99322d3978 Add LSS presentations about apparmor security model 2015-10-13 15:39:17 -07:00
John Johansen
8efb5850f2 Move rule simplification into the tree construction phase
The current rule simplification algorithm has issues that need to be
addressed in a rewrite, but it is still often a win, especially for
larger profiles.

However, doing rule simplification as a single pass limits what it can
do. We default to right simplification first because this has historically
shown the most benefit, for two reasons:
  1. It allowed better grouping of the split-out accept nodes that we
     used to do (changed in previous patches)
  2. Trailing regexes like
       /foo/**,
       /foo/**.txt,
     can be combined, and they are the largest source of node set
     explosion.

However, the move to unique node sets eliminates 1, and forces 2 to
work only within the single unique permission set on the right side
factoring pass, but it still incurs the penalty of walking the whole
tree looking for potential nodes to factor.

Moving tree simplification into the construction phase gets rid of
the need for the right side factoring pass to walk other node sets
that will never combine, and since we are doing simplification we can
do it before the cat and permission nodes are added, reducing the
set of nodes to look at by another two.

We do lose the ability to combine nodes from different sets during
the left factoring pass, but experimentation shows that doing
simplification only within the unique permission sets achieves most of
the factoring that a single global pass would achieve.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
2015-06-25 16:38:04 -06:00
John Johansen
832455de2c Change expr tree construction so that rules are grouped by perms
Currently rules are added to the expression tree in order, and then
tree simplification and factoring are done. This forces simplification
to "search" through the tree to find rules with the same permissions
during right factoring, which, depending on the ordering of factoring,
may not be able to group all rules with the same permissions.

Instead of having tree factoring do the work to regroup rules with the
same permissions, pregroup them as part of expr tree construction,
and only build the full tree when the dfa is constructed.
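
A rough sketch of the pregrouping, with a minimal stand-in for the expression
tree rather than the parser's real Node classes:

  #include <cstdint>
  #include <map>
  #include <memory>

  struct Node {
      virtual ~Node() = default;
  };

  struct AltNode : Node {                  // alternation of two subtrees
      std::shared_ptr<Node> left, right;
      AltNode(std::shared_ptr<Node> l, std::shared_ptr<Node> r)
          : left(std::move(l)), right(std::move(r)) {}
  };

  // Rules are grouped by permission set as they are added; the full tree is
  // only assembled when the dfa is constructed.
  struct RuleGroups {
      std::map<uint32_t, std::shared_ptr<Node>> by_perms;

      void add_rule(uint32_t perms, std::shared_ptr<Node> expr) {
          auto &slot = by_perms[perms];
          if (slot)
              slot = std::make_shared<AltNode>(slot, std::move(expr));
          else
              slot = std::move(expr);
      }

      std::shared_ptr<Node> build_root() const {
          std::shared_ptr<Node> root;
          for (const auto &kv : by_perms) {
              if (root)
                  root = std::make_shared<AltNode>(root, kv.second);
              else
                  root = kv.second;
          }
          return root;
      }
  };

Simplification and factoring can then run per permission group before the
groups are joined into one tree.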

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
2015-06-25 16:38:02 -06:00
John Johansen
5a9300c91c Move the permission map into the rule set
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
2015-06-25 15:54:15 -06:00
John Johansen
292f3be438 switch away from doing an individual accept node for each perm bit
Accept nodes per perm bit were used from the very beginning in the
false belief that they would help produce minimized dfas, because
nfa states could share partially overlapping permissions.

In reality they make tree factoring harder, result in longer nfa
state sets during dfa construction, and do not result in a minimized
dfa.

Moving to unique permission sets allows us to minimize the number
of node sets, and helps avoid recreating each set type multiple
times during dfa construction.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
2015-06-25 14:08:55 -06:00
John Johansen
19c942e5c2 parser: split accept perm processing from rule parsing
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
Acked-by: Seth Arnold <seth.arnold@canonical.com>
2014-09-03 14:40:08 -07:00
John Johansen
ee7bf1dc28 parser: Refactor rule accumulation to use some helper functions
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
Acked-by: Seth Arnold <seth.arnold@canonical.com>
2014-09-03 14:24:37 -07:00
John Johansen
f7e12a9bc5 Convert aare_rules into a class
This cleans things up a bit and fixes a bug where not all rules were
getting properly counted, so that the addition of policy_mediation
rules failed to generate the policy dfa in some cases.

Because the policy dfa is now being generated correctly, we need to
fix some tests to use the new -M flag to specify the expected feature
set of the test.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
Acked-by: Seth Arnold <seth.arnold@canonical.com>
2014-04-23 10:57:16 -07:00
John Johansen
22855508e8 Add Differential State Compression to the DFA
Differential state compression encodes a state's transitions as the
difference between the state and its default state (the state it is
relative to).

This reduces the number of transitions that need to be stored in the
transition table, hence reducing the size of the dfa.  There is a
trade-off in that a single input character may have to traverse more
than one state.  This is somewhat offset by reduced table sizes providing
better locality and caching properties.

With careful encoding we can still make constant match time guarantees.
This patch guarantees that a state that is differentially encoded will do at
most 3m state traversals to match an input of length m (as opposed to a
non-differentially compressed dfa doing exactly m state traversals).
In practice the actual number of extra traversals is less than this because
we selectively choose which states are differentially encoded.
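
A simplified sketch of how a lookup could work in a differentially encoded
dfa (the data layout here is for illustration and is not the actual hfa/chfa
representation):

  #include <cstdint>
  #include <unordered_map>
  #include <vector>

  struct DiffState {
      std::unordered_map<uint8_t, uint32_t> diff;  // transitions that differ
      int32_t def = -1;                            // default (relative) state, -1 if none
  };

  static uint32_t next_state(const std::vector<DiffState> &states,
                             uint32_t s, uint8_t c)
  {
      // Walk the default chain until a real transition for c is found;
      // bounding the chain depth is what preserves the ~3m traversal guarantee.
      for (;;) {
          auto it = states[s].diff.find(c);
          if (it != states[s].diff.end())
              return it->second;
          if (states[s].def < 0)
              return 0;                            // illustrative non-matching state
          s = static_cast<uint32_t>(states[s].def);
      }
  }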

In addition to reducing the size of the dfa by reducing the number of
transitions that have to be stored, differential encoding reduces the
number of transitions that need to be considered by comb compression,
which can result in tighter packing, due to a reduction in sparseness, and
also reduces the time spent in comb compression, which currently uses an
O(n^2) algorithm.

Differential encoding will always result in a DFA that is smaller than or
equal in size to the non-differentially encoded DFA, and will usually
improve compilation times, with the performance improvements increasing as
the DFA gets larger.

E.g. given an example DFA that created 8991 states after minimization.
* If only comb compression (current default) is used

 52057 transitions are packed into a table of 69591 entries, achieving an
 efficiency of about 75% (an average of about 7.74 table entries per state).
 With a resulting compressed dfa16 size of 404238 bytes and a run time for
 the dfa compilation of
   real 0m9.037s
   user 0m8.893s
   sys  0m0.036s

* If differential encoding + comb compression is used, 8292 of the 8991
  states are differentially encoded, with 31557 transitions removed.  Resulting in

  20500 transitions packed into a table of 20675 entries, achieving an
  efficiency of about 99.2% (an average of about 2.3 table entries per state).
  With a resulting compressed dfa16 size of 207874 bytes (about 48.6%
  reduction) and a run time for the dfa compilation of
   real 0m5.416s (about 40% faster)
   user 0m5.280s
   sys  0m0.040s

Repeating with a larger DFA that has 17033 states after minimization.
* If only comb compression (current default) is used

 102992 transitions are packed into a table of 137987 entries, achieving
 an efficiency of about 75% (an average of about 8.10 entries per state).
 With a resultant compressed dfa16 size of 790410 bytes and a run time for dfa
 compilation of
  real  0m28.153s
  user  0m27.634s
  sys   0m0.120s

* with differential encoding
 39374 transitions are packed into a table of 39594 entries, achieving an
 efficiency of about 99.4% (an average of about 2.32 entries per state).
 With a resultant compressed dfa16 size of 396838 bytes (about 50% reduction)
 and a run time for dfa compilation of
  real  0m11.804s (about 58% faster)
  user  0m11.657s
  sys   0m0.084s

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Seth Arnold <seth.arnold@canonical.com>
2014-01-09 16:55:55 -08:00
Steve Beattie
9c50ff9fb3 parser - terminate search early if wildcards are discovered
This patch is a very minor optimization to the search to determine
whether a given rule is an exact match or not. If a wildcard rule
(i.e.  an inexact match) is discovered, exact_match is set to 0,
so we don't need to continue the tree traversal.

Signed-off-by: Steve Beattie <steve@nxnw.org>
Acked-by: John Johansen <john.johansen@canonical.com>
2013-10-14 14:36:05 -07:00
Steve Beattie
cf57476d6b parser - Fix const char warnings
This patch addresses a bunch of the compiler string conversion warnings
that were introduced with the C++-ification patch.

Signed-off-by: Steve Beattie <steve@nxnw.org>
Acked-by: Tyler Hicks <tyhicks@canonical.com>
2013-10-01 10:59:04 -07:00
John Johansen
a34059b1e5 Convert the parser to C++
This conversion is nothing more than what is required to get it to
compile. Further improvements will come as the code is refactored.

Unfortunately, due to C++ not supporting designated initializers, the auto
generation of af names needed to be reworked, and the "netlink" and "unix"
domain socket keywords leaked in. Since these were going to be added in
separate patches, I have not bothered to do the extra work to replace them
with a temporary placeholder.

Signed-off-by: John Johansen <john.johansen@canonical.com>
[tyhicks: merged with dbus changes and memory leak fixes]
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Acked-by: Seth Arnold <seth.arnold@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
2013-09-27 16:13:22 -07:00
John Johansen
66717a2aec temp fix using the 2.8 patch until the 3.0 patch is ready to land
Fix a nasty little bug that can surface in apparmor 2.8 when
hats/child profiles are used.

The matchflags in the dfa backend are not getting properly reset, which
results in a previously processed profile's match flags being used. This is
not a problem for most permissions but can result in x conflict errors.

Note: this should not result in profiles with the wrong x transitions being
loaded, as it causes compilation to fail with an x conflict.

This is a minimal patch targeted at the 2.8 release. As such, I have just
updated the delete_ruleset routine to clear the flags, as it is already
being properly called for every rule set.

Apparmor 2.9/3.0 will have a different approach where it is not possible
to reuse the flags.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <sbeattie@ubuntu.com>
2012-12-10 17:08:19 -08:00
John Johansen
37f446dd79 Fix/cleanup the permission reporting for the dfa dumps
The permission reporting did not include the full set of permission
flags and was inconsistent between the dump routines.

Report permissions as the quad (allow/deny/audit/quiet) in hex.
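
A minimal sketch of quad-style output along these lines (the exact format
string is an assumption, not necessarily what the dump routines print):

  #include <cstdint>
  #include <cstdio>

  static void dump_perm_quad(uint32_t allow, uint32_t deny,
                             uint32_t audit, uint32_t quiet)
  {
      std::printf("(0x%x/0x%x/0x%x/0x%x)\n", allow, deny, audit, quiet);
  }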

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
2012-03-09 04:17:47 -08:00
John Johansen
e61b7b9241 Update the copyright dates for the apparmor_parser
Signed-off-by: John Johansen <john.johansen@canonical.com>
2012-02-24 04:21:59 -08:00
John Johansen
662ad60cd7 Extend the information dumped by -D rule-exprs to include permissions
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Kees Cook <kees@ubuntu.com>
2012-02-24 04:17:19 -08:00
John Johansen
e7c550243c Make second minimization pass optional
The removal of deny information is a one-way operation that can result
in a smaller dfa, but it also results in a dfa that should not be used in
future operations, because the deny rules from the precomputed dfa would
not get applied.

For now, default the filtering out of deny information to off, as it takes
extra time and seldom results in further state reduction.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Kees Cook <kees@ubuntu.com>
2012-02-16 07:43:02 -08:00
John Johansen
6f95ff5637 Track full permission set through all stages of DFA construction.
Previously, permission information was thrown away early and permissions
were packed into their CHFA form at the start of DFA construction.  Because
of this, permission hashing to set up the initial DFA partitions was
required, as x transition conflicts, etc. could not otherwise be resolved.

Move the mapping of permissions to CHFA construction, and track the full
permission set through DFA construction.  This allows removal of the
perm_hashing hack, which prevented a full minimization from happening
in some DFAs.  It could also result in x conflicts not being correctly
detected, and deny rules not being fully applied in some situations.

Eg.
 pre full minimization
   Created dfa: states 33451
   Minimized dfa: final partitions 17033

 with full minimization
   Created dfa: states 33451
   Minimized dfa: final partitions 9550
   Dfa minimization no states removed: partitions 9550

The tracking of deny rules through to the completed DFA construction creates
a new class of states: states that are marked as accepting
(they carry permission information) but are in fact non-accepting, as they
only carry deny information.  We add a second minimization pass where such
states have their permission information cleared and are thus moved into the
non-accepting partition.
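
A simplified sketch of that second pass (the Perms layout and the notion of
"deny only" here are illustrative assumptions):

  #include <cstdint>
  #include <vector>

  struct Perms {
      uint32_t allow = 0, deny = 0, audit = 0, quiet = 0;

      bool deny_only() const {
          return allow == 0 && audit == 0 && (deny != 0 || quiet != 0);
      }
      void clear() { allow = deny = audit = quiet = 0; }
  };

  // States whose only "accepting" content is deny/quiet information get
  // their permissions cleared, letting minimization fold them into the
  // non-accepting partition.
  static void clear_deny_only_states(std::vector<Perms> &state_perms)
  {
      for (auto &p : state_perms)
          if (p.deny_only())
              p.clear();
  }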

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Kees Cook <kees@ubuntu.com>
2012-02-16 07:41:40 -08:00
John Johansen
9d374d4726 Rename compressed_hfa.{c,h} and TransitionTable within them to chfa. This
is done to make it clear what TransitionTable is, as we will then add matching
capabilities.  Renaming the files is just to make them consistent with
the class in the file.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Kees Cook <kees@ubuntu.com>
2011-12-15 05:06:32 -08:00
John Johansen
84c0bba1ef Lindent + hand cleanups aare_rules
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
2011-03-13 05:53:08 -07:00
John Johansen
6aad970d1c Split out compressed dfa "transition table" compression
Split hfa into hfa and compressed_hfa files.  The hfa portion focuses on
creating and manipulating hfas, while compressed_hfa is used for creating
compressed hfas that can be used/reused at run time with much less memory
usage than the full-blown hfa.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
2011-03-13 05:50:34 -07:00
John Johansen
298a36bffb Split out aare_rules which are used to encapsulate creating the dfa
Split out the aare_rule bits that encapsulate the conversion of apparmor
rules into the final compressed dfa.

This patch will not compile because it needs hfa to export an interface,
but hfa is going to be split, so just delay until hfa and transtable are
split and they can each export their own interface.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
2011-03-13 05:49:15 -07:00