Elaborate in the class comment on the firstpos, lastpos, followpos, and
nullable fields, beyond just referencing the Dragon book. Also add the
section of the book in which these are explained.
The shared node type will be used in the future to add new capabilities.
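As a hedged illustration of what the expanded comment covers (the names
below are illustrative only, not the parser's actual classes), the Dragon
book defines these attributes by simple recurrences over the expression
tree; for a concatenation node they look roughly like:

    #include <set>

    struct Node;
    typedef std::set<Node *> NodeSet;

    struct Node {
        Node *left, *right;
        bool nullable;      // can this subtree match the empty string?
        NodeSet firstpos;   // positions that can match the first symbol
        NodeSet lastpos;    // positions that can match the last symbol
        NodeSet followpos;  // positions that can follow this position
    };

    // Dragon-book recurrence for a concatenation (cat) node.
    static void compute_cat(Node *n)
    {
        n->nullable = n->left->nullable && n->right->nullable;

        n->firstpos = n->left->firstpos;
        if (n->left->nullable)
            n->firstpos.insert(n->right->firstpos.begin(),
                               n->right->firstpos.end());

        n->lastpos = n->right->lastpos;
        if (n->right->nullable)
            n->lastpos.insert(n->left->lastpos.begin(),
                              n->left->lastpos.end());

        // every position in firstpos(right) can follow any position
        // in lastpos(left)
        for (NodeSet::iterator i = n->left->lastpos.begin();
             i != n->left->lastpos.end(); ++i)
            (*i)->followpos.insert(n->right->firstpos.begin(),
                                   n->right->firstpos.end());
    }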
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
We need to rework permission type mapping to nodesets, which means we
need to move the nodeset computations earlier in the dfa creation
process, instead of doing them as a post step of follow(). So move the
nodeset into expr-tree.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
Acked-by: Seth Arnold <seth.arnold@canonical.com>
Make the accept information dump output be in hexadecimal like the
other dumps, so it's easier to cross-reference between them.
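As a hedged illustration only (hypothetical helper, not the parser's
actual dump routine), switching such output to hexadecimal in C++ is
typically just a stream-flag change:

    #include <iostream>
    #include <cstdint>

    // print the accept/permission words in hex so they can be matched
    // up against the other hex-formatted dumps
    static void dump_accept(std::ostream &os, uint32_t allow, uint32_t audit)
    {
        os << std::hex << std::showbase
           << "allow: " << allow << " audit: " << audit
           << std::dec << "\n";
    }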
Signed-off-by: John Johansen <john.johansen@canonical.com>
This patch tries to reduce the number of dynamic_cast<>s needed
during normalization by pushing the operations of normalize_tree()
into the expr-tree classes themselves, rather than performing them as
an external function. This eliminates the need for dynamic_cast<>
checks on the current object under inspection and reduces the number
of checks that need to be performed on child Nodes as well.
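As a rough sketch of the approach (illustrative class names, not
necessarily the parser's exact interfaces), the external walker full of
dynamic_cast<> checks is replaced by a virtual method that each node
class overrides, so the current node never needs to be type-probed:

    // before (conceptually): an external function probing node types
    //
    //   void normalize_tree(Node *t) {
    //       if (AltNode *alt = dynamic_cast<AltNode *>(t)) { ... }
    //       else if (CatNode *cat = dynamic_cast<CatNode *>(t)) { ... }
    //       ...
    //   }

    class Node {
    public:
        virtual ~Node() {}
        // default: nothing to normalize (leaf nodes)
        virtual void normalize(int dir) { (void)dir; }
    };

    class TwoChildNode : public Node {
    public:
        Node *child[2];
        // each node class normalizes itself and recurses into its
        // children directly
        virtual void normalize(int dir)
        {
            if (child[dir])
                child[dir]->normalize(dir);
            if (child[!dir])
                child[!dir]->normalize(!dir);
        }
    };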
In non-strict benchmarking, doing the dynamic_cast<> reduction
for just the tree normalization operation resulted in a ~10-15%
improvement in overall time on a couple of different hosts (amd64,
armel), as measured against apparmor_parser -Q. Valgrind's callgrind
tool indicated a reduction in the number of calls to dynamic_cast<>
on the tst/simple_tests/vars/dbus_vars_9.sd test profile from ~19
million calls to ~12 million.
In comparisons with dumped expr trees over both the entire
tst/simple_tests/ tree and from 1000 randomly generated profiles via
stress.rb, the generated trees were identical.
Patch history:
v1: initial version of patch
v2: update patch to take into account the infinite loop fix in
trunk rev 1975 and refresh against current code.
v3: no change
Signed-off-by: Steve Beattie <steve@nxnw.org>
Acked-by: Seth Arnold <seth.arnold@canonical.com>
Acked-by: John Johansen <john.johansen@canonical.com>
Split the NodeSets used in dfa construction into accepting and
non-accepting sets, and have the proto-state use them.
To reduce memory overhead, each set gains its own "cache" that makes
sure only a single instance of each NodeSet is generated. And since we
now have a cache abstraction, move the relevant stats into it.
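As a hedged sketch of the single-instance cache idea (hypothetical
names, not necessarily the parser's exact types), the cache owns the
canonical copy of every NodeSet and hands back the existing instance
whenever an equal set is inserted again:

    #include <set>
    #include <utility>

    struct Node;
    typedef std::set<Node *> NodeSet;

    struct deref_less_than {
        bool operator()(NodeSet *a, NodeSet *b) const { return *a < *b; }
    };

    class NodeCache {
    public:
        std::set<NodeSet *, deref_less_than> cache;
        unsigned long dup_count;    // example of a stat kept in the cache

        NodeCache() : dup_count(0) {}

        // return the canonical instance for *nodes; take ownership if it
        // is new, free it if an equal set already exists
        NodeSet *insert(NodeSet *nodes)
        {
            std::pair<std::set<NodeSet *, deref_less_than>::iterator,
                      bool> res = cache.insert(nodes);
            if (!res.second) {
                dup_count++;
                delete nodes;
            }
            return *res.first;
        }
    };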
Also refactor the code slightly to make the caches, work_queue, etc.
DFA member variables instead of passing them around as parameters.
The split + caching results in a small reduction in memory use, as the
cost of ProtoState + caching is less than the redundancy that is
eliminated. However, it also results in a small decrease in performance.
Sorry, I know this really should have been split into multiple patches,
but the patch evolved and I got lazy and decided to just not bother
splitting it.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Kees Cook <kees@ubuntu.com>
Create a new ProtoState class that will encapsulate the split; for
this patch it will just contain what was done previously with NodeSet.
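A minimal sketch of what that initially amounts to (illustrative only;
names and details may differ from the actual code): ProtoState simply
wraps the NodeSet and provides the ordering needed to keep using it as
a key when mapping proto-states to States:

    #include <set>

    struct Node;
    typedef std::set<Node *> NodeSet;

    class ProtoState {
    public:
        NodeSet *nodes;

        ProtoState(NodeSet *n) : nodes(n) {}

        // the cache guarantees a single instance per NodeSet, so
        // pointer comparison is enough for ordering
        bool operator<(const ProtoState &rhs) const
        {
            return nodes < rhs.nodes;
        }
    };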
Signed-off-by: John Johansen <john.johansen@canonical.com>
Split hfa into hfa and compressed_hfa files. The hfa portion focuses on
creating and manipulating hfas, while compressed_hfa is used for creating
compressed hfas that can be used/reused at run time with much less memory
than the full-blown hfa.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
Start of splitting regexp.y into logical components instead of the mess
it is today. Split out the expr-tree and parsing components from regexp.y
into expr-tree.x and parse.y, and since regexp.y no longer does parsing,
rename it to hfa.cc.
Some code cleanups snuck their way into this patch, and since I am too
lazy to redo it, I have left them in.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>