This is a hot-fix release with a regression fix and urgent
support for the latest-n-greatest kernel API change.
Signed-off-by: Pavel Emelyanov <xemul@virtuozzo.com>
Currently this is a static variable in lsm.c, but since kdat
is now cached (and uncached), this value stays zero (no lsm)
if the cache file gets loaded, which is obviously wrong and
breaks restore every time on lsm-enabled hosts.
https://github.com/xemul/criu/issues/323
Signed-off-by: Pavel Emelyanov <xemul@virtuozzo.com>
Signed-off-by: Andrei Vagin <avagin@virtuozzo.com>
This is to remove the function pointer and have only "type"
variable left.
Signed-off-by: Pavel Emelyanov <xemul@virtuozzo.com>
Signed-off-by: Andrei Vagin <avagin@virtuozzo.com>
We need to keep the host LSM mode in kerndat (next patches),
and at the same time the --lsm-profile option needs to correspond
to it.
So split the option handling into two parts -- first keep it
as is, then check for kerndat correspondence.
Signed-off-by: Pavel Emelyanov <xemul@virtuozzo.com>
Signed-off-by: Andrei Vagin <avagin@virtuozzo.com>
"With the recent kernel changes criu should never look outside of start-end
region reported by /proc/maps; and restore doesn't even need to know if a
GROWSDOWN region will actually grow or not, because (iiuc) you do not need
to auto-grow the stack vma during restore, criu re-creates the whole vma
with the same length using MAP_FIXED and it should never write below the
addr returned by mmap(MAP_FIXED)" // Oleg
Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
Signed-off-by: Andrei Vagin <avagin@virtuozzo.com>
In vanilla kernel commit 1be7107fbe18eed3e319a6c3e83c78254b693acb
show_map_vma() no longer adjusts a GROWSDOWN vma's start by the
PAGE_SIZE guard page. Detect this with a simple test and remember
the result in the kdat settings.
Suggested-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
Signed-off-by: Andrei Vagin <avagin@virtuozzo.com>
This is the no-new-features release :) We have several bugfixes,
a memory restore optimization and a little bit more.
Signed-off-by: Pavel Emelyanov <xemul@virtuozzo.com>
This case is legacy: tfds are merged into the epoll entry, but
to make it work we have a separate list of tfds and extra
code in the ->open callback.
Keep the legacy code in one place.
Signed-off-by: Pavel Emelyanov <xemul@virtuozzo.com>
Signed-off-by: Andrei Vagin <avagin@virtuozzo.com>
The current collect helper frees the pb entry if the cinfo has
zero priv_size. For files we'll have zero priv_size (as entries
will be collected by sub-cinfos), while the entry in question
should NOT be freed.
Signed-off-by: Pavel Emelyanov <xemul@virtuozzo.com>
Signed-off-by: Andrei Vagin <avagin@virtuozzo.com>
For the upcoming userfaultfd integration, the skip_pages functionality
is required to find the userfaultfd-requested pages.
Signed-off-by: Adrian Reber <areber@redhat.com>
Signed-off-by: Pavel Emelyanov <xemul@virtuozzo.com>
The previous patch (5a1e1aac) tried to minimize the number of
open()s called when mmap()ing the files. Unfortunately, there
was a mistake and the wrong flags were compared, which resulted
in the whole optimization working only randomly (typically not
working).
Fixing the flags comparison revealed another problem. The
patch in question interacted with the 03e8c417 one, which
caused some vmas to be opened and mmaped much later than the
premap. When hitting the situation where vmas sharing their
fds are partially premapped and partially not, the whole
vm_open sharing became broken in multiple places -- either
a needed fd was not opened, or an unneeded one was left unclosed.
To fix this, the context that tracks whether the fd should
be shared or not should be moved from the collect stage to
the real opening loop. In this case we need to explicitly
know which vmas _may_ share fds (file-private and shared)
with each other, so the sharing knowledge becomes spread
between open_filemap() and its callers. Oh, well...
Signed-off-by: Pavel Emelyanov <xemul@virtuozzo.com>
Signed-off-by: Andrei Vagin <avagin@virtuozzo.com>
In real apps it's typical to have sequences of VMAs with
absolutely the same file mapped. We've seen this at dump time
and fixed multiple openings of map_files links with the
file_borrowed flag.
The restore situation is the same -- the vm_open() call in many
cases re-opens the same path with the same flags. This slows
things down.
To fix this, chain VMAs mapping the same file to each other,
so that only the first one opens the file and only the last
one closes it.
✓ travis-ci: success for mem: Do not re-open files for mappings when not required
Signed-off-by: Pavel Emelyanov <xemul@virtuozzo.com>
In this routine we'll need to compare fdflags, so to
avoid double if-s, let's calculate and set fdflags early.
Signed-off-by: Pavel Emelyanov <xemul@virtuozzo.com>
When a vma we restore doesn't have any pages in the pagemaps,
there's no need to enforce the PROT_WRITE bit on it.
This only applies to non-premmaped vmas.
Signed-off-by: Pavel Emelyanov <xemul@virtuozzo.com>
Signed-off-by: Andrei Vagin <avagin@virtuozzo.com>
Performance experiments show that we spend (relatively) a lot of time
mremap-ing areas from the premap area into their proper places. This time
depends on the task being restored, but for those with many vmas this
can be up to 20%.
The thing is that premapping is only needed to restore cow pages, since
we don't have any API in the kernel to share a page between two or more
anonymous vmas. For non-cowing areas we can mmap() them directly in
place. But for such cases we'll also need to restore the pages' contents
from the pie code.
Doing the whole page-read code from PIE is way too complex (for now), so
the proposal is to optimize the case when we have a single local pagemap
layer. This is what pr.pieok boolean stands for.
v2:
* Fixed ARM compiling (vma addresses formatting)
* Unused tail of premapped area was left in task after restore
* Preadv-ing pages in restorer context worked on corrupted iovs
due to mistakes in pointer arithmetic
* AIO mapping skipped at premap wasn't mapped in pie
* Growsdown VMAs should sometimes (when they are "guarded" by
previous VMA and guard page's contents cannot be restored in
place) be premmaped
* Always premmap for lazy-pages restore
Signed-off-by: Pavel Emelyanov <xemul@virtuozzo.com>
Signed-off-by: Andrei Vagin <avagin@virtuozzo.com>
The next patch will stop premapping some private vmas. In particular --
those that are not COW-ed with anyone. To make this work we need to
distinguish vmas that are not cowed with anyone from those cowed with
children only. Currently both have their vma->parent pointer set to
NULL, so for the former let's introduce a special mark.
Signed-off-by: Pavel Emelyanov <xemul@virtuozzo.com>
Signed-off-by: Andrei Vagin <avagin@virtuozzo.com>
Inherited VMAs don't need the descriptor to work with.
Signed-off-by: Pavel Emelyanov <xemul@virtuozzo.com>
Signed-off-by: Andrei Vagin <avagin@virtuozzo.com>
Collect VMAs into COW-groups. This is done by checking each pstree_item's
VMA list in parallel with the parent one and finding VMAs that have
chances to get COW pages. The vma->parent pointer is used to tie such
areas together.
v2:
* Reworded comment about pvmas
* Check for both vmas to be private, not only child
* Handle helper tasks
Signed-off-by: Pavel Emelyanov <xemul@virtuozzo.com>
Signed-off-by: Andrei Vagin <avagin@virtuozzo.com>
We currently keep a pointer to the parent vma's bitmap, but more info
about the parent will be needed soon.
Signed-off-by: Pavel Emelyanov <xemul@virtuozzo.com>
Signed-off-by: Andrei Vagin <avagin@virtuozzo.com>
Not all private VMAs will be premmaped, so a separate sign of
a VMA being in the premap area is needed.
Signed-off-by: Pavel Emelyanov <xemul@virtuozzo.com>
Signed-off-by: Andrei Vagin <avagin@virtuozzo.com>
The page-read will be needed during the premap stage.
Signed-off-by: Pavel Emelyanov <xemul@virtuozzo.com>
Signed-off-by: Andrei Vagin <avagin@virtuozzo.com>
I've met a missing vvar on the Virtuozzo 7 kernel -- just skip
unmapping it.
TODO: check ia32 C/R with kernel CONFIG_VDSO=n
Signed-off-by: Dmitry Safonov <dsafonov@virtuozzo.com>
Signed-off-by: Andrei Vagin <avagin@virtuozzo.com>
I deleted it previously because I searched for the
vdso vma in the [vdso/vvar] vma pair by its magic header,
so I needed to suppress this error.
Since then, I've reworked how the 32-bit vdso is parsed
and now we don't need to search for it; even more, we parse it
only once in the criu helper.
Signed-off-by: Dmitry Safonov <dsafonov@virtuozzo.com>
Signed-off-by: Andrei Vagin <avagin@virtuozzo.com>
On an x86_64 defconfig it's =m, so if you boot the kernel without an
initramfs in qemu, you will see this.
[xemul: split long line]
Fixes: #292
Signed-off-by: Dmitry Safonov <dsafonov@virtuozzo.com>
Signed-off-by: Andrei Vagin <avagin@virtuozzo.com>
I was adapting CRIU's ia32 support for building with Koji,
and found that Koji can't build x86_64 packages with
i686 libs installed.
While at it, I found that the i686 libraries requirement is
no longer valid since I deleted the second parasite.
Drop the feature test for i686 libs and add a test for gcc.
That will effectively test whether gcc can compile 32-bit code
and catch the bug with debian's gcc (#315).
Signed-off-by: Dmitry Safonov <dsafonov@virtuozzo.com>
Signed-off-by: Andrei Vagin <avagin@virtuozzo.com>
I need to add a feature test written in assembly to check
if the feature can be compiled.
Add a make function for this purpose.
Signed-off-by: Dmitry Safonov <dsafonov@virtuozzo.com>
Signed-off-by: Andrei Vagin <avagin@virtuozzo.com>
After commit 2e8970beda ("mount: create a mount point
for the root mount namespace in the roots yard"), the top
of the mount_info tree points to the fake mount.
So, when we're looking for an appropriate place for
binfmt_misc, we can't find "xxx/proc/sys/fs/binfmt_misc".
Fix that by finding the real NS_ROOT manually.
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Signed-off-by: Andrei Vagin <avagin@virtuozzo.com>
The routine in question just sets up the mutex to access
/dev/ptmx. This initialization can be done when we collect
a single tty.
✓ travis-ci: success for Sanitize initialization bits
Signed-off-by: Pavel Emelyanov <xemul@virtuozzo.com>
No need to schedule both post-actions; we can merge them. This
also sanitizes the "void *unused" arguments for both.
Signed-off-by: Pavel Emelyanov <xemul@virtuozzo.com>
This routine just initializes the remap open lock,
and there's already the code that initializes the
whole remap engine.
Re-arrange this part.
Signed-off-by: Pavel Emelyanov <xemul@virtuozzo.com>
Now this lock is only needed to serialize the remap open
code, so name it accordingly.
Signed-off-by: Pavel Emelyanov <xemul@virtuozzo.com>
Ghost remaps allocate the path with shmalloc. Add a comment
explaining why.
Signed-off-by: Pavel Emelyanov <xemul@virtuozzo.com>
We used to have a users counter on the remap which was
incremented each time this routine was called. Nowadays
remaps are managed without refcounting and we no
longer need the global mutex protection for it.
Signed-off-by: Pavel Emelyanov <xemul@virtuozzo.com>
There's no need for a separate call to prepare_procfs_remaps().
All remaps are collected one step earlier, so we can do
open_remap_dead_process() right away.
Also rename the latter routine.
Signed-off-by: Pavel Emelyanov <xemul@virtuozzo.com>
This just moves all the deprecated code into one place.
✓ travis-ci: success for Sanitize fsnotify legacy code
Signed-off-by: Pavel Emelyanov <xemul@virtuozzo.com>
The lists are only needed to collect marks (deprecated) into
notify objects. The latter are stored in the fdesc hash, so
for this legacy case we can find them there.
Signed-off-by: Pavel Emelyanov <xemul@virtuozzo.com>
Marks images were merged into the regular ones in 1.3.
Signed-off-by: Pavel Emelyanov <xemul@virtuozzo.com>