
all: do comparison substitution for extra coverage #4844

Open · a-nogikh wants to merge 1 commit into master from features/comparisons-refactor

Conversation
Conversation

a-nogikh (Collaborator)

Collect comparison arguments for extra coverage.
For that, we now need to start remote coverage collection for every forked program.

Substitute the arguments into all calls that have remote_cover set.


Do we need any special handling for the SYZ_EXECUTOR_USES_FORK_SERVER == 0 case?
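
For readers unfamiliar with kcov's comparisons mode, the sketch below shows the record layout that comparison collection produces, i.e. the type/arg1/arg2/PC tuples whose arguments this PR substitutes into calls with remote_cover set. This is illustrative C, not the executor's actual code: the buffer size is arbitrary, and it uses the plain (non-remote) KCOV_ENABLE path for simplicity.

```c
/* Minimal sketch of reading kcov comparison records (KCOV_TRACE_CMP mode).
 * Illustrative only; syz-executor differs in detail. */
#include <fcntl.h>
#include <linux/kcov.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

#define COVER_SIZE (64 << 10)    /* arbitrary buffer size for this sketch */
#define KCOV_WORDS_PER_CMP 4     /* each record: type, arg1, arg2, PC */

int main(void)
{
	int fd = open("/sys/kernel/debug/kcov", O_RDWR);
	if (fd == -1)
		return 1;
	if (ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE))
		return 1;
	unsigned long* cover =
	    mmap(NULL, COVER_SIZE * sizeof(unsigned long),
		 PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (cover == MAP_FAILED)
		return 1;
	/* Enable comparisons collection instead of plain PC coverage. */
	if (ioctl(fd, KCOV_ENABLE, KCOV_TRACE_CMP))
		return 1;
	__atomic_store_n(&cover[0], 0ul, __ATOMIC_RELAXED);

	read(-1, NULL, 0); /* some traced syscall */

	/* cover[0] holds the number of records; each record occupies
	 * KCOV_WORDS_PER_CMP words. The type encodes operand size and
	 * whether one operand is a constant (KCOV_CMP_CONST, KCOV_CMP_SIZE). */
	unsigned long n = __atomic_load_n(&cover[0], __ATOMIC_RELAXED);
	for (unsigned long i = 0; i < n; i++) {
		unsigned long type = cover[i * KCOV_WORDS_PER_CMP + 1];
		unsigned long arg1 = cover[i * KCOV_WORDS_PER_CMP + 2];
		unsigned long arg2 = cover[i * KCOV_WORDS_PER_CMP + 3];
		printf("cmp type=%#lx args=%#lx %#lx\n", type, arg1, arg2);
	}
	if (ioctl(fd, KCOV_DISABLE, 0))
		return 1;
	munmap(cover, COVER_SIZE * sizeof(unsigned long));
	close(fd);
	return 0;
}
```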

@a-nogikh requested a review from dvyukov on May 28, 2024 16:01
@a-nogikh force-pushed the features/comparisons-refactor branch from 348f228 to d979a81 on May 28, 2024 17:05
dvyukov previously approved these changes May 31, 2024

codecov bot commented May 31, 2024

Codecov Report

Attention: Patch coverage is 0%, with 24 lines in your changes missing coverage. Please review.

Project coverage is 61.2%. Comparing base (34889ee) to head (1dcae20).
Report is 2 commits behind head on master.

Additional details and impacted files:
File | Coverage Δ
pkg/ipc/ipc.go | 47.6% <0.0%> (-0.2%) ⬇️
pkg/fuzzer/job.go | 21.8% <0.0%> (-0.8%) ⬇️

... and 2 files with indirect coverage changes

a-nogikh (Collaborator, Author) commented May 31, 2024

Hmm, the backward incompatibility is worse than I thought.

Since we may lose a current->kcov reference (the fix will be posted soon), we're not just leaking remote kcov handles. If it were only that, we could address it by trying out a number of different handles for every procid.

But we're also unable to reuse the same *kcov object since

And we cannot obtain a new /dev/kcov handle inside execute_one() -- when we're there, we're already in a sandboxed environment.

UPD: We might have to return to keeping a single remote kcov instance per syz-executor process. Maybe we could then open two remote kcov instances: one for normal coverage and one for comparisons? The only real downside is that we'd also have to keep two memory buffers for them.
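
To make the two-instance idea above concrete, here is a rough sketch of one remote kcov instance per trace mode, each with its own fd and buffer. The helper names and handle values are hypothetical, and whether the kernel side would usefully feed both instances is exactly the open question in this thread, so treat this as setup plumbing only.

```c
/* Rough sketch: one remote kcov instance per trace mode. Placeholder
 * handles; the "two memory buffers" downside mentioned above shows up as
 * two separate mmap'ed areas. */
#include <fcntl.h>
#include <linux/kcov.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

#define COVER_SIZE (64 << 10)

struct remote_cov {
	int fd;
	unsigned long* area;
};

static int remote_cov_open(struct remote_cov* cov, unsigned int trace_mode,
			   unsigned long long handle)
{
	cov->fd = open("/sys/kernel/debug/kcov", O_RDWR);
	if (cov->fd == -1)
		return -1;
	if (ioctl(cov->fd, KCOV_INIT_TRACE, COVER_SIZE))
		return -1;
	cov->area = mmap(NULL, COVER_SIZE * sizeof(unsigned long),
			 PROT_READ | PROT_WRITE, MAP_SHARED, cov->fd, 0);
	if (cov->area == MAP_FAILED)
		return -1;
	struct kcov_remote_arg* arg = calloc(1, sizeof(*arg));
	if (!arg)
		return -1;
	arg->trace_mode = trace_mode;  /* KCOV_TRACE_PC or KCOV_TRACE_CMP */
	arg->area_size = COVER_SIZE;
	arg->num_handles = 0;
	arg->common_handle = handle;   /* placeholder per-proc handle */
	int ret = ioctl(cov->fd, KCOV_REMOTE_ENABLE, arg);
	free(arg);
	return ret;
}

/* One instance for PC coverage, one for comparisons. */
static struct remote_cov pc_cov, cmp_cov;

static int remote_cov_open_both(unsigned long long pc_handle,
				unsigned long long cmp_handle)
{
	if (remote_cov_open(&pc_cov, KCOV_TRACE_PC, pc_handle))
		return -1;
	return remote_cov_open(&cmp_cov, KCOV_TRACE_CMP, cmp_handle);
}
```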

dvyukov (Collaborator) commented Jun 3, 2024

Can we create the remote kcov on a thread that never executes? I.e. not the main thread, but rather an additional special thread that exists just for that purpose?

a-nogikh (Collaborator, Author) commented Jun 3, 2024

> Can we create the remote kcov on a thread that never executes? I.e. not the main thread, but rather an additional special thread that exists just for that purpose?

Interesting!
Though it won't be a thread that truly never executes, since it will at least have to execute the KCOV_REMOTE_ENABLE ioctl. We can try to, e.g., block the thread after that ioctl. There's still a chance to catch an irq in between, but a much smaller one.

dvyukov (Collaborator) commented Jun 3, 2024

> Can we create the remote kcov on a thread that never executes? I.e. not the main thread, but rather an additional special thread that exists just for that purpose?
>
> Interesting! Though it won't be a thread that truly never executes, since it will at least have to execute the KCOV_REMOTE_ENABLE ioctl. We can try to, e.g., block the thread after that ioctl. There's still a chance to catch an irq in between, but a much smaller one.

KCOV_REMOTE_ENABLE is done once per top-level process, so it should reduce the probability by several orders of magnitude.
If it can be mmapped by a different thread, then we could even create them in syz-fuzzer (I wouldn't bother now; we can do it in the rewritten executor runner).
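
A minimal sketch of the dedicated-thread idea discussed above: a helper thread opens kcov, issues KCOV_REMOTE_ENABLE once, and then blocks. The handle and buffer size are placeholders, and error handling plus the signaling that tells the rest of the executor when the buffer is ready are omitted.

```c
/* Sketch of the "thread that (almost) never executes" idea: enable remote
 * kcov on a dedicated thread, then block it forever. Placeholder handle,
 * no error handling, no ready-signaling. */
#include <fcntl.h>
#include <linux/kcov.h>
#include <pthread.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

#define COVER_SIZE (64 << 10)

struct remote_kcov {
	unsigned int trace_mode;    /* KCOV_TRACE_PC or KCOV_TRACE_CMP */
	unsigned long long handle;  /* placeholder remote handle */
	unsigned long* area;        /* read by other threads */
};

static void* remote_kcov_thread(void* p)
{
	struct remote_kcov* kcov = p;
	int fd = open("/sys/kernel/debug/kcov", O_RDWR);
	ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE);
	kcov->area = mmap(NULL, COVER_SIZE * sizeof(unsigned long),
			  PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	struct kcov_remote_arg* arg = calloc(1, sizeof(*arg));
	arg->trace_mode = kcov->trace_mode;
	arg->area_size = COVER_SIZE;
	arg->common_handle = kcov->handle;
	ioctl(fd, KCOV_REMOTE_ENABLE, arg);
	free(arg);
	/* From here on the thread only sleeps, so the window in which an
	 * irq could be caught on it is limited to the code above. */
	for (;;)
		pause();
	return NULL;
}

static pthread_t start_remote_kcov(struct remote_kcov* kcov)
{
	pthread_t th;
	pthread_create(&th, NULL, remote_kcov_thread, kcov);
	return th;
}
```

As noted above, KCOV_REMOTE_ENABLE runs once per top-level process, so the remaining race window is tiny.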
