Conversation
Signed-off-by: Adam Korczynski <adam@adalogics.com>
gpshead
left a comment
It feels like you may have had an LLM generate these and accepted its categorization plan, which focused on limiting fuzz_ module size and the number of fuzzers generated rather than on which things actually made sense together. No problem with LLM use if so; it makes a ton of sense to use one to generate these. But I think they need some rearranging for sensibility reasons.
I'm not going to be able to review them in detail. For code with this kind of purpose, I'm more likely to trust that it does what it claims within the fuzzing sandbox environment it will run in, and I won't dive into the implementation details of a given test scenario unless it starts producing failure results that do not make sense.
My review attitude does leave room, without further detailed consideration, for some of these to not do useful fuzzing relative to the compute time they are given. But it is intended to unblock.
I have started splitting the fuzzers up per module. This PR adds 8 of the new fuzzers, and I will open further PRs with the remaining batches. Let me know if you would prefer one PR per fuzzer. I have tested these 8 in OSS-Fuzz and they run well.
Adds 8 fuzzers targeting Python modules.
cc @ammaraskar @gpshead for info
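For context, a per-module fuzzer of the kind discussed above might look like the sketch below. This is a hedged illustration, not one of the 8 fuzzers from this PR: the `gzip` target, the caught exceptions, and the harness shape are assumptions, written in the style OSS-Fuzz uses for Python targets via the Atheris engine.

```python
# Hypothetical per-module fuzz harness sketch (gzip chosen purely as an
# example target). OSS-Fuzz runs Python fuzzers with the Atheris engine.
import gzip
import sys

try:
    import atheris  # Present in the OSS-Fuzz build image.
except ImportError:
    atheris = None  # Lets the file be imported outside the fuzzing image.


def TestOneInput(data: bytes) -> None:
    """Exercise a single module (gzip here) with one fuzz input."""
    try:
        gzip.decompress(data)
    except (OSError, EOFError):
        # Malformed input is expected; only crashes and uncaught
        # exceptions should be reported as findings.
        pass


def main() -> None:
    if atheris is None:
        raise SystemExit("Atheris not installed; run inside OSS-Fuzz.")
    atheris.Setup(sys.argv, TestOneInput)
    atheris.Fuzz()


if __name__ == "__main__":
    main()
```

Keeping one harness per module like this makes it easy to attribute coverage and crashes to a single target, which matches the per-module split described in the conversation.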