YJIT: implement call fuzzer script #9129
Conversation
Attempt to detect bugs in YJIT call implementation.
This is looking pretty good!
Alternatively, you could make the method return its parameters as an array and assert that. Gives you more flexibility as to what you could pass, as you mentioned you might want to do.
I think the current approach should expose most problems, but if we want to compare against the interpreter later, it seems easy to add:

```ruby
result = testbed.call
fork do
  RubyVM::YJIT.enable
  yjit_result = testbed.call
  # send result to parent
end
Process.wait
assert_eq(result, yjit_result)
```
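One way to fill in the "send result to parent" step from the sketch above is a pipe plus `Marshal`, so the parent can compare the child's YJIT result against its own interpreted result. This is a hedged sketch, not code from the PR; `testbed` here is a stand-in for a generated test method, and the `defined?` guard is an assumption to cover builds where YJIT is unavailable.

```ruby
# Differential check: run testbed interpreted in the parent, then with
# YJIT enabled in a forked child, and compare the two results.
testbed = proc { (1..10).map { |i| i * i }.sum }

result = testbed.call  # interpreted result in the parent

reader, writer = IO.pipe
pid = fork do
  reader.close
  # Assumption: guard in case this build has no YJIT support.
  RubyVM::YJIT.enable if defined?(RubyVM::YJIT.enable)
  # Serialize the child's result back to the parent over the pipe.
  writer.write(Marshal.dump(testbed.call))
  writer.close
end
writer.close
yjit_result = Marshal.load(reader.read)
reader.close
Process.wait(pid)

raise "mismatch: #{result} vs #{yjit_result}" unless result == yjit_result
```

Marshal handles the plain values a fuzzer like this would return (integers, strings, arrays), which keeps the comparison simple.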
We've seen the number of local variables in a method causing bugs, so we could fuzz that by adding unused locals. You could also add block-related constructs, like defining the method using define_method.
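The unused-locals idea above could look something like the following. This is a hypothetical sketch, not code from the PR: `fuzz_target` and the `unused#{i}` names are invented for illustration, and the generated method simply returns its arguments so a mispassed argument is visible.

```ruby
# Generate a method whose body declares a random number of unused locals,
# then verify that the arguments still round-trip correctly.
num_locals = rand(0..30)
unused = (0...num_locals).map { |i| "  unused#{i} = #{i}" }.join("\n")

src = <<~RUBY
  def fuzz_target(a, b, c)
  #{unused}
    [a, b, c]
  end
RUBY

eval(src)  # a fuzzer script is one of the few legitimate uses of eval

unless fuzz_target(1, :two, "three") == [1, :two, "three"]
  raise "argument mix-up with #{num_locals} locals"
end
```

Varying `num_locals` across iterations exercises different local-table sizes, which is the dimension the comment suggests fuzzing.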
Thanks for the suggestions. I'm making notes and will likely add some of these :) I'm in the process of refactoring the code to make the call fuzzer more maintainable and powerful.
Compute checksum of arguments
I completed a first version of this script. I refactored it in a way that I think makes it easier to understand and extend with new functionality. So far, in the experiments I've done, no errors have been detected, which is encouraging. Next I would like to add the ability to define methods using define_method in block form, as Alan suggested (can you link to some documentation/examples @XrXr?). Another thing I'm not yet really testing is rest/splat arguments.
This is the method doc for Module#define_method: https://docs.ruby-lang.org/en/3.2/Module.html#method-i-define_method
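For reference, the block form from the linked docs looks like this; the block's parameters become the method's parameters, including optional and rest parameters. `Testbed` and `checksum` are hypothetical names for illustration.

```ruby
# Module#define_method in block form: the block parameter list
# (required, optional with default, and splat) defines the signature.
class Testbed
  define_method(:checksum) do |a, b, c = 3, *rest|
    a + b + c + rest.sum
  end
end

Testbed.new.checksum(1, 2)          # => 6
Testbed.new.checksum(1, 2, 3, 4, 5) # => 15
```

Because block parameters go through a different argument-setup path than `def`, fuzzing methods defined this way exercises distinct YJIT code.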
Ty Alan!
Does this take a lot of time to run? If not, we could run this on CI too. |
Depends on how many iterations you want to run. Right now it runs 80 iterations per second with YJIT, so it would depend on what our time budget is in terms of minutes of execution time. The other factor is that the tests are randomly generated, so non-deterministic by default. We could make them deterministic by setting the random seed if we prefer. So far this test has not detected any new bugs, though I may still try to test more things next week.
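Making the runs deterministic for CI could be as simple as seeding one `Random` instance and printing the seed so failures are replayable. A sketch under stated assumptions: `FUZZ_SEED` is a hypothetical environment variable name, not something the PR defines.

```ruby
# Take the seed from the environment if given, otherwise generate one,
# and print it so any failing run can be reproduced exactly.
seed = Integer(ENV.fetch("FUZZ_SEED", Random.new_seed.to_s))
rng  = Random.new(seed)
puts "fuzzing with seed=#{seed}"

# Route every random choice through `rng`; re-running with the same
# FUZZ_SEED then replays the identical sequence of generated tests.
sample = Array.new(5) { rng.rand(100) }
```

CI could pin `FUZZ_SEED` for determinism, while local runs keep fresh seeds for coverage.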
Attempt to detect bugs in YJIT call implementation.
This is very basic at this point and so I'll make it a draft PR, but would like some suggestions on where to go next @XrXr
I was thinking I could compute the sum and/or product of the arguments to make sure that the generated methods produce the result we expect, signifying that arguments were passed correctly. I could also pass things like strings or objects that require heap pointers, to make sure that we track types correctly.
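The sum/product idea above can be strengthened slightly by weighting each parameter, so that swapped arguments (not just dropped ones) change the result. This is a hedged illustration of the checksum concept; `checksummed` is an invented name, not the PR's generated code.

```ruby
# Combine every parameter with a distinct weight: a plain sum would
# report the same value for (1, 2, 3) and (3, 2, 1), but weights won't.
def checksummed(a, b, c)
  a * 1 + b * 2 + c * 3
end

expected = 1 * 1 + 2 * 2 + 3 * 3
raise "call bug suspected" unless checksummed(1, 2, 3) == expected
```

The same trick extends to heap objects, e.g. concatenating distinct strings in parameter order.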
I'll try to make it as dynamic as possible, and add more tricky edge cases (suggestions welcome), but somewhat afraid that this script will become complicated and hard to follow fairly quickly 😅
Another potential approach would be to fuzz by concatenating random tokens, forking both a plain CRuby interpreter and a YJIT process, and checking that the output from both is the same. That is less efficient in terms of compute and time, though.
Sample output: