
Consistency fuzz testing? #2191

Open
MichaelChirico opened this issue Sep 19, 2023 · 4 comments
Labels: automation 🤖 · internals (inner workings of lintr, not user-visible) · testing

Comments

@MichaelChirico (Collaborator)

One concern I have about the consistency cleanups in this release (#2039, #2046, #2190) is that there's no mechanism ensuring future improvements/new linters stay "up to code" & enforce the same consistency.

One approach that seems doable is to have a GHA that fuzzes our test suite. For example, for #2190, that would mean (a rough sketch follows the list):

  • Go through the suite and identify all expect_lint() calls using "function(...)" or \(...) in the content= argument (text matching should be enough, but we could also use getParseData() here...).
  • Randomly (or exhaustively?) switch all such usages to their opposite: s/function/\(/g and s/\(/function/g
  • Run the test suite
  • Any test failure means a linter is inconsistently handling function(...) and \(...)
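A rough sketch of what that swap step could look like (the file glob and the crude text substitution are placeholders, not the actual GHA setup):

```r
# Sketch only: swap `function(` and `\(` in every test file, then re-run the
# suite. A temporary marker avoids re-replacing text that was already swapped.
swap_function_syntax <- function(path) {
  lines <- readLines(path)
  lines <- gsub("function(", "\x01(", lines, fixed = TRUE)
  lines <- gsub("\\(", "function(", lines, fixed = TRUE)
  lines <- gsub("\x01(", "\\(", lines, fixed = TRUE)
  writeLines(lines, path)
}

test_files <- list.files("tests/testthat", pattern = "^test-", full.names = TRUE)
invisible(lapply(test_files, swap_function_syntax))
testthat::test_local()  # any failure = a linter treating the two forms differently
```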

Similar fuzzes could apply in the other cases:

  • Randomly swap %>% and |>
  • Randomly swap $ and @

Some exceptions will need to be carved out, e.g. for cases where only one alternative is intentionally meant to be linted, and we might need some work to make the fuzzing robust to changes in an expected lint message. But overall this approach seems pretty doable to me.

If run with randomness, I'd also run it periodically rather than on push/merge, since a failure might not be caught up front, and surfacing that failure on "someone else's" PR would be bad form.

@MichaelChirico added the internals, testing, and automation 🤖 labels on Sep 19, 2023
@AshesITR (Collaborator)

Interesting idea. Things to consider:

  • Lint metadata may also change, especially expected column ranges.
  • |> and %>% can't be swapped in all contexts; they also have different placeholder semantics (_ vs. .), as the example below shows.
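For illustration (not lintr code), the two placeholders aren't interchangeable by a plain textual swap:

```r
library(magrittr)

# magrittr pipe: the placeholder is `.` and can appear anywhere in the call
mtcars %>% lm(mpg ~ disp, data = .)

# native pipe (R >= 4.2): the placeholder is `_` and must be a named argument
mtcars |> lm(mpg ~ disp, data = _)
```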

@MichaelChirico (Collaborator, Author)

True. Let's see how relevant those turn out to be in practice. Another idea is to allow a # nofuzz escape comment to skip certain tests if they're too onerous to accommodate.
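Something like this hypothetical helper (name and mechanics made up here) could decide which lines the fuzzer is allowed to touch:

```r
# Hypothetical: only lines *not* carrying a trailing "# nofuzz" marker are
# eligible for whatever swap the fuzzer performs.
fuzzable_lines <- function(lines) {
  which(!grepl("#\\s*nofuzz\\s*$", lines))
}
```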

@MichaelChirico (Collaborator, Author)

We can also work on #2221 here by fuzzing the lint metadata & making sure tests fail.
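Illustratively, a metadata mutation could look like this (the regex and helper name are just assumptions for the sketch):

```r
# Illustrative: bump expected column numbers in expect_lint() metadata by one.
# After this mutation the suite *should* fail; if it still passes, the
# metadata isn't really being checked (#2221).
shift_columns <- function(path) {
  lines <- readLines(path)
  lines <- gsub("column_number = (\\d+)", "column_number = \\1 + 1L", lines)
  writeLines(lines, path)
}
```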

@MichaelChirico (Collaborator, Author)

Another idea for the fuzz/metatesting suite, based on #2402: inject comments in random places in the test code and ensure the linters keep working. We'd have to exclude linters where <COMMENT> is meaningful, of course.
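A rough sketch of the injection step (purely illustrative):

```r
# Illustrative: insert a comment-only line at a random position inside a
# multi-line `content` snippet before handing it to expect_lint().
inject_comment <- function(content) {
  lines <- strsplit(content, "\n", fixed = TRUE)[[1]]
  at <- sample(seq_along(lines), 1L)
  paste(append(lines, "# <COMMENT>", after = at), collapse = "\n")
}
```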
