
Finish evaluation code #11

Open
matveypashkovskiy opened this issue Aug 27, 2020 · 1 comment

Comments

@matveypashkovskiy
Contributor

@guotin I'm not sure this is still relevant. If you have plans, please describe them here; if not and the code is in good shape, just close the issue.

@guotin
Contributor

guotin commented Aug 27, 2020

Current status

  • Code in place to do the following:
    • Random remove test
      1. store project information in the database (name, commit, mapping.db size, full test-suite size)
      2. remove a single random line
      3. detect the resulting change (diff)
      4. find tests at line-level granularity
      5. find tests at file-level granularity
      6. execute the line-level, file-level and full test suites, and extract the pytest exit codes
      7. store the results in the database: all exit codes, the diff, and the test-suite sizes
    • Build the mapping by iterating through the commits of a project
      • Perhaps this could be used for evaluation somehow
      • Issues: dependencies and project structure change between commits. No solution or plans to fix this at this time.
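The random-remove steps above can be sketched roughly as follows. This is a minimal sketch, not the actual evaluation code: `remove_random_line` and `selection_missed_fault` are hypothetical helper names, and the only assumed API is pytest's exit-code convention (0 = all tests passed, 1 = some tests failed):

```python
import random


def remove_random_line(source: str, rng: random.Random) -> tuple[str, int]:
    """Replace one random non-blank line with a bare newline and
    return the mutated source plus the 1-based line number removed."""
    lines = source.splitlines(keepends=True)
    candidates = [i for i, line in enumerate(lines) if line.strip()]
    idx = rng.choice(candidates)
    lines[idx] = "\n"
    return "".join(lines), idx + 1


def selection_missed_fault(full_exit: int, selected_exit: int) -> bool:
    """pytest exit codes: 0 = all tests passed, 1 = some tests failed.
    A miss means the full suite caught the injected fault (exit 1)
    while the selected subset did not (exit 0)."""
    return full_exit == 1 and selected_exit == 0
```

After mutating a file, the harness would run the line-level, file-level and full suites and pass each pair of exit codes to `selection_missed_fault` before storing the results.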

Future

  • Remove more random lines (a configurable number) and compare the data as with single-line removal
  • Is replacing a line of code with a bare newline (\n) a valid line-removal strategy? It causes many faults that prevent the code from running at all.
  • Should individual test failures be collected? Currently the random-removal test compares pytest exit codes (whether the full suite found a fault when the test-selection suites did not) and the sizes of the test sets. In cases where test selection fails to find any tests, would it be interesting to know how many individual tests failed?
  • Collect timing data, for example how much time is saved by running only the selected tests instead of all of them
  • Also open to any suggestions
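For the timing idea, one possible shape is the sketch below. It assumes nothing from the existing code; the `run_suite` callable and the result-dict layout are made up for illustration:

```python
import time
from typing import Callable, Sequence


def compare_runtimes(run_suite: Callable[[Sequence[str]], int],
                     selected: Sequence[str],
                     full: Sequence[str]) -> dict:
    """Run the selected subset and the full suite once each,
    recording wall-clock time for both and the absolute time saved."""
    start = time.perf_counter()
    selected_exit = run_suite(selected)
    selected_time = time.perf_counter() - start

    start = time.perf_counter()
    full_exit = run_suite(full)
    full_time = time.perf_counter() - start

    return {
        "selected_exit": selected_exit,
        "full_exit": full_exit,
        "selected_time": selected_time,
        "full_time": full_time,
        "time_saved": full_time - selected_time,
    }
```

The dict could then be stored in the same database rows as the exit codes, so time saved can be analysed alongside fault-detection misses.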

@matveypashkovskiy matveypashkovskiy added this to the 1.[X].0 evaluation milestone Sep 14, 2020
@matveypashkovskiy matveypashkovskiy removed this from the 1.[X].0 evaluation milestone Feb 22, 2021