
Example Results

Alan J. Sastre edited this page Oct 25, 2016 · 13 revisions

Here are a few examples of sedcat results:

  1. Filter by measures:

*(Screenshot: SedcatMeasuresFilter)*

From left to right, the first column lists real projects hosted on GitHub on which the plugin has been tested. The next six columns show the input metrics for the sedcat expert systems. The last two columns show the sedcat output metrics: quality of unit testing and improvement actions.


_The input metrics chosen are:_

UTS SUCCESS: Unit Test Success, the main unit testing metric; it represents the percentage of unit tests that have passed.

COVERAGE: Unit Test Coverage, another key unit testing metric; it shows how much of the code is reached by the tests. It is important to understand that coverage does not mean that all reached code is actually tested. That is why, even though this metric and the one above are important, they do not provide all the information that can be extracted from the context.

MUTATIONS COVERAGE: obtained from Pitest tool reports. This metric shows how much of the reached code has really been tested.
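The difference between line coverage and mutation coverage can be sketched with a toy example (hypothetical code, not part of sedcat): a test that merely executes code produces line coverage, but only a test with real assertions kills a mutant.

```python
def add(a, b):
    return a + b

def add_mutant(a, b):
    # Hypothetical Pitest-style mutant: '+' replaced by '-'
    return a - b

def weak_test(fn):
    """Executes the code (full line coverage) but asserts nothing."""
    fn(2, 3)
    return True  # "passes" for original and mutant alike -> mutant survives

def strong_test(fn):
    """Asserts on the result, so the mutant makes it fail."""
    return fn(2, 3) == 5

# weak_test "passes" on both the original and the mutant: 100% line
# coverage, yet 0% mutation coverage. strong_test kills the mutant.
```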

UTS: the number of unit tests.

LOC: the number of lines of code. This metric is contrasted with the number of tests: a project with many lines of code and a low number of tests is rated negatively by sedcat, and the number of tests becomes a parameter to fix.

CMPX/CLASS: the average complexity per class in the project. Complexity is a factor to consider when doing unit testing: a project with high complexity requires harder tests, and those tests are more difficult to maintain. If the complexity exceeds the default threshold (30), it becomes a parameter to fix. The user can change this default value in the general settings.
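How these last two metrics flag "parameters to fix" can be sketched roughly as follows. The complexity threshold of 30 comes from the text above; the LOC-per-test limit and the function names are illustrative assumptions, not sedcat's actual rules.

```python
DEFAULT_COMPLEXITY_THRESHOLD = 30  # sedcat default, user-configurable

def parameters_to_fix(loc, num_tests, avg_complexity,
                      complexity_threshold=DEFAULT_COMPLEXITY_THRESHOLD,
                      loc_per_test_limit=200):  # hypothetical ratio limit
    """Return the list of parameters a project should fix first."""
    issues = []
    # Many lines of code with few tests is rated negatively.
    if num_tests == 0 or loc / num_tests > loc_per_test_limit:
        issues.append("number of tests")
    # Average complexity per class above the threshold is flagged.
    if avg_complexity > complexity_threshold:
        issues.append("complexity")
    return issues
```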


_The sedcat output metrics are:_

QUALITY OF UNIT TESTING: combining the input metrics explained above, sedcat computes a quality percentage from rule bases, using trapezoidal and singleton membership functions depending on the input metric type. In other words, on each analysis the input metric values are dynamically transformed into linguistic labels and classified according to the rule bases.
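The fuzzification step can be illustrated with a minimal sketch: a trapezoidal membership function plus a set of linguistic labels. The label names and breakpoints below are made up for illustration; the real membership functions and rule bases are defined in sedcat's Xfuzzy specifications.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside (a, d), 1 on the plateau [b, c]."""
    if b <= x <= c:
        return 1.0
    if x <= a or x >= d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)   # rising edge
    return (d - x) / (d - c)       # falling edge

# Hypothetical labels for a percentage metric (0-100); illustrative only.
LABELS = {
    "LOW":    (0, 0, 20, 40),
    "MEDIUM": (20, 40, 60, 80),
    "HIGH":   (60, 80, 100, 100),
}

def fuzzify(value):
    """Transform a crisp metric value into degrees per linguistic label."""
    return {label: trapezoid(value, *pts) for label, pts in LABELS.items()}
```

A value of 30, for instance, belongs partly to LOW and partly to MEDIUM; the rule bases then combine such labels across all input metrics into a quality percentage.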

IMPROVEMENT ACTIONS: in the same way as quality of unit testing, sedcat uses another expert system that computes actions. In this case the system output is a number that corresponds to an established singleton set of possible actions. The message associated with each number is read from a properties file, which can be seen in expertSystemActions.properties.
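The number-to-message lookup can be sketched as follows. The keys and messages below are invented to mirror the idea of a properties file; the real entries live in sedcat's expertSystemActions.properties.

```python
# Hypothetical properties-file contents; the real keys and messages
# are defined in sedcat's resource file, not here.
PROPERTIES_TEXT = """\
action.1=Increase the number of unit tests
action.2=Improve mutation coverage of achieved code
action.3=Reduce average complexity per class
"""

def load_actions(text):
    """Parse key=value lines into {action_number: message}."""
    actions = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        key, _, message = line.partition("=")
        actions[int(key.rsplit(".", 1)[1])] = message
    return actions

def action_message(number, actions):
    """Map the expert system's numeric output to a human-readable action."""
    return actions.get(number, "Unknown action")
```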

The expert systems have been developed with Xfuzzy, a fuzzy system development environment.


Regarding the projects in the image:

jsoup: coverage is 75.3% and mutations coverage is 58.7%, which shows that part of the reached code is not tested. The remaining parameters are correct, so quality is 45%.

libGDX Parent: coverage is 0.7% and mutations coverage is 0.6%, and in addition the number of tests is low relative to the lines of code. Quality reaches 5% only because unit test success is 100% and complexity is OK.

Moshi (Parent): coverage is 78.5% and mutations coverage is 75.8%, the number of tests is acceptable relative to the total lines of code, and complexity is OK. Quality in this case is 87.0%, a good value!

mustache.java: unit test success and coverage have high values, but mutations coverage is 2.6%. This means that almost none of the reached code is actually tested! Complexity is OK, and the quality result is 20.0%.

Okio (Parent): high values for success, coverage, and mutations coverage. But in this case complexity is very high, so quality does not rate high (44.7%) and complexity should be the first parameter to fix.

Retrofit (Parent): quality is similar to the previous project, but the parameter values are different: unit test success and coverage are lower, while complexity is OK.

ScribeJava OAuth Library: this case is similar to the mustache.java project results.
