README.rst (19 additions & 13 deletions)
@@ -27,7 +27,7 @@ CompStats
Collaborative competitions have gained popularity in the scientific and technological fields. These competitions involve defining tasks, selecting evaluation scores, and devising result verification methods. In the standard scenario, participants receive a training set and are expected to provide a solution for a held-out dataset kept by the organizers. An essential challenge for organizers arises when comparing algorithms' performance, assessing multiple participants, and ranking them. Statistical tools are often used for this purpose; however, traditional statistical methods often fail to capture decisive differences between systems' performance. CompStats implements an evaluation methodology for statistically analyzing competition results. CompStats offers several advantages, including off-the-shelf comparisons with correction mechanisms and the inclusion of confidence intervals.
- To illustrate the use of `CompStats`, the following snippets show an example. The instructions load the necessary libraries, including the one to obtain the problem (e.g., digits), three different classifiers, and the last line is the score used to measure the performance and compare the algorithm.
+ To illustrate the use of `CompStats`, the following snippets show an example. The instructions load the necessary libraries, including the one to obtain the problem (e.g., digits), four different classifiers, and the last line imports the score used to measure performance and compare the algorithms.
>>> from sklearn.svm import LinearSVC
>>> from sklearn.naive_bayes import GaussianNB
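The snippet above is truncated in the diff; a minimal, sklearn-only sketch of the setup it describes might look as follows. The names `X_train`, `X_val`, `y_train`, `y_val`, and the use of `train_test_split` are assumptions for illustration, not CompStats code.

```python
# Sketch of the setup implied by the snippet (variable names are assumptions).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

# Obtain the problem (digits) and hold out a validation set.
X, y = load_digits(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# First classifier ("alg-1" in the text below).
svc = LinearSVC().fit(X_train, y_train)
macro_f1 = f1_score(y_val, svc.predict(X_val), average="macro")
print(f"{macro_f1:.4f}")
```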
@@ -51,10 +51,10 @@ Once the predictions are available, it is time to measure the algorithm's perfor
- The previous code shows the macro-f1 score and, in parenthesis, its standard error. The actual performance value is stored in the `statistic` function.
+ The previous code shows the macro-f1 score and, in parentheses, its standard error. These values are stored in the attributes `statistic` and `se`.
- >>> score.statistic
- 0.9434834454375508
+ >>> score.statistic, score.se
+ (0.9521479775366307, 0.009717884979482313)
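A standard error such as `se` is typically obtained by bootstrapping the validation predictions. The following sketch reproduces that idea with plain numpy and sklearn; the resampling scheme and replicate count are assumptions for illustration, not CompStats' actual implementation.

```python
# Hypothetical bootstrap estimate of a macro-f1 standard error;
# CompStats' own implementation may differ.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

X, y = load_digits(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
y_pred = LinearSVC().fit(X_train, y_train).predict(X_val)

rng = np.random.default_rng(0)
n = y_val.shape[0]
# Resample validation indices with replacement and recompute the score.
samples = [f1_score(y_val[idx], y_pred[idx], average="macro")
           for idx in (rng.integers(0, n, n) for _ in range(500))]
statistic = f1_score(y_val, y_pred, average="macro")
se = float(np.std(samples))
print(f"{statistic:.4f} ({se:.4f})")
```

The point estimate is computed on the original validation set, while the spread of the bootstrap replicates serves as the standard error.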
Continuing with the example, let us assume that one wants to test another classifier on the same problem, in this case, a random forest, as can be seen in the following two lines. The second line predicts the validation set and adds the predictions to the analysis.
@@ -63,28 +63,34 @@ Continuing with the example, let us assume that one wants to test another classi
<Perf(score_func=f1_score)>
Statistic with its standard error (se)
statistic (se)
- 0.9655 (0.0077) <= Random Forest
- 0.9435 (0.0099) <= alg-1
+ 0.9720 (0.0076) <= Random Forest
+ 0.9521 (0.0097) <= alg-1
- Let us incorporate another prediction, now with the Naive Bayes classifier, as seen below.
+ Let us incorporate two more predictions, now with the Naive Bayes classifier and Histogram Gradient Boosting, as seen below.
>>> nb = GaussianNB().fit(X_train, y_train)
>>> score(nb.predict(X_val), name='Naive Bayes')
<Perf(score_func=f1_score)>
Statistic with its standard error (se)
statistic (se)
- 0.9655 (0.0077) <= Random Forest
- 0.9435 (0.0099) <= alg-1
- 0.8549 (0.0153) <= Naive Bayes
+ 0.9759 (0.0068) <= Hist. Grad. Boost. Tree
+ 0.9720 (0.0076) <= Random Forest
+ 0.9521 (0.0097) <= alg-1
+ 0.8266 (0.0159) <= Naive Bayes
- The final step is to compare the performance of the three classifiers, which can be done with the `difference` method, as seen next.
+ The performance, its confidence interval (5%), and a statistical comparison (5%) between the best-performing system and the rest of the algorithms are depicted in the following figure.
+
+ >>> score.plot()
+
+ The final step is to compare the performance of the four classifiers, which can be done with the `difference` method, as seen next.
>>> diff = score.difference()
>>> diff
<Difference>
- difference p-values w.r.t Random Forest
+ difference p-values w.r.t Hist. Grad. Boost. Tree
0.0000 <= Naive Bayes
- 0.0120 <= alg-1
+ 0.0100 <= alg-1
+ 0.3240 <= Random Forest
The class `Difference` has the `plot` method that can be used to depict the difference with respect to the best.
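The p-values shown by `difference` can be read as paired bootstrap tests of each system against the best one. The following toy numpy sketch illustrates the idea with synthetic 0/1 per-example scores; the data and resampling details are assumptions, not CompStats' code.

```python
# Toy paired-bootstrap p-value; per-example scores are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 450
# Synthetic 0/1 correctness indicators for two systems; in practice these
# would come from comparing validation predictions against gold labels.
best = rng.random(n) < 0.97
other = rng.random(n) < 0.93

def p_value(a, b, n_boot=2000, rng=rng):
    """One-sided bootstrap p-value that system `a` outperforms `b`."""
    # Resample the paired per-example scores and look at the mean difference.
    idx = rng.integers(0, len(a), size=(n_boot, len(a)))
    deltas = a[idx].mean(axis=1) - b[idx].mean(axis=1)
    # Fraction of resamples in which `a` is not better than `b`.
    return float((deltas <= 0).mean())

print(p_value(best, other))
```

Pairing matters: both systems are evaluated on the same resampled examples, so example difficulty cancels out of the difference.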
docs/source/metrics_api.rst (26 additions & 19 deletions)
@@ -27,7 +27,7 @@
:py:mod:`CompStats.metrics` aims to facilitate performance measurement (with standard errors and confidence intervals) and statistical comparisons between algorithms on a single problem, wrapping the different scores and loss functions found in :py:mod:`~sklearn.metrics`.
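The wrapping idea can be pictured with a small stand-in class. This is only a sketch under assumed semantics (the real :py:class:`~CompStats.interface.Perf` has a richer interface); here plain accuracy stands in for a :py:mod:`~sklearn.metrics` score function.

```python
# Toy stand-in for a Perf-like wrapper; not CompStats' actual class.
import numpy as np

class PerfSketch:
    """Stores gold labels plus named predictions and reports each
    system's score with a bootstrap standard error."""

    def __init__(self, y_true, score_func, n_boot=500, seed=0):
        self.y_true = np.asarray(y_true)
        self.score_func = score_func
        self.n_boot = n_boot
        self.rng = np.random.default_rng(seed)
        self.systems = {}

    def __call__(self, y_pred, name):
        # Register a new system's predictions under a name.
        self.systems[name] = np.asarray(y_pred)
        return self

    def summary(self):
        # Map each system to (statistic, bootstrap standard error).
        n = len(self.y_true)
        out = {}
        for name, pred in self.systems.items():
            idx = self.rng.integers(0, n, size=(self.n_boot, n))
            boot = [self.score_func(self.y_true[i], pred[i]) for i in idx]
            out[name] = (self.score_func(self.y_true, pred), float(np.std(boot)))
        return out

# Usage with plain accuracy as the score function (toy data).
acc = lambda y, p: float((y == p).mean())
y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])
perf = PerfSketch(y_true, acc)
perf(np.array([0, 1, 1, 0, 1, 0, 1, 0]), name='alg-1')
print(perf.summary())
```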
- To illustrate the use of :py:mod:`CompStats.metrics`, the following snippets show an example. The instructions load the necessary libraries, including the one to obtain the problem (e.g., digits), three different classifiers, and the last line is the score used to measure the performance and compare the algorithm.
+ To illustrate the use of :py:mod:`CompStats.metrics`, the following snippets show an example. The instructions load the necessary libraries, including the one to obtain the problem (e.g., digits), four different classifiers, and the last line imports the score used to measure performance and compare the algorithms.
>>> from sklearn.svm import LinearSVC
>>> from sklearn.naive_bayes import GaussianNB
@@ -49,45 +49,52 @@ Once the predictions are available, it is time to measure the algorithm's perfor
- The previous code shows the macro-f1 score and, in parenthesis, its standard error. The actual performance value is stored in the :py:func:`~CompStats.interface.Perf.statistic` function.
+ The previous code shows the macro-f1 score and, in parentheses, its standard error. These values are stored in the attributes :py:func:`~CompStats.interface.Perf.statistic` and :py:func:`~CompStats.interface.Perf.se`.
- >>> score.statistic
- {'alg-1': 0.9332035615949114}
+ >>> score.statistic, score.se
+ (0.9521479775366307, 0.009717884979482313)
Continuing with the example, let us assume that one wants to test another classifier on the same problem, in this case, a random forest, as can be seen in the following two lines. The second line predicts the validation set and adds the predictions to the analysis.
>>> score(hist.predict(X_val), name='Hist. Grad. Boost. Tree')
+ <Perf(score_func=f1_score)>
Statistic with its standard error (se)
statistic (se)
- 0.9756 (0.0061) <= Random Forest
- 0.9332 (0.0113) <= alg-1
- 0.8198 (0.0144) <= Naive Bayes
+ 0.9759 (0.0068) <= Hist. Grad. Boost. Tree
+ 0.9720 (0.0076) <= Random Forest
+ 0.9521 (0.0097) <= alg-1
+ 0.8266 (0.0159) <= Naive Bayes
+
+ The performance, its confidence interval (5%), and a statistical comparison (5%) between the best-performing system and the rest of the algorithms are depicted in the following figure.
+
+ >>> score.plot()
+
+ .. image:: digits_perf.png
- The final step is to compare the performance of the three classifiers, which can be done with the :py:func:`~CompStats.interface.Perf.difference` method, as seen next.
+ The final step is to compare the performance of the four classifiers, which can be done with the :py:func:`~CompStats.interface.Perf.difference` method, as seen next.
>>> diff = score.difference()
>>> diff
<Difference>
- difference p-values w.r.t Random Forest
- 0.0000 <= alg-1
+ difference p-values w.r.t Hist. Grad. Boost. Tree
0.0000 <= Naive Bayes
+ 0.0100 <= alg-1
+ 0.3240 <= Random Forest
The class :py:class:`~CompStats.Difference` has the :py:class:`~CompStats.Difference.plot` method that can be used to depict the difference with respect to the best.