Commit 01bec69

quarto
1 parent d536c76 commit 01bec69

1 file changed

Lines changed: 8 additions & 14 deletions

File tree

quarto/CompStats.qmd

@@ -35,7 +35,7 @@ pip install CompStats
 ```
 :::
 
-# Quick Start Guide
+# scikit-learn Users
 
 ## Column
 
@@ -46,7 +46,8 @@ To illustrate the use of `CompStats`, the following snippets show an example. Th
 
 from sklearn.svm import LinearSVC
 from sklearn.naive_bayes import GaussianNB
-from sklearn.ensemble import RandomForestClassifier, HistGradientBoostingClassifier
+from sklearn.ensemble import RandomForestClassifier
+from sklearn.ensemble import HistGradientBoostingClassifier
 from sklearn.datasets import load_digits
 from sklearn.model_selection import train_test_split
 from sklearn.base import clone
@@ -65,25 +66,17 @@ m = LinearSVC().fit(X_train, y_train)
 hy = m.predict(X_val)
 ```
 
-## Column
-
 Once the predictions are available, it is time to measure the algorithm's performance, as seen in the following code. It is essential to note that the API used in `sklearn.metrics` is followed; the difference is that the function returns an instance with different methods that can be used to estimate different performance statistics and compare algorithms.
 
+## Column
+
 ```{python}
 #| echo: true
 
 score = f1_score(y_val, hy, average='macro')
 score
 ```
 
-The previous code shows the macro-f1 score and its standard error. The actual performance value is stored in the attributes `statistic` function, and `se`
-
-```{python}
-#| echo: true
-
-score.statistic, score.se
-```
-
 Continuing with the example, let us assume that one wants to test another classifier on the same problem, in this case, a random forest, as can be seen in the following two lines. The second line predicts the validation set and sets it to the analysis.
 
 ```{python}
@@ -99,7 +92,8 @@ Let us incorporate another predictions, now with Naive Bayes classifier, and His
 #| echo: true
 
 nb = GaussianNB().fit(X_train, y_train)
-score(nb.predict(X_val), name='Naive Bayes')
+_ = score(nb.predict(X_val), name='Naive Bayes')
 hist = HistGradientBoostingClassifier().fit(X_train, y_train)
-score(hist.predict(X_val), name='Hist. Grad. Boost. Tree')
+_ = score(hist.predict(X_val), name='Hist. Grad. Boost. Tree')
+score.plot()
 ```
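The prose in the changed hunks describes a `score` object that follows the `sklearn.metrics` API but carries both a point estimate and an uncertainty estimate. As a rough illustration of that idea only (this is not CompStats's actual implementation; `macro_f1`, `BootstrapScore`, and the bootstrap details below are hypothetical assumptions), a minimal NumPy sketch:

```python
import numpy as np

def macro_f1(y_true, y_pred, n_classes):
    """Macro-averaged F1: the unweighted mean of per-class F1 scores."""
    f1s = []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)
    return float(np.mean(f1s))

class BootstrapScore:
    """Toy stand-in for the object the text describes: it stores the
    point estimate in `statistic` and a bootstrap standard error in `se`."""
    def __init__(self, y_true, y_pred, n_classes, n_boot=500, seed=0):
        rng = np.random.default_rng(seed)
        self.statistic = macro_f1(y_true, y_pred, n_classes)
        n = len(y_true)
        # Resample prediction/label pairs with replacement and recompute
        # the metric on each bootstrap sample; `se` is their spread.
        samples = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)
            samples.append(macro_f1(y_true[idx], y_pred[idx], n_classes))
        self.se = float(np.std(samples))

# Tiny usage example with made-up labels.
y_true = np.array([0, 0, 1, 1, 2, 2, 0, 1, 2, 0])
y_pred = np.array([0, 1, 1, 1, 2, 0, 0, 1, 2, 0])
s = BootstrapScore(y_true, y_pred, n_classes=3)
```

After construction, `s.statistic` and `s.se` play the roles the deleted paragraph attributes to the real object; the library's own instance additionally supports comparison and plotting across named systems, as the final hunk shows.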
