Commit 3276219

Merge pull request #8 from pythonhealthdatascience/dev
Dev
2 parents 3370c84 + eb7810b commit 3276219

13 files changed: 260 additions & 17 deletions
.github/workflows/python_tests.yaml

Lines changed: 44 additions & 0 deletions

@@ -0,0 +1,44 @@

```yaml
name: python_tests
run-name: Run python tests

on:
  push:
    branches: [main]
  workflow_dispatch:

jobs:
  tests:
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        include:
          - os: ubuntu-latest
            python-version: '3.11'
          - os: ubuntu-latest
            python-version: '3.12'
          - os: windows-latest
            python-version: '3.12'
          - os: macos-latest
            python-version: '3.12'

    steps:
      - name: Check out repository
        uses: actions/checkout@v4

      - name: Install python and dependencies
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
          cache: 'pip'

      - name: Install requirements (Windows)
        if: runner.os == 'Windows'
        run: python -m pip install -r requirements-test.txt

      - name: Install requirements (Unix)
        if: runner.os != 'Windows'
        run: pip install -r requirements-test.txt

      - name: Run tests
        run: pytest examples/python_package
```

.gitignore

Lines changed: 2 additions & 1 deletion

```diff
@@ -7,4 +7,5 @@ __pycache__/
 .pytest_cache/
 .Rhistory
 site_libs/
-**_files/
+**_files/
+.coverage
```

assets/styles.css

Lines changed: 1 addition & 0 deletions

```diff
@@ -37,6 +37,7 @@
   background-color: #f5f5f5;
   border-radius: 8px;
   padding: 1em 1em 0.2em 1em;
+  margin-bottom: 1em;
 }
 
 /* Add space before H2 + H3 */
```

environment.yaml

Lines changed: 1 addition & 0 deletions

```diff
@@ -9,6 +9,7 @@ dependencies:
   - pip=25.3
   - pylint=4.0.4
   - pytest=9.0.2
+  - pytest-cov=7.0.0
  - python=3.12.12
   - scipy=1.17.0
   - pip:
```

images/github_actions.png

97.1 KB

pages/back_tests.qmd

Lines changed: 2 additions & 2 deletions

````diff
@@ -27,14 +27,14 @@ We will run back tests using the [dataset we introduced for our waiting times ca
 On the test page, we need to import:
 
 ```{python}
-#| execute: false
+#| eval: false
 #| file: code/test_back__imports.py
 ```
 
 ## Back test
 
 ```{python}
-#| execute: false
+#| eval: false
 #| file: code/test_back__test_reproduction.py
 ```
````

pages/functional_tests.qmd

Lines changed: 1 addition & 1 deletion

````diff
@@ -29,7 +29,7 @@ Unlike unit tests, which check each function in isolation, functional tests run
 We will need the follow imports in our test script:
 
 ```{python}
-#| execute: false
+#| eval: false
 #| file: code/test_functional__imports.py
 ```
````

pages/github_actions.qmd

Lines changed: 129 additions & 0 deletions

@@ -11,3 +11,132 @@ use_condaenv("hdruk_tests", required = TRUE)

{{< include /assets/language-selector.html >}}

<br>

As mentioned on the page [When and why to run tests?](why_test.qmd), you should run tests regularly **after any code or data changes**, as catching errors early makes them easier to fix. This practice of re-running tests is called **regression testing**, and it ensures recent changes haven't introduced errors.

GitHub Actions can be a great tool to support this.

## GitHub Actions

GitHub is widely used for hosting research code and managing version control. We have a [tutorial on setting up a repository](https://pythonhealthdatascience.github.io/des_rap_book/pages/guide/setup/version.html) if you are new to GitHub.

**GitHub Actions** is a built-in automation system that runs workflows directly in your repository. You can access it from the **Actions** tab on your GitHub repository page:

![](/images/github_actions.png)

Workflows are defined using YAML files stored in `.github/workflows/` in your repository. Each workflow can be triggered by one or more events, with common triggers including:

* `push`: run tests on every push to any branch.
* `push: branches: ["main"]`: run tests on every push to the `main` branch.
* `pull_request`: run tests when a pull request is opened or updated.
* `workflow_dispatch`: allow manual runs from the "Actions" tab.

## Workflow to run tests

This workflow runs the tests from our case study via GitHub Actions. We explain it step by step below.

```{bash}
#| eval: false
#| file: ../.github/workflows/python_tests.yaml
```

### Explaining the workflow

```yaml
name: python_tests
run-name: Run python tests
```

The beginning of the YAML sets the workflow's name and how it appears in the Actions tab.

* `name` is the internal name of the workflow.
* `run-name` is what is displayed when a run appears in the Actions history.

```yaml
on:
  push:
    branches: [main]
  workflow_dispatch:
```

Next, we define when the workflow is triggered. Here we have chosen:

* `push`: automatically run on pushes to the `main` branch.
* `workflow_dispatch`: allows you to trigger the workflow manually from the GitHub Actions interface (*note: this option only becomes available after the workflow has first been pushed to `main`*).

```yaml
jobs:
  tests:
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        include:
          - os: ubuntu-latest
            python-version: '3.11'
          - os: ubuntu-latest
            python-version: '3.12'
          - os: windows-latest
            python-version: '3.12'
          - os: macos-latest
            python-version: '3.12'
```

Now we define the job that runs our tests. We use matrix testing, which checks our code across multiple operating systems and Python versions. In this case the tests will run on:

* Python 3.11 (Linux)
* Python 3.12 (Linux, Windows, macOS)

This helps you spot any bugs or run issues related to specific operating systems or Python versions. The jobs run in parallel, keeping a single efficient workflow.

```yaml
    steps:
      - name: Check out repository
        uses: actions/checkout@v4
```

Now we define the steps executed within our test job. The first step is typically to check out your repository, so the workflow can access your code.

```yaml
      - name: Install python and dependencies
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
          cache: 'pip'
```

Next, we install the version of Python specified in the matrix and enable pip caching to speed up future runs.

```yaml
      - name: Install requirements (Windows)
        if: runner.os == 'Windows'
        run: python -m pip install -r requirements-test.txt

      - name: Install requirements (Unix)
        if: runner.os != 'Windows'
        run: pip install -r requirements-test.txt
```

The command syntax for installing dependencies differs slightly between operating systems. We use `requirements-test.txt` instead of `environment.yaml` because we want to test against different Python versions. To reduce runtime, this requirements file only contains the packages needed for running the tests (for example, it excludes our linting packages).

::: {.callout-note title="See `requirements-test.txt`" collapse="true"}

```{bash}
#| eval: false
#| file: ../requirements-test.txt
```

:::

```yaml
      - name: Run tests
        run: pytest examples/python_package
```

Finally, we run the tests! We call `pytest` on our case study (`examples/python_package/`). If all tests pass, you'll see green ticks for each environment, confirming that your code works consistently across Python versions and operating systems.

### See GitHub Actions, in action!

The video below demonstrates this workflow running in GitHub Actions. For the demo, the workflow is triggered manually using `workflow_dispatch`, but it would also run automatically whenever you push changes to `main`.

TODO.
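The workflow's final step simply invokes `pytest` on `examples/python_package`, so pytest runs whatever test modules it discovers there. As a minimal sketch of the kind of module it would pick up (a hypothetical stand-in, not the case study's actual tests):

```python
# test_example.py - a minimal module that pytest would discover and run
# in CI. Hypothetical stand-in for the case study's real tests.

def mean_wait(waits):
    """Mean of a list of waiting times."""
    return sum(waits) / len(waits)


def test_mean_wait():
    # Deterministic check that runs identically on every OS and
    # Python version in the workflow matrix.
    assert mean_wait([2.0, 4.0]) == 3.0
```

Because the test is deterministic and pure Python, a failure on only one matrix entry (say, Windows with Python 3.12) would point to a platform- or version-specific problem rather than a logic error.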

pages/parametrising_tests.qmd

Lines changed: 3 additions & 3 deletions

````diff
@@ -26,7 +26,7 @@ Let's say we want to verify that our `summary_stats()` function works correctly
 
 ```{python}
 #| file: code/patient_analysis__summary_stats.py
-#| execute: false
+#| eval: false
 ```
 
 :::
@@ -35,14 +35,14 @@ We will need the following imports in our test script:
 
 ```{python}
 #| file: code/test_intro_parametrised__imports.py
-#| execute: false
+#| eval: false
 ```
 
 Instead of writing separate test functions for each case, we can use pytest's `@pytest.mark.parametrize` decorator:
 
 ```{python}
 #| file: code/test_intro_parametrised__test_summary_stats.py
-#| execute: false
+#| eval: false
 ```
 
 ## How it works
````
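The test files in this diff are pulled in by `#| file:` includes, so their contents are not visible here. As a hedged illustration of the `@pytest.mark.parametrize` pattern the page describes, here is a sketch with a stand-in `summary_stats()` (not the case study's actual implementation):

```python
# Hypothetical illustration of @pytest.mark.parametrize.
# summary_stats() is a stand-in, not the case study's real function.
import pytest


def summary_stats(values):
    """Return the mean and maximum of a list of numbers."""
    return {"mean": sum(values) / len(values), "max": max(values)}


@pytest.mark.parametrize(
    "values, expected_mean, expected_max",
    [
        ([1, 2, 3], 2.0, 3),
        ([5], 5.0, 5),
        ([0, 10], 5.0, 10),
    ],
)
def test_summary_stats(values, expected_mean, expected_max):
    # One test function; pytest runs it once per tuple above,
    # reporting each case separately.
    result = summary_stats(values)
    assert result["mean"] == expected_mean
    assert result["max"] == expected_max
```

Each tuple in the list becomes its own test case in pytest's output, so a failure identifies exactly which input broke.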

pages/test_coverage.qmd

Lines changed: 61 additions & 1 deletion

@@ -12,4 +12,64 @@ use_condaenv("hdruk_tests", required = TRUE)

<br>

```diff
-<!-- Test coverage - what it means, how to generate, use, etc. -->
```

**Coverage** refers to the percentage of your code that is executed when you run your tests. It can help you spot parts of your code that are not included in any tests.

## `pytest-cov`

The [pytest-cov](https://github.com/pytest-dev/pytest-cov) package can be used to run coverage calculations easily alongside `pytest`. You can install it from PyPI or conda:

```{.bash}
pip install pytest-cov
```

```{.bash}
conda install pytest-cov
```

To calculate coverage, you can then simply run your tests with the `--cov` flag:

```{.bash}
pytest --cov
```

## Running `pytest --cov` on our example

::: {.callout-note title="Test output"}

```{python}
#| echo: false
import pytest

pytest.main([
    "../examples/python_package/",
    "--cov=waitingtimes"
])
```

:::

The coverage results appear under the banner:

```
================================ tests coverage ================================
```

You can see we get nearly 100% coverage. But what does this actually mean?

## Interpreting coverage

Coverage tells you whether code was **executed** during testing, but not necessarily whether it has been tested well. A function could run as part of another test without its results or behaviour being properly checked by assertions.

::: {.box-grey}

**Coverage tells you what code ran, not whether it worked correctly**.

:::

It's mostly useful for finding code that **isn't covered by tests at all**. Parts of your code may have no or low coverage because:

* They're not imported or run by any tests.
* They're only used in rare branches or failure conditions.
* They were added recently but have not yet been incorporated into tests.

Rather than trying to achieve 100% coverage, you should aim to meaningfully test all your code: every important path, decision and behaviour should be tested at least once.
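This distinction between "executed" and "tested" can be shown with a small hypothetical example (`apply_discount()` is invented for illustration):

```python
# Hypothetical example: 100% line coverage without a meaningful test.
def apply_discount(price, rate):
    """Reduce price by the given fractional rate (e.g. 0.2 = 20% off)."""
    return price * (1 - rate)


def test_runs_without_checking():
    # Executes every line of apply_discount(), so coverage reports 100%,
    # but nothing checks the result: a wrong formula would still pass.
    apply_discount(100, 0.2)


def test_checks_behaviour():
    # A meaningful test: asserts on the value actually returned.
    assert apply_discount(100, 0.2) == 80.0
```

Both tests produce identical coverage figures, but only the second would catch a bug such as accidentally writing `price * rate`.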
