5 changes: 4 additions & 1 deletion .github/workflows/build-docs.yml
@@ -21,12 +21,14 @@ on:
       - r**
     paths:
       - "docs/**"
+      - ".github/workflows/build-docs.yml"
   push:
     branches:
       - main
       - r**
     paths:
       - "docs/**"
+      - ".github/workflows/build-docs.yml"
 
 concurrency:
   group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}-${{ github.event.label.name || 'main' }}-${{ github.event_name }}
@@ -42,7 +44,8 @@ jobs:
     uses: NVIDIA-NeMo/FW-CI-templates/.github/workflows/_build_docs.yml@v0.80.2
     with:
       docs-directory: docs/source
-      sync-all: true
+      sync-all: false
+      requirements-file: requirements/requirements_docs_ci.txt
 
   build-docs-summary:
     needs: [pre-flight, build-docs]
2 changes: 1 addition & 1 deletion docs/source/asr/datasets.rst
@@ -1067,7 +1067,7 @@ One such example are attention encoder-decoder models, where the overall GPU mem
 into two main components: input-sequence-length bound (encoder activations) and output-sequence-length bound
 (decoder activations).
 Classical bucketing techniques only stratify on the input sequence length (e.g. duration in speech),
-which leverages encoder effectively but leads to excessive padding on on decoder's side.
+which leverages encoder effectively but leads to excessive padding on decoder's side.
 
 To amend this we support a 2D bucketing technique which estimates the buckets in two stages.
 The first stage is identical to 1D bucketing, i.e. we determine the input-sequence bucket bins so that
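The two-stage bucketing described in the changed passage can be sketched as follows. This is an illustrative reconstruction of the idea, not NeMo's actual implementation; the function and parameter names (`estimate_2d_buckets`, `n_in`, `n_out`) are hypothetical.

```python
import numpy as np

def estimate_2d_buckets(durations, token_counts, n_in=4, n_out=2):
    """Sketch of 2D bucketing: bucket by input length first, then
    sub-bucket each input bucket by output length."""
    durations = np.asarray(durations)
    token_counts = np.asarray(token_counts)
    # Stage 1 (identical to 1D bucketing): input-length bucket edges
    # chosen from duration quantiles so buckets hold similar data amounts.
    in_edges = np.quantile(durations, np.linspace(0, 1, n_in + 1)[1:-1])
    in_ids = np.digitize(durations, in_edges)
    # Stage 2: within each duration bucket, split again on output
    # token counts to bound decoder-side padding as well.
    buckets = {}
    for b in range(n_in):
        toks = token_counts[in_ids == b]
        if len(toks) == 0:
            buckets[b] = []
            continue
        out_edges = np.quantile(toks, np.linspace(0, 1, n_out + 1)[1:-1])
        buckets[b] = out_edges.tolist()
    return in_edges.tolist(), buckets
```

Because the output-length edges are estimated per input bucket, long utterances with unusually short transcripts (and vice versa) no longer force excessive decoder-side padding.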
3 changes: 2 additions & 1 deletion docs/source/conf.py
@@ -73,7 +73,8 @@
 _skipped_autodoc_mock_imports = ['wrapt', 'numpy']
 
 for req_path in sorted(list(glob.glob("../../requirements/*.txt"))):
-    if "docs.txt" in req_path:
+    # NB: mocking `coverage` from test requirements results in error with `numba`
+    if "docs.txt" in req_path or "test.txt" in req_path:
         continue
 
     req_file = os.path.abspath(os.path.expanduser(req_path))
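The `conf.py` loop touched by this hunk scans requirements files to decide which packages Sphinx should mock during autodoc. A simplified sketch of that idea is below; the parsing is deliberately minimal and `mock_imports_from` is a hypothetical name, with only the `wrapt`/`numpy` skip list taken from the visible context.

```python
import re

def mock_imports_from(req_lines, skip=("wrapt", "numpy")):
    """Collect distribution names from requirements lines, skipping
    comments, `-r` includes, and packages that must not be mocked."""
    mods = []
    for line in req_lines:
        line = line.strip()
        if not line or line.startswith(("#", "-r")):
            continue
        # Take the name before any version specifier, extras, or marker.
        name = re.split(r"[<>=!~\[;]", line, maxsplit=1)[0].strip()
        if name and name not in skip:
            mods.append(name)
    return mods
```

Skipping the test requirements entirely (as the diff now does) avoids mocking packages like `coverage` that break `numba` when mocked.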
9 changes: 7 additions & 2 deletions requirements/requirements_docs.txt
@@ -3,10 +3,15 @@ Jinja2
 latexcodec
 numpy
 pydata-sphinx-theme
-Sphinx
+sphinx>=8.1.3
 sphinx-book-theme
-sphinx-copybutton
+sphinx-copybutton>=0.5.2
 sphinxcontrib-bibtex
 sphinxext-opengraph
+sphinx-autobuild>=2024.10.3
+sphinx-autodoc2>=0.5.0
+sphinxcontrib-mermaid
 urllib3
 wrapt
+myst-parser>=4.0.1
+nvidia-sphinx-theme>=0.0.8
15 changes: 15 additions & 0 deletions requirements/requirements_docs_ci.txt
@@ -0,0 +1,15 @@
+# requirements for building docs on CI
+-r requirements.txt
+-r requirements_asr.txt
+-r requirements_audio.txt
+-r requirements_common.txt
+-r requirements_docs.txt
+-r requirements_lightning.txt
+-r requirements_run.txt
+-r requirements_slu.txt
+-r requirements_tts.txt
+-r requirements_cu12.txt
+
+# excluded requirements:
+# -r requirements_cu13.txt
+# -r requirements_test.txt
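The new aggregator file relies on pip's `-r` include mechanism: each `-r` line pulls in another requirements file relative to the including file. The sketch below imitates that expansion on a throwaway directory; the file contents written here are illustrative stand-ins, not the real requirement sets.

```python
import pathlib
import tempfile

def expand_requirements(path, seen=None):
    """Recursively expand `-r` includes, roughly mimicking pip's behavior."""
    seen = set() if seen is None else seen
    lines = []
    for raw in path.read_text().splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if line.startswith("-r "):
            # Includes are resolved relative to the including file.
            target = path.parent / line[3:].strip()
            if target not in seen:
                seen.add(target)
                lines.extend(expand_requirements(target, seen))
        else:
            lines.append(line)
    return lines

# Tiny demo mirroring the aggregator layout (hypothetical contents).
root = pathlib.Path(tempfile.mkdtemp())
(root / "requirements_docs.txt").write_text("sphinx>=8.1.3\nmyst-parser>=4.0.1\n")
(root / "requirements_docs_ci.txt").write_text("# CI docs deps\n-r requirements_docs.txt\nnumpy\n")
print(expand_requirements(root / "requirements_docs_ci.txt"))
```

Keeping the excluded files as comments (rather than deleting the lines) documents that `cu13` and `test` requirements were left out deliberately, not forgotten.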