
Commit 9e083c4

[quality] Fix typos (#1972)
1 parent 9c8cfdb commit 9e083c4

8 files changed, 8 additions and 8 deletions.

doctr/models/_utils.py

Lines changed: 1 addition & 1 deletion
@@ -184,7 +184,7 @@ def invert_data_structure(
         dictionary of list when x is a list of dictionaries or a list of dictionaries when x is dictionary of lists
     """
     if isinstance(x, dict):
-        assert len({len(v) for v in x.values()}) == 1, "All the lists in the dictionnary should have the same length."
+        assert len({len(v) for v in x.values()}) == 1, "All the lists in the dictionary should have the same length."
         return [dict(zip(x, t)) for t in zip(*x.values())]
     elif isinstance(x, list):
         return {k: [dic[k] for dic in x] for k in x[0]}
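The corrected assertion guards the dict-of-lists branch of this helper, whose body is visible in the hunk. A self-contained sketch of the same logic (the trailing `TypeError` is an illustrative addition, not part of doctr):

```python
def invert_data_structure(x):
    """Turn a dict of equal-length lists into a list of dicts, or back again."""
    if isinstance(x, dict):
        # All value lists must share one length, otherwise rows would be ragged.
        assert len({len(v) for v in x.values()}) == 1, "All the lists in the dictionary should have the same length."
        return [dict(zip(x, t)) for t in zip(*x.values())]
    elif isinstance(x, list):
        # Collect each key's values across the list of dicts.
        return {k: [dic[k] for dic in x] for k in x[0]}
    raise TypeError(f"Expected dict or list, got {type(x)}")  # illustrative guard
```

Round-tripping `{"a": [1, 2], "b": [3, 4]}` yields `[{"a": 1, "b": 3}, {"a": 2, "b": 4}]` and back.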

doctr/models/recognition/crnn/pytorch.py

Lines changed: 1 addition & 1 deletion
@@ -82,7 +82,7 @@ def ctc_best_path(
 
     def __call__(self, logits: torch.Tensor) -> list[tuple[str, float]]:
         """Performs decoding of raw output with CTC and decoding of CTC predictions
-        with label_to_idx mapping dictionnary
+        with label_to_idx mapping dictionary
 
         Args:
             logits: raw output of the model, shape (N, C + 1, seq_len)
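The CRNN hunks touch the docstring of a CTC best-path decoder. As a reminder of what that decoder does, here is a minimal greedy (best-path) CTC sketch in plain Python; the vocab handling and blank placement are illustrative assumptions, not doctr's exact implementation:

```python
def ctc_best_path(logits, vocab, blank=None):
    """Greedy CTC decoding: per-step argmax, collapse repeats, drop blanks.

    logits: list of per-timestep score lists, shape (seq_len, num_classes + 1).
    """
    if blank is None:
        blank = len(vocab)  # assume the blank class sits after the vocab classes
    # Take the most probable class at each timestep.
    best = [max(range(len(step)), key=step.__getitem__) for step in logits]
    chars, prev = [], None
    for idx in best:
        # Emit a character only when it differs from the previous step
        # and is not the blank class.
        if idx != prev and idx != blank:
            chars.append(vocab[idx])
        prev = idx
    return "".join(chars)
```

For example, per-step argmaxes `a, a, <blank>, b` collapse to `"ab"`, while `a, <blank>, a` decodes to `"aa"` because the blank separates the two emissions.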

doctr/models/recognition/crnn/tensorflow.py

Lines changed: 1 addition & 1 deletion
@@ -59,7 +59,7 @@ def __call__(
         top_paths: int = 1,
     ) -> list[tuple[str, float]] | list[tuple[list[str] | list[float]]]:
         """Performs decoding of raw output with CTC and decoding of CTC predictions
-        with label_to_idx mapping dictionnary
+        with label_to_idx mapping dictionary
 
         Args:
             logits: raw output of the model, shape BATCH_SIZE X SEQ_LEN X NUM_CLASSES + 1

doctr/models/recognition/master/pytorch.py

Lines changed: 1 addition & 1 deletion
@@ -176,7 +176,7 @@ def forward(
             return_preds: if True, decode logits
 
         Returns:
-            A dictionnary containing eventually loss, logits and predictions.
+            A dictionary containing eventually loss, logits and predictions.
         """
         # Encode
         features = self.feat_extractor(x)["features"]
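Side note on the fixed docstring: "eventually" here reads as a Gallicism for "possibly" — the returned dictionary only contains the keys that apply. Schematically (names and the placeholder loss are illustrative, not doctr's actual code):

```python
def build_output(logits, target=None, return_preds=False):
    # The model returns a dict whose keys depend on what was requested:
    # "loss" only when a target is supplied, "preds" only when asked for.
    out = {"logits": logits}
    if target is not None:
        # Placeholder mean-squared-error loss, purely for illustration.
        out["loss"] = sum((l - t) ** 2 for l, t in zip(logits, target)) / len(logits)
    if return_preds:
        out["preds"] = [round(l) for l in logits]
    return out
```

Callers can then check `"loss" in out` rather than handling a tuple whose shape changes with the arguments.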

doctr/models/recognition/master/tensorflow.py

Lines changed: 1 addition & 1 deletion
@@ -165,7 +165,7 @@ def call(
             **kwargs: keyword arguments passed to the decoder
 
         Returns:
-            A dictionnary containing eventually loss, logits and predictions.
+            A dictionary containing eventually loss, logits and predictions.
         """
         # Encode
         feature = self.feat_extractor(x, **kwargs)

doctr/models/recognition/viptr/pytorch.py

Lines changed: 1 addition & 1 deletion
@@ -70,7 +70,7 @@ def ctc_best_path(
 
     def __call__(self, logits: torch.Tensor) -> list[tuple[str, float]]:
         """Performs decoding of raw output with CTC and decoding of CTC predictions
-        with label_to_idx mapping dictionnary
+        with label_to_idx mapping dictionary
 
         Args:
             logits: raw output of the model, shape (N, C + 1, seq_len)

references/detection/README.md

Lines changed: 1 addition & 1 deletion
@@ -29,7 +29,7 @@ python references/detection/train_pytorch.py db_resnet50 --train_path path/to/yo
 
 We now use the built-in [`torchrun`](https://pytorch.org/docs/stable/elastic/run.html) launcher to spawn your DDP workers. `torchrun` will set all the necessary environment variables (`LOCAL_RANK`, `RANK`, etc.) for you. Arguments are the same than the ones from single GPU, except:
 
-- `--backend`: you can specify another `backend` for `DistribuedDataParallel` if the default one is not available on
+- `--backend`: you can specify another `backend` for `DistributedDataParallel` if the default one is not available on
   your operating system. Fastest one is `nccl` according to [PyTorch Documentation](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html).
 
 #### Key `torchrun` parameters:

references/recognition/README.md

Lines changed: 1 addition & 1 deletion
@@ -29,7 +29,7 @@ python references/recognition/train_pytorch.py crnn_vgg16_bn --train_path path/t
 
 We now use the built-in [`torchrun`](https://pytorch.org/docs/stable/elastic/run.html) launcher to spawn your DDP workers. `torchrun` will set all the necessary environment variables (`LOCAL_RANK`, `RANK`, etc.) for you. Arguments are the same than the ones from single GPU, except:
 
-- `--backend`: you can specify another `backend` for `DistribuedDataParallel` if the default one is not available on
+- `--backend`: you can specify another `backend` for `DistributedDataParallel` if the default one is not available on
   your operating system. Fastest one is `nccl` according to [PyTorch Documentation](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html).
 
 #### Key `torchrun` parameters:
