feat(pretrained): add built-in pretrained downloader and alias backend #5277

Open

njzjz-bot wants to merge 11 commits into deepmodeling:master from njzjz-bot:feat/pretrained-integration
Conversation


@njzjz-bot njzjz-bot commented Mar 1, 2026

Summary

This PR integrates pretrained model support directly into deepmd-kit under deepmd/pretrained, while keeping DeepPot usage unchanged.

Added

  • New command:
    • dp pretrained download <MODEL>
  • New module folder:
    • deepmd/pretrained/
    • includes registry.py, download.py, backend.py, entrypoints.py
  • Built-in model registry (currently):
    • DPA-3.2-5M
    • DPA-3.1-3M
  • Multi-source download strategy:
    • parallel probe over candidate sources
    • rank by response latency
    • fastest-first with automatic fallback on timeout/failure/checksum mismatch
  • SHA256 verification + atomic .part writes
  • .pretrained backend alias support via deepmd/backend/pretrained.py
    • allows DeepPot("DPA-3.2-5M.pretrained") while keeping existing DeepPot API unchanged
    • deep-eval adapter is lazy-loaded to avoid circular import issues
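The alias flow described above can be sketched as follows. This is a minimal, self-contained stand-in for the parser behind the `.pretrained` suffix; the real helper lives under `deepmd/pretrained/` and its exact signature may differ:

```python
from pathlib import Path


def parse_pretrained_alias(model_file: str) -> str:
    """Map an alias like 'DPA-3.2-5M.pretrained' to a registry model name.

    Illustrative stand-in for the helper this PR adds; the shipped
    implementation may differ in details.
    """
    alias = Path(model_file).name
    suffix = ".pretrained"
    # Case-insensitive suffix match; reject an empty model name.
    if not alias.lower().endswith(suffix) or len(alias) == len(suffix):
        raise ValueError(f"Invalid pretrained alias: {model_file}")
    return alias[: -len(suffix)]


print(parse_pretrained_alias("DPA-3.2-5M.pretrained"))  # DPA-3.2-5M
```

The backend then resolves the extracted name through the registry, so `DeepPot("DPA-3.2-5M.pretrained")` needs no change to the `DeepPot` API itself.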

CLI wiring

  • Added pretrained parser/subparser in deepmd/main.py
  • Added dispatch in deepmd/entrypoints/main.py

Tests

  • source/tests/common/test_pretrained_parser.py
  • source/tests/common/test_pretrained_download.py
  • source/tests/common/test_pretrained_backend.py

Formatting / lint

  • Ran uvx prek run --all-files and committed auto-format updates.

Authored by OpenClaw (model: custom-chat-jinzhezeng-group/gpt-5.3-codex)

Summary by CodeRabbit

  • New Features

    • New "pretrained" CLI group with pretrained download (optional --cache-dir) to fetch pretrained models.
    • Built-in pretrained model registry (includes DPA-3.2-5M and DPA-3.1-3M).
    • Added a pretrained backend scaffold and a lazy adapter so .pretrained aliases work transparently with existing evaluation flow (some backend hooks intentionally unsupported).
  • Downloads & Caching

    • HTTPS-only downloads with SHA256 verification, multi-source fallbacks, parallel probing, atomic writes, and centralized caching (~/.cache/deepmd/pretrained/models).
  • Tests

    • Added tests for backend detection, alias parsing, download/resolution behavior, URL ranking, and CLI parsing.
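The SHA256 verification and atomic `.part` writes listed above follow a standard pattern; a hedged sketch (the helper names here are illustrative, not the PR's actual API):

```python
import hashlib
import os
from pathlib import Path


def sha256sum(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so large models are not loaded fully into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def commit_verified(part: Path, dest: Path, expected_sha256: str) -> Path:
    """Verify a downloaded '.part' file, then atomically move it into place."""
    if sha256sum(part) != expected_sha256:
        part.unlink()  # never leave a corrupt file behind
        raise ValueError(f"checksum mismatch for {dest.name}")
    os.replace(part, dest)  # atomic rename on the same filesystem
    return dest
```

On checksum mismatch the caller can fall back to the next candidate URL, matching the fallback behavior described above.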

- add dp pretrained download <MODEL> CLI command
- move pretrained logic under deepmd/pretrained
- add built-in model registry with multi-source probing and fallback
- register .pretrained backend alias so DeepPot usage stays unchanged
- keep deep-eval adapter lazy to avoid circular imports
- add parser/backend/downloader tests

Authored by OpenClaw (model: custom-chat-jinzhezeng-group/gpt-5.3-codex)
The fallback file was only added for local source-tree unittest convenience.
Keep version behavior aligned with upstream packaging flow (_version.py via build).

Authored by OpenClaw (model: custom-chat-jinzhezeng-group/gpt-5.3-codex)
Authored by OpenClaw (model: custom-chat-jinzhezeng-group/gpt-5.3-codex)

coderabbitai bot commented Mar 1, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough
Adds a new "pretrained" subsystem: backend registration, CLI subcommand and entrypoint, model registry, secure download/caching with checksum and URL ranking, and a DeepEval adapter resolving *.pretrained aliases to concrete backends. Includes unit tests for parser, backend, and download behavior.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Backend**<br>`deepmd/backend/pretrained.py` | New PretrainedBackend registered as "pretrained"; advertises DEEP_EVAL, provides lazy deep_eval access and explicit NotImplementedError for unsupported hooks. |
| **CLI / Entrypoints**<br>`deepmd/main.py`, `deepmd/entrypoints/main.py` | Adds top-level pretrained subcommand and pretrained download subparser; main dispatch routes pretrained to pretrained_entrypoint. |
| **Pretrained package core**<br>`deepmd/pretrained/__init__.py`, `deepmd/pretrained/registry.py`, `deepmd/pretrained/entrypoints.py`, `deepmd/pretrained/download.py` | New package: model registry (two built-in models), entrypoint handler, and robust download utilities: HTTPS-only, SHA256 verification, candidate URL extraction/deduplication, parallel probing/ranking, atomic downloads, and centralized cache handling. |
| **DeepEval adapter**<br>`deepmd/pretrained/deep_eval.py` | Adds parse_pretrained_alias and PretrainedDeepEvalBackend that resolves a .pretrained alias to an actual model path/backend and delegates DeepEval operations and getters. |
| **Tests**<br>`source/tests/common/test_pretrained_backend.py`, `source/tests/common/test_pretrained_download.py`, `source/tests/common/test_pretrained_parser.py` | Unit tests for backend detection and lazy deep_eval, URL ranking and download fallback with checksum verification, cache resolution, and CLI parsing including unknown-model rejection. |
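The "parallel probing/ranking" step in the download utilities can be sketched like this; `rank_urls` and `head_probe` are names assumed for illustration, not the module's actual API:

```python
import concurrent.futures
import time
from urllib.request import Request, urlopen


def head_probe(url: str, timeout: float = 5.0) -> float:
    """Return HEAD-request latency in seconds; unreachable URLs get +inf."""
    start = time.monotonic()
    try:
        with urlopen(Request(url, method="HEAD"), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")


def rank_urls(urls, probe=head_probe, max_workers=8):
    """Probe candidate URLs concurrently and return them fastest-first."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
        latencies = list(pool.map(probe, urls))  # map preserves input order
    return [u for u, _ in sorted(zip(urls, latencies), key=lambda p: p[1])]
```

A downloader would then iterate over the ranked list, falling back to the next URL on timeout or checksum failure.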

Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant CLI as "CLI Parser"
    participant Entrypoint as "pretrained_entrypoint"
    participant Download as "download_model()"
    participant Registry as "MODEL_REGISTRY"
    participant URLRank as "URL Ranker"
    participant HTTP as "HTTP Client"
    participant Cache as "Cache Dir"

    User->>CLI: deepmd pretrained download MODEL
    CLI->>Entrypoint: invoke pretrained_entrypoint(args)
    Entrypoint->>Download: request MODEL, cache_dir
    Download->>Registry: lookup model metadata
    Registry-->>Download: {urls, filename, sha256}
    Download->>URLRank: probe & rank candidate URLs (parallel)
    URLRank->>HTTP: probe URLs
    HTTP-->>URLRank: latencies / reachability
    URLRank-->>Download: sorted URLs
    loop try URLs until success
        Download->>HTTP: download from next URL
        HTTP-->>Download: stream file
        Download->>Download: verify SHA256
    end
    Download->>Cache: atomic write -> final path
    Cache-->>Download: cached file path
    Download-->>Entrypoint: return Path
    Entrypoint-->>User: print model path
```
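The "try URLs until success" loop in the diagram amounts to ordered fallback; a minimal sketch with injectable `fetch` and `verify` callables (names assumed for illustration):

```python
def download_with_fallback(ranked_urls, fetch, verify):
    """Try ranked URLs in order; return the first payload that verifies.

    `fetch(url)` returns bytes (raising OSError on network failure) and
    `verify(data)` checks the SHA256; both are supplied by the caller.
    """
    last_error = None
    for url in ranked_urls:
        try:
            data = fetch(url)
        except OSError as err:  # timeout / connection failure: try next source
            last_error = err
            continue
        if verify(data):
            return data
        last_error = ValueError(f"checksum mismatch from {url}")
    raise RuntimeError("all download sources failed") from last_error
```

Only when every source fails does the caller see an error, which matches the fastest-first-with-fallback strategy described in the PR summary.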

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~55 minutes

Suggested reviewers

  • njzjz
  • iProzd
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 22.54%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |
| Title check | ✅ Passed | The title 'feat(pretrained): add built-in pretrained downloader and alias backend' clearly and accurately summarizes the main changes: introducing pretrained model support with a downloader and backend alias. |


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (2)
deepmd/pretrained/download.py (1)

196-214: Consider simplifying by delegating to download_model.

The cache check logic (lines 209-212) duplicates what download_model already handles (lines 146-150). Since download_model returns early when the cached file is valid, resolve_model_path could simply call it directly.

♻️ Suggested simplification
```diff
 def resolve_model_path(
     model_name: str,
     *,
     cache_dir: Path | None = None,
     logger: logging.Logger | None = None,
 ) -> Path:
     """Resolve model alias to verified local file, downloading if needed."""
-    target_dir = cache_dir or DEFAULT_CACHE_DIR
-    model_info = MODEL_REGISTRY.get(model_name)
-    if model_info is None:
-        available = ", ".join(sorted(MODEL_REGISTRY))
-        raise ValueError(f"Unknown model: {model_name}. Available: {available}")
-
-    output_path = target_dir / str(model_info["filename"])
-    expected_sha256 = str(model_info["sha256"])
-    if output_path.exists() and _sha256sum(output_path) == expected_sha256:
-        return output_path
-
-    return download_model(model_name, cache_dir=target_dir, logger=logger)
+    return download_model(model_name, cache_dir=cache_dir, logger=logger)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deepmd/pretrained/download.py` around lines 196 - 214, The cache
existence/validation logic in resolve_model_path duplicates download_model;
simplify by looking up the model with MODEL_REGISTRY inside resolve_model_path
(raise the same ValueError if missing), compute target_dir = cache_dir or
DEFAULT_CACHE_DIR, then return download_model(model_name, cache_dir=target_dir,
logger=logger) directly so download_model handles the cached-file SHA256 check
and download; keep symbols resolve_model_path, MODEL_REGISTRY,
DEFAULT_CACHE_DIR, and download_model to locate and update the code.
source/tests/common/test_pretrained_download.py (1)

83-107: Consider adding assertion that download is not triggered for cached files.

The test validates the return path but doesn't verify that download_model was never called. Adding a mock/patch assertion would strengthen coverage.

♻️ Suggested improvement
```diff
             with patch.object(
                 dl,
                 "MODEL_REGISTRY",
                 {
                     model_name: {
                         "filename": "model.pt",
                         "sha256": expected,
                         "urls": ["https://a"],
                     }
                 },
-            ):
-                path = dl.resolve_model_path(model_name, cache_dir=cache_dir)
+            ), patch.object(dl, "download_model") as mock_download:
+                path = dl.resolve_model_path(model_name, cache_dir=cache_dir)
+                mock_download.assert_not_called()

             self.assertEqual(path, target)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@source/tests/common/test_pretrained_download.py` around lines 83 - 107,
Update the test_resolve_model_path_cached test to assert that the download path
is not exercised by patching or mocking dl.download_model and asserting it was
not called; specifically, inside the with patch.object(dl, "MODEL_REGISTRY",
{...}) block wrap or patch dl.download_model (or use unittest.mock.patch.object
on the download_model symbol) and after calling
dl.resolve_model_path(model_name, cache_dir=cache_dir) assert
download_model.assert_not_called() so the test verifies resolve_model_path
returns the cached target and does not invoke download_model.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@deepmd/pretrained/backend.py`:
- Around line 54-145: PretrainedDeepEvalBackend is missing delegations for
eval_descriptor and eval_fitting_last_layer so calls get the default
NotImplementedError instead of the underlying backend implementation; add
methods eval_descriptor(...) and eval_fitting_last_layer(...) on
PretrainedDeepEvalBackend that forward all parameters and kwargs to
self._backend.eval_descriptor(...) and
self._backend.eval_fitting_last_layer(...) respectively (matching the same
signatures used by DeepEvalBackend) so that backends that implement these
methods (e.g., PyTorch/TensorFlow) are invoked via the _backend proxy.

---

Nitpick comments:
In `@deepmd/pretrained/download.py`:
- Around line 196-214: The cache existence/validation logic in
resolve_model_path duplicates download_model; simplify by looking up the model
with MODEL_REGISTRY inside resolve_model_path (raise the same ValueError if
missing), compute target_dir = cache_dir or DEFAULT_CACHE_DIR, then return
download_model(model_name, cache_dir=target_dir, logger=logger) directly so
download_model handles the cached-file SHA256 check and download; keep symbols
resolve_model_path, MODEL_REGISTRY, DEFAULT_CACHE_DIR, and download_model to
locate and update the code.

In `@source/tests/common/test_pretrained_download.py`:
- Around line 83-107: Update the test_resolve_model_path_cached test to assert
that the download path is not exercised by patching or mocking dl.download_model
and asserting it was not called; specifically, inside the with patch.object(dl,
"MODEL_REGISTRY", {...}) block wrap or patch dl.download_model (or use
unittest.mock.patch.object on the download_model symbol) and after calling
dl.resolve_model_path(model_name, cache_dir=cache_dir) assert
download_model.assert_not_called() so the test verifies resolve_model_path
returns the cached target and does not invoke download_model.

ℹ️ Review info

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a3db25a and 35f384b.

📒 Files selected for processing (11)
  • deepmd/backend/pretrained.py
  • deepmd/entrypoints/main.py
  • deepmd/main.py
  • deepmd/pretrained/__init__.py
  • deepmd/pretrained/backend.py
  • deepmd/pretrained/download.py
  • deepmd/pretrained/entrypoints.py
  • deepmd/pretrained/registry.py
  • source/tests/common/test_pretrained_backend.py
  • source/tests/common/test_pretrained_download.py
  • source/tests/common/test_pretrained_parser.py

…ments

- rename deepmd/pretrained/backend.py -> deepmd/pretrained/deep_eval.py
- remove unnecessary lazy import in deepmd/backend/pretrained.py
- simplify resolve_model_path by delegating directly to download_model
- update tests per review comments (cached resolve should not trigger _download_file)
- keep eval_descriptor/eval_fitting_last_layer delegation in adapter

Authored by OpenClaw (model: custom-chat-jinzhezeng-group/gpt-5.3-codex)
@coderabbitai coderabbitai bot left a comment

🧹 Nitpick comments (1)
deepmd/pretrained/download.py (1)

196-209: Minor redundancy in model validation.

The registry check at lines 204-207 duplicates the validation already performed in download_model. While not incorrect, this could be simplified by removing the duplicate check here since download_model will raise the same error.

♻️ Optional simplification
```diff
 def resolve_model_path(
     model_name: str,
     *,
     cache_dir: Path | None = None,
     logger: logging.Logger | None = None,
 ) -> Path:
     """Resolve model alias to verified local file, downloading if needed."""
     target_dir = cache_dir or DEFAULT_CACHE_DIR
-    model_info = MODEL_REGISTRY.get(model_name)
-    if model_info is None:
-        available = ", ".join(sorted(MODEL_REGISTRY))
-        raise ValueError(f"Unknown model: {model_name}. Available: {available}")
-
     return download_model(model_name, cache_dir=target_dir, logger=logger)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deepmd/pretrained/download.py` around lines 196 - 209, The resolve_model_path
function currently duplicates model existence validation already performed by
download_model; remove the MODEL_REGISTRY lookup and associated ValueError
branch from resolve_model_path and simply delegate to download_model (keeping
target_dir/cache_dir and logger forwarding) so download_model remains the single
source of truth for verifying model_name (refer to resolve_model_path,
download_model, and MODEL_REGISTRY to locate the logic).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@deepmd/pretrained/download.py`:
- Around line 196-209: The resolve_model_path function currently duplicates
model existence validation already performed by download_model; remove the
MODEL_REGISTRY lookup and associated ValueError branch from resolve_model_path
and simply delegate to download_model (keeping target_dir/cache_dir and logger
forwarding) so download_model remains the single source of truth for verifying
model_name (refer to resolve_model_path, download_model, and MODEL_REGISTRY to
locate the logic).

ℹ️ Review info

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 35f384b and 27ac825.

📒 Files selected for processing (5)
  • deepmd/backend/pretrained.py
  • deepmd/pretrained/deep_eval.py
  • deepmd/pretrained/download.py
  • source/tests/common/test_pretrained_backend.py
  • source/tests/common/test_pretrained_download.py
🚧 Files skipped from review as they are similar to previous changes (2)
  • deepmd/backend/pretrained.py
  • source/tests/common/test_pretrained_backend.py

njzjz-bot and others added 2 commits March 1, 2026 21:58
…module

- keep lazy import in deepmd/backend/pretrained.py
- keep deepmd/pretrained/deep_eval.py as regular (non-lazy) module
- preserve deep eval delegations for descriptor/fitting-last-layer
- simplify resolve_model_path and adjust tests for cached path behavior

Authored by OpenClaw (model: custom-chat-jinzhezeng-group/gpt-5.3-codex)
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
source/tests/common/test_pretrained_backend.py (1)

36-39: Add a regression test for uppercase .PRETRAINED suffix.

Please add a case like parse_pretrained_alias("DPA-3.2-5M.PRETRAINED") to keep parser behavior aligned with backend suffix detection rules and prevent regressions.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@source/tests/common/test_pretrained_backend.py` around lines 36 - 39, Add a
regression case to the existing test_parse_pretrained_alias_invalid: call
parse_pretrained_alias("DPA-3.2-5M.PRETRAINED") inside the same
assertRaises(ValueError) context so the parser is validated to reject the
uppercase .PRETRAINED suffix; update the test in
source/tests/common/test_pretrained_backend.py alongside
test_parse_pretrained_alias_invalid using the same pattern to ensure behavior
matches backend suffix detection rules.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@deepmd/pretrained/deep_eval.py`:
- Around line 31-34: The suffix check for pretrained aliases is currently
case-sensitive; change it to compare lowercased values so aliases like
DPA-3.2-5M.PRETRAINED pass. Use alias_lower = alias.lower() (or equivalent) and
test alias_lower.endswith(suffix) while keeping the original alias when slicing
to produce model_name (alias, suffix, model_file, model_name are the identifiers
to update).

---

Nitpick comments:
In `@source/tests/common/test_pretrained_backend.py`:
- Around line 36-39: Add a regression case to the existing
test_parse_pretrained_alias_invalid: call
parse_pretrained_alias("DPA-3.2-5M.PRETRAINED") inside the same
assertRaises(ValueError) context so the parser is validated to reject the
uppercase .PRETRAINED suffix; update the test in
source/tests/common/test_pretrained_backend.py alongside
test_parse_pretrained_alias_invalid using the same pattern to ensure behavior
matches backend suffix detection rules.

ℹ️ Review info

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 27ac825 and 7303412.

📒 Files selected for processing (3)
  • deepmd/backend/pretrained.py
  • deepmd/pretrained/deep_eval.py
  • source/tests/common/test_pretrained_backend.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • deepmd/backend/pretrained.py

Authored by OpenClaw (model: custom-chat-jinzhezeng-group/gpt-5.3-codex)
@coderabbitai coderabbitai bot left a comment

♻️ Duplicate comments (1)
deepmd/pretrained/deep_eval.py (1)

31-34: ⚠️ Potential issue | 🟡 Minor

Make alias suffix matching case-insensitive.

The suffix guard is still case-sensitive, so uppercase aliases can be detected upstream but rejected here.

💡 Proposed fix
```diff
 def parse_pretrained_alias(model_file: str) -> str:
     """Extract model name from ``*.pretrained`` alias string."""
     alias = Path(model_file).name
     suffix = ".pretrained"
-    if not alias.endswith(suffix):
+    if not alias.lower().endswith(suffix):
         raise ValueError(f"Invalid pretrained alias: {model_file}")
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deepmd/pretrained/deep_eval.py` around lines 31 - 34, The suffix check is
case-sensitive; update the guard that uses alias and suffix to perform a
case-insensitive comparison (e.g., compare
alias.lower().endswith(suffix.lower())) and when raising the ValueError, report
the offending alias variable (alias) rather than model_file to give the correct
context; adjust the check around the existing suffix, alias, and ValueError
usage in deep_eval.py accordingly.
🧹 Nitpick comments (1)
deepmd/pretrained/deep_eval.py (1)

33-37: Address Ruff TRY003 for repeated inline exception messages.

Consider using a small dedicated exception class and raise that instead of repeating formatted messages inline.

♻️ Proposed refactor
```diff
+class InvalidPretrainedAliasError(ValueError):
+    """Raised when a pretrained alias is malformed."""
+
+    def __init__(self, model_file: str) -> None:
+        super().__init__(f"Invalid pretrained alias: {model_file}")
+
+
 def parse_pretrained_alias(model_file: str) -> str:
@@
-    if not alias.lower().endswith(suffix):
-        raise ValueError(f"Invalid pretrained alias: {model_file}")
+    if not alias.lower().endswith(suffix):
+        raise InvalidPretrainedAliasError(model_file)
@@
     model_name = alias[: -len(suffix)]
     if not model_name:
-        raise ValueError(f"Invalid pretrained alias: {model_file}")
+        raise InvalidPretrainedAliasError(model_file)
```

As per coding guidelines, **/*.py: Always run ruff check . and ruff format . before committing changes or CI will fail.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deepmd/pretrained/deep_eval.py` around lines 33 - 37, Replace the repeated
inline ValueError messages in deep_eval.py by defining a small dedicated
exception class (e.g., PretrainedAliasError) near the top of the module and
raising that class from the code paths that currently call raise
ValueError(f"Invalid pretrained alias: {model_file}") (both where model_file
validation fails and where model_name is empty); update the two occurrences to
raise PretrainedAliasError(model_file) (or a descriptive message) and run ruff
check . and ruff format . before committing to satisfy lint/format rules.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@deepmd/pretrained/deep_eval.py`:
- Around line 31-34: The suffix check is case-sensitive; update the guard that
uses alias and suffix to perform a case-insensitive comparison (e.g., compare
alias.lower().endswith(suffix.lower())) and when raising the ValueError, report
the offending alias variable (alias) rather than model_file to give the correct
context; adjust the check around the existing suffix, alias, and ValueError
usage in deep_eval.py accordingly.

---

Nitpick comments:
In `@deepmd/pretrained/deep_eval.py`:
- Around line 33-37: Replace the repeated inline ValueError messages in
deep_eval.py by defining a small dedicated exception class (e.g.,
PretrainedAliasError) near the top of the module and raising that class from the
code paths that currently call raise ValueError(f"Invalid pretrained alias:
{model_file}") (both where model_file validation fails and where model_name is
empty); update the two occurrences to raise PretrainedAliasError(model_file) (or
a descriptive message) and run ruff check . and ruff format . before committing
to satisfy lint/format rules.

ℹ️ Review info

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7303412 and 397a451.

📒 Files selected for processing (1)
  • deepmd/pretrained/deep_eval.py

- parse aliases with case-insensitive suffix check
- add dedicated InvalidPretrainedAliasError
- extend backend test to cover uppercase suffix

Authored by OpenClaw (model: custom-chat-jinzhezeng-group/gpt-5.3-codex)
@njzjz njzjz requested review from iProzd and wanghan-iapcm March 1, 2026 22:53

codecov bot commented Mar 1, 2026

Codecov Report

❌ Patch coverage is 69.09871% with 72 lines in your changes missing coverage. Please review.
✅ Project coverage is 82.17%. Comparing base (f959a53) to head (386ff68).
⚠️ Report is 11 commits behind head on master.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| deepmd/pretrained/download.py | 69.16% | 37 Missing ⚠️ |
| deepmd/pretrained/deep_eval.py | 65.00% | 21 Missing ⚠️ |
| deepmd/pretrained/entrypoints.py | 46.15% | 7 Missing ⚠️ |
| deepmd/backend/pretrained.py | 81.48% | 5 Missing ⚠️ |
| deepmd/entrypoints/main.py | 33.33% | 2 Missing ⚠️ |
Additional details and impacted files
Additional details and impacted files
```diff
@@            Coverage Diff             @@
##           master    #5277      +/-   ##
==========================================
+ Coverage   82.00%   82.17%   +0.17%     
==========================================
  Files         750      760      +10     
  Lines       75215    76261    +1046     
  Branches     3615     3659      +44     
==========================================
+ Hits        61680    62670     +990     
- Misses      12372    12422      +50     
- Partials     1163     1169      +6      
```

☔ View full report in Codecov by Sentry.

@wanghan-iapcm wanghan-iapcm left a comment

Please add documentation for how to use dp pretrained

@github-actions github-actions bot added the Docs label Mar 3, 2026
@njzjz-bot njzjz-bot left a comment

Addressed the latest review feedback in commit 60882f5.

  • Added end-user documentation for dp pretrained usage: doc/model/pretrained.md and linked it from doc/model/index.rst.
  • Kept CLI print(path) output intentionally for shell/script usage and added an info log in pretrained_entrypoint.
  • Clarified deepmd/backend/pretrained.py as an internal virtual backend used only for .pretrained alias dispatch (not a user-selectable compute backend).

Thanks for the suggestions.

@njzjz-bot njzjz-bot force-pushed the feat/pretrained-integration branch from 60882f5 to 6b19157 on March 3, 2026 09:27
- document dp pretrained download command and alias usage
- include model/pretrained.md in docs index
- keep CLI path output and add log message for visibility
- clarify pretrained backend is an internal virtual alias backend

Authored by OpenClaw (model: custom-chat-jinzhezeng-group/gpt-5.3-codex)
@njzjz-bot njzjz-bot force-pushed the feat/pretrained-integration branch from 6b19157 to 00a0ea2 on March 3, 2026 09:35
- accept built-in model names without .pretrained suffix in DeepPot/DeepEval
- register built-in model names as pretrained backend aliases
- update docs: DeepPot does not require prior dp pretrained download
- add tests for plain model-name alias resolution

Authored by OpenClaw (model: custom-chat-jinzhezeng-group/gpt-5.3-codex)
@njzjz njzjz requested a review from wanghan-iapcm March 5, 2026 14:55