Conversation

Force-pushed from 6b7c520 to 1b66adf (compare)
@lewisjared could you take a look at this when you get a chance? There are some log messages that I don't really understand. I rebased my PR onto the latest main; I am not sure if that had an effect. Log message
What command did you run to get that message?

That was
The GitHub Actions pre-commit check fails with:

> ruff check .
packages/ref/src/cmip_ref/config.py:303:25: RUF009 Do not perform function call `Factory` in dataclass defaults
|
301 | This value is overridden if a value is specified via the CLI.
302 | """
303 | paths: PathConfig = Factory(PathConfig)
| ^^^^^^^^^^^^^^^^^^^ RUF009
304 | db: DbConfig = Factory(DbConfig)
305 | executor: ExecutorConfig = Factory(ExecutorConfig)
|
packages/ref/src/cmip_ref/config.py:304:20: RUF009 Do not perform function call `Factory` in dataclass defaults
|
302 | """
303 | paths: PathConfig = Factory(PathConfig)
304 | db: DbConfig = Factory(DbConfig)
| ^^^^^^^^^^^^^^^^^ RUF009
305 | executor: ExecutorConfig = Factory(ExecutorConfig)
306 | metric_providers: list[MetricsProviderConfig] = Factory(default_metric_providers)
|
packages/ref/src/cmip_ref/config.py:305:32: RUF009 Do not perform function call `Factory` in dataclass defaults
|
303 | paths: PathConfig = Factory(PathConfig)
304 | db: DbConfig = Factory(DbConfig)
305 | executor: ExecutorConfig = Factory(ExecutorConfig)
| ^^^^^^^^^^^^^^^^^^^^^^^ RUF009
306 | metric_providers: list[MetricsProviderConfig] = Factory(default_metric_providers)
307 | _raw: TOMLDocument | None = field(init=False, default=None, repr=False)
|
packages/ref/src/cmip_ref/config.py:306:53: RUF009 Do not perform function call `Factory` in dataclass defaults
|
304 | db: DbConfig = Factory(DbConfig)
305 | executor: ExecutorConfig = Factory(ExecutorConfig)
306 | metric_providers: list[MetricsProviderConfig] = Factory(default_metric_providers)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RUF009
307 | _raw: TOMLDocument | None = field(init=False, default=None, repr=False)
308 | _config_file: Path | None = field(init=False, default=None, repr=False)
|
Found 4 errors.
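RUF009 flags dataclass defaults that are produced by a function call evaluated in the class body. If these are standard-library dataclasses, the usual fix is `field(default_factory=...)`, which defers the call until each instance is constructed. A minimal stdlib-only sketch, with class bodies invented for illustration (only the names come from the error output):

```python
from dataclasses import dataclass, field
from pathlib import Path


@dataclass
class PathConfig:
    # Illustrative body; the real PathConfig fields are not shown in the log.
    root: Path = Path(".")


@dataclass
class Config:
    # `paths: PathConfig = Factory(PathConfig)` triggers RUF009 because the
    # default is computed by a call at class-definition time. Using
    # `default_factory` defers that call to Config() construction, so each
    # instance gets its own fresh PathConfig.
    paths: PathConfig = field(default_factory=PathConfig)


a, b = Config(), Config()
assert a.paths is not b.paths  # each instance gets a fresh PathConfig
```

Note that if these classes are attrs classes rather than dataclasses, `Factory(...)` is the normal attrs idiom and the right fix may instead be a `# noqa: RUF009` or a ruff configuration that recognises attrs.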
|
@lewisjared I think this PR is ready. I was able to run it locally and produce the expected output files. Could you give it a review?
|
I added a commit to include a test for this new metric. It doesn't currently pass the bundle validation due to #258, but at least that validates that the metric runs to completion. |
|
@lee1043 I added some additional tests to increase the test coverage to an acceptable level |
Co-authored-by: Jared Lewis <jared@jared.kiwi.nz>
|
Thank you, @lewisjared, for the additions, which are very helpful! The metric is now working as expected, even for multiple input netCDF files. I also added some unit tests. I am getting the following error, but I think that is expected at the moment, as pointed out by @minxu74 (please correct me if I am wrong).

---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[25], line 2
1 definition.output_directory.mkdir(parents=True, exist_ok=True)
----> 2 direct_result = metric.run(definition=definition)
3 assert direct_result.successful
5 prettyprinter.pprint(direct_result)
File [/pscratch/sd/l/lee1043/git/climate-ref/packages/ref-metrics-pmp/src/cmip_ref_metrics_pmp/annual_cycle.py:235](https://jupyter.nersc.gov/user/lee1043/perlmutter-login-node-base/lab/tree/pscratch/sd/l/lee1043/PMP/REF_test/pscratch/sd/l/lee1043/git/climate-ref/packages/ref-metrics-pmp/src/cmip_ref_metrics_pmp/annual_cycle.py#line=234), in AnnualCycle.run(self, definition)
232 runs = [self.provider.run(cmd) for cmd in cmds]
233 logger.debug(f"runs: {runs}")
--> 235 return self.build_metric_result(definition)
File [/pscratch/sd/l/lee1043/git/climate-ref/packages/ref-metrics-pmp/src/cmip_ref_metrics_pmp/annual_cycle.py:207](https://jupyter.nersc.gov/user/lee1043/perlmutter-login-node-base/lab/tree/pscratch/sd/l/lee1043/PMP/REF_test/pscratch/sd/l/lee1043/git/climate-ref/packages/ref-metrics-pmp/src/cmip_ref_metrics_pmp/annual_cycle.py#line=206), in AnnualCycle.build_metric_result(self, definition)
204 png_files = list(png_directory.glob("*.png"))
205 data_files = list(data_directory.glob("*.nc"))
--> 207 cmec_output, cmec_metric = process_json_result(results_files[0], png_files, data_files)
209 return MetricExecutionResult.build_from_output_bundle(
210 definition,
211 cmec_output_bundle=cmec_output,
212 cmec_metric_bundle=cmec_metric,
213 )
File [/pscratch/sd/l/lee1043/git/climate-ref/packages/ref-metrics-pmp/src/cmip_ref_metrics_pmp/pmp_driver.py:103](https://jupyter.nersc.gov/user/lee1043/perlmutter-login-node-base/lab/tree/pscratch/sd/l/lee1043/PMP/REF_test/pscratch/sd/l/lee1043/git/climate-ref/packages/ref-metrics-pmp/src/cmip_ref_metrics_pmp/pmp_driver.py#line=102), in process_json_result(json_filename, png_files, data_files)
100 logger.info(f"cmec_output: {pretty_repr(cmec_output)}")
101 logger.info(f"cmec_metric: {pretty_repr(cmec_metric)}")
--> 103 return CMECOutput(**cmec_output), CMECMetric(**cmec_metric)
File [/pscratch/sd/l/lee1043/git/climate-ref/.venv/lib/python3.11/site-packages/pydantic/main.py:214](https://jupyter.nersc.gov/user/lee1043/perlmutter-login-node-base/lab/tree/pscratch/sd/l/lee1043/PMP/REF_test/pscratch/sd/l/lee1043/git/climate-ref/.venv/lib/python3.11/site-packages/pydantic/main.py#line=213), in BaseModel.__init__(self, **data)
212 # `__tracebackhide__` tells pytest and some other tools to omit this function from tracebacks
213 __tracebackhide__ = True
--> 214 validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
215 if self is not validated_self:
216 warnings.warn(
217 'A custom validator is returning a value other than `self`.\n'
218 "Returning anything other than `self` from a top level model validator isn't supported when validating via `__init__`.\n"
219 'See the `model_validator` docs (https://docs.pydantic.dev/latest/concepts/validators/#model-validators) for more details.',
220 stacklevel=2,
221 )
ValidationError: 1 validation error for CMECMetric
Value error, Unknown dimension values: {'mae_xy', 'std_xy_devzm', 'rms_xy', 'std-obs_xyt', 'mean-obs_xy', 'bias_xy', 'mean_xy', 'rms_devzm', 'rms_y', 'std_xy', 'std_xyt', 'rms_xyt', 'std-obs_xy', 'cor_xy', 'std-obs_xy_devzm', 'rmsc_xy'} [type=value_error, input_value={'ACCESS-CM2': {'default'...': {'units': 'mm/day'}}}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.10/v/value_error
|
@lewisjared All CI tests except codecov/patch have passed. Should I merge the PR, or would you prefer to add more tests to increase the coverage percentage?
Yes, that is expected. I'd appreciate it if you and @minxu74 could come up with a fix for #258 (personally, I'd just drop the calendar months from the results within the REF provider for now). Currently we have a lot of xfails for the integration tests, which I'd like to clean up so we can see when the providers don't work as expected. Those integration tests are the most useful tests, as they are what users will see.
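The "drop the calendar months" workaround could be sketched as a small filter applied to the results dictionary before the CMEC metric bundle is constructed and validated. Everything here is hypothetical and stdlib-only: the helper name and the allow-list are invented for illustration, not taken from the REF codebase.

```python
# Hypothetical sketch: keep only result keys that the bundle validator
# accepts as dimension values, discarding extras such as calendar months.
# The allow-list below is an illustrative subset, not the real schema.
ALLOWED_STATISTICS = {"bias_xy", "rms_xy", "cor_xy", "mae_xy"}


def drop_unknown_statistics(results: dict, allowed: set[str]) -> dict:
    """Return a copy of `results` keeping only allowed statistic keys."""
    return {key: value for key, value in results.items() if key in allowed}


raw = {"bias_xy": 0.1, "rms_xy": 0.3, "jan": 1.0, "feb": 2.0}
filtered = drop_unknown_statistics(raw, ALLOWED_STATISTICS)
assert filtered == {"bias_xy": 0.1, "rms_xy": 0.3}
```

Applied just before `CMECMetric(**cmec_metric)` is called, a filter like this would keep the unrecognised keys from reaching the Pydantic validator, at the cost of silently discarding those values.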
|
OK, let's merge this, noting that the data requirements aren't actually what is intended yet, but that requires the Or operator. I'll follow up with that fix.
Description
PR generated for tracking changes (not ready until marked as ready)
Checklist
Please confirm that this pull request has done the following:
changelog/