Added regression metrics and support for multi-modal data using dictionaries #1685

Merged
AntonioCarta merged 6 commits into ContinualAI:master from spartanjoax:master on Mar 11, 2025

Conversation

@spartanjoax

Hi all,

I have created a fork to contribute to Avalanche. I work on a regression problem with multi-modal data and noticed that Avalanche did not support this natively. I have added RMSE and R2 metrics, along with forgetting metrics for both, and modified the library to handle batches that are dictionaries of tensors rather than single tensors, in order to support multi-modal data. This second point addresses issue #1678. I have also formatted the code with Black. Please let me know if everything looks good.

Best regards,
Joaquín
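The dictionary-of-tensors handling described above can be sketched with a small recursive helper (a hypothetical illustration, not Avalanche's actual implementation; shown on plain values, but the same structure applies to torch tensors):

```python
def map_batch(batch, fn):
    """Apply fn to every leaf of a batch that may be a single value,
    a dict of values (multi-modal data), or a list/tuple of values."""
    if isinstance(batch, dict):
        return {key: map_batch(value, fn) for key, value in batch.items()}
    if isinstance(batch, (list, tuple)):
        return type(batch)(map_batch(value, fn) for value in batch)
    return fn(batch)

# With torch tensors, moving a multi-modal batch to a device would be
# map_batch(batch, lambda t: t.to(device)) instead of batch.to(device).
```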

@spartanjoax
Author

It seems the readthedocs build failed because it uses an older version of sklearn that calculates the RMSE through the deprecated mean_squared_error function, while my proposed metric calls the new root_mean_squared_error directly. Should I modify my code to use the deprecated function?

@AntonioCarta
Collaborator

Thanks, the changes look good. Can you also add the new classes to the API documentation?

Should I modify my code to use the deprecated function?

Is it possible to support both? Either by checking the scikit-learn version or by try/except around the import. Otherwise we can add a version constraint, although it's better to avoid that if possible.
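The try/except approach could look like this (a sketch, assuming the only incompatibility is that root_mean_squared_error is missing in scikit-learn < 1.4):

```python
try:
    # scikit-learn >= 1.4 exposes RMSE as a dedicated function
    from sklearn.metrics import root_mean_squared_error
except ImportError:
    # older releases: derive RMSE from mean_squared_error instead
    import math

    from sklearn.metrics import mean_squared_error

    def root_mean_squared_error(y_true, y_pred):
        return math.sqrt(mean_squared_error(y_true, y_pred))
```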

@AntonioCarta
Collaborator

Hi, it seems that there is an import error:

  File "/__w/avalanche/avalanche/avalanche/evaluation/metrics/regression_forgetting.py", line 19, in <module>
    from metrics import TaskAwareRMSE, TaskAwareR2
ModuleNotFoundError: No module named 'metrics'
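For context, a bare `from metrics import ...` asks Python for a top-level module named `metrics`, which is only found when running from inside the metrics directory; a relative import (`from .metrics import ...`) or the full dotted path resolves the sibling module instead. A minimal illustration of the lookup (module names here are illustrative):

```python
import importlib.util


def is_importable(name):
    # find_spec returns None when no module by that name is on sys.path
    return importlib.util.find_spec(name) is not None


# A bare "from metrics import ..." needs a top-level 'metrics' module on
# sys.path; inside a package, "from .metrics import ..." or the dotted
# path "from avalanche.evaluation.metrics import ..." is what resolves.
print(is_importable("importlib"))  # stdlib modules are always found
```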

@spartanjoax
Author

I'll fix it right away

@spartanjoax
Author

I have changed RMSE and R2 to be calculated directly with Torch functions. I also fixed an error in the joint training strategy, as it was not training.
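For reference, both metrics reduce to short formulas; a plain-Python sketch of the definitions (the PR computes them with Torch tensor operations):

```python
import math


def rmse(y_true, y_pred):
    # root mean squared error: sqrt of the mean of squared residuals
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)


def r2(y_true, y_pred):
    # coefficient of determination: 1 - SS_res / SS_tot
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```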

@coveralls

Pull Request Test Coverage Report for Build 13662013602

Details

  • 198 of 443 (44.7%) changed or added relevant lines in 27 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage decreased (-0.2%) to 50.807%

Files with missing coverage (covered lines / changed or added lines / %):
avalanche/benchmarks/scenarios/deprecated/classification_scenario.py 2 4 50.0%
avalanche/benchmarks/scenarios/deprecated/lazy_dataset_sequence.py 4 6 66.67%
avalanche/benchmarks/scenarios/detection_scenario.py 2 4 50.0%
avalanche/benchmarks/scenarios/generic_scenario.py 2 4 50.0%
avalanche/benchmarks/utils/data.py 2 4 50.0%
avalanche/benchmarks/utils/data_attribute.py 2 4 50.0%
avalanche/benchmarks/utils/dataset_definitions.py 2 4 50.0%
avalanche/benchmarks/utils/dataset_utils.py 2 4 50.0%
avalanche/evaluation/metric_definitions.py 0 2 0.0%
avalanche/evaluation/metrics/labels_repartition.py 0 2 0.0%
Totals:
Change from base Build 13457900900: -0.2%
Covered Lines: 14896
Relevant Lines: 29319

💛 - Coveralls

@spartanjoax
Author

Hi @AntonioCarta, could you please help me understand which test failed? Thanks in advance.

@AntonioCarta
Collaborator

Hi, it's just the style checker. I will fix that in a separate commit. Thanks for your contribution!

@AntonioCarta AntonioCarta merged commit 9136ac3 into ContinualAI:master Mar 11, 2025
8 of 9 checks passed
