
Neural XAI methods (deep learning) #51

@breimanntools

Description


Problem

Protein language models (PLMs) are powerful predictors, but their learned representations are not directly interpretable.

Goal

Map deep learning representations to CPP-based explanations.

Tasks

  • Implement Integrated Gradients, LRP, and DeepLIFT via captum (see the IG sketch after this list)
  • Extract residue-level importance scores from PLM embeddings
  • Map embeddings → CPP feature space via scale projection (see the linear-probe sketch below)
  • Apply TCAV to PLM embeddings (see the TCAV sketch below)
  • Compare DL attributions with CPP attributions (see the comparison sketch below)
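
A minimal sketch of Integrated Gradients on per-residue PLM embeddings via captum, covering the first two tasks. The shapes, the `head` classifier, and the zero baseline are illustrative assumptions; in practice `embeddings` would come from a frozen PLM forward pass (e.g. ESM-2), not random data.

```python
import torch
from captum.attr import IntegratedGradients

# Hypothetical shapes: 1 sequence, 100 residues, 320-dim embeddings.
seq_len, dim = 100, 320
embeddings = torch.randn(1, seq_len, dim, requires_grad=True)
head = torch.nn.Linear(dim, 2)  # illustrative classifier head

def forward_fn(emb):
    # Mean-pool residues into a sequence representation, then classify.
    return head(emb.mean(dim=1))

ig = IntegratedGradients(forward_fn)
attributions = ig.attribute(
    embeddings,
    baselines=torch.zeros_like(embeddings),  # zero-embedding baseline
    target=1,                                # class of interest
    n_steps=64,
)
# Collapse the embedding dimension to one importance score per residue.
residue_importance = attributions.sum(dim=-1).squeeze(0)  # shape: (seq_len,)
```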
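
For the embedding → CPP mapping (scale projection), one option is a linear probe from pooled embeddings to CPP feature values; the cross-validated R² per feature then indicates which physicochemical scales are linearly decodable from the PLM. The arrays and the Ridge regressor below are placeholder assumptions, not a fixed design.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Placeholder data: X_emb = pooled PLM embeddings, F_cpp = CPP feature
# values computed by AAanalysis for the same sequences.
n_seqs, dim, n_cpp = 500, 320, 100
rng = np.random.default_rng(0)
X_emb = rng.standard_normal((n_seqs, dim))
F_cpp = rng.standard_normal((n_seqs, n_cpp))

# Linear projection embedding -> CPP feature space.
probe = Ridge(alpha=1.0).fit(X_emb, F_cpp)

# Per-feature decodability: mean cross-validated R^2.
r2_per_feature = np.array([
    cross_val_score(Ridge(alpha=1.0), X_emb, F_cpp[:, j], cv=5).mean()
    for j in range(n_cpp)
])
```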
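
For TCAV, a minimal manual sketch rather than the captum.concept API: fit a linear classifier separating concept embeddings from random ones, take its weight vector as the concept activation vector (CAV), and report the fraction of inputs whose class-logit gradient has a positive component along it. The concept set (e.g. hydrophobic segments) and the small classifier head are hypothetical.

```python
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression

dim = 320
rng = np.random.default_rng(0)
concept_emb = rng.standard_normal((200, dim)) + 0.5  # concept examples
random_emb = rng.standard_normal((200, dim))         # random counterexamples

# 1. The CAV is the weight vector of a linear concept-vs-random classifier.
X = np.vstack([concept_emb, random_emb])
y = np.array([1] * 200 + [0] * 200)
cav = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]
cav = torch.tensor(cav / np.linalg.norm(cav), dtype=torch.float32)

# 2. TCAV score: fraction of inputs whose class-logit gradient points
#    in the CAV direction (positive directional derivative).
head = torch.nn.Sequential(  # illustrative nonlinear classifier head
    torch.nn.Linear(dim, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2)
)
test_emb = torch.randn(100, dim, requires_grad=True)
head(test_emb)[:, 1].sum().backward()
tcav_score = ((test_emb.grad @ cav) > 0).float().mean().item()
```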
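
For the DL-vs-CPP comparison, a rank correlation over per-residue scores is one simple quantitative check. Both arrays below are placeholders: in practice they would be the IG output from the first sketch and a per-residue aggregation of CPP feature importance (assumed to be derivable, since CPP features are position-resolved).

```python
import numpy as np
from scipy.stats import spearmanr

# Placeholder scores; substitute real IG and CPP-derived values.
rng = np.random.default_rng(0)
seq_len = 100
dl_importance = rng.standard_normal(seq_len)
cpp_importance = rng.standard_normal(seq_len)

rho, pval = spearmanr(dl_importance, cpp_importance)
print(f"Spearman rho = {rho:.2f} (p = {pval:.3g})")
```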

How this improves AAanalysis

  • Makes PLMs usable in interpretable workflows
  • Connects embedding space → physicochemical meaning
  • Enables hybrid models (DL + CPP)

Acceptance criteria

  • PLM predictions explainable via CPP features

Metadata


Labels

  • prio:3: Still important
  • topic:XAI: Explainability methods integrated into AAanalysis
  • type:feature: Implementation of feature
