Hi all,
I would like to ask: when I use ebm.eval_terms(.) to get local explanations or ebm.term_importances(.) to get global explanations, what exactly is the output score for an individual feature k in a classification task? Is it exactly the output of the shape function fk(.), or is it a normalized number?
If the feature score is a normalized number, is it the output of a sigmoid applied to the shape-function score, i.e. sigmoid(fk(x)) = 1 / (1 + exp(-fk(x)))?
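For context, here is a minimal sketch of what I am trying to verify (assuming a binary ExplainableBoostingClassifier; the dataset and variable names are just placeholders): whether summing the eval_terms(.) scores over all terms, adding intercept_, and applying a sigmoid reproduces predict_proba.

```python
import numpy as np
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True)
ebm = ExplainableBoostingClassifier()
ebm.fit(X, y)

# Local explanations: one score per (sample, term).
local_scores = ebm.eval_terms(X)            # shape: (n_samples, n_terms)

# Question: are these the raw shape-function outputs fk(xk) in log-odds space,
# i.e. does summing them plus the intercept recover the logit?
logits = local_scores.sum(axis=1) + ebm.intercept_
probs_from_terms = 1.0 / (1.0 + np.exp(-logits))    # sigmoid
print(np.allclose(probs_from_terms, ebm.predict_proba(X)[:, 1]))

# Global explanations: one importance value per term.
global_importances = ebm.term_importances()          # shape: (n_terms,)
print(dict(zip(ebm.term_names_, global_importances)))
```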
