Nuhist cut#958

Open
eduard322 wants to merge 2 commits into ShipSoft:master from eduard322:nuhist_cut

Conversation

@eduard322
Contributor

Fixed the 50 GeV cut in the production of the neutrino P-Pt histograms. I would investigate further what exactly this script does and what the reason for the cut was before using it. I don't think this is urgent: it does not affect the production at all.
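As an illustration of what a fill-time momentum cut does to such a histogram, here is a toy sketch (all names, ranges, and spectra below are hypothetical, not taken from the actual script): entries above the cut never enter the histogram, so anything that later samples from it can never produce a neutrino above the cut.

```python
import numpy as np

def fill_p_pt(p, pt, p_cut=None, bins=(40, 40), p_range=(0, 400), pt_range=(0, 10)):
    """Fill a toy 2D P-Pt histogram, optionally dropping entries above p_cut (GeV)."""
    p = np.asarray(p, dtype=float)
    pt = np.asarray(pt, dtype=float)
    if p_cut is not None:
        keep = p < p_cut            # the disputed fill-time cut
        p, pt = p[keep], pt[keep]
    h, _, _ = np.histogram2d(p, pt, bins=bins, range=[p_range, pt_range])
    return h

rng = np.random.default_rng(0)
p = rng.exponential(scale=30.0, size=100_000)    # toy falling momentum spectrum
pt = rng.rayleigh(scale=0.5, size=100_000)       # toy transverse momentum

h_cut = fill_p_pt(p, pt, p_cut=50.0)
h_full = fill_p_pt(p, pt)
# With a 10 GeV bin width, bins 5 and above cover p >= 50 GeV: they are all
# empty in h_cut, so a generator sampling h_cut never emits p above 50 GeV.
```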

@olantwin
Contributor

olantwin commented Dec 1, 2025

If this doesn't affect the production, can we run it over the files of the old production and generate the uncut histograms?

@antonioiuliano2
Contributor

Yes, I agree with Oliver.

I would check it, because I am worried about how the high-momentum bins with low statistics would be treated by GenieGenerator during the simulation.

@olantwin
Contributor

olantwin commented Dec 8, 2025

How do we move forward on this?

@eduard322
Contributor Author

Actually I have some questions:

  1. Why do we simulate decays using Pythia if we have EvtGen now?
  2. I still see many limitations in the way we generate the neutrino signal/bkg:
    a. The GENIE dataset used in all the bkg studies is simulated for nu-Fe56 interactions, which is not correct (though it gives us an undefined confidence that we are safe).
    b. The subsequent number of interactions, computed using the fixed cross-section for DIS, is also incorrect: at our energies, ~half of the interactions will be resonance scattering of neutrinos, which follows a different cross-section function. And you cannot use the basic formula for that either: the spectrum of interacted neutrinos differs from the initial neutrino spectrum, making the whole pipeline of the neutrino studies opaque.
    gevgen_fnal would be the best solution for that...
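Point 2b can be illustrated with a toy calculation (all cross-section shapes and constants below are invented for illustration, not real nu-N physics): since the interaction probability is proportional to sigma(E), the spectrum of interacted neutrinos is the flux re-weighted by sigma(E), so a single fixed cross-section cannot reproduce it.

```python
import numpy as np

def sigma_total(e):
    """Toy total cross-section (arbitrary units): a DIS-like part growing
    linearly with energy plus a resonance-like part that dies off above a
    few GeV. Both shapes and constants are invented for illustration."""
    sigma_dis = 0.7 * e
    sigma_res = 0.3 * e * np.exp(-e / 5.0)
    return sigma_dis + sigma_res

rng = np.random.default_rng(1)
e_flux = rng.exponential(scale=10.0, size=200_000)   # toy incoming flux spectrum

# Interaction probability ~ sigma(E): the interacted spectrum is the flux
# re-weighted by sigma(E), so its mean energy is pulled upwards relative
# to the flux mean.
w = sigma_total(e_flux)
mean_flux = e_flux.mean()
mean_interacted = np.average(e_flux, weights=w)

# Share of the toy "resonance" channel among the interacted neutrinos:
frac_res = np.average(0.3 * e_flux * np.exp(-e_flux / 5.0) / sigma_total(e_flux),
                      weights=w)
```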

@olantwin
Contributor

olantwin commented Dec 8, 2025

Actually I have some questions:

  1. Why do we simulate decays using Pythia if we have EvtGen now?

Right now it's pretty ad hoc what is done with Pythia and what with EvtGen. This is a physics question, which should be studied and discussed at a physics meeting.

  2. I still see many limitations in the way we generate the neutrino signal/bkg:
    a. The GENIE dataset used in all the bkg studies is simulated for nu-Fe56 interactions, which is not correct (though it gives us an undefined confidence that we are safe).
    b. The subsequent number of interactions, computed using the fixed cross-section for DIS, is also incorrect: at our energies, ~half of the interactions will be resonance scattering of neutrinos, which follows a different cross-section function. And you cannot use the basic formula for that either: the spectrum of interacted neutrinos differs from the initial neutrino spectrum, making the whole pipeline of the neutrino studies opaque.
    gevgen_fnal would be the best solution for that...

Then let's try to do this properly. Again, probably something we should discuss at a physics meeting, but here it's much clearer that we need to change what we do.

@anupama-reghunath
Contributor

Are we sure this is the step (or the only step) for the production of these histograms? I also see the same weird cut-off in extractNeutrinosAndUpdateWeight.py.

@antonioiuliano2
Contributor

Are we sure this is the step (or the only step) for the production of these histograms? I also see the same weird cut-off in extractNeutrinosAndUpdateWeight.py.

You are right, thank you! extractNeutrinosAndUpdateWeight.py only fills the histograms, while MakeDecay runs the whole processing chain. So the first script is the one we should use to test the effect of the cut on the already existing past simulation files.

For all the other comments, I agree on using gevgen_fnal. Ideally, we would want to use the same procedure adopted in SND@LHC, where GENIE takes care of the whole pipeline at once, from the incoming neutrino flux produced in the main target to the interacting neutrino spectra in the target geometry.

However, when I tried it a few years ago I could not load the geometry, due to the large number of SHiP volumes...

@olantwin
Contributor

However, when I tried it a few years ago I could not load the geometry, due to the large number of SHiP volumes...

Did you make note of what exactly the issues are? The geometry is in our control, if the way we describe it causes problems, we can fix that.

Please check with @eduard322 , he presented this and other issues today: https://indico.cern.ch/event/1594362/#2-neutrino-simulation-and-part

@olantwin
Contributor

@antonioiuliano2,

Did you make note of what exactly the issues are? The geometry is in our control, if the way we describe it causes problems, we can fix that.

@olantwin
Contributor

@eduard322 @antonioiuliano2 how do we proceed here?

@antonioiuliano2
Contributor

Still no clue from my side, unfortunately. I was not able to find any relevant note on the gevgen_fnal attempt, except that we had some issues, both in the Advanced SNDLHC configuration and in the SHiP configuration, due to the high number of volumes to process (high granularity).
The best option is just to make a new attempt and take notes on everything weird I see. I can do it next week, between the SHiP Computing Workshop sessions.

@olantwin
Contributor

Still no clue from my side, unfortunately. I was not able to find any relevant note on the gevgen_fnal attempt, except that we had some issues, both in the Advanced SNDLHC configuration and in the SHiP configuration, due to the high number of volumes to process (high granularity). The best option is just to make a new attempt and take notes on everything weird I see. I can do it next week, between the SHiP Computing Workshop sessions.

Any progress on this?

@antonioiuliano2
Contributor

Still no clue from my side, unfortunately. I was not able to find any relevant note on the gevgen_fnal attempt, except that we had some issues, both in the Advanced SNDLHC configuration and in the SHiP configuration, due to the high number of volumes to process (high granularity). The best option is just to make a new attempt and take notes on everything weird I see. I can do it next week, between the SHiP Computing Workshop sessions.

Any progress on this?

Hello, I have made some test gevgen_fnal simulations and so far I have not encountered any showstopper. What I am doing is:

  • launching a dummy Particle Gun muon simulation to obtain a geometry file of our detector;
  • converting the geometry into a GDML file;
  • making the splines with gmkspl, with this geometry file as input;
  • launching gevgen_fnal, providing the neutrino flux histogram and the geometry file as input.

Then, the steps are as before:

  • convert the output ghep file into gst format with gntpc;
  • add the TH2D histogram to the ROOT file containing the gst TTree;
  • launch the usual run_simScript.py simulation with the GENIE input file.

I am now going to check whether there is anything out of place, and I will then move on to comparing the flux histograms with and without the energy cut.
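The chain described above can be summarized as an ordered command list. The GENIE tool names (gmkspl, gevgen_fnal, gntpc) are real, but every flag shown below is an assumption that depends on the GENIE version and the local setup; nothing is executed here, it is only a hedged checklist.

```python
# Hedged checklist of the gevgen_fnal chain above; all flags are illustrative
# and must be checked against the installed GENIE version.
STEPS = [
    ("dummy Particle Gun muon run to produce a geometry file",
     "python run_simScript.py ...        # exact options are setup-specific"),
    ("convert the ROOT geometry into a GDML file",
     'TGeoManager::Export("geometry.gdml")  # e.g. from a ROOT session'),
    ("build cross-section splines against this geometry",
     "gmkspl -f geometry.gdml ...        # flags assumed, check your GENIE version"),
    ("run gevgen_fnal with the neutrino flux and the geometry",
     "gevgen_fnal -g geometry.gdml ...   # plus flux and spline options"),
    ("convert the ghep output into gst format",
     "gntpc -i gntp.0.ghep.root -f gst"),
    ("attach the TH2D histogram and run the usual simulation",
     "python run_simScript.py ...        # with the GENIE input file"),
]

for i, (description, command) in enumerate(STEPS, start=1):
    print(f"step {i}: {description}\n    {command}")
```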

@olantwin
Contributor

olantwin commented Mar 5, 2026

Thank you for this update!

@olantwin
Contributor

@antonioiuliano2 Any news?

@antonioiuliano2
Contributor

Hello.

Yes, I was able to launch gevgen_fnal directly with Hanae's test background production (the 33 and 34 folders of CERN_combined_weight) as the input flux.

The best approach is to skip the TH2D histogram as input entirely, and to provide the flux as a GSimple ntuple instead.
This both removes the energy cut baked into the histogram and allows gevgen_fnal to compute the normalization directly. For example, for this test I generated all neutrino interactions for 1 year of data taking (4e+19 POT).
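As a toy check of that normalization (every number below except the quoted 4e+19 POT target is invented for illustration): once gevgen_fnal knows the POT represented by the flux input, the expected interaction count simply scales linearly with the target POT.

```python
# All values except the 4e19 POT year are hypothetical placeholders.
POT_TARGET = 4e19        # one year of data taking, as quoted above
POT_SIMULATED = 5e16     # POT represented by the flux sample (invented)
N_SAMPLE = 120           # interactions found in that sample (invented)

scale = POT_TARGET / POT_SIMULATED
n_expected = N_SAMPLE * scale
print(f"scale = {scale:g}, expected interactions per year = {n_expected:g}")
```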

I will report the output of my tests on Monday, and then I will prepare the Pull Request according to comments/input received there.

@olantwin
Contributor

Thanks a lot for the update, @antonioiuliano2. Looking forward to the presentation on Monday.

@olantwin olantwin added this to the 26.04 milestone Mar 31, 2026
@olantwin
Contributor

olantwin commented Apr 9, 2026

How do we proceed with this @eduard322 @antonioiuliano2 ?

@antonioiuliano2
Contributor

How do we proceed with this @eduard322 @antonioiuliano2 ?

Even if we do not plan to use them anymore for the full production simulations,
I would still have a look at the old and new production neutrino momentum 2D histograms with and without this cut, since they are useful info for quick comparisons of neutrino distributions.

@olantwin
Contributor

How do we proceed with this @eduard322 @antonioiuliano2 ?

Even if we do not plan to use them anymore for the full production simulations, I would still have a look at the old and new production neutrino momentum 2D histograms with and without this cut, since they are useful info for quick comparisons of neutrino distributions.

Ok. What does this mean for the pull request?

@antonioiuliano2
Contributor

How do we proceed with this @eduard322 @antonioiuliano2 ?

Even if we do not plan to use them anymore for the full production simulations, I would still have a look at the old and new production neutrino momentum 2D histograms with and without this cut, since they are useful info for quick comparisons of neutrino distributions.

Ok. What does this mean for the pull request?

For the pull request, as pointed out by @anupama-reghunath, we first need to ensure the same change is applied to all related code; otherwise we leave some scripts with the cut and some without it. Looking through the scripts, I see it is applied in:

They all use the exact same cut, for the same histograms.
