Conversation
If this doesn't affect the production, can we run it over the files of the old production and generate the uncut histograms?
Yes, I agree with Oliver. I would check it, because I am worried about how the high-momentum bins with low statistics would be treated by GenieGenerator during the simulation.
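To make the statistics concern concrete: the relative Poisson uncertainty of a bin scales as 1/sqrt(N), so sparsely populated high-momentum bins carry large fluctuations that anything sampling from the histogram will inherit. A toy sketch (the bin contents below are invented for illustration, not taken from the production):

```python
import math

# Hypothetical bin contents of a momentum spectrum (entries per bin).
# The numbers are made up for illustration only.
bin_entries = {"0-10 GeV": 100_000, "10-50 GeV": 5_000, "50-100 GeV": 12}

for label, n in bin_entries.items():
    rel_unc = 1.0 / math.sqrt(n)  # relative Poisson uncertainty ~ 1/sqrt(N)
    print(f"{label}: N={n}, relative uncertainty ~ {rel_unc:.1%}")
```

With only a dozen entries, the last bin fluctuates at the ~30% level, so any weight derived from it is correspondingly unstable.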
How do we move forward on this?
Actually, I have some questions:

Right now it's pretty ad hoc what is done with Pythia and what with EvtGen. This is a physics question, which should be studied and discussed at a physics meeting.
Then let's try to do this properly. Again, probably something we should discuss at a physics meeting, but here it is much clearer that we need to change what we do.
Are we sure this is the step (or the only step) for the production of these histograms? I also see the same weird cut-off in extractNeutrinosAndUpdateWeight.py.
You are right, thank you! extractNeutrinos only fills the histograms, while MakeDecay runs the whole processing chain, so the first script is the one we should use to test the effect of the cut on the existing files from past simulations. For all the other comments, I agree on using gevgen_fnal. Ideally, we would adopt the same procedure used in SND@LHC, where GENIE takes care of the whole pipeline at once, from the incoming neutrino flux produced in the main target to the interacting neutrino spectra in the target geometry. However, when I tried it a few years ago I could not load the geometry, due to the large number of SHiP volumes.
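The comparison described above can be sketched as filling the P-Pt histogram twice, with and without the suspicious cut, and counting the entries the cut removes. The kinematics here are generated randomly purely for illustration; in the real test they would come from the existing simulation files, and the binning below is invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical neutrino kinematics; in the real test these would be read
# from the existing simulation files.
p = rng.exponential(scale=20.0, size=100_000)   # total momentum [GeV]
pt = rng.exponential(scale=0.5, size=100_000)   # transverse momentum [GeV]

p_edges = np.linspace(0.0, 400.0, 81)   # 5 GeV bins, so 50 GeV is a bin edge
pt_edges = np.linspace(0.0, 5.0, 51)

# "Cut" version: reproduces the suspicious 50 GeV ceiling.
mask = p < 50.0
h_cut, _, _ = np.histogram2d(p[mask], pt[mask], bins=(p_edges, pt_edges))

# "Uncut" version: fill every neutrino.
h_uncut, _, _ = np.histogram2d(p, pt, bins=(p_edges, pt_edges))

print(f"entries removed by the 50 GeV cut: {h_uncut.sum() - h_cut.sum():.0f}")
```

Comparing the two histograms bin by bin (or their projections) then shows directly which part of the spectrum the cut has been discarding.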
Did you make a note of what exactly the issues are? The geometry is under our control; if the way we describe it causes problems, we can fix that. Please check with @eduard322, who presented this and other issues today: https://indico.cern.ch/event/1594362/#2-neutrino-simulation-and-part
@eduard322 @antonioiuliano2 how do we proceed here?
Still no clue from my side, unfortunately. I was not able to find any relevant note on the gevgen_fnal attempt, except that we had issues in both the Advanced SNDLHC configuration and the SHiP configuration, due to the high number of volumes to process (high granularity).
Any progress on this?
Hello, I have made some test gevgen_fnal simulations and so far I have not encountered any showstopper. What I am doing is:
Then, the steps are as before:
I am now going to check whether anything is out of place, and then I will compare the flux histograms with and without the energy cut.
Thank you for this update!
@antonioiuliano2 Any news?
Hello. Yes, I was able to run gevgen_fnal directly with Hanae's test background production (the 33 and 34 folders of CERN_combined_weight) as the input flux. The best approach is to skip the TH2D histogram entirely as input and provide the flux as a GSimple ntuple instead. I will report the output of my tests on Monday, and then prepare the pull request according to the comments and input received there.
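A minimal illustration of why a per-neutrino ntuple is preferable to a TH2D as flux input: the binned histogram keeps only bin sums, while per-event records preserve the exact kinematics and weights (real GSimple files additionally carry parent and decay-vertex information). All numbers and the binning below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Per-neutrino flux records, standing in for the entries of a GSimple ntuple.
energies = rng.exponential(scale=25.0, size=1_000)   # neutrino energy [GeV]
weights = rng.uniform(0.5, 1.5, size=1_000)          # per-event flux weight

# Collapsing the same flux into a histogram keeps only bin sums
# (and entries above the last edge are silently lost, much like a hard cut).
edges = np.linspace(0.0, 200.0, 41)
hist, _ = np.histogram(energies, bins=edges, weights=weights)

# From the records, any per-event quantity is still recoverable exactly...
mean_e_exact = float(np.average(energies, weights=weights))

# ...from the histogram, only bin-level approximations remain.
centers = 0.5 * (edges[:-1] + edges[1:])
mean_e_binned = float((hist * centers).sum() / hist.sum())

print(f"exact weighted mean: {mean_e_exact:.2f} GeV, "
      f"binned approximation: {mean_e_binned:.2f} GeV")
```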
Thanks a lot for the update, @antonioiuliano2. Looking forward to the presentation on Monday.
How do we proceed with this, @eduard322 @antonioiuliano2?
Even if we do not plan to use them anymore for the full production simulations,
Ok. What does this mean for the pull request?
For the pull request, as pointed out by @anupama-reghunath, we first need to ensure the same change is applied to all related code; otherwise we leave some scripts with the cut and some without it. Looking through the scripts, I see it is applied in:
They all use the exact same cut, for the same histograms.
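One way to guarantee the scripts stay consistent (a sketch only; the module name, constant, and helper are invented, not the actual repository layout) is to keep the cut in a single shared helper that every filling script imports:

```python
# Sketch of a shared module (hypothetical name: kinematic_cuts.py) that all
# histogram-filling scripts would import, so the cut lives in exactly one place.
MAX_P_GEV = None  # previously 50.0; None disables the momentum cut entirely


def passes_momentum_cut(p_gev: float) -> bool:
    """Return True if a neutrino with total momentum p_gev should be filled."""
    return MAX_P_GEV is None or p_gev < MAX_P_GEV


# Usage inside a filling loop (pseudocode; nu, hist, and weight are placeholders):
#     if passes_momentum_cut(nu.P()):
#         hist.Fill(nu.P(), nu.Pt(), weight)
print(passes_momentum_cut(60.0))  # -> True, since the cut is disabled
```

Removing (or reinstating) the cut then touches one line instead of every script, which avoids exactly the "some scripts with the cut, some without" situation described above.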
Fixed the 50 GeV cut for the production of the neutrino P-Pt histograms. I would investigate further what exactly this script does, and what the reason for the cut was, before using it. I don't think this is urgent: it does not affect the production at all.