New ht thcovmat #2126
base: master
Conversation
Greetings from your nice fit 🤖 !
Check the report carefully, and please buy me a ☕ , or better, a GPU 😉!
n3fit/src/n3fit/layers/DIS.py
Outdated
```python
fktable_arr,
dataset_name,
boundary_condition=None,
operation_name="NULL",
```
Keyword arguments (kwargs) you don't use at this level don't have to be specified. That's what `**kwargs` is for.
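A minimal sketch of the suggestion (class and argument names are illustrative, not the actual n3fit API): a subclass that never touches `boundary_condition` or `operation_name` can simply forward them.

```python
# Hypothetical sketch: the base layer owns the full signature; the subclass
# forwards everything it does not use via **kwargs instead of re-listing it.
class Observable:
    def __init__(self, fktable_arr, dataset_name, boundary_condition=None, operation_name="NULL"):
        self.fktable_arr = fktable_arr
        self.dataset_name = dataset_name
        self.boundary_condition = boundary_condition
        self.operation_name = operation_name


class DIS(Observable):
    # No need to re-declare kwargs DIS never touches at this level.
    def __init__(self, fktable_arr, dataset_name, **kwargs):
        super().__init__(fktable_arr, dataset_name, **kwargs)


layer = DIS([1.0], "NMC", operation_name="RATIO")
```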
n3fit/src/n3fit/layers/DIS.py
Outdated
```
This function is very similar to `compute_ht_parametrisation` in
validphys.theorycovariance.construction.py. However, the latter
accounts for shifts in the 5pt prescription. As of now, this function
is meant to work only for DIS NC data, using the ABMP16 result.
```
Maybe reference Eq. 6 of https://arxiv.org/pdf/1701.05838
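For context, the power correction referenced there enters additively (this is a schematic rendering of the ABMP16-style form, not a transcription of their exact equation):

```latex
% Schematic twist-4 power correction, cf. Eq. 6 of arXiv:1701.05838:
% the leading-twist (target-mass-corrected) structure function receives an
% additive term suppressed by 1/Q^2.
F_i(x, Q^2) \;=\; F_i^{\mathrm{LT,TMC}}(x, Q^2) \;+\; \frac{H_i(x)}{Q^2},
\qquad i = 2, L
```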
n3fit/src/n3fit/layers/DIS.py
Outdated
```python
x = self.exp_kinematics['kin1']
y = self.exp_kinematics['kin3']
Q2 = self.exp_kinematics['kin2']
N2, NL = 1  # compute_normalisation_by_experiment(self.dataname, x, y, Q2)
```
Where is this function? Also, I think you can do `N2 = NL = 1`, but not `N2, NL = 1`.
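The distinction, spelled out (plain Python semantics, nothing project-specific): chained assignment binds both names to one value, whereas tuple unpacking requires an iterable on the right-hand side.

```python
# Chained assignment: both names end up bound to the same value.
N2 = NL = 1

# Tuple unpacking of a bare int fails, because 1 is not iterable.
try:
    N2, NL = 1
except TypeError as err:
    unpack_error = str(err)
```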
DIS.py is not in sync with the new way shifts are computed, so the function that adds the shift to the theory predictions can be removed. I'll work on that later, as we discussed. I should have deleted it already.
```python
if (sam_t0 := file_content.get('sampling')) is not None:
    SETUPFIT_FIXED_CONFIG['separate_multiplicative'] = sam_t0.get(
        'separate_multiplicative', False
    )
```
Why do we need this here?
I don't remember why I implemented that... I think I can remove it.
```python
# NOTE: from the perspective of the fit scalevar and ht uncertainties are the same since
# they are saved under the same name
```
Ah yes, we may want to start thinking of a long-term solution as well. A tree of if-statements that grows exponentially with each source of theory uncertainty (scales, alphas, HT) is not super nice.
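One possible long-term shape, purely as a sketch (names are illustrative, not the validphys implementation): a registry keyed by uncertainty source, so each new source adds one entry instead of multiplying branches.

```python
# Illustrative dispatch-table sketch: one shift builder per uncertainty source,
# looked up by name, instead of nested if-statements per combination.
SHIFT_BUILDERS = {}


def register(source):
    def wrap(func):
        SHIFT_BUILDERS[source] = func
        return func
    return wrap


@register("scale")
def scale_shifts(predictions):
    # Dummy 10% shift standing in for the real scale-variation shifts.
    return [0.1 * p for p in predictions]


@register("ht")
def ht_shifts(predictions):
    # Dummy 5% shift standing in for the real power-correction shifts.
    return [0.05 * p for p in predictions]


def collect_shifts(sources, predictions):
    # One loop handles any combination of sources; no branching explosion.
    return {s: SHIFT_BUILDERS[s](predictions) for s in sources}


shifts = collect_shifts(["scale", "ht"], [1.0, 2.0])
```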
Hi @roy, I've updated the code so that the covmat for power corrections can now be constructed as in the case of scale variations. Please have a look and let me know if anything is unclear. I'd appreciate it if you could double-check the functions that compute the shifts, in particular where normalisation factors and conversion factors are used. There are a few things I still don't like here and there, but feel free to propose modifications.
```diff
 return s

 @check_correct_theory_combination
-def covs_pt_prescrip(combine_by_type, point_prescription):
+def covs_pt_prescrip(
```
I had to add additional dependencies here for the power-correction case. I don't know if there is a cleverer solution; maybe extracting these quantities from the context?
```python
point_prescription,
pdf: PDF,
power_corr_dict,
pc_included_prosc,
```
I think the tests fail because there is no default for `pc_included_procs` (note the typo in the code, `pc_included_prosc`) and `pc_excluded_exps`. What do you suggest? Should I add two functions in config.py that default these two values when they are not specified in the runcard?
Hi @achiefa, thanks for this.
I've left a few comments on the code. I haven't looked into the 1000-line `higher_twist_functions` in much detail, but one thing I'd like to mention: please avoid using `commondata_table`. This is basically an object containing all commondata information like in the old days, which was necessary to add for some validphys functions, but the idea is to instead get only the information you want.
For instance, the process type is part of the metadata now (no need to load the data and uncertainties!).
Same for the kinematics (although by the time you need the kinematics you have often already read the whole commondata). You can get the kinematics directly with `cd.metadata.load_kinematics()` and then you already have `x`, `Q`, and `y` for DIS as god intended!
```yaml
# fit:
#   from_: reference
# speclabel:
#   from_: reference
```
Please remove the comments (and modify the comments at the top of the file to explain what this runcard is).
The meta information is also missing.
```python
GEV_CM2_CONV = 3.893793e10
GF = 1.1663787e-05  # Fermi's constant [GeV^-2]
Mh = 0.938  # Proton's mass in GeV/c^2
MW = 80.398  # W boson mass in GeV/c^2
```
This information should come from the theory, I believe?
Yes, I agree. But I'm afraid that fetching this information from the theory would overcomplicate the code. Maybe you know a better way to do so?
```python
from validphys.loader import Loader

tid = Loader().check_theoryID(40_000_000)
mw = tid.get_description()["MW"]
```
You can also load the theories directly without checking them (which will download the full theory), but I guess you already have some theory locally to use these functions?
This PR allows for the inclusion of theory uncertainties due to the effect of power corrections. The theory covariance matrix is constructed by computing the shifts for the theoretical predictions as done for the MHOUs. The shift is computed at the level of the structure functions. Then, the shifts for the structure functions are combined to reconstruct the shift for the xsec. For this reason, the calculation of the shift depends on the dataset. Currently, only 1-JET and DIS (NC and CC) are supported.
At the level of the runcard, power corrections are specified as follows.
For each process/structure function implemented, we need to specify a series of information:

- `ht` is useful to identify the type of power correction, in particular when computing the posterior.
- `yshift` are the magnitudes of the prior.
- `nodes` contains the points where the prior is shifted.

The array `pc_included_procs` specifies the processes for which the shifts are computed. I've also implemented the possibility to exclude some particular datasets within the selected processes, which can be done by specifying their names in `pc_excluded_exps`.

The key `func_type` is temporary and will be deleted once we decide which function to use to construct the prior.

All the relevant details for this PR are contained in the module `higher_twist_functions.py`. For each observable for which the shift has to be computed, I implemented a factory that constructs a function which will then compute the shift. I thought it was kind of necessary to make the shift dependent on the shift parameters (`yshift` and `nodes`) and on the prescription according to which we vary the parameters (which for now is fixed), and not on the kinematics (I think this is known as currying in computer science).

TO DO

- `func_type`
- What happens if `power_corrections` is specified but the parameters are not given?
- Rename `ht` to `name` (?)
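The factory idea described above can be sketched as follows. This is a hypothetical illustration only: the function name, the piecewise-linear prior between the nodes, and the 1/Q² suppression are my assumptions, not the actual `higher_twist_functions.py` implementation.

```python
# Hypothetical sketch of the "shift factory": fix the prior parameters
# (nodes, yshift) once, and return a function of the kinematics only
# (currying, as described in the PR text).
def make_ht_shift(nodes, yshift):
    """Return shift(x, q2) with the prior parameters baked in."""

    def shift(x, q2):
        # Piecewise-linear interpolation of the prior between the nodes,
        # constant outside the node range, suppressed by 1/Q^2.
        if x <= nodes[0]:
            h = yshift[0]
        elif x >= nodes[-1]:
            h = yshift[-1]
        else:
            for x0, x1, y0, y1 in zip(nodes, nodes[1:], yshift, yshift[1:]):
                if x0 <= x <= x1:
                    h = y0 + (y1 - y0) * (x - x0) / (x1 - x0)
                    break
        return h / q2

    return shift


# The kinematics-independent parameters are fixed once...
shift_f2 = make_ht_shift(nodes=[0.1, 0.5, 0.9], yshift=[0.0, 1.0, 0.0])
# ...and the resulting function is evaluated per data point, e.g. shift_f2(x, Q2).
```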