ZPrimeCombine
Interface to the Higgs Combine Tool for the
Z′ → ℓℓ search
User Manual
Jan-Frederik Schulte
Version 2.0.0, 2018/07/17
Contents
1 Introduction
2 Setup
3 Usage
  3.1 Input creation
  3.2 Running statistical procedures
    3.2.1 Binned limits
    3.2.2 Single mass/Λ points
    3.2.3 Spin-2 limits
    3.2.4 Bias study
    3.2.5 Signal injection and Look Elsewhere Effect
    3.2.6 Contact Interaction limits
    3.2.7 Job submission
  3.3 Output processing
1 Introduction
The ZPrimeCombine package, to be found on GitLab, provides an interface between the experimental results of the Z′ → ℓℓ analysis, as well as its non-resonant interpretation in terms of Contact Interactions, and the Higgs Combine Tool. This tool, referred to simply as "combine" in the following, is in turn an interface to the underlying statistical tools provided by RooStats. This document summarizes the functionality of the tool and gives instructions on how to use it to derive limits and significances for the analysis.
2 Setup
The current implementation of the package is based on version v7.0.10 of combine. CMSSW_8_1_0 is used to set up the environment, but only to ensure a consistent version of ROOT; combine does not rely on CMSSW itself. Combine is installed using the following commands:
export SCRAM_ARCH=slc6_amd64_gcc530
cmsrel CMSSW_8_1_0
cd CMSSW_8_1_0/src
cmsenv
git clone https://github.com/cms-analysis/HiggsAnalysis-CombinedLimit.git HiggsAnalysis/CombinedLimit
cd HiggsAnalysis/CombinedLimit
cd $CMSSW_BASE/src/HiggsAnalysis/CombinedLimit
git fetch origin
git checkout v7.0.10
scramv1 b clean; scramv1 b # always make a clean build
Detailed documentation of combine can be found on the combine Twiki page.
To finish the setup, simply clone the ZPrimeCombine repository from the link given above. Note that this is only possible for CMS members subscribed to the Z′ e-group. This documentation refers to version 1.0 of the framework, which can be checked out using git checkout v1.0. At the time of writing, all developments are merged into the master branch. This might change in the future, so be aware that you might have to check out a different branch to find the functionality you need.
3 Usage
The central entry point to the framework is the script runInterpretation.py. It steers both
the creation of the inputs given to combine as well as the execution of it, either locally or via
batch/grid jobs. Let’s have a look at its functionality:
Steering tool for Zprime -> ll analysis interpretation in combine

optional arguments:
  -h, --help            show this help message and exit
  -r, --redo            recreate datacards and workspaces for this
                        configuration
  -w, --write           create datacards and workspaces for this configuration
  -b, --binned          use binned dataset
  -s, --submit          submit jobs to cluster
  --signif              run significance instead of limits
  --LEE                 run significance on BG only toys to estimate LEE
  --frequentist         use frequentist CLs limits
  --hybrid              use frequentist-bayesian hybrid methods
  --plc                 use PLC for significance calculation
  -e, --expected        expected limits
  -i, --inject          inject signal
  --recreateToys        recreate the toy dataset for this configuration
  --crab                submit to crab
  -c CONFIG, --config CONFIG
                        name of the configuration to use
  -t TAG, --tag TAG     tag to label output
  --workDir WORKDIR     tells batch jobs where to put the datacards. Not for
                        human use!
  -n NTOYSEXP, --nToysExp NTOYSEXP
                        number of expected limits to be calculated in one job,
                        overwrites the expToys option from the config file.
                        Used for CONDOR jobs at LPC so far
  -m MASS, --mass MASS  mass point
  -L LAMBDA, --Lambda LAMBDA
                        Lambda values
  --CI                  calculate CI limits
  --usePhysicsModel     use PhysicsModel to set limits on Lambda
  --singlebin           use single bin counting for CI. The mass parameter now
                        designates the lower mass threshold
  --Lower               calculate lower limits
  --spin2               calculate limits for spin2 resonances
  --bias                perform bias study
In the following, the use of these options for different purposes is described. The most important parameter is -c, which tells the framework which configuration file to use for steering. It is the only mandatory argument, and the name of the configuration will be used to tag all inputs and results. The configuration files themselves are discussed in the next section. Another universal option is -t, which can be used to tag the in- and output of the tool.
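The option handling above corresponds to a standard argparse interface. The following sketch shows how a few of these flags could be consumed; it is illustrative only (the configuration name "combined2016" and the output-name pattern are assumptions, not taken from the tool).

```python
import argparse

# Minimal sketch of the option handling described above; the real
# runInterpretation.py defines many more flags (see the help output).
parser = argparse.ArgumentParser(description="steering tool sketch")
parser.add_argument("-c", "--config", required=True, help="name of the configuration to use")
parser.add_argument("-t", "--tag", default="", help="tag to label output")
parser.add_argument("-e", "--expected", action="store_true", help="expected limits")

# parse an example command line: -c combined2016 -e
args = parser.parse_args(["-c", "combined2016", "-e"])

# the configuration name (plus optional tag) labels all inputs and results
outName = "limits_%s%s" % (args.config, "_" + args.tag if args.tag else "")
```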
3.1 Input creation
The first task of the framework is to create the datacards and workspaces used as inputs for combine. To provide the framework with the necessary information, the user has to supply two different types of inputs. All experimental information is located in the input/ directory. Here, for each channel of the analysis, there is one channelConfig_channelName.py file. For example, for the barrel-barrel category in the dimuon channel of the 2016 result (EXO-16-047), it looks like the example shown below. Important things are commented throughout.
import ROOT, sys
ROOT.gROOT.SetBatch(True)
ROOT.gErrorIgnoreLevel = 1
from ROOT import *
from muonResolution import getResolution as getRes # external source for parametrization of dimuon resolution

nBkg = 1
dataFile = "input/dimuon_Moriond2017_BB.txt"

def addBkgUncertPrior(ws, label, channel, uncert):
    beta_bkg = RooRealVar('beta_%s_%s' % (label, channel), 'beta_%s_%s' % (label, channel), 0, -5, 5)
    getattr(ws, 'import')(beta_bkg, ROOT.RooCmdArg())
    uncert = 1. + uncert
    bkg_kappa = RooRealVar('%s_%s_kappa' % (label, channel), '%s_%s_kappa' % (label, channel), uncert)
    bkg_kappa.setConstant()
    getattr(ws, 'import')(bkg_kappa, ROOT.RooCmdArg())
    ws.factory("PowFunc::%s_%s_nuis(%s_%s_kappa, beta_%s_%s)" % (label, channel, label, channel, label, channel))
    ws.factory("prod::%s_%s_forUse(%s_%s, %s_%s_nuis)" % (label, channel, label, channel, label, channel))
# functionality to add an uncertainty to a background fit parameter

def provideSignalScaling(mass, spin2=False):
    nz = 53134 # from Alexander (80X Moriond ReReco)
    nsig_scale = 1376.0208367514358 # prescale/eff_z (167.73694/0.1219) -> derives the lumi
    eff = signalEff(mass, spin2)
    result = nsig_scale * nz * eff
    return result
# provides the scaling of the signal cross section to the Z peak so that we can
# set limits on the cross section ratio

def signalEff(mass, spin2=False):
    if spin2:
        eff_a = 1.020382
        eff_b = 1166.881533
        eff_c = 1468.989496
        eff_d = 0.000044
        return eff_a - eff_b / (mass + eff_c) - mass * eff_d
    else:
        if mass <= 600:
            a = 2.129
            b = 0.1268
            c = 119.2
            d = 22.35
            e = 2.386
            f = 0.03619
            from math import exp
            return a - b * exp(-(mass - c) / d) - e * mass ** (-f)
        else:
            eff_a = 2.891
            eff_b = 2.291e+04
            eff_c = 8294.
            eff_d = 0.0001247
            return eff_a - eff_b / (mass + eff_c) - mass * eff_d
# mass-dependent signal efficiency for the resonant search

def signalEffUncert(mass):
    if mass <= 600:
        a = 2.129
        b = 0.1268
        c = 119.2
        d = 22.38
        e = 2.386
        f = 0.03623
        from math import exp
        eff_default = a - b * exp(-(mass - c) / d) - e * mass ** (-f)
    else:
        eff_a = 2.891
        eff_b = 2.291e+04
        eff_c = 8294.
        eff_d = 0.0001247
        eff_default = eff_a - eff_b / (mass + eff_c) - mass * eff_d
    if mass <= 600:
        a = 2.13
        b = 0.1269
        c = 119.2
        d = 22.42
        e = 2.384
        f = 0.03596
        from math import exp
        eff_syst = a - b * exp(-(mass - c) / d) - e * mass ** (-f)
    else:
        eff_a = 2.849
        eff_b = 2.221e+04
        eff_c = 8166.
        eff_d = 0.0001258
        eff_syst = eff_a - eff_b / (mass + eff_c) - mass * eff_d
    effDown = eff_default / eff_syst
    return [1. / effDown, 1.0]
# one-sided signal efficiency uncertainty from high momentum efficiency loss

def provideUncertainties(mass):
    result = {}
    result["sigEff"] = signalEffUncert(mass)
    result["massScale"] = 0.01
    result["bkgUncert"] = 1.4
    result["res"] = 0.15
    result["bkgParams"] = {"bkg_a": 0.0008870490833826137, "bkg_b": 0.0735080058224163,
                           "bkg_c": 0.020865265760197774, "bkg_d": 0.13546622914957615,
                           "bkg_e": 0.0011148272017837235, "bkg_a2": 0.0028587764436821044,
                           "bkg_b2": 0.008506113769271665, "bkg_c2": 0.019418985270049097,
                           "bkg_e2": 0.0015616866215512754}
    return result
# provides all the systematic uncertainties for the resonant analysis

def provideUncertaintiesCI(mass):
    result = {}
    result["trig"] = 1.003
    result["zPeak"] = 1.05
    result["xSecOther"] = 1.07
    result["jets"] = 1.5
    result["lumi"] = 1.025
    result["stats"] = 0.0 # dummy values
    result["massScale"] = 0.0 # dummy values
    result["res"] = 0.0 # dummy values
    result["pdf"] = 0.0 # dummy values
    result["ID"] = 0.0 # dummy values
    result["PU"] = 0.0 # dummy values
    return result
# similar to above, but this time for the non-resonant analysis. A value of 0
# indicates that these uncertainties are mass-dependent and will be provided
# as external histograms

def getResolution(mass):
    result = {}
    params = getRes(mass)
    result['alphaL'] = params['alphaL']['BB']
    result['alphaR'] = params['alphaR']['BB']
    result['res'] = params['sigma']['BB']
    result['scale'] = params['scale']['BB']
    return result
# repackages the mass-dependent resolution into the format used in the limit tool

def loadBackgroundShape(ws, useShapeUncert=False):
    bkg_a = RooRealVar('bkg_a_dimuon_Moriond2017_BB', 'bkg_a_dimuon_Moriond2017_BB', 33.82)
    bkg_b = RooRealVar('bkg_b_dimuon_Moriond2017_BB', 'bkg_b_dimuon_Moriond2017_BB', 0.0001374)
    bkg_c = RooRealVar('bkg_c_dimuon_Moriond2017_BB', 'bkg_c_dimuon_Moriond2017_BB', 1.618e-07)
    bkg_d = RooRealVar('bkg_d_dimuon_Moriond2017_BB', 'bkg_d_dimuon_Moriond2017_BB', 3.657e-12)
    bkg_e = RooRealVar('bkg_e_dimuon_Moriond2017_BB', 'bkg_e_dimuon_Moriond2017_BB', 4.485)
    bkg_a2 = RooRealVar('bkg_a2_dimuon_Moriond2017_BB', 'bkg_a2_dimuon_Moriond2017_BB', 17.49)
    bkg_b2 = RooRealVar('bkg_b2_dimuon_Moriond2017_BB', 'bkg_b2_dimuon_Moriond2017_BB', 0.01881)
    bkg_c2 = RooRealVar('bkg_c2_dimuon_Moriond2017_BB', 'bkg_c2_dimuon_Moriond2017_BB', 1.222e-05)
    bkg_e2 = RooRealVar('bkg_e2_dimuon_Moriond2017_BB', 'bkg_e2_dimuon_Moriond2017_BB', 0.8486)
    bkg_a.setConstant()
    bkg_b.setConstant()
    bkg_c.setConstant()
    bkg_d.setConstant()
    bkg_e.setConstant()
    bkg_a2.setConstant()
    bkg_b2.setConstant()
    bkg_c2.setConstant()
    bkg_e2.setConstant()
    getattr(ws, 'import')(bkg_a, ROOT.RooCmdArg())
    getattr(ws, 'import')(bkg_b, ROOT.RooCmdArg())
    getattr(ws, 'import')(bkg_c, ROOT.RooCmdArg())
    getattr(ws, 'import')(bkg_d, ROOT.RooCmdArg())
    getattr(ws, 'import')(bkg_e, ROOT.RooCmdArg())
    getattr(ws, 'import')(bkg_a2, ROOT.RooCmdArg())
    getattr(ws, 'import')(bkg_b2, ROOT.RooCmdArg())
    getattr(ws, 'import')(bkg_c2, ROOT.RooCmdArg())
    getattr(ws, 'import')(bkg_e2, ROOT.RooCmdArg())
    # background systematics
    bkg_syst_a = RooRealVar('bkg_syst_a', 'bkg_syst_a', 1.0)
    bkg_syst_b = RooRealVar('bkg_syst_b', 'bkg_syst_b', 0.0)
    #bkg_syst_b = RooRealVar('bkg_syst_b', 'bkg_syst_b', 0.00016666666666)
    bkg_syst_a.setConstant()
    bkg_syst_b.setConstant()
    getattr(ws, 'import')(bkg_syst_a, ROOT.RooCmdArg())
    getattr(ws, 'import')(bkg_syst_b, ROOT.RooCmdArg())
    # background shape
    if useShapeUncert:
        bkgParamsUncert = provideUncertainties(1000)["bkgParams"]
        for uncert in bkgParamsUncert:
            addBkgUncertPrior(ws, uncert, "dimuon_Moriond2017_BB", bkgParamsUncert[uncert])
        ws.factory("ZPrimeMuonBkgPdf2::bkgpdf_dimuon_Moriond2017_BB(mass_dimuon_Moriond2017_BB, bkg_a_dimuon_Moriond2017_BB_forUse, bkg_b_dimuon_Moriond2017_BB_forUse, bkg_c_dimuon_Moriond2017_BB_forUse, bkg_d_dimuon_Moriond2017_BB_forUse, bkg_e_dimuon_Moriond2017_BB_forUse, bkg_a2_dimuon_Moriond2017_BB_forUse, bkg_b2_dimuon_Moriond2017_BB_forUse, bkg_c2_dimuon_Moriond2017_BB_forUse, bkg_e2_dimuon_Moriond2017_BB_forUse, bkg_syst_a, bkg_syst_b)")
        ws.factory("ZPrimeMuonBkgPdf2::bkgpdf_fullRange(massFullRange, bkg_a_dimuon_Moriond2017_BB_forUse, bkg_b_dimuon_Moriond2017_BB_forUse, bkg_c_dimuon_Moriond2017_BB_forUse, bkg_d_dimuon_Moriond2017_BB_forUse, bkg_e_dimuon_Moriond2017_BB_forUse, bkg_a2_dimuon_Moriond2017_BB_forUse, bkg_b2_dimuon_Moriond2017_BB_forUse, bkg_c2_dimuon_Moriond2017_BB_forUse, bkg_e2_dimuon_Moriond2017_BB_forUse, bkg_syst_a, bkg_syst_b)")
    else:
        ws.factory("ZPrimeMuonBkgPdf2::bkgpdf_dimuon_Moriond2017_BB(mass_dimuon_Moriond2017_BB, bkg_a_dimuon_Moriond2017_BB, bkg_b_dimuon_Moriond2017_BB, bkg_c_dimuon_Moriond2017_BB, bkg_d_dimuon_Moriond2017_BB, bkg_e_dimuon_Moriond2017_BB, bkg_a2_dimuon_Moriond2017_BB, bkg_b2_dimuon_Moriond2017_BB, bkg_c2_dimuon_Moriond2017_BB, bkg_e2_dimuon_Moriond2017_BB, bkg_syst_a, bkg_syst_b)")
        ws.factory("ZPrimeMuonBkgPdf2::bkgpdf_fullRange(massFullRange, bkg_a_dimuon_Moriond2017_BB, bkg_b_dimuon_Moriond2017_BB, bkg_c_dimuon_Moriond2017_BB, bkg_d_dimuon_Moriond2017_BB, bkg_e_dimuon_Moriond2017_BB, bkg_a2_dimuon_Moriond2017_BB, bkg_b2_dimuon_Moriond2017_BB, bkg_c2_dimuon_Moriond2017_BB, bkg_e2_dimuon_Moriond2017_BB, bkg_syst_a, bkg_syst_b)")
    return ws
# provides two background shapes, one in the mass window for the tested resonance
# mass, the other for the full mass range. Log-normal priors can be added to
# the shape parameters if desired
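The prior added by addBkgUncertPrior multiplies each background shape parameter by a factor kappa^beta (the PowFunc/prod factory calls), with kappa = 1 + uncert and beta the constrained nuisance parameter. This follows the usual log-normal convention; the numbers below are illustrative, not taken from the analysis.

```python
# Log-normal shape-parameter prior as wired up by addBkgUncertPrior:
# each parameter p is replaced by p * kappa**beta, with kappa = 1 + uncert.
def lognormal_scale(nominal, uncert, beta):
    kappa = 1.0 + uncert
    return nominal * kappa ** beta

p = 33.82     # nominal bkg_a value from the config above
rel = 0.0009  # illustrative relative uncertainty

central = lognormal_scale(p, rel, 0.0)   # beta = 0: nominal value
up      = lognormal_scale(p, rel, 1.0)   # beta = +1: one sigma up
down    = lognormal_scale(p, rel, -1.0)  # beta = -1: one sigma down
```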
For each channel of the analysis (i.e. for each subcategory of the dielectron and dimuon channels), one such config has to be provided. The other input to the tool is located in the cfgs/ directory. Here, the scanConfiguration_ConfigName.py files contain all information needed to steer the actual interpretation, setting the channels to be considered, the mass range to be scanned, and similar features. Shown here is the example for the combination of all four subcategories of the 2016 result.
leptons = "elmu" # dilepton combination, can also be elel or mumu
systematics = ["sigEff", "bkgUncert", "massScale", 'res', "bkgParams"] # list of systematic uncertainties to be considered
correlate = False # should uncertainties be treated as correlated between channels?
masses = [[5, 200, 1000], [10, 1000, 2000], [20, 2000, 5500]] # mass scan parameters for observed limits/p-value scans
massesExp = [[100, 200, 600, 500, 4, 500000], [100, 600, 1000, 250, 8, 500000], [250, 1000, 2000, 100, 20, 50000], [250, 2000, 5600, 100, 20, 500000]] # mass scan parameters for expected limits
libraries = ["ZPrimeMuonBkgPdf2_cxx.so", "ZPrimeEleBkgPdf3_cxx.so", "PowFunc_cxx.so", "RooCruijff_cxx.so"] # libraries to be added to the combine call
channels = ["dielectron_Moriond2017_EBEB", "dielectron_Moriond2017_EBEE", "dimuon_Moriond2017_BB", "dimuon_Moriond2017_BE"] # list of channels to be considered
# Markov Chain parameters
numInt = 500000
numToys = 6
exptToys = 1000
width = 0.006 # signal width (here 0.6%)
submitTo = "FNAL" # computing resources used for batch jobs. Right now Purdue and the LPC Condor cluster are supported
LPCUsername = "jschulte" # username at LPC, necessary to run CONDOR jobs there
binWidth = 10 # bin width for binned limits
CB = True # use non-Gaussian signal resolution shape. Does not necessarily have to be a CB anymore
signalInjection = {"mass": 750, "width": 0.1000, "nEvents": 0, "CB": True} # parameters for toy generation for MC studies
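The masses triplets above define the mass scan. Assuming each entry reads [step, low, high] (an interpretation based on the values, not stated explicitly in the config), they could expand into scan points as follows:

```python
# Expand the mass scan triplets into individual mass points,
# assuming each entry is [step size, lower edge, upper edge].
masses = [[5, 200, 1000], [10, 1000, 2000], [20, 2000, 5500]]

points = []
for step, low, high in masses:
    points.extend(range(low, high, step))
points.append(masses[-1][2])  # include the end point of the last scan range
```

The finer steps at low mass reflect the narrower resonance width there; the step widens as the resolution grows with mass.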
Using this input, the framework will first create the datacards for the single channels and afterwards the combined datacards. For local running, this can be triggered with the -w or -r options. In the first case, the datacards are produced and the program exits without performing any statistical procedures. In the latter case, the datacards are reproduced on the fly before the statistical interpretations are performed. If a local batch system is used, the input will be created inside the individual jobs to increase performance. When tasks are submitted to CRAB, the input is created locally.
3.2 Running statistical procedures
If not called with the -w option (which only writes datacards, see above), the default behaviour of runInterpretation.py is to calculate observed limits using the Bayesian approach. For this, the mass binning and the configuration of the algorithm given in the scan configuration are used. There are numerous command line options to modify the statistical methods used:
-e switches the limit calculation to expected limits
--signif switches to the calculation of p-values using an approximate formula. For the full calculation with the ProfileLikelihoodCalculator, use this option in conjunction with --plc
--frequentist uses frequentist calculations for limits or p-values
--hybrid uses the frequentist-Bayesian hybrid method for the p-values
Apart from these fundamental options, there are several further modifications that can be made.
3.2.1 Binned limits
The --binned option triggers the use of binned instead of unbinned datasets. For this purpose, binned templates are generated from the background and signal PDFs. The binning is hardcoded in the createInputs.py script. The advantage of this approach is a large improvement in speed; the disadvantage is the very long time needed to generate the templates in the first place.
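The template generation amounts to integrating the smooth PDFs over fixed-width bins. A generic sketch, with a simple falling exponential standing in for the actual background shape (the shape, range, and bin width here are illustrative; only the 10 GeV bin width echoes the binWidth config parameter):

```python
import math

# Turn a smooth shape into a binned template by integrating it bin by bin.
def template(pdf_integral, lo, hi, bin_width):
    nbins = int((hi - lo) / bin_width)
    edges = [lo + i * bin_width for i in range(nbins + 1)]
    return [pdf_integral(a, b) for a, b in zip(edges[:-1], edges[1:])]

# analytic integral of exp(-x/tau) between a and b (stand-in background)
tau = 500.0
integral = lambda a, b: tau * (math.exp(-a / tau) - math.exp(-b / tau))

bins = template(integral, 200.0, 1200.0, 10.0)  # 10 GeV bins
total = sum(bins)
```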
3.2.2 Single mass/Λ points
To run a single mass point (or Λ point in the case of CI) instead of the full scan, the option -m mass (-L Lambda) can be used.
3.2.3 Spin-2 limits
If run with the --spin2 option, the signal efficiency for spin-2 resonances will be used.
3.2.4 Bias study
If run with the --bias option, a bias study is performed. Two sets of toy datasets are generated from the datacards, with signal strength µ = 0 and µ = 1. These datasets are then fit with the background + signal model and the fitted µ̂ is recorded. The fit results can then be used to determine whether there are biases in the modelling, as we expect the average ⟨µ̂⟩ to be 0 and 1, respectively.
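The logic can be illustrated with a toy calculation in which a Gaussian smearing stands in for the full signal + background fit (the toy counts and resolution below are arbitrary choices, not values from the analysis):

```python
import random

# Toy illustration of the bias check: generate pseudo-experiments with an
# injected signal strength, "fit" each one (here: Gaussian smearing around
# the injected value), and compare the average fitted mu-hat to the input.
random.seed(42)

def average_mu_hat(mu_injected, n_toys=2000, fit_resolution=0.3):
    fitted = [random.gauss(mu_injected, fit_resolution) for _ in range(n_toys)]
    return sum(fitted) / len(fitted)

bias0 = average_mu_hat(0.0) - 0.0  # should be compatible with zero
bias1 = average_mu_hat(1.0) - 1.0  # should be compatible with zero
```

An unbiased model gives averages statistically compatible with the injected 0 and 1; a significant offset points to a problem in the signal or background modelling.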
3.2.5 Signal injection and Look Elsewhere Effect
For performance studies, pseudo-data can be generated on which the statistical interpretation is then performed. When run with the --inject option, pseudo background and signal events are generated according to the respective PDFs. The background is normalized to the yield observed in data in each channel, but can be scaled to a desired luminosity. The signal parameters used for the injection are taken from the scan configuration. The signal events are distributed between the sub-channels according to the signal efficiencies. The default behaviour is that a toy dataset for a given configuration is not overwritten if the program is rerun, so that the same toy dataset can be processed with the same configuration. The option --recreateToys can be used to force the dataset to be overwritten.
To account for the look elsewhere effect, the --LEE option can be used. Many background-only datasets will be generated and p-value scans will be performed. The tool readPValueToys.py can be used to harvest the large number of resulting output files.
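The global significance these background-only toys enable can be sketched as follows: the global p-value is the fraction of toys whose most significant local fluctuation (smallest p-value anywhere in the scan) beats the one observed in data. The numbers below are purely illustrative.

```python
# Sketch of the look-elsewhere correction from background-only toys.
# Each entry is the smallest local p-value found in one toy's mass scan.
toy_min_pvalues = [0.04, 0.20, 0.11, 0.003, 0.35, 0.08, 0.6, 0.015, 0.25, 0.09]
p_local_observed = 0.01  # smallest local p-value seen in data (illustrative)

# Global p-value: fraction of toys that fluctuate at least as strongly.
n_beat = sum(1 for p in toy_min_pvalues if p <= p_local_observed)
p_global = float(n_beat) / len(toy_min_pvalues)
```

In practice many thousands of toys are needed for a precise global p-value, which is why the harvesting step handles a large number of output files.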
3.2.6 Contact Interaction limits
Most of the options discussed so far are focused on the statistical analysis for the resonant search. To switch the program to the analysis of the CI signal, the option --CI can be used. By default, this runs the multi-bin shape analysis for constructive interference. With the option --singlebin, it can be switched to single-bin counting above a certain threshold. The threshold has to be chosen with the -m option.
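The single-bin counting idea can be sketched with a simple Bayesian upper limit on the signal yield above the threshold: with n observed events and an expected background b, integrate the Poisson likelihood over the signal yield s with a flat prior (this is a generic illustration of the method, not the combine implementation; all numbers are illustrative).

```python
import math

# Poisson probability of observing n events for expectation lam
def poisson(n, lam):
    return math.exp(-lam) * lam ** n / math.factorial(n)

# 95% CL Bayesian upper limit on the signal yield s (flat prior),
# found by numerically integrating the posterior up to the quantile.
def upper_limit(n, b, cl=0.95, s_max=50.0, steps=50000):
    ds = s_max / steps
    grid = [poisson(n, i * ds + b) for i in range(steps)]
    norm = sum(grid)
    acc = 0.0
    for i, w in enumerate(grid):
        acc += w
        if acc / norm >= cl:
            return i * ds
    return s_max

limit = upper_limit(n=0, b=0.0)  # textbook zero-event case: about 3.0
```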
Setting limits not on the signal cross section but on the CI scale Λ will be possible with the --usePhysicsModel option, combined with the --Lower option to convert the limits from upper into lower limits (not supported in combine for the MarkovChainMC calculation). This is still under development at this stage.
3.2.7 Job submission
As the calculations used for the statistical interpretations are computationally expensive, parallelization is unavoidable. The framework supports two options for it: submission to local batch systems and to CRAB. The -s option triggers submission to a batch system. At the moment, only the Purdue system is supported. However, the job configurations can easily be used with any qsub system and should be adaptable to other systems as well.
Less specific, and giving access to much more computing resources, is submission via CRAB. At the moment, only expected and observed Bayesian limits are supported. On the upside, submission is very easy: just run the tool with the --crab option. A valid GRID certificate is required.
3.3 Output processing
The output of the combine tool consists of ROOT files which contain the resulting limit or p-value as entries in a ROOT tree. The script createLimitCard.py is available to convert these files into simple ASCII files. This tool takes a variety of arguments, very similar to the main runInterpretation.py script:
optional arguments:
-h, --help show this help message and exit
-c CONFIG, --config CONFIG
configuration name (default: )
--input INPUT folder with input root files (default: )
-t TAG, --tag TAG tag (default: )
--exp write expected limits (default: False)
--signif write pValues (default: False)
--injected injected (default: False)
--binned binned (default: False)
--frequentist use results from frequentist limits (default: False)
--hybrid use results from hybrid significance calculations
(default: False)
--merge merge expected limits first (default: False)
--CI is CI (default: False)
Basically, you have to match the configuration to the one used to create the output. Then you have the choice of either providing the location of the output to be processed with the --input option, or leaving the tool to figure it out by itself. In the latter case, it will take the newest result produced on a local batch system matching the configuration.
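The "newest result matching the configuration" logic amounts to globbing for files tagged with the configuration name and picking the most recently modified one. A sketch, with a hypothetical file pattern (the tool's actual naming scheme is internal to the framework):

```python
import glob
import os

# Find the newest ROOT output file whose name contains the configuration
# name; the "*config*.root" pattern is a hypothetical stand-in.
def newest_result(config, directory="."):
    candidates = glob.glob(os.path.join(directory, "*%s*.root" % config))
    if not candidates:
        return None
    return max(candidates, key=os.path.getmtime)
```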
Results produced via CRAB have to be downloaded from the respective resource with the script harvestCRABResults.py, which will download and properly rename/merge the files so they can be used with the createLimitCard.py script:
optional arguments:
-h, --help show this help message and exit
-c CONFIG, --config CONFIG
configuration name (default: )
-t TAG, --tag TAG tag (default: )
-u USER, --user USER name of the user running the script (default: )
--obs rename observed limits (default: False)
--merge merge expected limits (default: False)
The plot scripts makeLimitPlot.py, makeLimitPlotWidths.py, makeLimitPlotCI.py, makeRLimitPlotCI.py, and makePValuePlot.py can be used to create plots from the ASCII files previously created.