=========================================
February 7th 2017 - Charged mult. 1 - CMS
=========================================
Emily chairs.
------------
Topics:
- Purpose of doing this study (in general)
- CMS tracking and detector operation with no magnet
- Detector tunes
- Poor text descriptors
- Eta dependence
-----------
Details:
- Why are we doing this study? The modeling of low-energy QCD in MC simulation is all phenomenological.
- What does "phenomenological" mean in simulation? There is no solid way to do it perturbatively.
- Asking about the shape dependency: the reason we do it is to understand the modeling and PU contributions; ALICE does it for understanding nuclear interactions. [spoilers - actually not true]
- Discussion of CMS's magnet being off.
- What is considered "forward" here? Outside of the tracker.
- Agglomerative vertex reco; also discussion of track reco algorithms having to be removed.
- What is a geometric distribution? This is not clear from the text.
- What's the main difference between Pythia and EPOS here? Pythia splits the cross-section into each process and then a percentage of events, whereas EPOS has no cut-off and a floating Pomeron.
- Why is there such a massive dependence on eta? [spoilers] This is actually answered really well by the ALICE paper. Essentially, diffractive processes show up in the more forward region, whereas hard scatter, which depends on the CoM energy, is predominantly central.
- Why don't experiments share event tunes (e.g. CUETP8S1)? Normally they would; many W/Z or Z+jets samples are standard.
- pg. 145, last section of 4.1: why do they correct for geometry by taking the per-bin ratio of the number of tracklets in (eta, z) bins? This is part of their measurement, isn't it? I also don't understand why they're referring to dead modules as having 'a large correction factor'. Why aren't the dead modules vetoed?
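The per-bin geometry correction questioned above can be sketched roughly like this (the function name, the `max_factor` cut, and the idea of flagging dead-module bins are my own assumptions for illustration, not the paper's procedure):

```python
import numpy as np

def geometry_correction(n_truth, n_reco, max_factor=2.0):
    """Per-(eta, z)-bin multiplicative correction from simulation.

    n_truth, n_reco: 2D arrays of counts of generated charged particles
    and reconstructed tracklets in (eta, z) bins.  Bins where the
    correction blows up (e.g. dead modules) are flagged rather than
    silently corrected; max_factor is an assumed threshold.
    """
    with np.errstate(divide="ignore", invalid="ignore"):
        corr = np.where(n_reco > 0, n_truth / np.maximum(n_reco, 1), np.nan)
    # candidate dead-module bins: huge correction or no reco tracklets at all
    flagged = (corr > max_factor) | np.isnan(corr)
    return corr, flagged
```

Flagging rather than correcting is what the "why aren't dead modules vetoed?" question amounts to: a bin needing a 10x correction is carrying almost no information from data.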
- Tracks: "only uses clusters whose z-length is compatible with a charged particle originating from nominal collision point" - but you don't know where that is in z, so it seems almost negligible since their pixel size is 150 um in z (so 750 um). They comment later that it would have to be 5x greater than what would be expected from the reconstructed PV.
- Why does tracking get worse with pT when there is no B-field? It doesn't necessarily; the paper does not explain their tracking algorithm. It's more likely that the particles decay on a shorter timescale if they have less energy and so can't be reco'd.
- How are duplicate tracks made? They are defined as having a very small angle and sharing hits. Again, without understanding CMS tracking it's not clear.
- The EPOS LHC predictions in Figure 2 have 'no uncertainty associated with its parameters'?!
- Section 5: Pythia tunes are based on data from several sources.
****************************************
NEXT WEEK: Charged multiplicity 2 - ATLAS
****************************************
----------
CliffNotes:
- Production of charged hadrons is driven by perturbative & non-perturbative QCD: saturation of the parton density, hadronization, soft diffractive scattering.
- A key point of this study is to understand how the eta dependence changes with collision energy, and how it relates to the soft and hard scattering contributions (so Figure 3b is the money plot).
- Hard scattering scales with collision energy.
- Used to tune MC.
- They steer the beams to be 3-sigma displaced to lower the lumi.
- BPTX: https://arxiv.org/pdf/0905.3648.pdf http://ajbell.web.cern.ch/ajbell/Documents/Home/IEEE.pdf - CFD w/ 50 ps timing resolution.
- Events are selected by:
  - coincidence in BPTX
  - one vertex reconstructed
- Track analysis:
  - uses all vertices
  - only uses clusters whose z-length is compatible with a charged particle originating from the nominal collision point (see questions); difference has to be >= 5 pixels
  - again, goes inside out.
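The cluster z-length cut can be sketched from basic geometry: a track from z_vtx crossing a barrel layer at radius r and position z_hit has |cot(theta)| = |z_hit - z_vtx| / r, so its pixel cluster should span roughly (sensor thickness) x |cot(theta)| along z. A minimal sketch, where the 285 um sensor thickness and the 5-pixel window are assumed numbers, not taken from the paper:

```python
def cluster_z_compatible(cluster_len_pixels, r_mm, z_hit_mm, z_vtx_mm,
                         thickness_um=285.0, pitch_um=150.0,
                         max_diff_pixels=5.0):
    """Rough sketch of a cluster z-length compatibility cut.

    Expected cluster extent along z ~ thickness * |z_hit - z_vtx| / r,
    converted to pixel units via the 150 um z-pitch.  Thickness and the
    5-pixel window are illustrative assumptions.
    """
    expected_um = thickness_um * abs(z_hit_mm - z_vtx_mm) / r_mm
    expected_pixels = expected_um / pitch_um
    return abs(cluster_len_pixels - expected_pixels) < max_diff_pixels
```

This also shows why the cut is loose: for central hits the expected length is well under one pixel, so only clusters much longer than expected (the "5x greater" comment) get rejected.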
  Match hits between layers 1 and 2, then layers 2 and 3. Angles between hits in layers 1&2 and 2&3 must be small; alpha for all 3.
  - fit the 3 hits with Newton's method, with d0 set to 0
  - a constraint is put on tracks to be within 20 cm of the nominal beamspot
  - there is a multiplicative factor, #truth particles / #sim reco tracks, to correct each eta bin
  - the third hit helps with background rejection
  - narrower viable eta range than the tracklet method, because of this
- Tracklet analysis:
  - basic idea: there should be a very small ∆phi, ∆eta for hits on consecutive layers from a real track
  - only uses the PV
  - uses pairs of hits and the background subtraction from Fig. 1b
  - goes inside out: first picking a hit in layer 1 and then matching it to layer 2
  - hits with the smallest ∆eta are paired first; no hit is used more than once
  - then backwards-extrapolated to the beam pipe to form a vertex candidate
  - after the collection is made, if any two vertex candidates are closer than 1.2 mm in z, they are clustered
  - the vertex cluster with the largest number of vertex candidates is the PV
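The tracklet-pairing and PV-clustering steps above can be sketched as follows (a toy illustration of the described logic, not CMS code; hit tuples, the straight-line r=0 extrapolation, and returning the mean z of the biggest cluster are my assumptions - only the smallest-∆eta-first pairing, no hit reuse, and the 1.2 mm clustering window come from the notes):

```python
def make_tracklets(hits1, hits2):
    """Pair layer-1 and layer-2 hits by smallest |d_eta|, no hit reused.

    Hits are (eta, phi, r, z) tuples; greedy pairing in order of
    increasing |d_eta|, as described in the notes.
    """
    pairs = sorted(((abs(e1 - e2), i, j)
                    for i, (e1, _, _, _) in enumerate(hits1)
                    for j, (e2, _, _, _) in enumerate(hits2)),
                   key=lambda t: t[0])
    used1, used2, tracklets = set(), set(), []
    for _, i, j in pairs:
        if i not in used1 and j not in used2:
            used1.add(i); used2.add(j)
            tracklets.append((hits1[i], hits2[j]))
    return tracklets

def vertex_z(h1, h2):
    """Extrapolate the straight line through two hits back to r = 0
    (no B-field, so tracks are straight)."""
    (_, _, r1, z1), (_, _, r2, z2) = h1, h2
    return z1 - r1 * (z2 - z1) / (r2 - r1)

def cluster_pv(z_candidates, max_sep_mm=1.2):
    """Cluster vertex candidates closer than 1.2 mm in z; the cluster
    with the most candidates is taken as the PV (here: its mean z)."""
    zs = sorted(z_candidates)
    clusters, current = [], [zs[0]]
    for z in zs[1:]:
        if z - current[-1] < max_sep_mm:
            current.append(z)
        else:
            clusters.append(current)
            current = [z]
    clusters.append(current)
    best = max(clusters, key=len)
    return sum(best) / len(best)
```

The greedy smallest-∆eta pairing is what makes the combinatorial-background subtraction of Fig. 1b necessary: nothing stops two unrelated hits from accidentally having a tiny ∆eta.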