Poster Presentations
1. DeeLeMa: Missing information search with Deep Learning for Mass estimation
BAN, Kayoung (Yonsei University)
We present a deep learning network for optimally searching for events with missing kinematic information at colliders. Our network can be used for precision measurements of the Standard Model or for searches for new physics beyond it in events involving neutrinos or dark matter. Specifically, we reconstruct the invisible momenta in a $t\bar{t}$-like antler event topology and reconstruct the particle mass spectrum of the event using only the decay topology information. We show that our deep learning network, named \deelema, can substantially improve the reconstruction of not only the masses but also the momenta.
2. Development of a hybrid-photodetector for the DARWIN experiment
HASEGAWA, Tomoya (Nagoya University)
Dark matter, such as Weakly Interacting Massive Particles (WIMPs), could interact with nuclei in ordinary matter, and direct detection experiments search for such interactions between dark matter and the detector medium. In particular, experiments using liquid xenon as the detector target are leading the searches for heavy WIMPs. However, neutrons originating from radioactive contaminants in the photomultiplier tubes (PMTs) can be an irreducible background, which limits the discovery potential for WIMPs. It is therefore important to develop a new photodetector with lower radioactivity. In this study, I will present R&D results for the hybrid photodetector that we are now developing.
3. CMB Constraints on the Early Dark Energy Phase Transition
HAYASHI, Shintaro (Nagoya University)
Recently, early dark energy (EDE) has been intensively studied as a solution to the Hubble tension. We focus on an EDE model in which the EDE starts to decay through a phase transition. In this case, new density perturbations are generated because the decay of the EDE proceeds stochastically. We show the evolution of the perturbations directly generated by the phase transition, as well as the observational constraints on such perturbations and on the EDE model from CMB data.
4. BAO measurement of three-dimensional correlation function for photometric surveys
ISHIKAWA, Keitaro (Nagoya University)
Galaxy surveys are of two types: spectroscopic and photometric. Spectroscopic observations provide precise galaxy redshifts, but only bright galaxies can be observed. Photometric observations, on the other hand, can image many faint galaxies at once, so high statistical accuracy can be expected from the large sample size. However, photometric observations carry large uncertainties in the estimated galaxy redshifts. For that reason, in our study we use baryon acoustic oscillations (BAO) to quantify the level of systematic error (photo-z error) that must be achieved in photometric observations. To this end, we modeled the photo-z effect by integrating a Gaussian photo-z distribution into the three-dimensional two-point correlation function, and we verified the model using mock catalogs. As a result, we found the following three things. First, when the magnitude of the photo-z error in the data is known, the position of the BAO peak can be constrained to 4% with a 2σ statistical error even for simulation data including a photo-z error equivalent to 50 Mpc/h, because the photo-z effect can be incorporated into the model. Even if the magnitude of the photo-z error is not known, fitting with the spec-z template or the 1% photo-z template does not bias the BAO location (although the statistical error becomes larger). Second, up to a photo-z error corresponding to about 50 Mpc/h, the 3D two-point correlation function can be used for cosmological tests.
Finally, even if the photo-z distribution is not Gaussian but skewed, we showed that the skewness does not affect the BAO measurement as long as the mean and variance of the distribution are reproduced correctly. In summary, in our presentation we will lay out the conditions for measuring the BAO robustly, and then discuss the levels of photo-z error required for photometric observations.
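The photo-z modeling described above can be written schematically as a convolution of the correlation function with a Gaussian line-of-sight scatter. Assuming a pairwise photo-z displacement $\Delta$ with an effective dispersion $\sigma_\Delta$ (notation ours, for illustration):

```latex
\xi_{\rm obs}(s_\perp, s_\parallel)
  = \int d\Delta\,
    \frac{1}{\sqrt{2\pi}\,\sigma_\Delta}
    \exp\!\left(-\frac{\Delta^2}{2\sigma_\Delta^2}\right)
    \xi\!\left(s_\perp,\, s_\parallel - \Delta\right)
```

Only the line-of-sight separation $s_\parallel$ is smeared; the transverse separation $s_\perp$ is unaffected by redshift errors.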
5. Simulation of stochastic inflation
MIZUGUCHI, Yurino (Nagoya University)
Inflation can resolve several problems of Big Bang cosmology, such as the horizon problem and the flatness problem. It can also generate the initial density fluctuations that seed the present cosmic structures. To adopt linear perturbation theory for the growth of the fluctuations, one has to assume that they are much smaller than the spatially homogeneous background fields. However, the fluctuations are not necessarily small, and the conventional perturbation theory may not be a good approximation when they are larger than the background. We therefore introduce the stochastic formalism, which treats the accumulation of classicalized superhorizon modes as a background called the IR fields.
The fluctuations of linear perturbation theory correspond to stochastic noise terms in this formalism, which allows us to treat quantum fluctuations within a classical theory.
In this work, we numerically simulate the growth of fluctuations as random processes in various models. We also consider an extension of this approach to the study of primordial black holes.
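As a rough illustration of the kind of simulation described above, the stochastic formalism evolves the IR field with a Langevin equation, $d\phi = -V'/(3H^2)\,dN + (H/2\pi)\,dW$. A minimal sketch for a quadratic potential follows; the potential, parameter values, and units ($M_{\rm pl}=1$) are illustrative assumptions, not the authors' setup:

```python
import numpy as np

# Toy Langevin simulation of stochastic inflation (illustrative, not the
# authors' code). Units: M_pl = 1; potential V = m^2 phi^2 / 2 is assumed.

def simulate_stochastic_inflation(phi0=15.0, m=1e-5, n_efolds=10.0,
                                  dN=1e-3, n_traj=200, seed=0):
    """Evolve n_traj realizations of phi(N) under
    dphi = -V'/(3H^2) dN + (H/2pi) dW."""
    rng = np.random.default_rng(seed)
    steps = int(n_efolds / dN)
    phi = np.full(n_traj, phi0)
    for _ in range(steps):
        V = 0.5 * m**2 * phi**2
        dV = m**2 * phi
        H = np.sqrt(V / 3.0)                       # Friedmann equation, M_pl = 1
        drift = -dV / (3.0 * H**2) * dN            # classical slow-roll drift
        noise = (H / (2.0 * np.pi)) * rng.normal(size=n_traj) * np.sqrt(dN)
        phi = phi + drift + noise
    return phi

phis = simulate_stochastic_inflation()
print(phis.mean(), phis.std())
```

For these parameters the noise term is tiny compared with the drift, so the trajectories cluster tightly around the classical slow-roll solution; enhancing the noise (e.g. near an inflection point of the potential) is what makes the stochastic treatment essential.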
6. RSD analysis with Lyman alpha forest
NAKASHIMA, Koichiro (Nagoya University)
The Lyman alpha forest (LAF), a series of HI absorption lines in quasar spectra, is a powerful tool for cosmology at redshifts (z>2) that are generally hard to access with other probes. We present a measurement of the anisotropic LAF power spectrum from hydrodynamic simulations and analyze its full shape to measure the growth rate of structure through redshift space distortions. We test the validity of the models presented in previous research under different maximum wavenumbers used in the fit. In addition, we derive requirements on the survey parameters under the assumption of the Subaru Prime Focus Spectrograph.
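For orientation, the simplest linear-theory (Kaiser) baseline for such an anisotropic power spectrum is sketched below; the models actually tested in the previous research include nonlinear and LAF-specific corrections beyond this form:

```latex
P_{\rm LAF}(k,\mu) = b^2 \left(1 + \beta \mu^2\right)^2 P_{\rm lin}(k),
\qquad \beta \equiv \frac{f}{b},
```

where $\mu$ is the cosine of the angle between the wavevector and the line of sight, $b$ is the effective bias of the forest, and $f$ is the growth rate targeted by the RSD analysis.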
7. The fate of subhalos in the pre-reionization epoch
NARUSE, Genki (Nagoya University)
The 21 cm forest is a promising tool for tracing the structure of the universe before the reionization epoch. It was found that in the pre-reionization epoch, the 21 cm optical depth could be enhanced by subhalos in a host halo, provided such subhalos can survive and remain cold. In this work, using hydrodynamic simulations, we investigate whether subhalos can survive dynamical effects such as dynamical friction and tidal forces. In a gas-rich host halo, the subhalos fall into the center on a ~Myr timescale due to dynamical friction. Such subhalos are expected to be destroyed by tidal forces and thus cannot contribute to the 21 cm forest signals. We further discuss the thermal evolution of subhalos in detail.
8. Testing gravity by combining weak lensing, clustering and RSD
NAKASAWA, Noriaki (Nagoya University)
In this talk, I will discuss testing gravity using gravitational lensing, redshift space distortions (RSD), and projected galaxy clustering. The accelerated expansion of the universe is one of the most mysterious problems in cosmology. The standard cosmological model assumes the existence of an unknown energy component, called dark energy, to explain the cosmic acceleration. Another approach is to modify general relativity. RSD encode the motion of galaxies through the ratio of the linear growth rate to the galaxy bias. The galaxy bias of the sample used for the RSD measurement can be estimated by combining galaxy-galaxy clustering and lensing. Thus, the combination of these three probes, the so-called E_G statistic, enables us to extract the linear growth rate, which is sensitive to the properties of gravity. In this work, we will constrain E_G with the BOSS CMASS sample and the HSC S19A data.
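For reference, in general relativity the E_G statistic has the scale-independent expectation E_G(z) ≈ Ω_{m,0}/f(z). A minimal sketch of this prediction, using the common approximation f ≈ Ω_m(z)^0.55; the cosmological parameters below are illustrative, not the survey's fitted values:

```python
# Sketch of the GR expectation E_G(z) ~ Omega_m,0 / f(z).
# Flat LCDM with Omega_m,0 = 0.3 is an illustrative assumption.

def omega_m(z, om0=0.3):
    """Matter density parameter Omega_m(z) in flat LCDM."""
    e2 = om0 * (1.0 + z)**3 + (1.0 - om0)   # H(z)^2 / H0^2
    return om0 * (1.0 + z)**3 / e2

def growth_rate(z, om0=0.3, gamma=0.55):
    """Linear growth rate f ~ Omega_m(z)^gamma; gamma ~ 0.55 in GR."""
    return omega_m(z, om0) ** gamma

def e_g_gr(z, om0=0.3):
    """Scale-independent GR expectation for the E_G statistic."""
    return om0 / growth_rate(z, om0)

# Evaluate near the effective redshift of the CMASS sample (~0.57).
print(round(e_g_gr(0.57), 3))
```

A measured E_G that deviates from this curve (or acquires scale dependence) would point to a departure from general relativity, which is what the combination of lensing, clustering, and RSD is designed to test.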
9. A study on the relation between formation history and observables of galaxy clusters
YOON, SEONGWHAN (Nagoya University)
Since galaxy clusters are the most massive self-gravitating systems in the Universe, the halo number density per unit volume per unit mass is one of the most powerful probes for constraining cosmological models. However, to estimate the halo number density as a function of halo mass, one must understand the systematics stemming from the relationship between halo properties and cluster observables. Otherwise, model ingredients of the halo number density, such as the selection function and the mass-observable relations, will be biased. In this presentation, I report on the connection between one of the halo properties, the halo mass accretion history, and the baryonic physics inside the halo, based on hydrodynamical simulations. Since this connection is quite complicated, machine learning plays a key role in establishing it.
10. Testing general relativity with the joint analysis of weak lensing and galaxy clustering from HSC-Y3 and BOSS
TANIDA, Koki (Nagoya University)
Einstein's theory of general relativity has been successful in describing various phenomena caused by gravity, such as black holes and gravitational lensing. General relativity has been confirmed to be consistent with experiments in the Solar System, whereas it has not been extensively examined on cosmological scales. In particular, the discovery of cosmic acceleration has recently motivated the investigation of a number of proposals for modified gravity models. Such alternative models can produce the late-time acceleration without the cosmological constant of the standard \Lambda CDM model. In our study, we aim to test modified gravity models by combining the three two-point correlation functions (3×2pt) of the large-scale structure: cosmic shear, galaxy-galaxy lensing, and galaxy clustering. We use measurements from the third-year HSC (HSC-Y3) weak lensing sample and the SDSS spectroscopic galaxy catalog. We employ a phenomenological model of gravity for structure formation that covers a broad class of modified gravity models. In this poster presentation, as the first step of our study, we report the development of a software pipeline to compute the theoretical model of the correlation functions, and we confirm that it can reproduce the input cosmological parameters through the analysis of mock data.
11. Dark photon search using B->Kllll decay at Belle
KIM, Yongkyu (Yonsei University)
The Belle experiment is an asymmetric-energy electron-positron collider experiment at the KEKB accelerator in Tsukuba, Japan, with an 8 GeV electron beam and a 3.5 GeV positron beam at the Upsilon(4S) resonance center-of-mass energy. In this presentation, we present a search for a dark photon using the B->Kllll decay. We expect that during the b to s transition an intermediate state is produced, in a process similar to Higgs-strahlung (B->Kh'); it then decays into two dark photons (h'->A'A'), each of which decays into two leptons (A'->ll). In this study we used a data sample of 711 fb^-1, corresponding to the full Belle integrated luminosity, together with 10 streams of BBbar, 6 streams of qqbar, 50 streams of rare B, and 20 streams of ulnu Monte Carlo samples.
In this presentation, we present our recent update of the dark photon search, including the signal extraction, a control sample study, and the expected upper limit on the branching fraction.
12. Primordial black holes and gravitational waves induced by exponential-tailed perturbations
INUI, Ryoto (Nagoya University)
In recent years, primordial black holes (PBHs) have been attracting much attention as a candidate for dark matter (DM). PBHs are produced by the gravitational collapse of large primordial density perturbations in the radiation-dominated universe. PBHs span a wide mass range, [10^{-18}M_{\solar}, 10^3M_{\solar}], over which there are many observational constraints (Hawking radiation, microlensing, GWs, etc.). However, PBHs may still account for 100% of the DM in the mass range between 10^{-15}M_{\solar} and 10^{-11}M_{\solar}. The large primordial density perturbations required for PBH production can also induce primordial stochastic gravitational waves (GWs) through a second-order interaction between tensor and scalar metric perturbations. Previous studies have shown that these 2nd-order scalar-induced GWs would be detectable with the planned space-based GW interferometer LISA if PBHs constitute the whole of the DM, so such stochastic GWs can serve as indirect evidence that PBHs account for 100% of the DM. The primordial perturbations are usually assumed to follow a nearly Gaussian distribution; however, primordial perturbations with a heavier-tailed probability distribution function (PDF) have also been investigated. The PBH abundance is quite sensitive to the tail behavior of the PDF and is greatly enhanced by such heavy-tailed distributions, as discussed in several previous works. In this work, we focus on exponential-tail curvature perturbations, which are well motivated in ultra-slow-roll inflation models, and we investigate the 2nd-order scalar-induced GWs in the scenario where PBHs make up the whole of the DM. We find that the GWs would still be detectable at LISA sensitivity, though the amplitude is slightly lower than that induced by Gaussian curvature perturbations. We also find that footprints of the non-Gaussianity appear in the high-frequency region.
Although this feature lies below the LISA sensitivity, it might be possible to obtain information about the non-Gaussianity from GW observations with deeper sensitivity, such as the DECIGO mission.
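The sensitivity of the PBH abundance to the tail can be illustrated with a toy comparison: the abundance scales with the tail probability P(ζ > ζ_c), and an exp(−3ζ) tail of the kind motivated by ultra-slow-roll inflation gives a far larger tail probability than a Gaussian of the same variance. The threshold and widths below are illustrative choices, not the values used in this work:

```python
import math

# Toy comparison (not the authors' calculation): tail probability P(zeta > zeta_c)
# for a zero-mean Gaussian versus a one-sided exponential PDF 3*exp(-3*zeta).
# Both have variance 1/9; the threshold zeta_c = 1 is an illustrative choice.

def gaussian_tail(zeta_c, sigma=1.0 / 3.0):
    """P(zeta > zeta_c) for a zero-mean Gaussian of width sigma."""
    return 0.5 * math.erfc(zeta_c / (sigma * math.sqrt(2.0)))

def exp_tail(zeta_c, lam=3.0):
    """P(zeta > zeta_c) for the one-sided exponential PDF lam*exp(-lam*zeta)."""
    return math.exp(-lam * zeta_c)

zc = 1.0
print(gaussian_tail(zc), exp_tail(zc), exp_tail(zc) / gaussian_tail(zc))
```

Even with matched variance, the exponential tail yields an overshoot probability larger by more than an order of magnitude at this threshold, which is why the PBH abundance (and hence the perturbation amplitude needed for PBHs to be all of the DM) depends so strongly on the tail shape.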
13. The Neutron Veto of XENONnT: Performances and first results
MANCUSO, Andrea (University of Bologna)
The Neutron Veto of the XENONnT experiment is a Gd-loaded water Cherenkov detector designed to detect the radiogenic neutrons coming from the detector materials, in order to reduce one of the main nuclear recoil background components in the XENONnT Time Projection Chamber. The Neutron Veto (NV) is instrumented with 120 photomultiplier tubes (8″ Hamamatsu R5912), featuring high quantum efficiency and low radioactivity, installed in a high-light-collection volume delimited by ePTFE reflector panels surrounding the cryostat. The XENONnT experiment is currently taking science data, with the NV operating with demineralized water. In this poster, I will give an overview of the NV working principle and its initial performance, both in terms of the main PMT parameters and in terms of the neutron tagging efficiency, as obtained from calibration and the first months of data taking.