Fall Research Expo 2020

Statistical Properties of Elemental Abundances in Solar Analogs and the Possible Link to Planet Formation

The Sun is depleted in refractory (rock-forming) elements by ∼10% relative to nearby solar analogs, suggesting a potential indicator of planet formation. Previous studies have explored trends in stellar abundances as possible indicators of planet formation, using high-resolution, high signal-to-noise stellar spectra to determine elemental abundances of nearby stars with unprecedented accuracy. We present an alternative, likelihood-based approach that can be applied to much larger samples of stars with lower-precision abundance determinations. We utilize measurements of solar analogs from the Apache Point Observatory Galactic Evolution Experiment (APOGEE-2) and the APOGEE Stellar Parameter and Chemical Abundances Pipeline (ASPCAP, DR16). ASPCAP elemental abundances relative to iron are typically determined with uncertainties too large to allow for a star-by-star comparison. Instead, our approach enables us to place constraints on the statistical properties of the elemental abundances, including correlations with condensation temperature and the fraction of stars with elemental depletions. For a population of ∼1500 solar analogs, we find correlations with condensation temperature in agreement with higher-precision surveys of smaller samples of stars. Such trends, if linked to the formation of planetary systems, provide an exciting approach to the study of extrasolar planets over large samples of Milky Way stars.

PRESENTED BY
Grants for Faculty Mentoring Undergraduate Research
College of Arts & Sciences 2021
Advised By
Bhuvnesh Jain

Comments

Hi Jacob,

Your project is really fascinating and seems to have the potential for really cool future uses in discovering exoplanets! I was curious how you made your models described in your poster. I understand that they are statistical models of some sort, but how did you create them? Are they coded in a language like Python, or did you use another way of creating them, such as a neural network? 

Hi, glad you found the project interesting! You are right, the model is purely statistical and can actually be expressed analytically (i.e., it can be written down). Basically, the model attempts to describe the data using a set of probability distributions with a large number of parameters. Using Python, we ask the question: "given that this is what our data look like, what are the underlying model parameters that best describe them?"
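To make that concrete, here is a minimal, purely illustrative sketch of the kind of likelihood involved; it is not the project's actual model, and all names and numbers are hypothetical. It assumes a star's measured [X/Fe] abundances scatter around a linear trend with condensation temperature, with the slope, intercept, and intrinsic scatter as free parameters on top of the reported measurement errors.

# Illustrative only: a Gaussian log-likelihood for a linear abundance trend
# with condensation temperature, plus intrinsic scatter. Hypothetical names.
import numpy as np

def log_likelihood(params, t_cond, xfe, xfe_err):
    # params  : (m, b, log_s) -- slope, intercept, log of intrinsic scatter
    # t_cond  : condensation temperatures of the measured elements (K)
    # xfe     : measured [X/Fe] abundances for one star
    # xfe_err : reported measurement uncertainties (dex)
    m, b, log_s = params
    model = m * t_cond + b
    var = xfe_err**2 + np.exp(log_s)**2      # measurement + intrinsic variance
    return -0.5 * np.sum((xfe - model)**2 / var + np.log(2.0 * np.pi * var))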

This turns out to be a computationally intensive question to answer, since the data are complicated and our model has many parameters! For this reason, we use Markov Chain Monte Carlo (MCMC) techniques to derive the set of model parameters that best describes the data. As stated above, this is all done in Python and can take many hours to run. So no neural networks here :)
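For a flavor of how that works in practice, below is a toy end-to-end example using the emcee package (an assumption on my part; the poster does not say which MCMC implementation was used). It reuses the log_likelihood sketch above and samples fake data, so the numbers mean nothing; it is only meant to show the shape of the computation.

# Illustrative only: sample the posterior of the toy model with emcee.
import numpy as np
import emcee

def log_prior(params):
    m, b, log_s = params
    # Wide, flat priors chosen purely for illustration.
    if -1e-2 < m < 1e-2 and -1.0 < b < 1.0 and -10.0 < log_s < 0.0:
        return 0.0
    return -np.inf

def log_posterior(params, t_cond, xfe, xfe_err):
    lp = log_prior(params)
    if not np.isfinite(lp):
        return -np.inf
    return lp + log_likelihood(params, t_cond, xfe, xfe_err)

# Fake data (one star, 15 elements) purely to make the example runnable.
rng = np.random.default_rng(42)
t_cond = rng.uniform(40.0, 1700.0, size=15)       # condensation temperatures (K)
xfe_err = np.full(15, 0.03)                        # assumed 0.03 dex uncertainties
xfe = 5e-5 * t_cond - 0.02 + rng.normal(0.0, xfe_err)

nwalkers, ndim = 32, 3
p0 = np.array([0.0, 0.0, -3.0]) + 1e-4 * rng.normal(size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior,
                                args=(t_cond, xfe, xfe_err))
sampler.run_mcmc(p0, 2000)
samples = sampler.get_chain(discard=500, flat=True)  # posterior draws of (m, b, log_s)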