
The DECam Ecliptic Exploration Project (DEEP). V. The Absolute Magnitude Distribution of the Cold Classical Kuiper Belt


Published 2024 February 27 © 2024. The Author(s). Published by the American Astronomical Society.
Focus on the DECam Ecliptic Exploration Project. Citation: Kevin J. Napier et al. 2024, Planet. Sci. J., 5, 50. DOI: 10.3847/PSJ/ad1528


Abstract

The DECam Ecliptic Exploration Project (DEEP) is a deep survey of the trans-Neptunian solar system being carried out on the 4 m Blanco telescope at the Cerro Tololo Inter-American Observatory in Chile using the Dark Energy Camera (DECam). By using a shift-and-stack technique to achieve a mean limiting magnitude of r ∼ 26.2, DEEP achieves an unprecedented combination of survey area and depth, enabling quantitative leaps forward in our understanding of the Kuiper Belt populations. This work reports results from an analysis of twenty 3 deg2 DECam fields along the invariable plane. We characterize the efficiency and false-positive rates for our moving-object detection pipeline, and use this information to construct a Bayesian signal probability for each detected source. This procedure allows us to treat all of our Kuiper Belt object (KBO) detections statistically, simultaneously accounting for efficiency and false positives. We detect approximately 2300 candidate sources with KBO-like motion and signal-to-noise ratios > 6.5. We use a subset of these objects to compute the luminosity function of the Kuiper Belt as a whole, as well as that of the cold classical (CC) population. We also investigate the absolute magnitude (H) distribution of the CCs, and find consistency with both an exponentially tapered power law, which is predicted by streaming instability models of planetesimal formation, and a rolling power law. Finally, we provide an updated mass estimate for the CC Kuiper Belt of $M_{\mathrm{CC}}(H_r < 12) = 0.0017^{+0.0010}_{-0.0004}\,M_\oplus$, assuming albedo p = 0.15 and density ρ = 1 g cm−3.


Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

Beyond the orbits of the major planets, our solar system hosts a large population of minor bodies known as Kuiper Belt objects (KBOs). In the 30 years since the observational establishment of the Kuiper Belt (Jewitt & Luu 1993), several surveys (e.g., Millis et al. 2002; Bernstein et al. 2004; Petit et al. 2011; Bannister et al. 2018; Bernardinelli et al. 2022) have pushed the inventory of known objects to nearly 4000. These bodies, which are left over from the birth of our planetary system, provide constraints on its formation and dynamical evolution. When taken in aggregate, their dynamics, compositions, and sizes enable us to infer details about the dynamical evolution of the planets, the composition of our solar system's protoplanetary disk, and even the physical processes by which planetesimal formation occurred (see, e.g., Nesvorný 2018; Gladman & Volk 2021).

In particular, the size distribution of the so-called cold classicals (CCs)—which are thought to be relics of the birth of the solar system, relatively untouched and uncontaminated in the ∼4.5 Gyr since their formation—is a sensitive probe of the process of planetesimal formation. If the CCs are truly a quiescent population, a measurement of their size distribution can provide us with a unique opportunity to directly compare a primordial size distribution with predictions made by planetesimal formation models. Such a comparison will enable us to hone our formation models, and better understand the details of the physical processes at play in planetesimal formation, as well as the specific conditions of our own protoplanetary disk.

Recently, the streaming instability (SI; Abod et al. 2019) has begun to emerge as a leading theory of planetesimal formation. Numerical SI simulations predict an exponentially tapered power-law absolute magnitude (H) distribution, enabling a direct comparison between theory and observation. Kavelaars et al. (2021) found that the absolute magnitude distribution of the CCs detected by the Outer Solar System Origins Survey (OSSOS; Bannister et al. 2018) is consistent with an exponentially tapered power law. However, the CCs used by Kavelaars et al. (2021) only went as faint as Hr ∼ 8.3, leaving faint-end consistency with the SI as an open question. Existing literature on measurements of the faint end of the CC absolute magnitude distribution seems to be in weak tension with SI models of planetesimal formation. In particular, Fraser et al. (2014) find a marginally steeper faint-end size distribution than is predicted by SI simulations. However, a dearth of observed objects at the faint end of the distribution, along with the fact that Fraser et al. (2014) did not fit to an exponentially tapered power law, limits the usefulness of such comparisons. We require a larger, deeper set of CC detections by a survey with well-understood biases to thoroughly test any planetesimal formation theory.

The DECam Ecliptic Exploration Project (DEEP) is the first survey with sufficient depth and areal coverage to settle the tension in the shape of the faint end of the H distribution of the CCs. In this paper, we analyze data from 20 DECam fields (comprising an area of approximately 60 deg2), reaching a mean limiting magnitude of mr ∼ 26.2. We use a subset of our data to reconstruct the luminosity function of the Kuiper Belt as a whole, as well as the luminosity function of the CC population. As the main scientific result of this paper, we use our results to reconstruct the underlying absolute magnitude distribution of the CCs, and find consistency with models of planetesimal formation via the SI (Abod et al. 2019; Kavelaars et al. 2021).

This paper is organized as follows. In Section 2 we outline our observational strategy and discuss the data used in the subsequent analysis. Next we describe our image preprocessing pipeline (Section 3) and the pipeline used to carry out the object search (Section 4). We present an overview of our detections in Section 5. In Section 6 we calculate our detection efficiency using implanted synthetic objects. We compute the luminosity function for our KBOs as a whole in Section 7. In Section 8 we isolate a sample of CCs from our detections. Our main scientific result, presented in Section 9, is a calculation of the absolute magnitude distribution for the CC population. In Section 10 we calculate an estimate of the total mass of the CCs. In Section 11 we test the consistency of our absolute magnitude distributions with the results from Bernstein et al. (2004). The paper concludes in Section 12 with a summary of our results and a discussion of their implications.

2. DEEP Survey Strategy and Data

DEEP was carried out with the Dark Energy Camera (DECam) on the 4 m Blanco telescope located at Cerro Tololo Inter-American Observatory in Chile from 2019–2023, targeting four patches of sky along the invariable plane (see Trilling et al. 2024; Trujillo et al. 2024; hereafter Papers I and II, respectively, for more details). This paper focuses on the data taken in one of those four patches, our so-called B1 fields, from 2019–2021. These data consist of 20 individual DECam pointings targeting a progressively larger area of sky from 2019–2021, with significant overlap between years in order to enable the tracking of KBOs (see Figure 1). Table 1 gives the pointing for each night of data, as well as the detection efficiency parameters, calculated in Section 6. Note that in order to avoid double counting of single-epoch detections we only use a subset of our fields (indicated in Table 1) to derive the constraints in Sections 7 and 9.

Table 1. DEEP Telescope Pointings Used for the Long-stare Image Sequences Described in This Analysis

Night       Field  JD           R.A. (J2000)  Decl. (J2000)  Nexps  η0    m50    σ     KBO fits  CC fits
2019-08-27  B1c    2458723.710  352.92015     −3.62314       103    0.94  26.11  0.31  no        no
2019-08-28  B1a    2458724.711  351.87745     −5.03572       102    0.95  26.08  0.32  no        no
2019-08-29  B1b    2458725.711  353.61898     −5.29744       101    0.93  26.65  0.31  no        no
2019-09-26  B1a    2458753.513  351.38115     −5.24000       95     0.91  26.02  0.36  no        no
2019-09-27  B1b    2458754.511  353.12082     −5.50267       97     0.94  26.09  0.33  no        no
2019-09-28  B1c    2458755.513  352.42444     −3.82797       96     0.92  26.42  0.36  no        no
2020-10-15  B1b    2459138.499  352.86350     −5.60964       103    0.88  26.22  0.29  yes       yes
2020-10-16  B1c    2459139.510  352.16593     −3.93469       96     0.90  26.47  0.35  yes       yes
2020-10-17  B1e    2459140.511  353.90520     −4.19542       96     0.90  26.23  0.34  yes       yes
2020-10-18  B1a    2459141.514  351.12145     −5.34617       91     0.90  26.57  0.39  yes       yes
2020-10-19  B1d    2459142.503  354.60670     −5.86869       99     0.92  25.99  0.32  no        no
2020-10-20  B1f    2459143.503  353.20607     −2.52153       91     0.93  25.92  0.30  yes       no
2020-10-21  B1d    2459144.503  354.60635     −5.86972       99     0.92  26.05  0.36  yes       yes
2021-09-27  B1d    2459485.512  354.80112     −5.78794       89     0.93  25.96  0.36  no        no
2021-10-01  B1b    2459489.526  353.05852     −5.52967       81     0.90  26.10  0.29  no        no
2021-10-02  B1f    2459490.511  353.40057     −2.44119       89     0.94  26.01  0.52  no        no
2021-10-03  B1i    2459491.501  355.13703     −2.69892       100    0.93  26.26  0.30  no        no
2021-10-04  B1c    2459492.499  352.36053     −3.85436       102    0.92  26.12  0.33  no        no
2021-10-05  B1h    2459493.502  355.83970     −4.37136       97     0.92  26.21  0.33  no        no
2021-10-06  B1e    2459494.504  354.09928     −4.11514       97     0.85  26.08  0.27  no        no

Note. The positions of the fields in each epoch aim to track as many objects as possible by accounting for the effects of Earth's reflex motion and KBO shear. Each exposure is 120 s long and is taken in the VR band. The next three columns show the best fit of each night to Equation (5). The final two columns indicate whether a field was used to derive constraints for either the full KBO population or the CC population.


A DEEP exposure sequence, designed with a cadence ideal for a technique called shift-and-stack (Tyson et al. 1992; Gladman & Kavelaars 1997; Allen et al. 2001; Bernstein et al. 2004; Holman et al. 2004; Parker & Kavelaars 2010; Heinze et al. 2015; Whidden et al. 2019), typically consists of ∼100 consecutive 120 s VR-band exposures of the target field. In the shift-and-stack approach, single-epoch images are shifted at the rate of a moving object (rather than at the sidereal rate) so that a moving object appears as a point source in the coadded stack.

There are two primary reasons why the shift-and-stack technique is preferable to long exposures for the discovery of moving objects. First, since the rate and direction of an object's motion are not known a priori, we are able to stack our images in a grid of velocity and direction that spans the space of possible KBO motions. The second benefit is the preservation of the signal-to-noise ratio (S/N) of moving objects. For stationary sources in astronomical images, S/N goes like $t^{1/2}$. A source is effectively stationary if its apparent position changes by less than the size of the point-spread function (PSF) over the course of the exposure. This sets an upper limit on the useful exposure time when searching for moving objects, which we will call $t_{\max}$. Given DECam's typical VR-band seeing of ∼0.9'' and the typical KBO rate of motion of a few arcseconds per hour, our value of $t_{\max}$ is on the order of several minutes. When $t \geqslant t_{\max}$, the S/N of stationary sources continues to go like $t^{1/2}$, while smearing causes the S/N of moving sources to go like $t^{0}$. Thus moving objects fade into the background while the S/Ns of background sources continue to grow. Because DECam's CCDs have negligible read noise, we lose no sensitivity by adding together many short images, and thus the S/Ns of the moving objects continue to increase like $t^{1/2}$.
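To make the trailing limit concrete, the following minimal Python sketch estimates $t_{\max}$ from the typical values quoted above (the numbers are illustrative, not pipeline parameters):

```python
# Illustrative estimate of t_max: the exposure time at which a source moving
# at a typical KBO rate trails by roughly one PSF width.
seeing_fwhm = 0.9   # arcsec, typical DECam VR-band seeing
kbo_rate = 3.0      # arcsec/hr, typical KBO sky motion near opposition

t_max_hr = seeing_fwhm / kbo_rate
print(f"t_max ~ {60 * t_max_hr:.0f} minutes")  # ~18 minutes for this rate
```

Each 120 s DEEP exposure sits comfortably below this limit, so the individual images can be coadded without smearing moving sources.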

3. Image Preprocessing

In this section we describe how our images are processed in preparation for our shift-and-stack pipeline (described in Section 4). The following steps take place after the images have gone through preliminary reductions with the DECam community pipeline (Valdes et al. 2014).

3.1. Synthetic TNOs

To enable studies of our efficiencies, we generated a population of several thousand synthetic sources to plant in our images. These synthetic sources were not meant to emulate a realistic population, but rather to test efficiency across the space of all possible bound orbits in the Kuiper Belt. They span distances from ∼20 au to a few hundred astronomical units, and include fully retrograde orbits. To enable studies of efficiency as a function of brightness, we have given our synthetic sources apparent magnitudes as bright as 20, and as faint as 27.2, as well as sinusoidal rotation curves with amplitudes as large as 0.5 mag and rotation periods between a few hours and a few days.

3.2. Flux Calibration and Synthetic Source Injection

To calibrate the flux of our synthetic sources we calculate the photometric zero-point for each individual CCD image by cross-matching the nonstreaked sources (ellipticity < 0.8) against Pan-STARRS sources (Magnier et al. 2013) with $r_{\mathrm{SDSS}}$ magnitude between 15 and 21. We then use the Python package SpaceRocks (Napier 2020) to calculate the sky position, sky motion, and brightness of each synthetic TNO, including a rotation lightcurve. With the sky motion of each object and the PSF of the image, we generate a streak model for each synthetic TNO. With the brightness, streak model, and photometric zero-point specified, we inject the synthetic TNOs into the image. Along with the synthetic TNOs, we also inject 12 stationary synthetic point sources with an r-band magnitude of 21 into each CCD image, in order to enable calibration after difference imaging. We add these 12 stationary sources at fixed pixel locations in the images (i.e., not fixed sky positions), so they do not appear in the template, and thus remain in the difference images.
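As a concrete illustration of the calibration step, the sketch below computes a per-CCD zero-point from matched reference magnitudes and shows how a synthetic source's brightness maps to injected counts. The function name and inputs are hypothetical stand-ins, not the DEEP pipeline API:

```python
import numpy as np

def zero_point(counts, matched_ref_mags):
    """ZP such that m = ZP - 2.5 log10(counts), from cross-matched calibrators."""
    inst_mag = -2.5 * np.log10(counts)
    return np.median(matched_ref_mags - inst_mag)

# A synthetic TNO of magnitude m is then injected with total counts
# 10**(-0.4 * (m - ZP)), distributed along its streak model.
zp = zero_point(np.array([1.0e4, 2.5e4]), np.array([21.0, 20.0]))
print(f"ZP = {zp:.2f}")
```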

3.3. Difference Imaging

After we implant synthetic sources, we prepare the images for the shift-and-stack pipeline. To do this, we must remove every stationary source, even the faintest sources that are not visible in single exposures. We apply the High Order Transform of Psf ANd Template Subtraction (hotpants) code (Becker 2015), which implements and improves upon the method of Alard & Lupton (1998) to create difference images. This code formed the basis for the Dark Energy Survey (DES)'s supernova search pipeline (Kessler et al. 2015), and has consequently been thoroughly exercised on DECam data.

We first assemble a collection of exposures from the same observing run, including images from both the long and short stares (see Papers I and II for details). We generate three different templates by median combining the single-epoch images with seeing (measured from the FWHM of the in-frame stars) in the top, middle, and bottom thirds of the ensemble. We require the minimum time separation between observations to be longer than 0.01 days (14.4 minutes) to ensure that the templates contain minimal flux from slow movers. The hotpants algorithm then matches the seeing of each science image to the template closest in seeing, and generates the difference images. The better-seeing images (either single epoch or template) are convolved to match the images with the worse seeing, and the bright sources (pixel counts > 3000) are masked before performing image subtraction. The final step is masking bright Gaia sources (G < 18) and regions where there is a contiguous group of at least 5 pixels above/below the ±2σ level. This step usually masks fewer than 1% of all pixels, and not only cleans out most of the artifacts generated by the difference imaging process but also removes streaks from artificial satellites, thus greatly reducing the false-detection rate in the shift-and-stack images. However, this masking also removes real bright sources. In practice, we find that sources brighter than VR ∼ 24–24.5 are masked. To ensure that we recover the bright objects, we also write out difference images in which we do not mask the pixels above/below the ±2σ level. While the unmasked images produce significantly more spurious sources after shifting and stacking, we simply ignore the faint sources produced by these images, opting to only consider sources with S/N ≥ 30, thus finding all of the bright sources with minimal additional effort.

Finally we use the stationary magnitude 21 fakes to rescale each difference image such that it has a zero-point of 30. After rescaling, we compute weight images as the inverse of the variance in the difference images. Using weight images enables us to optimize the S/N of our stacks without manually rejecting images.

4. The Detection Pipeline

4.1. The Grid

To carry out our search, we developed a novel method of generating the shift-and-stack grid. We begin by computing the grid bounds. A unique grid is required for each field, as its exact shape depends on the epoch and sky position of a pointing. We must also select the range of topocentric distances (Δ) of interest. For the search described in this paper, we consider Δ ∈ [35, 1000] au. With R.A., decl., and Δ fixed, an object's position vector in the topocentric frame, $\mathbf{x}_T$, is uniquely determined. We then change the origin from the topocentric to the barycentric frame so that we have $\mathbf{x}_B$. After changing the origin to the barycentric frame, we assign a velocity $v_{\mathrm{bound}}$ such that an object at position $\mathbf{x}_B$ would be barely bound to the solar system (specifically, we use the speed appropriate for semimajor axis a = 200,000 au). We then uniformly sample a collection of velocity vectors $\mathbf{v}_B$ on the surface of the sphere of radius $v_{\mathrm{bound}}$. With the velocity vectors specified, the state vectors are fully determined, allowing for the computation of the corresponding R.A. rates and decl. rates as observed in the topocentric frame. We repeat this process at several discrete values of Δ, and then use the Python package alphashape to draw a concave hull bounding the computed rates. This hull encompasses the full region of physically possible sky motions for the distances of interest.
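The sketch below illustrates the velocity-sampling step in simplified form (the pipeline uses SpaceRocks for the ephemeris work and alphashape for the hull; here we show only the barely bound speed and the uniform sampling on the velocity sphere):

```python
import numpy as np

GM_SUN = 4.0 * np.pi**2      # au^3 yr^-2 in heliocentric units
A_BOUND = 200_000.0          # au; the "barely bound" semimajor axis from the text

def v_bound(r_au):
    """Vis-viva speed (au/yr) for a = 200,000 au at distance r_au."""
    return np.sqrt(GM_SUN * (2.0 / r_au - 1.0 / A_BOUND))

def sample_velocities(r_au, n=1000, seed=0):
    """Velocity vectors sampled uniformly on the sphere of radius v_bound."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)   # uniform directions
    return v * v_bound(r_au)

# Each sampled state vector (x_B, v_B) would then be propagated to observed
# R.A. and decl. rates in the topocentric frame.
print(sample_velocities(40.0, n=3))
```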

Figure 1.

Figure 1. The DEEP "B1" TNO search fields used in this analysis. Each hexagonal shaded area represents the DECam focal plane with its 61 active CCDs. The B1a–c fields were observed with integrated exposure times of ∼3.5 hours in 2019 August (left), and reobserved at suitably shifted positions (not plotted) in 2019 September. In 2020 October we observed the B1a–f fields (center). In 2021 October we observed the B1b–f, h, and i fields. The larger areas in 2020 and 2021 account for diffusion of the 2019 detections. The plotted points represent our putative KBO detections. Note that Neptune is near the B1 fields, meaning that there should be relatively few resonant objects among our detections.


Once we have computed the boundary of the concave hull for a given pointing, we choose a finite set of rates at which to stack our data. Toward this end, we employ a new method in which we fill the hull with a large sample of random points, and then use a K-means clustering algorithm to divide the region into N clusters. One can use dimensional analysis to determine that $N \propto A_{\mathrm{hull}}/\epsilon^2$, where $A_{\mathrm{hull}}$ is the area of the hull and $\epsilon$ is the desired grid spacing chosen to minimize the maximum source trailing (roughly determined by the PSF width divided by the duration of the exposure sequence). We take the centroid of each cluster as a grid point. When compared to a rectangular grid, this method allows us to use ∼20% fewer grid points, and simultaneously reduce both the mean and maximum distance of any given point in the hull to the nearest grid point. As a result, we achieve more even coverage of physically possible rates, while minimizing the computational cost and opportunities for false positives. Although the process is random by nature, a sufficient number of random samples makes it nearly deterministic. We show an example of a grid for a 4 hr exposure sequence with 1'' seeing in Figure 2.
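A minimal version of this grid construction, assuming hull is a shapely polygon in (R.A. rate, decl. rate) space (as alphashape can return), might look like the following sketch; the names are illustrative:

```python
import numpy as np
from shapely.geometry import Point
from sklearn.cluster import KMeans

def make_rate_grid(hull, eps, n_samples=50_000, seed=0):
    """Fill the hull with random points and K-means cluster them into grid rates."""
    rng = np.random.default_rng(seed)
    xmin, ymin, xmax, ymax = hull.bounds
    pts = rng.uniform([xmin, ymin], [xmax, ymax], size=(n_samples, 2))
    pts = pts[[hull.contains(Point(*p)) for p in pts]]        # keep interior points
    n_clusters = max(1, int(np.ceil(hull.area / eps**2)))     # N ~ A_hull / eps^2
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(pts)
    return km.cluster_centers_    # one shift-and-stack rate per cluster centroid
```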

Figure 2.

Figure 2. Sample grid for a 4 hr exposure sequence with 1'' seeing. The black teardrop-shaped boundary encompasses the space of possible bound orbits with topocentric distance between 35 and 1000 au. The red points are the sample rates computed using the procedure described above.


4.2. The Shift-and-Stack Procedure

After computing the shift-and-stack rates, we proceed with the stacking. We designate the first exposure in an exposure sequence as the reference image. We then compute the R.A. and decl. at the center of the reference image, and use the R.A. rate and decl. rate to compute the amount by which we have to shift each image to match with the reference image's center. In other words, we stack the pixels along the path taken by the center of the reference image for the given R.A. and decl. rates. We do not consider variations in the focal plane geometry across the chip, as the solid angle of a single chip is rather small (we thus assume that the chips are locally flat), and all of the images are SWarped (Bertin et al. 2002) for the difference imaging process. We perform separate stacks of the weighted signal (i.e., the signal multiplied by the weight) and of the weights themselves, and then obtain the full stack by dividing the stacked weighted-signal image by the stacked weight image.
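A minimal numpy sketch of this weighted coadd is shown below, with integer-pixel np.roll shifts standing in for the pipeline's SWarped, sub-pixel handling (edge wraparound is ignored for brevity; all names are illustrative):

```python
import numpy as np

def shift_and_stack(images, weights, times_hr, rate_pix_per_hr):
    """Stack so that a source moving at rate_pix_per_hr stays at a fixed pixel."""
    num = np.zeros_like(images[0], dtype=float)   # stacked weighted signal
    den = np.zeros_like(images[0], dtype=float)   # stacked weights
    for img, w, t in zip(images, weights, times_hr):
        dx = int(round(rate_pix_per_hr[0] * t))   # column shift at time t
        dy = int(round(rate_pix_per_hr[1] * t))   # row shift at time t
        num += np.roll(img * w, shift=(-dy, -dx), axis=(0, 1))
        den += np.roll(w, shift=(-dy, -dx), axis=(0, 1))
    return num / den    # full stack = weighted-signal stack / weight stack
```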

After each stack we use the Python package sep (Bertin & Arnouts 1996; Barbary 2016) to extract all sources with at least 3 pixels with values above 1.5σ. Each stack is contaminated with of order a few thousand spurious sources, mostly consisting of cosmic rays, dead pixels, oversaturated pixels close to bright stars, and residuals from poorly subtracted stars and galaxies. Since we use approximately 100 stacks per chip, the total number of spurious sources per chip is close to $10^5$. Because the vast majority of spurious sources are not PSF-like, we trained convolutional neural networks (CNNs) using tensorflow (Abadi et al. 2015) to reject them automatically. We trained one CNN on synthetic sources superimposed on a background made from DEEP difference images, and another on the autoscan training set that was used to train a random forest algorithm for background rejection in the DES supernova search (Goldstein et al. 2015). Both CNNs retain nearly all of the signal but fail to reject different types of background, thus enabling a significant performance gain by requiring a source to be classified as good by both CNNs. This procedure cuts the number of sources per stack by three orders of magnitude, down to a few hundred. After all of the stacks are completed for a given chip, we consider the complication that most objects are bright enough to be detected in adjacent stack rates. To eliminate this redundancy, we employ the DBSCAN (Ester et al. 1996) clustering algorithm in pixel space to group detections associated with the same object.
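The extraction step itself is a single sep call; the snippet below applies the thresholds quoted above to a synthetic stand-in image (the pipeline runs this on each shift-and-stack coadd):

```python
import numpy as np
import sep

rng = np.random.default_rng(0)
stack = rng.normal(0.0, 1.0, size=(512, 512)).astype(np.float32)  # stand-in coadd

bkg = sep.Background(stack)          # smooth background model and global rms
sub = stack - bkg.back()
objects = sep.extract(sub, thresh=1.5, err=bkg.globalrms, minarea=3)
print(f"{len(objects)} sources with >= 3 pixels above 1.5 sigma")
```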

The grid spacing in the initial shift-and-stack is good enough for source detection, but is too coarse to provide the best values for the position and rate of a given source. To refine the parameters, we use a Markov Chain Monte Carlo (MCMC) approach in which we perform targeted stacks on our candidates to maximize the S/N. These targeted stacks are still restricted to the parameter space of bound orbits, but are now continuous in R.A. and decl. rates. This procedure enables us to obtain refined R.A. and decl. rates with uncertainties, while simultaneously optimizing the measured R.A. and decl. of the source. We use the uncertainties in the R.A. rate and decl. rate (which are typically about 0.1'' hr−1 in each dimension) to classify our detections probabilistically in Section 8. After refining the parameters of our detections, we discard all sources with rates slower than 3 pixels hr−1 (0.79'' hr−1, corresponding to distances ≳ 150 au), as such slow rates tend to accumulate false positives due to subtraction artifacts much more quickly than faster rates. We feed our remaining candidates through a final CNN that reduces the number of sources per chip to ∼10. The images we show to this CNN are similar to the right panel in Figure 3, and contain more information than the cutouts we show to the first CNN. Good sources tend to show a characteristic radial pattern, while false sources do not.

Figure 3.

Figure 3. Example of the image format used for visual inspection of our candidates. The detection shown in this figure is a synthetic source with an r-band magnitude of 27.0. The left panel is what we call an MCMC plot. The black teardrop-shaped region is the boundary of the space of possible KBO rates of motion. The gray region at slow rates represents the rate cut we made on our detections. The points are sampled from our MCMC, with colors corresponding to the S/N of the object in the stack (this object had a peak S/N of 8). Each point represents a targeted stack at the given rate. As a stack approaches the correct rate, causing the source to appear optimally point-like, the S/N increases in this characteristic manner. The inset labeled "Diff Coadd" is the stationary stack; the image shows no discernible signal. The inset labeled "Best Stack" is the stack at the best rate, as determined by the MCMC. In this image the KBO is quite apparent. The right panel shows a grid of stacks centered at the best rate, and offset in increments of 1 pixel hr−1 in R.A. rate and decl. rate.


Once we have refined our sources' rates and positions, we compute their fluxes and flux uncertainties using sep. We use these values to calibrate the magnitudes of our detections against the known magnitudes of our implanted sources, as well as obtaining magnitude uncertainties.

Finally we do a reverse stack on our data, in which we repeat the above procedure with negated R.A. and decl. rates. Because no physical KBO would appear as a point source when stacked at these rates, all sources that result from this stack are false positives. This reverse stack enables the critical step of accounting for false positives in our detections. In Figure 4 we show differential histograms of the number of sources resulting from the forward and reverse shift-and-stack as a function of S/N, both before and after applying weights and various cuts. In Figure 5 we show a scatter plot of the R.A. and decl. rates of all detections from the forward and reverse shift-and-stack, prior to human vetting.

Figure 4.

Figure 4. Number of sources resulting from the forward and reverse shift-and-stack as a function of S/N, both prior to human vetting, and after applying weights from human vetting and various cuts. It is clear that false positives drop off quickly with increasing S/N, and that our vetting procedure was highly effective at removing false positives.

Figure 5.

Figure 5. Rate vectors for all detections resulting from the forward (black) and reverse (red) shift-and-stack, prior to human vetting. There is a clear excess of detections in the forward shift-and-stack, corresponding to the Kuiper Belt.


4.3. Candidate Vetting

The final step in the discovery pipeline is the visual inspection of the grids that were labeled as good by the final CNN. This step includes the implanted synthetic sources and the sources from the reverse shift-and-stack. To do this visual inspection we developed a website where users vote yes, no, or maybe on a candidate and have their votes recorded in a database. The images are presented to the vetters in a blind manner, meaning that the vetter has no indication of whether an object is an implanted source, a source from the reverse shift-and-stack, or a true candidate. By blindly vetting all sources, we can reliably compute a voter's true and false positive rates for yes, no, and maybe votes. We require votes from three unique vetters for each object, and then combine the votes into a probability that a source is "real" using the following framework.

The odds (i.e., betting odds) of a source being good given a vote are

Equation (1): $O(+ \mid \mathrm{vote}) = O(+)\,\dfrac{P(\mathrm{vote} \mid +)}{P(\mathrm{vote} \mid -)}$

where O represents odds and P represents probability. The + symbol means that the object is truly a good source, and the − symbol means that the object is truly a bad source. We calculate the prior O(+) using the excess in the number of sources in the forward shift-and-stack over the number of sources in the reverse shift-and-stack. The fraction on the right-hand side of Equation (1) is the Bayes factor

Equation (2): $B(\mathrm{vote}) = \dfrac{P(\mathrm{vote} \mid +)}{P(\mathrm{vote} \mid -)}$

The quantity P(vote∣+) is calculated as the probability of assigning a given vote to an implanted source. Similarly, the quantity P(vote∣−) is calculated as the probability of assigning a given vote to a source from the reverse shift-and-stack.

Given multiple votes, we can simply update the information by taking the product of Bayes factors as

Equation (3): $O(+ \mid \mathrm{votes}) = O(+)\,\prod_i B_i$

where Bi is the Bayes factor of the ith voter. We can then convert the odds from Equation (3) into the probability that a source is real as

Equation (4): $w = P(+ \mid \mathrm{votes}) = \dfrac{O(+ \mid \mathrm{votes})}{1 + O(+ \mid \mathrm{votes})}$

We assign the values calculated by Equation (4) as a weight (w) for each of our detections. This treatment allows us to take a probabilistic approach in studying our detections in Sections 6–9.

In principle, P(vote∣+), P(vote∣−), and O(+) can all vary with magnitude and rate of motion. In practice, however, we found that P(vote∣+) and P(vote∣−) did not vary much over the range of brightnesses and rates that we considered for our fits in Sections 7 and 9, so we considered them to be constants. Similarly, parameterizing O(+) had little effect on our detections' weights after human inspection, so we chose to treat it as constant. See Table 2 for a tabulation of our vetters' Bayes factors, and see Figure 6 for correlations between the vetters' votes.
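A minimal sketch of this vote-combination arithmetic, using two of the Bayes factors from Table 2, is given below (the prior odds value here is illustrative; in practice it comes from the forward/reverse source excess):

```python
# Per-vetter Bayes factors B(vote) = P(vote|+) / P(vote|-), from Table 2.
bayes_factor = {
    ("Vetter 0", "yes"): 17.50, ("Vetter 0", "no"): 0.12, ("Vetter 0", "maybe"): 0.49,
    ("Vetter 1", "yes"): 76.96, ("Vetter 1", "no"): 0.10, ("Vetter 1", "maybe"): 1.40,
}

def weight(votes, prior_odds):
    """Combine votes into w = P(real | votes) via Equations (3) and (4)."""
    odds = prior_odds
    for vetter_vote in votes:
        odds *= bayes_factor[vetter_vote]   # Equation (3): product of Bayes factors
    return odds / (1.0 + odds)              # Equation (4): odds -> probability

print(weight([("Vetter 0", "yes"), ("Vetter 1", "maybe")], prior_odds=0.5))
```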

Figure 6.

Figure 6. Correlations between vetters' votes on detections from the reverse stack (top left), implanted sources in the forward stack (top right), and candidate sources in the forward stack (bottom). The numbers in each square tabulate the number of times the square's outcome occurred. Note that yes votes on sources from the reverse stack are not strongly correlated.


Table 2. Anonymized Bayes Factors of Each of Our Vetters

Person    B(yes)  B(no)  B(maybe)
Vetter 0  17.50   0.12   0.49
Vetter 1  76.96   0.10   1.40
Vetter 2  849.50  0.10   3.17
Vetter 3  42.03   0.19   1.09
Vetter 4  23.54   0.35   1.76


5. Detections

In this section we qualitatively analyze our detections; we do more thorough quantitative analyses in Sections 6–9. In our 20 nights of data we detected a weighted sum of 2297.9 objects with weights greater than 0.01, corresponding to 2896 unique sources. We have elected to omit 3698 sources with weights less than 0.01, as such detections are rather unlikely to be real, and their omission does not change the results of our analysis. While the majority of our remaining sources have weights close to 1, there are some more ambiguous cases. We show a mosaic of all detections with weights ≥ 0.4 in Figure 7.

Figure 7.

Figure 7. Mosaic of DEEP B1 detections with weights ≥ 0.4.


Based on our distribution of observed magnitudes (shown in Figure 8), it appears that our detection efficiency begins to fall off at magnitudes fainter than r ∼ 26 (though we explore this in detail in Section 6).

Figure 8.

Figure 8. Weighted distribution of the apparent magnitudes of our detections (where the weights are given by Equation (4)).


It is also informative to examine the sky moving rates of our detections, which we display in Figure 9. In this figure, the size of each marker is proportional to its weight. The most apparent feature here is a large population of objects moving at approximately 3'' hr−1, mostly corresponding to CCs (see Section 8). Based on the increased density of small points at slow rates of motion, we note a propensity for slow-moving false positives to pass our CNNs, but to be later given low weights after human inspection (there is a noticeable overdensity of points at slow rates in Figure 5 which is absent in Figure 9). At faster rates, points with low weights appear to be evenly spread. The presence of such features provides further evidence that a great deal of care in avoiding false positives is required when making statistical use of single-night detections. When such detections can be linked to several epochs, single-night false positives will be less problematic.

Figure 9.

Figure 9. Sky moving rates of our candidate detections. The size of each marker is proportional to its weight, as calculated by Equation (4). The dense cloud of points corresponds mostly to CC detections (see Section 8).


6. Detection Efficiency

In order to make use of our detections, we must understand the efficiency with which we recover our implanted synthetic sources. We parameterize the detection efficiency as a function of apparent magnitude using a single hyperbolic tangent function, given by the following equation

Equation (5): $\eta(m) = \dfrac{\eta_0}{2}\left[1 - \tanh\left(\dfrac{m - m_{50}}{\sigma}\right)\right]$

where η0 is the peak detection efficiency, m50 is the magnitude at which the detection efficiency drops to η0/2, and σ is the width of the hyperbolic tangent function. We weight each of the detections in our fit using Equation (4), and maximize the likelihood given by

Equation (6): $\mathcal{L}(\theta) = \prod_i \eta(m_i \mid \theta)^{w_i}\left[1 - \eta(m_i \mid \theta)\right]^{1 - w_i}$

where θ is the vector of function parameters, and undetected fakes receive w = 0. We display the best fit for each night in Figure 10, and list the fit parameters in Table 1.
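A sketch of this fit, assuming the weighted likelihood form given above in Equation (6), is shown below on synthetic recovery data (all inputs are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def eta(m, eta0, m50, sigma):
    """Equation (5): single hyperbolic tangent efficiency."""
    return 0.5 * eta0 * (1.0 - np.tanh((m - m50) / sigma))

def neg_log_like(theta, mags, w):
    e = np.clip(eta(mags, *theta), 1e-9, 1.0 - 1e-9)
    return -np.sum(w * np.log(e) + (1.0 - w) * np.log(1.0 - e))

# Synthetic implanted sources: recovered (w = 1) with probability eta(m).
rng = np.random.default_rng(1)
mags = rng.uniform(23.0, 27.2, size=5000)
w = (rng.uniform(size=mags.size) < eta(mags, 0.92, 26.2, 0.33)).astype(float)

fit = minimize(neg_log_like, x0=[0.9, 26.0, 0.3], args=(mags, w), method="Nelder-Mead")
print(fit.x)   # recovered (eta0, m50, sigma)
```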

Figure 10.

Figure 10. Recovery efficiency for implanted sources as a function of r-band magnitude. The combined efficiency for all 20 nights of data is given by the dashed line, while individual nights' efficiencies are given by the gray lines. The average m50 for our entire survey is mr ∼ 26.2, with individual nights ranging from 25.92–26.65. Our average peak efficiency, η0, is ≳0.92, with individual nights ranging from 0.85–0.95. For reference, we also show limits from DES (Bernardinelli et al. 2022), OSSOS (Bannister et al. 2018), and the Large Synoptic Survey Telescope (Ivezić et al. 2019).


7. The Luminosity Function

We use our characterized detections to compute the differential sky density Σ of the Kuiper Belt as a function of apparent magnitude. We reiterate that for this analysis we are using only single-epoch data (i.e., we have not linked these detections across multiple epochs), so we must treat each night as an independent survey. However, because our survey was designed to detect objects multiple times, our nights are not all statistically independent. As such, we selected a subset of our data consisting of the statistically independent set of nights {B1b 20201015, B1c 20201016, B1e 20201017, B1a 20201018, B1f 20201020, B1d 20201021} to do our fits. This subset offers the best combination of survey area and depth among all possible subsets of our data. Note that when fitting our data, we truncate our detection efficiency at m50, and ignore all fainter detections.

For a given probability distribution Σ(m), the expected number of detections by a survey is given by

Equation (7): $N(\theta) = \Omega \int \Sigma(m \mid \theta)\,\eta(m)\,dm$

where Ω is the survey's areal coverage and η(m) is its detection efficiency. The variable θ is a vector of function parameters. Next, the probability of randomly drawing an object with magnitude m from Σ(m) is given by

Equation (8): $P(m \mid \theta) = \dfrac{\int \Sigma(m' \mid \theta)\,\eta(m')\,\epsilon(m' \mid m)\,dm'}{\int \Sigma(m' \mid \theta)\,\eta(m')\,dm'}$

where $\epsilon$ is a functional representation of the magnitude uncertainty, for which we have adopted a Gaussian centered at m with a width of δm.

We calculate the underlying luminosity function (Σ) of the Kuiper Belt by maximizing the likelihood ($\mathcal{L}$) given by

Equation (9): $\mathcal{L}(\theta) = \prod_k e^{-N_k(\theta)} \prod_j P(m_{j,k} \mid \theta)^{w_{j,k}}$

(see, e.g., Loredo 2004; Fraser et al. 2014) where the index k runs through each night of data and the index j runs through the survey's detections. The value wj,k denotes the weight of the jth object detected by the kth survey, as calculated by Equation (4).
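For concreteness, the sketch below evaluates a likelihood of this kind for the single power law of Equation (10), ignoring magnitude uncertainties and using a standard inhomogeneous-Poisson form (see Loredo 2004); the survey area and detections are illustrative stand-ins:

```python
import numpy as np

def sigma_spl(m, alpha, m0):
    """Equation (10): objects per square degree per magnitude."""
    return 10.0 ** (alpha * (m - m0))

def eta(m, eta0=0.92, m50=26.2, width=0.33):
    return 0.5 * eta0 * (1.0 - np.tanh((m - m50) / width))

def log_likelihood(theta, det_mags, det_weights, omega_deg2):
    alpha, m0 = theta
    m_grid = np.linspace(22.0, 26.2, 2000)   # truncated at m50, as in the text
    dm = m_grid[1] - m_grid[0]
    n_exp = omega_deg2 * np.sum(sigma_spl(m_grid, alpha, m0) * eta(m_grid)) * dm
    per_det = np.log(omega_deg2 * sigma_spl(det_mags, alpha, m0) * eta(det_mags))
    return -n_exp + np.sum(det_weights * per_det)    # weights from Equation (4)

rng = np.random.default_rng(0)
dets = rng.uniform(24.0, 26.2, size=200)             # stand-in detections
print(log_likelihood([0.6, 23.3], dets, np.ones_like(dets), omega_deg2=18.0))
```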

Several previous works have studied the form of the luminosity function of KBOs (Bernstein et al. 2004; Petit et al. 2006; Fraser et al. 2008; Fraser & Kavelaars 2009; Fuentes et al. 2009). Following the example of these studies, we fit our data with functional forms of varying complexity. We first try a single power law given by

Equation (10): $\Sigma(m) = 10^{\alpha (m - m_0)}$

where m0 is the magnitude at which the density of objects is one per square degree and α is the power-law slope. We next try a rolling power law given by

Equation (11): $\Sigma(m) = \Sigma_{23}\,10^{\alpha_1 (m - 23) + \alpha_2 (m - 23)^2}$

where Σ23 is the number of objects with mr = 23 per square degree, while α1 and α2 control the shape of the function. We fit a broken power law given by

Equation (12): $\Sigma(m) = \begin{cases} 10^{\alpha_1 (m - m_0)} & m \leq m_B \\ 10^{\alpha_2 (m - m_0) + (\alpha_1 - \alpha_2)(m_B - m_0)} & m > m_B \end{cases}$

where m0 is a normalization parameter, mB is the magnitude at which the break occurs, and α1 and α2 are the bright and faint slopes, respectively. Finally we fit an exponentially tapered power law with the functional form

Equation (13): $\Sigma(m) = 10^{\alpha (m - m_0)}\,\exp\left[-10^{-\beta (m - m_B)}\right]$

where α is the faint-end power-law slope, β is the strength of the exponential taper, m0 is a normalization parameter, and mB is the magnitude at which the exponential taper begins to dominate.

In each case, we obtain a best fit using an optimizer, and then estimate our uncertainties by running an MCMC. We then use a survey simulation technique to test the quality of our fits. We randomly sample a population of objects from our best-fit Σ(m∣θ), and then impose the detection criteria of our survey to simulate detections. After we simulate our detections, we construct a simulated empirical distribution, E(m). By repeating this process many times, we end up with an ensemble of simulated empirical distribution functions, {Ei(m)}. This ensemble is representative of the distribution of possible outcomes for our survey under the assumption that our best-fit Σ(m∣θ) is the underlying truth. Next we calculate the upper and lower bounds of the central 95% interval of {Ei(m)} as a function of m, which we call u(m) and l(m), respectively. We use these values to define a test statistic that measures the fraction of the magnitude interval over which E(m) is an outlier at the 95% level. The statistic, which we call S, is given by

Equation (14): $S = \dfrac{1}{m_{\max} - m_{\min}} \int_{m_{\min}}^{m_{\max}} \Theta(m)\,dm$

where

Equation (15): $\Theta(m) = \begin{cases} 1 & E(m) < l(m)\ \mathrm{or}\ E(m) > u(m) \\ 0 & \mathrm{otherwise} \end{cases}$

We calculate S for each simulated survey to assemble a set {Si}, and then for our actual detections, Sd. We then compute the quantile of Sd among {Si}, which we call Qoutlier. Values of Qoutlier > 0.95 indicate a poor fit. Finally, we compute the Bayesian information criterion (BIC) for each of our fits. In general, lower values of BIC indicate a preferred model. We demonstrate our results in Figure 11, and summarize them in Table 3.

Figure 11.

Figure 11. Best-fit differential distributions for the single, rolling, broken, and exponentially tapered power laws in purple, blue, yellow, and red, respectively. The points and 1σ error bars represent the differential distribution of our detections, corrected for efficiency and weights. The points are meant only as a visual aid—we always use the maximum likelihood technique to fit our data. Note that our fits are not normalized with respect to latitude, but rather they represent the average sky density among the fields in our pointing mosaic.


Table 3. Best-fit Parameters and Statistics for Each of the Distributions Tested on the Full KBO Sample

Functional Form      Best-fit Parameters                                                                                                                              BIC   Qoutlier
Single Power Law     $\langle\alpha, m_0\rangle = \langle 0.60^{+0.02}_{-0.02},\ 23.31^{+0.09}_{-0.10}\rangle$                                                        5.2   0.99
Rolling Power Law    $\langle\alpha_1, \alpha_2, \Sigma_{23}\rangle = \langle 0.89^{+0.11}_{-0.10},\ -0.07^{+0.02}_{-0.03},\ 0.38^{+0.11}_{-0.09}\rangle$             0     0.19
Broken Power Law     $\langle\alpha_1, \alpha_2, m_0, m_B\rangle = \langle 0.90^{+0.16}_{-0.12},\ 0.49^{+0.04}_{-0.05},\ 23.59^{+0.09}_{-0.10},\ 24.59^{+0.26}_{-0.37}\rangle$   2.1   0.18
Tapered Power Law    $\langle\alpha, \beta, m_0, m_B\rangle = \langle 0.21^{+0.10}_{-0.08},\ 0.14^{+0.05}_{-0.04},\ 13.95^{+5.33}_{-11.40},\ 28.74^{+2.71}_{-2.18}\rangle$       7.9   0.83

Note. The BICs have been normalized such that the minimum value among the distributions is 0.


First we note that like several studies before us, we strongly rule out the single power law. The rolling, broken, and exponentially tapered power laws all provide acceptable fits. Judging by the BIC, the rolling power law is marginally preferable to the broken and exponentially tapered power laws, but we contend that a dearth of detections brightward of mr = 24 limits the usefulness of such comparisons. There is also no overwhelming physical motivation for choosing between the distributions. While Kavelaars et al. (2021) showed that the absolute magnitude distribution of the CCs is well described by an exponentially tapered power law, there is no reason a priori that it should fit the full Kuiper Belt luminosity function. In fact, we might expect the full Kuiper Belt luminosity function to be fit poorly by an exponentially tapered power law, because it is likely a mix of multiple distributions. However, as we show in Section 8, our detections are dominated by CCs. It is therefore unsurprising that the exponentially tapered power law yields a reasonable fit. Given these considerations, we opt not to choose a preferred model, and instead claim for the time being that all three distributions provide acceptable fits to the DEEP detections.

8. Isolating a Sample of CCs

While our single-night detections do not provide nearly enough information for secure dynamical classification, we can use the detections' rates of motion to make assumptions about their dynamical classes. Consider the space of allowable sky motions for bound orbits shown in Figure 12. As before, the large black outline contains the space of possible rates of motion for objects on bound orbits at topocentric distances of 35–1000 au. The blue region is the allowable parameter space for CCs (42.4 au < a < 47.7 au, 0 < e < 0.2, and 0° < i < 4°), and the red region is the allowable parameter space for a somewhat arbitrary definition of hot classicals (HCs) that we are only using for the purpose of demonstration (42.4 au < a < 47.7 au, 0 < e < 0.2, and 4° < i < 45°). Note that the exact shape and orientation of these regions vary as functions of sky position and epoch.

Figure 12.

Figure 12. Sky motion parameter space for simulated CCs and HCs in B1a on 2020 October 18. The dashed line shows the CC selection region for this night.


In practice, there is some overlap in the rates of motion between different dynamical classes, which is accentuated by the uncertainties in our rate measurements. We use survey simulations to isolate the CCs in our detections. Using the OSSOS++ Kuiper Belt model, we calculate which objects our survey would have detected, and apply a smear in the R.A. and decl. rates consistent with our real detections. For each simulation we use a kernel density estimator to draw a region containing 95% of the CCs. We find that we can isolate a fairly pure sample (on average ∼70% purity; see Table 4) of CCs from our single-night data. This purity is somewhat serendipitous, as the DEEP B1 fields happen to be in a patch of sky that is relatively uncontaminated by objects in resonance with Neptune. We speculate that the uncertainties inherent to the distribution of objects in the OSSOS++ model are larger than the purely statistical uncertainties listed in Table 4. Nevertheless, these values should provide a realistic estimate of the purity of the CC samples that we attempt to isolate for the analysis in Section 9.

Table 4. Weighted Number of Detections in Each of Our 20 Fields, Along with Relative Dynamical Classification Probabilities

Night       Field  Total Detections  Potential CCs  CC Fraction  HC Fraction  Resonant Fraction  CC fits
2019-08-27  B1c    89.94             69.91          0.68 ± 0.06  0.19 ± 0.04  0.12 ± 0.05        no
2019-08-28  B1a    78.39             64.56          0.74 ± 0.05  0.15 ± 0.03  0.11 ± 0.03        no
2019-08-29  B1b    160.83            131.18         0.74 ± 0.04  0.15 ± 0.03  0.10 ± 0.02        no
2019-09-26  B1a    81.58             65.10          0.75 ± 0.05  0.14 ± 0.04  0.11 ± 0.04        no
2019-09-27  B1b    77.85             56.96          0.74 ± 0.06  0.16 ± 0.05  0.09 ± 0.03        no
2019-09-28  B1c    118.20            88.98          0.68 ± 0.06  0.18 ± 0.04  0.13 ± 0.04        no
2020-10-15  B1b    68.72             53.86          0.71 ± 0.04  0.18 ± 0.04  0.10 ± 0.03        yes
2020-10-16  B1c    106.54            85.13          0.66 ± 0.05  0.19 ± 0.05  0.13 ± 0.04        yes
2020-10-17  B1e    120.69            103.62         0.75 ± 0.05  0.15 ± 0.04  0.09 ± 0.03        yes
2020-10-18  B1a    124.90            102.37         0.71 ± 0.04  0.17 ± 0.04  0.11 ± 0.02        yes
2020-10-19  B1d    54.50             42.09          0.66 ± 0.07  0.21 ± 0.05  0.12 ± 0.04        no
2020-10-20  B1f    56.72             38.71          0.57 ± 0.08  0.23 ± 0.05  0.18 ± 0.06        no
2020-10-21  B1d    76.85             53.24          0.66 ± 0.06  0.21 ± 0.05  0.11 ± 0.04        yes
2021-09-27  B1d    68.32             58.31          0.68 ± 0.06  0.19 ± 0.05  0.12 ± 0.04        no
2021-10-01  B1b    110.33            85.49          0.71 ± 0.05  0.18 ± 0.05  0.10 ± 0.02        no
2021-10-02  B1f    80.02             56.98          0.55 ± 0.07  0.26 ± 0.07  0.18 ± 0.06        no
2021-10-03  B1i    114.10            92.96          0.69 ± 0.05  0.18 ± 0.03  0.12 ± 0.04        no
2021-10-04  B1c    84.95             61.39          0.68 ± 0.06  0.18 ± 0.03  0.13 ± 0.05        no
2021-10-05  B1h    109.67            90.02          0.74 ± 0.04  0.15 ± 0.04  0.10 ± 0.03        no
2021-10-06  B1e    103.83            81.73          0.73 ± 0.06  0.16 ± 0.05  0.10 ± 0.04        no

Note. We have ignored all objects fainter than the m50 of the night on which they were detected. The fraction of objects with rates consistent with being CCs was calculated using survey simulations. The final column indicates whether a field was used to derive constraints for the CC population.


Finally, we obtain a distance estimate and uncertainty for each of our CC detections. These estimates rely on the fact that in the regime of the CCs, the relationship between an object's heliocentric distance and the inverse of its apparent rate of motion can be well approximated as linear. For each night of data, we project our CC population model (obtained from the OSSOS++ model) into the space of R.A. rate and decl. rate, and fit a line relating distance to the inverse of the apparent sky motion. We then use the resulting relationship to compute the heliocentric distance of each of our detections. By sampling from the covariance of our detections' rates, we obtain Gaussian distance uncertainties of at most 1–2 au, which end up being precise enough to fit an absolute magnitude distribution (Section 9).
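The sketch below shows the structure of this estimate with invented stand-in numbers for the simulated CC population (the real fit uses the OSSOS++ projection for each night):

```python
import numpy as np

# Stand-in simulated CCs: slower apparent rates correspond to larger distances.
model_rate = np.array([2.8, 3.0, 3.2, 3.4])        # arcsec/hr
model_dist = np.array([46.5, 43.9, 41.6, 39.5])    # au

# Fit distance as a linear function of inverse rate.
slope, intercept = np.polyfit(1.0 / model_rate, model_dist, deg=1)

measured_rate = 3.1                                # arcsec/hr, from the MCMC stage
print(f"r ~ {slope / measured_rate + intercept:.1f} au")
```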

9. The Absolute Magnitude Distribution of the CCs

Determining the absolute magnitude (H) of an object requires simultaneous knowledge of its apparent magnitude and its heliocentric distance. While our detections have reasonably well-constrained apparent magnitudes (∼±0.1 mag), their distances are not as well constrained from single-night detections a priori. However, as we found in Section 8, any true CCs are within a narrow range of r, with uncertainties of only 1–2 au, leading to uncertainty of only 0.1–0.2 mag in H.

For a given probability distribution Σ(H), the expected number of detections by a survey is given by

Equation (16): $N(\theta) = \Omega \iint \Sigma(H \mid \theta)\,\Gamma(r)\,\eta\big(m(H, r)\big)\,dH\,dr$

where Ω is the survey's areal coverage and Γ is the underlying radial distribution of objects in the survey's field of view. Note that we use the OSSOS++ Kuiper Belt model (Kavelaars et al. 2021) to compute a kernel density for Γ. Since our fields are all observed near opposition, it is a good approximation to use

Equation (17): $m = H + 5\log_{10}\left[r\,(r - 1)\right]$, where r is the heliocentric distance in au (and r − 1 approximates the topocentric distance at opposition).

Next, the probability of randomly drawing an object with magnitude m and heliocentric distance r from the distribution Σ(H) is given by

Equation (18): $P(m, r \mid \theta) = \dfrac{\iint \Sigma\big(H(m', r') \mid \theta\big)\,\Gamma(r')\,\eta(m')\,\epsilon(m' \mid m)\,\gamma(r' \mid r)\,dm'\,dr'}{\iint \Sigma\big(H(m', r') \mid \theta\big)\,\Gamma(r')\,\eta(m')\,dm'\,dr'}$

where $\epsilon$ is a functional representation of the magnitude uncertainty and $\gamma$ is a functional representation of the uncertainty in the heliocentric distance. Note that we have taken $\epsilon$ to be a Gaussian centered at m with a width of δm, and $\gamma$ to be a Gaussian centered at r with a width of δr.

We again use a modified version of the method described by Fraser et al. (2014) in which we maximize the likelihood given by

Equation (19): $\mathcal{L}(\theta) = \prod_k e^{-N_k(\theta)} \prod_j P(m_{j,k}, r_{j,k} \mid \theta)^{w_{j,k}}$

where the index k runs through the surveys (in our case individual nights of data), and the index j runs through the survey's detections.

We again fit single, rolling, broken, and exponentially tapered power laws. Note, however, that we have changed the definition of the rolling power law to

Equation (20): $\Sigma(H) = \Sigma_8\,10^{\alpha_1 (H - 8) + \alpha_2 (H - 8)^2}$

where Σ8 is the number of objects per square degree per magnitude at Hr = 8.

For these fits, we must also account for the contamination in our sample from non-CCs. We do this by sampling from each night's detections that have rates consistent with being a CC, accepting each detection with a probability given by the CC fraction column in Table 4. Note that we omit the B1f field due to the projected low purity of its isolated CC sample. We then do the maximum likelihood fit, and compute our uncertainties as in the previous sections. We repeat this procedure 100 times, and then aggregate the samples from all of the MCMCs into a combined posterior. We then calculate the best-fit parameters and uncertainties from this aggregation. For the fitting statistics, we now report the means of BIC and Qoutlier from our simulations as 〈BIC〉 and 〈Qoutlier〉, respectively. We show our fits in Figure 13, and present our results in tabular form in Table 5.
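Schematically, one realization of this contamination treatment looks like the sketch below, where fit_h_distribution is a hypothetical stand-in for the maximum-likelihood fit plus MCMC of this section:

```python
import numpy as np

def pooled_posterior(detections, cc_fraction, fit_h_distribution, n_rep=100, seed=2):
    """Accept each CC-like candidate with probability cc_fraction (Table 4),
    refit, and pool the posterior samples over all repetitions."""
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n_rep):
        keep = rng.uniform(size=len(detections)) < cc_fraction
        samples.append(fit_h_distribution(detections[keep]))
    return np.concatenate(samples)

# Demo with a dummy "fit" that just records the subsample size.
pooled = pooled_posterior(np.arange(100), 0.71, lambda d: np.array([len(d)]))
print(pooled.mean())   # ~71 accepted detections per realization
```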

Figure 13.

Figure 13. Best-fit differential distributions for the single, rolling, broken, and exponentially tapered power laws in purple, blue, yellow, and red, respectively. The points and 1σ error bars represent the differential distribution of our detections, corrected for efficiency and weights. Note that our fits are not normalized with respect to latitude, but rather they represent the average sky density among the fields in our pointing mosaic.


Table 5. Best-fit Parameters and Statistics for Each of the Distributions Tested for the Absolute Magnitude Distribution of the CC Subsample of Our Detections

Functional Form      Best-fit Parameters                                                                                                                           〈BIC〉  〈Qoutlier〉
Single Power Law     $\langle\alpha, H_0\rangle = \langle 0.60^{+0.04}_{-0.04},\ 7.43^{+0.12}_{-0.13}\rangle$                                                      4.0    0.86
Rolling Power Law    $\langle\alpha_1, \alpha_2, \Sigma_8\rangle = \langle 0.77^{+0.10}_{-0.08},\ -0.10^{+0.04}_{-0.05},\ 2.26^{+0.35}_{-0.33}\rangle$             0      0.20
Broken Power Law     $\langle\alpha_1, \alpha_2, H_0, H_B\rangle = \langle 1.02^{+0.48}_{-0.23},\ 0.50^{+0.07}_{-0.09},\ 7.57^{+0.14}_{-0.19},\ 8.20^{+0.51}_{-0.67}\rangle$   5.2    0.66
Tapered Power Law    $\langle\alpha, \beta, H_0, H_B\rangle = \langle 0.26^{+0.08}_{-0.07},\ 0.19^{+0.08}_{-0.06},\ 2.22^{+2.42}_{-3.72},\ 10.62^{+1.86}_{-1.14}\rangle$       5.9    0.56

Note. The BIC values have each been rescaled such that the minimum value among the distributions is 0.


While the BIC favors the rolling power law, we do not believe that we can give preference to any distribution. This lack of constraining power is largely due to our dearth of detections at the bright end of the H distribution (we have no CC detections brighter than Hr ∼ 6.3). In contrast, Kavelaars et al. (2021) found that the OSSOS++ detections were inconsistent with rolling and broken power laws due to a taper at the bright end of the H distribution. This sensitivity was only possible due to the high purity of the OSSOS++ sample and its well-characterized orbits. Our inability to rule out the rolling and broken power laws may be due to some combination of a relatively small survey area and contamination in our CC sample. On the other hand, the OSSOS++ sample only went as deep as Hr ∼ 8.3 (at least two magnitudes brighter than DEEP), requiring the assumption of a faint-end power law. In contrast, we constrain the faint end of the H distribution rather well. Our measured faint-end slope, $\alpha = 0.26^{+0.08}_{-0.07}$, is consistent with SI simulations (Abod et al. 2019). In Figure 14 we show a comparison of our exponentially tapered power-law fit with that of Kavelaars et al. (2021). The two results are consistent (i.e., the OSSOS result is within our 95% confidence region), and both surveys are consistent with the result of Bernstein et al. (2004), which is the deepest KBO survey to date.

Figure 14.

Figure 14. Absolute magnitude distribution of the CCs in our sample compared to those in Kavelaars et al. (2021). The black line is our best-fit exponentially tapered power law, and the dark gray region represents the 95% confidence interval. Note that we have no detections brighter than Hr ∼ 6.3. The red line is the best fit from Kavelaars et al. (2021), and the white and green circles were taken from the same work. The white circles represent where the inventory of the CCs is considered complete, and the green circle is computed using the detections from Bernstein et al. (2004).


The consistency of our detections with a rolling power law warrants careful consideration. First, we note that for appropriate parameters (namely a negative quadratic coefficient α2), a rolling power law of the form $10^{\alpha_1 x + \alpha_2 x^2}$ is functionally equivalent to a Gaussian in x. All rolling power-law fits we have presented in this work satisfy this criterion, and can therefore be equally well represented as normal distributions. If the CCs are normally distributed in H, it would imply that they have a characteristic size, after which the H distribution turns over. While we currently have no reason to suspect that the size distribution turns over, our data cannot rule it out as a possibility. Furthermore, the only survey in the literature with data beyond our limit (Bernstein et al. 2004) cannot distinguish between the models, as it is consistent with extrapolations of both the rolling and exponentially tapered power laws. While more precise measurements of the CC H distribution from forthcoming DEEP data will help to bring the true distribution into clearer focus, an even deeper targeted CC survey may be necessary to distinguish between models.

10. Mass of the CC Kuiper Belt

Given our absolute magnitude distribution, we can calculate the total mass of CCs as

Equation (21): $M_{\mathrm{CC}} = \dfrac{1}{F} \int \Sigma(H)\,M(H \mid p, \rho)\,dH$

where F is an estimate of the average fraction of the total CC population per square degree in one of our fields (we calculate F ≈ 3.27 × 10−4 using the OSSOS++ model), and M is the mass of a body as a function of H given its geometric albedo, p, and mass density, ρ. If we assume that ρ and p are constant among the population of CCs, we can manipulate Equation (21) to pull the ρ and p dependence out of the integral, as

Equation (22): $M_{\mathrm{CC}} = \dfrac{\rho}{p^{3/2}}\,\dfrac{\pi}{6F}\,(1329\ \mathrm{km})^3 \int \Sigma(H)\,10^{-3H/5}\,dH$

Note that the quantity $\frac{\pi}{6}\,(1329\ \mathrm{km})^3 \int \Sigma(H)\,10^{-3H/5}\,dH$ gives the volume of CCs per square degree in units of km3 deg−2. We find that MCC(Hr < 10.5) (i.e., the mass of the CCs up to our detection limit) is (at 95% CL)

Equation (23)

If we extrapolate our fits out to Hr = 12, we find

Equation (24): $M_{\mathrm{CC}}(H_r < 12) = 0.0017^{+0.0010}_{-0.0004}\,M_\oplus$

To facilitate a comparison, we can modify the form of the mass estimate reported by Bernstein et al. (2004) to

Equation (25)

where r is the average heliocentric distance of a CC. While the mass estimate from Bernstein et al. (2004) goes slightly deeper than our extrapolation, the extra mass is negligible, so the two mass estimates are in excellent agreement. Finally, we note that since our H distribution is consistent with that found by Kavelaars et al. (2021), our mass estimates must also be in agreement.
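The numerical sketch below traces Equations (21) and (22) using the best-fit tapered parameters from Table 5 and the F value quoted above. The normalization of Σ(H) here is schematic, so the output illustrates the procedure (landing near the lower end of the Equation (24) interval) rather than reproducing the full posterior:

```python
import numpy as np

p, rho = 0.15, 1.0          # geometric albedo; density in g cm^-3
F = 3.27e-4                 # fraction of the CC population per square degree

def sigma_h(H, alpha=0.26, beta=0.19, H0=2.22, HB=10.62):
    """Tapered H distribution with the Table 5 parameters (normalization schematic)."""
    return 10.0 ** (alpha * (H - H0)) * np.exp(-(10.0 ** (-beta * (H - HB))))

H = np.linspace(5.0, 12.0, 2001)
dH = H[1] - H[0]
D_cm = (1329.0 / np.sqrt(p)) * 10.0 ** (-H / 5.0) * 1.0e5   # diameter: km -> cm
mass_g = (np.pi / 6.0) * rho * D_cm**3                      # per-object mass
M_cc = np.sum(sigma_h(H) * mass_g) * dH / F                 # Equation (21)
print(f"M_CC(H_r < 12) ~ {M_cc / 5.97e27:.4f} Earth masses")
```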

11. Consistency with Deeper Surveys

The only survey in the literature that is significantly deeper than the present work is that of Bernstein et al. (2004), which reached m50 = 29.02 over a search area of 0.019 deg2. Although the survey area is quite small, we can use it as a powerful lever arm to determine whether our fits remain valid down to Hr ∼ 12. To do so, we simulate the survey of Bernstein et al. (2004), using our Σ(H) fits (and their uncertainties) as the true underlying CC H distribution. For each H form, we simulate the survey $10^3$ times, and then ask whether the true number of detections made by the survey (N = 3) is commensurate with the suite of simulations. In particular, we calculate P(≤N), the probability that the survey would have made N or fewer detections. For the broken power law we find P(≤3) < 0.02, for the exponentially tapered power law we find P(≤3) = 0.16, and for the rolling power law we find P(≤3) = 0.65.

First, we note that our best-fit exponentially tapered and rolling power laws are both consistent with the Bernstein et al. (2004) detections. We are hesitant to rule out the broken power law because we find that dropping the m50 of Bernstein et al. (2004) from r ∼ 29.02 to r ∼ 28.82 yields P(≤3) = 0.05 for the broken power law. While this result is still indicative of a marginal fit, it is not strong enough to rule out the broken power law altogether. Although the Bernstein et al. (2004) survey had sensitivity as faint as r ∼ 29.02, its faintest detection was r ∼ 28.23, so it is possible that the reported m50 was overestimated. If, on the other hand, the reported efficiency of Bernstein et al. (2004) is accurate, it is possible that the H distribution of the CCs flattens out (and possibly turns over) somewhere between the limits of DEEP and Bernstein et al. (2004). While DEEP will not be able to resolve this issue, a very deep targeted CC survey should be able to answer the question (Stansberry et al. 2021).
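The core of this test is a simple Poisson comparison, sketched below; the expected count is a hypothetical input that would come from integrating each Σ(H) fit over the Bernstein et al. (2004) area and efficiency:

```python
import numpy as np

rng = np.random.default_rng(3)
n_expected = 6.0                              # hypothetical mean detection count
sims = rng.poisson(n_expected, size=1000)     # 10^3 simulated surveys
print(f"P(<=3) = {np.mean(sims <= 3):.2f}")   # compare with the survey's N = 3
```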

12. Discussion and Conclusions

In this paper we have presented our single-night detections from 20 nights of data in the DEEP B1 field. By using a shift-and-stack technique, we were able to achieve an r-band depth of ∼26.2 over approximately 60 deg² of sky. Our data yielded 2297.9 single-epoch candidate detections, including 1849.8 detections fainter than mr ∼ 25, exceeding the number of such faint detections reported by any previous single survey by more than an order of magnitude.

Our claim of fractional discoveries is a first for KBO science, and as such may seem peculiar. However, as we have shown in Section 4.2, a weighted treatment of our detections allows us to properly account for false positives. By accounting for false positives, we are able to make full use of the data from even our deepest nights, whose faintest detections cannot be recovered in another epoch. Additionally, our statistics remain reliable near the detection limit where false positives tend to accumulate.
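As an illustration of how fractional counts arise, here is a sketch of the weighting, with the prior odds and the true- and false-positive rates as toy inputs (the real values come from the characterization described in Section 4.2):

```python
import numpy as np

def signal_probability(prior_odds, tpr, fpr):
    # Posterior odds = prior odds x (true-positive rate / false-positive
    # rate); the weight is the corresponding posterior probability.
    odds = prior_odds * tpr / fpr
    return odds / (1.0 + odds)

# Toy ensemble of candidates; all numbers are illustrative.
rng = np.random.default_rng(0)
w = signal_probability(prior_odds=0.5,
                       tpr=rng.uniform(0.6, 0.9, size=2298),
                       fpr=rng.uniform(0.05, 0.3, size=2298))
print(f"weighted detection count: {w.sum():.1f}")  # fractional by design
```

Summing these per-candidate probabilities, rather than counting candidates as 0 or 1, is what yields totals such as 2297.9.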

Using 554.4 single-night detections from six unique DECam pointings, we computed the luminosity function of the Kuiper Belt as a whole (Section 7). Then, using ∼280.0 CC detections from five unique DECam pointings, we calculated the luminosity function of the CC population (see the Appendix) down to mr ∼ 26.5. In both cases, we were able to confirm that a single power law is not suitable to describe the underlying distribution. We found that rolling, broken, and exponentially tapered power laws all yielded acceptable fits, though low discriminating power at the bright end prevented us from giving overwhelming preference to any one model.

The most significant scientific result of this work is a measurement of the absolute magnitude distribution of the CCs down to Hr ∼ 10.5. While a dearth of bright objects limits our constraining power at the bright end, our plethora of faint detections enables us to tightly constrain the faint end of the distribution. Our detections are consistent with an exponentially tapered power law with a faint-end slope $\alpha ={0.26}_{-0.07}^{+0.08}$. This faint-end slope is marginally shallower than previous measurements in the literature, but is consistent with simulations of planetesimal formation via the streaming instability (SI). This is of particular interest because the theory predicts a physically motivated size distribution of CCs (as opposed to other size distributions, which have simply been engineered to adequately describe observations) that matches observations over the full range of sizes probed to date. However, we stress that these conclusions must be approached with caution. Of particular consequence to this work, the limited resolution of SI simulations leaves the theoretical faint-end slope somewhat uncertain (see Kavelaars et al. 2021 for an in-depth discussion of the outstanding problems with the SI as a complete theory of planetesimal formation). While the exponentially tapered power-law H distribution is a good fit to our CC detections, we cannot rule out rolling or broken power laws. Both the exponentially tapered and rolling power-law distributions are consistent with the results of Bernstein et al. (2004), implying that the H distribution continues to flatten beyond the DEEP limit. In contrast, our broken power law may be in tension with the results of Bernstein et al. (2004). Interestingly, our rolling power-law fit would imply a characteristic size for the CCs, beyond which the probability density rolls over.

Finally, we note some limitations of this work that can be improved upon in future studies. The most obvious limitation is our use of single-night detections. Although we developed robust new techniques to account for false positives, linked detections with well-constrained orbits are still preferable. Since we used single-night detections in this work, our selection of a CC subsample was rather rough, relying heavily on the OSSOS++ solar system model and the serendipity that Neptune was near these fields, meaning that they were necessarily relatively devoid of resonant objects. Linked orbits will enable proper dynamical classification, thus reducing the uncertainties in our studies of individual dynamical classes. In forthcoming works, we will analyze three additional fields similar to the B1 field. We will link our detections, yielding well-determined orbits for the smallest known KBOs. Our catalog of discoveries will enable studies of Kuiper Belt populations to unprecedented depth, providing deep insight into the formation and evolution of our planetary system.

Acknowledgments

We thank Gary Bernstein and an anonymous referee for insightful and engaged reviews that have improved the clarity and fidelity of this paper. This work is based in part on observations at Cerro Tololo Inter-American Observatory at NSF's NOIRLab (NOIRLab Prop. ID 2019A-0337; PI: D. Trilling), which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. This work is supported by the National Aeronautics and Space Administration under grant Nos. NNX17AF21G and 80NSSC20K0670 issued through the SSO Planetary Astronomy Program and by the National Science Foundation under grant Nos. AST-2009096 and AST-2107800. This research was supported in part through computational resources and services provided by Advanced Research Computing at the University of Michigan, Ann Arbor. This work used the Extreme Science and Engineering Discovery Environment (Towns et al. 2014), which is supported by National Science Foundation grant No. ACI-1548562. This work used the XSEDE Bridges GPU and Bridges-2 GPU-AI at the Pittsburgh Supercomputing Center through allocation TG-AST200009. K. J. Napier is supported by the Eric and Wendy Schmidt AI in Science Postdoctoral Fellowship, a Schmidt Futures program. H. Smotherman acknowledges support by NASA under grant No. 80NSSC21K1528 (FINESST). H. Smotherman, M. Jurić, P. Bernardinelli, and C. Chandler acknowledge support from the University of Washington College of Arts and Sciences, Department of Astronomy, and the DiRAC Institute. The DiRAC Institute is supported through generous gifts from the Charles and Lisa Simonyi Fund for Arts and Sciences and the Washington Research Foundation. M. Jurić wishes to acknowledge the support of the Washington Research Foundation Data Science Term Chair fund and the University of Washington Provost's Initiative in Data-Intensive Discovery. This project used data obtained with the Dark Energy Camera (DECam), which was constructed by the Dark Energy Survey (DES) collaboration. Funding for the DES Projects has been provided by the US Department of Energy, the US National Science Foundation, the Ministry of Science and Education of Spain, the Science and Technology Facilities Council of the United Kingdom, the Higher Education Funding Council for England, the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the Kavli Institute for Cosmological Physics at the University of Chicago, the Center for Cosmology and Astro-Particle Physics at the Ohio State University, the Mitchell Institute for Fundamental Physics and Astronomy at Texas A&M University, Financiadora de Estudos e Projetos, Fundação Carlos Chagas Filho de Amparo à Pesquisa do Estado do Rio de Janeiro, Conselho Nacional de Desenvolvimento Científico e Tecnológico and the Ministério da Ciência, Tecnologia e Inovação, the Deutsche Forschungsgemeinschaft, and the Collaborating Institutions in the Dark Energy Survey.
The Collaborating Institutions are Argonne National Laboratory, the University of California at Santa Cruz, the University of California at Los Angeles, the University of Cambridge, Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas-Madrid, the University of Chicago, University College London, the DES-Brazil Consortium, the University of Edinburgh, the Eidgenössische Technische Hochschule (ETH) Zürich, Fermi National Accelerator Laboratory, the University of Illinois at Urbana-Champaign, the Institut de Ciències de l'Espai (IEEC/CSIC), the Institut de Física d'Altes Energies, Lawrence Berkeley National Laboratory, the Ludwig-Maximilians Universität München and the associated Excellence Cluster Universe, the University of Michigan, NSF's NOIRLab, the University of Nottingham, the Ohio State University, the OzDES Membership Consortium, the University of Pennsylvania, the University of Portsmouth, SLAC National Accelerator Laboratory, Stanford University, the University of Sussex, and Texas A&M University. C.F. acknowledges support from the BASAL Centro de Astrofísica y Tecnologías Afines (CATA) ANID BASAL project FB210003 and from the Ministry of Economy, Development, and Tourism's Millennium Science Initiative through grant IC120009, awarded to The Millennium Institute of Astrophysics (MAS).

Appendix: The Luminosity Function of the CCs

Here we calculate the luminosity function of the subset of our detections that have rates consistent with being CCs (see Section 8). For these fits, we follow the same general sampling procedure as in Section 9. We summarize our results in Table 6 and show the fits in Figure 15.

Figure 15. Best-fit differential distributions for the single, rolling, broken, and exponentially tapered power laws in purple, blue, yellow, and red, respectively. The points and 1σ error bars represent the differential distribution of our detections, corrected for efficiency and weights. Note that our fits are not normalized with respect to latitude, but rather represent the average sky density among the fields in our pointing mosaic.

Table 6. Best-fit Parameters and Statistics for Each of the Distributions Tested for the Luminosity Function of the CC Subsample of Our Detections

Functional Form | Best-fit Parameters | ⟨BIC⟩ | Q_outlier
Single Power Law | $\langle \alpha ,{m}_{0}\rangle =\langle {0.59}_{-0.04}^{+0.04},\,{23.72}_{-0.13}^{+0.12}\rangle $ | 1.9 | 0.93
Rolling Power Law | $\langle {\alpha }_{1},{\alpha }_{2},{{\rm{\Sigma }}}_{23}\rangle =\langle {0.97}_{-0.17}^{+0.20},\,-{0.09}_{-0.04}^{+0.04},\,{0.19}_{-0.07}^{+0.10}\rangle $ | 0 | 0.21
Broken Power Law | $\langle {\alpha }_{1},{\alpha }_{2},{m}_{0},{m}_{B}\rangle =\langle {0.99}_{-0.21}^{+0.43},\,{0.48}_{-0.08}^{+0.07},\,{23.86}_{-0.18}^{+0.14},\,{24.55}_{-0.66}^{+0.43}\rangle $ | 3.6 | 0.55
Tapered Power Law | $\langle \alpha ,\beta ,{m}_{0},{m}_{B}\rangle =\langle {0.16}_{-0.05}^{+0.09},\,{0.17}_{-0.05}^{+0.07},\,{12.23}_{-8.05}^{+5.73},\,{28.02}_{-1.50}^{+2.47}\rangle $ | 7.9 | 0.80

Note. The BIC values have each been rescaled such that the minimum value among the distributions is 0.


Here a single power law provides a very marginal fit. As with the full set of KBOs, we cannot rule out the rolling, broken, or exponentially tapered power laws.
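For reference, a sketch of the BIC convention used in Table 6. Here we assume the BIC is computed from each model's maximized log-likelihood and then shifted so the best model sits at 0, matching the note above; the log-likelihood values below are placeholders, not our fitted values:

```python
import numpy as np

def rescaled_bic(log_like, k_params, n_data):
    """BIC = k ln(n) - 2 ln(L), shifted so the minimum is zero."""
    bic = np.array([k * np.log(n_data) - 2.0 * ll
                    for ll, k in zip(log_like, k_params)])
    return bic - bic.min()

# Placeholder log-likelihoods for the single, rolling, broken, and
# tapered power laws; k counts each model's free parameters.
print(rescaled_bic(log_like=[-410.2, -406.0, -406.6, -408.8],
                   k_params=[2, 3, 4, 4], n_data=280))
```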

Footnotes

  • 11  

    In this work, we treat VR and r equally, as we found that the color term between them was very small. In future work we will measure the VR – r color term and do a proper transformation between the two filters.

  • 12  

    Most of the images can be matched successfully using only sources with smaller ellipticity. However, a small number of them fail due to issues with the images, e.g., bad telescope guiding. This value ensures that every image can be processed using the same pipeline parameter settings. Since galaxies may also be included during this process, we clip photometric outliers to purify the source list used for calibration.

  • 13  

    The rSDSS magnitudes were converted from Pan-STARRS rP1 and gP1 magnitudes using Tonry et al. (2012).

  • 14  

    This step is performed prior to template construction.

  • 15  

    The weighting is not quite optimal because the variance of the difference images is suppressed by the smoothing that occurs from interpolating the images onto a grid, and because the seeing differs from exposure to exposure, but it works well.

  • 16  

    Note that since we are searching through retrograde orbits, we are sensitive to objects at closer distances.

  • 17  

    For bodies in the inner solar system one should transform to the heliocentric frame, though in practice the distinction makes little difference.

  • 18  

    At this point, one can make cuts on the objects to disallow certain kinds of orbits as a function of Keplerian elements. For example, making a cut to consider only prograde orbits will drastically reduce the size of the grid.

  • 19  

    For completeness, we note that we used the matched filter for our detections. However, the details are of little consequence, as even a simple peak finding algorithm works well for this purpose.

  • 20  

    It is only valid to treat these weights as probabilities if the prior expectation agrees reasonably well with the posterior, which was true in this work.

  • 21  

    We find that our true-positive rate falls off for fainter sources, but since our fits neglect sources fainter than 50% of the maximum efficiency, the true- and false-positive rates are fairly constant over the full range of brightness. The prior odds also fall off for fainter sources, but since our detections are dominated by faint sources, the assumption of a uniform prior only results in a slight underestimate of the odds for bright sources, which end up having weights close to 1 anyway.

  • 22  

    More complicated functional forms do not improve the fit, and do not change the results of our analysis.

  • 23  

    This quantity is colloquially referred to as the luminosity function.

  • 24  

    We also experimented with other subsets of our data, and all yielded results consistent with the results presented here.

  • 25  

    For completeness, we note that the distribution of uncertainties in the magnitudes is not exactly Gaussian. More specifically, we estimate that the true normalized distribution is slightly narrower than a true Gaussian, with slightly more power in the tails (not shown here). The difference in FWHM between the two distributions is only about 10%. As a result, we use the Gaussian approximation for simplicity.

  • 26  

    By applying a weight to each of our candidate detections, we are properly accounting for false positives in our data. This weight term, which is not present in other KBO analyses of this nature, was first derived by Manski & Lerman (1977).

  • 27  

    The OSSOS++ model is designed to be a realistic model of the Kuiper Belt, including correct relative population sizes of the different Kuiper Belt dynamical classes (Kavelaars et al. 2021; Crompvoets et al. 2022).

  • 28  

    Bernstein et al. (2004) measured a luminosity function rather than an H distribution, and thus had to estimate r ≈ 42 au. Recent CC detections from both OSSOS (Bannister et al. 2018) and DES (Bernardinelli et al. 2022) have shown that a more appropriate approximation is r ≈ 43.8 au.

  • 29  

    We are using the correction r − mF606W = 0.15.
