Abstract
An ideal projective quantum measurement collapses the system state onto one of the eigenstates of the measured observable, making it a powerful tool for preparing the system in a desired pure state. Nevertheless, experimental realisations of projective measurement are not ideal. During the measurement time needed to overcome the classical noise of the apparatus, the system state is often (slightly) perturbed, which compromises the fidelity of initialisation. In this paper, we propose an analytical model for the fidelity of system initialisation performed by single-shot readout. We derive a method to optimise the readout parameters for the three most common photon-counting-based readouts of the NV colour centre in diamond: charge state, nuclear spin, and low-temperature electron spin readout. Our work is relevant for the accurate description of the initialisation fidelity of a quantum bit when single-shot readout is used for initialisation via post-selection or real-time control.
Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 license. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
1. Introduction and background
Readout and initialisation operations are of paramount importance in any quantum information application. For initialisation, several methods have been considered, such as optical pumping or cryogenic relaxation [1, 2], algorithmic codes [3, 4], as well as readout-based [5] methods. The latter is widely used for quantum computational platforms, since efficient qubit readout allows for rapid system initialisation with high fidelity [5]. Quantum projective measurements are intrinsically probabilistic, with the probability of obtaining an eigenvalue α given by the Born rule p(α) = |⟨α|ψ⟩|². However, once α is measured, the system collapses to the eigenstate |α⟩, enabling measurement-based state preparation. Moreover, successive projective measurements of the same observable give the same output α, and hence the same post-measurement state, with unit probability. Due to this preservation of the previously measured state, such an experimental implementation is denoted a quantum non-demolition measurement, which has been demonstrated in many systems, e.g. trapped ions, dopants in diamond, and superconducting qubits [6–15]. In many cases, measurements of quantum systems involve mapping the quantum state to a macroscopically interpretable classical signal, typically represented by a count of photons or electrons. This counting process is inherently stochastic and classical, unrelated to the quantum characteristics of the system being observed, and it is the root cause of measurement noise, often modelled with a Poisson distribution with an average count ⟨n⟩. Different states of the system generate distributions of meter outputs with different average values. Due to the finite width of these distributions, there is always an overlap between them, which causes uncertainty in state estimation and discrimination [16]. With a cycling non-demolition measurement, the signal can be acquired as long as needed to achieve the desired fidelity.
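The overlap argument can be made concrete with a short numerical sketch. It integrates the overlap of two Poissonian meter distributions, which sets the minimum state-discrimination error; the mean counts of 5 and 40 photons per shot are illustrative (they correspond to the 5 kHz and 40 kHz rates used later for figure 2 over a 1 ms window):

```python
from math import exp, log, lgamma

def poisson_pmf(k, mu):
    """Poisson probability of k counts with mean mu (log-space, avoids overflow)."""
    return exp(k * log(mu) - mu - lgamma(k + 1))

def discrimination_error(mu0, mu1, kmax=200):
    """Minimum-error probability for discriminating two Poissonian meter
    outputs with equal priors: half the overlap, i.e. sum of min(p0, p1)/2."""
    return 0.5 * sum(min(poisson_pmf(k, mu0), poisson_pmf(k, mu1))
                     for k in range(kmax))

# illustrative means: 5 and 40 photons per shot
err = discrimination_error(5.0, 40.0)
print(f"discrimination error: {err:.2e}")
```

The error shrinks rapidly as the two mean counts separate, which is why long photon accumulation (large mean-count contrast) yields high readout fidelity.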
In particular, when the signal-to-noise ratio, SNR = (⟨n1⟩ − ⟨n0⟩)/√(σ0² + σ1²), obtained during the measurement exceeds unity, the measurement is referred to as single shot, corresponding to a fidelity exceeding 79% [17]. In practice, most experimental settings subject the system states to additional decay channels with rate γ, limiting the available measurement time and reducing the signal-to-noise ratio. Moreover, these channels perturb the state of the system during the measurement, resulting in a loss of fidelity in measurement-based state preparation methods. Accurate estimation of the initialisation fidelity is essential for benchmarking and optimising quantum hardware. It becomes crucial for estimating the feasibility and performance of envisioned quantum algorithms such as surface codes [18]. Although such fidelity can be optimised experimentally [12], a universal analytical solution is desirable for fast optimisation of the readout parameters.
When the system spontaneously changes its state during the measurement, the probability distributions of the macroscopic outputs are not Poissonian anymore. Since the system changes its state during the readout, the moment t at which the system was in the |0⟩ or |1⟩ state must be specified to calculate these distributions accurately. One approach is to fix the time at the beginning of the measurement, t = 0, and derive the corresponding distributions. Given a meter reading λi, one can then use the maximum-likelihood method to estimate in which state the system was at the moment t = 0 [12, 19–21]. Using techniques of statistical signal analysis [16], the fidelity of the readout can be optimised by choosing the optimum detection threshold λth, as well as by using filtering techniques [19, 20, 22, 23]. However, when estimating the state of the system at the end of the measurement, t = T, one has to make additional assumptions about the state decay during the measurement.
In this work, we follow an approach of directly using the measurement statistics to infer the fidelity of the final state. By setting the aforementioned conditioning time in the definition of the PDFs to the end of the measurement, t = T, we calculate the output distributions and directly apply the maximum-likelihood method to the readout of the final state. This approach allows for a more accurate fidelity estimation of state preparation using finite-demolition readout in the presence of noise. We apply this model to the NV centre low-temperature electron spin readout, room-temperature charge state readout, and nuclear spin single-shot readout, and use experimental data to build the model and optimise the parameters.
2. Theory
Assuming a multi-level open quantum system with incoherent pumping, the master equation for the density matrix in the steady state can be simplified to rate equations for each level. To model this system with an effective two-level system, we consider that each excitation and relaxation results in either the same state, with a certain constant emission rate, or the other state, with a certain switching rate originating from intermediate processes. Hence, we denote the emission and switching rates from the state |0⟩ (|1⟩) by the constant parameters λ0, γ0 (λ1, γ1). Constant switching and emission rates are a good assumption for most practical applications [7, 9, 12, 19, 20, 24] of a single open two-level system, since their values are controlled by the intensity of the excitation lasers, the magnetic field, etc, which to a high degree can be kept constant in the experiment. We recall the main expressions for the dynamics of the two-state Markov process with exponentially distributed switching times, obtained by using the properties of the Poisson point process (see appendix A).
To infer the photon counting probability distribution conditioned on the final state after the measurement, we look into the switching dynamics using Bayes' rule of conditional probability. Knowing the probability distributions of the time spent in states |0⟩ and |1⟩, and applying the rules of conditional probability, we obtain the distributions of the photon counts conditioned on the final state.
The probability of a given initial state can, in the general case, be inferred by fitting the photon counting statistics with the model conditioned on the initial states (A11). In the case of continuous measurement in the steady state, the probabilities are π0 = γ1/(γ0 + γ1) and π1 = γ0/(γ0 + γ1); they can also be obtained by, e.g., integrating the conditional distributions. We compare the probability distributions of photon counts conditioned on the final state estimated using the analytical expression (2) with those simulated numerically via the Monte Carlo method (figure 2). For this case, we considered a probable scenario with a 1 ms readout time, decay rates of 500 Hz and 300 Hz, and photon count rates of 5 kHz and 40 kHz (see appendix B).
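The steady-state occupation probabilities can be checked with a quick simulation: sampling a telegraph trajectory from exponential dwell times and comparing the long-time fraction spent in state |0⟩ with γ1/(γ0 + γ1). The rates below are those of the figure 2 scenario:

```python
import random
random.seed(1)

def occupation_fraction(gamma0, gamma1, t_total):
    """Sample one two-state telegraph trajectory by drawing exponential
    dwell times; return the fraction of time spent in state 0."""
    t, state, time_in_0 = 0.0, 0, 0.0
    while t < t_total:
        rate = gamma0 if state == 0 else gamma1
        dwell = min(random.expovariate(rate), t_total - t)
        if state == 0:
            time_in_0 += dwell
        t += dwell
        state = 1 - state        # switch to the other state
    return time_in_0 / t_total

gamma0, gamma1 = 500.0, 300.0    # switching rates (Hz), cf. figure 2
frac = occupation_fraction(gamma0, gamma1, t_total=100.0)
print(frac, gamma1 / (gamma0 + gamma1))   # simulated vs analytic steady state
```

Over many switching cycles, the occupation fraction converges to the analytic steady-state value, here 300/800 = 0.375.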
Knowing the analytical expression for the probability density function of the detector output conditioned on the final state of the system, we derive a likelihood function p(n | θ), where θ represents the discrete set of final states 0 and 1. One attributes the final state to 0 if p(n | 0) > p(n | 1) and to 1 otherwise; in the case p(n | 0) = p(n | 1), the state is attributed by a random coin toss. We note that if the probability of a certain final state depends on the measurement parameters, the likelihoods should be weighted by the prior probability P(θ) of that final state, i.e. one compares P(θ)p(n | θ). The error rate of such a decision is the fraction of cases where the decision is made falsely with this strategy; hence it is the overlapping area under the probability distributions. This area can be numerically estimated and minimised with respect to the parameters of the readout, such as the readout duration and, if possible, the switching and photon emission rates, when using known models for the optical excitation. As an example, we estimate the fidelity numerically for the realistic case of single-shot readout of the nuclear spin at room temperature [7] for various nuclei with different spin decay rates. The photon count rates of the NV in a solid immersion lens, normalised to the average duty cycle of the excitation laser, result in state-dependent count rates in the kHz range for the states |0⟩ and |1⟩. As seen in figure 3, the error rate reaches its minimum for realistic cases at T = 2.5 ms and T = 1.5 ms for γ = 10 Hz and γ = 100 Hz, respectively.
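A minimal sketch of the prior-weighted maximum-likelihood decision, using plain Poissonians as stand-ins for the final-state-conditioned distributions (the actual distributions are not Poissonian): for each count n, the state with the larger weighted likelihood wins, and the error rate is the total weight of the losing hypothesis:

```python
from math import exp, log, lgamma

def poisson_pmf(k, mu):
    """Poisson probability of k counts with mean mu (log-space, avoids overflow)."""
    return exp(k * log(mu) - mu - lgamma(k + 1))

def ml_error_rate(mu0, mu1, w0=0.5, kmax=300):
    """Error rate of the maximum-likelihood decision with priors w0, w1:
    for each count n, assign the state with the larger weighted likelihood;
    the accumulated weight of the losing hypothesis is the error."""
    w1 = 1.0 - w0
    return sum(min(w0 * poisson_pmf(n, mu0), w1 * poisson_pmf(n, mu1))
               for n in range(kmax))

# toy example: unequal priors shift the decision threshold and the error
print(ml_error_rate(5.0, 40.0, w0=0.5))
print(ml_error_rate(5.0, 40.0, w0=0.9))
```

For identical distributions the error saturates at 1/2 (pure guessing), recovering the coin-toss limit mentioned above.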
To see the advantage of using such a method for estimating fidelity, we calculate the error rate of post-selection when using the decision-making condition n ⩾ nth. By increasing nth, one can select the part of the distribution corresponding to the bright state, such that the tail of the dark state is excluded. This method thus allows one to increase the fidelity by sacrificing measurement efficiency: the region of intersection is excluded from the decision, reducing the sample volume of the dataset. Although in the case of initialisation by measurement the initial state is known with high precision, to estimate the final state one has to estimate the probability that the system stayed unperturbed. In a simple case, this could be done by multiplying the fidelity of estimating the initial state by an exponential decay, which approximates the probability of the system not relaxing. As seen in figure 4, this sets a lower estimate of the fidelity at lower data usage. In practice, when using a posterior estimation, much higher fidelities can be reached, with error rates several orders of magnitude lower, although at the cost of low efficiency.
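The fidelity-efficiency trade-off of post-selection can be illustrated with two toy Poissonian distributions (illustrative means of 40 and 5 counts and equal priors; the real final-state-conditioned distributions would replace them):

```python
from math import exp, log, lgamma

def poisson_pmf(k, mu):
    return exp(k * log(mu) - mu - lgamma(k + 1))

def postselect(mu_bright, mu_dark, n_th, w_bright=0.5, kmax=300):
    """Accept a shot only if n >= n_th. Returns (fidelity, efficiency):
    fidelity  = bright weight above threshold / total weight above threshold,
    efficiency = accepted fraction of all shots."""
    b = w_bright * sum(poisson_pmf(n, mu_bright) for n in range(n_th, kmax))
    d = (1 - w_bright) * sum(poisson_pmf(n, mu_dark) for n in range(n_th, kmax))
    return b / (b + d), b + d

for n_th in (10, 15, 20, 25):
    f, eff = postselect(40.0, 5.0, n_th)
    print(n_th, round(f, 6), round(eff, 3))
```

Raising the threshold excludes more of the dark-state tail, so the fidelity of the accepted shots rises while the accepted fraction (efficiency) falls, exactly the trade-off described in the text.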
3. Results
We applied the developed theoretical framework to optimise the experimental readout parameters for initialisation fidelity in three realistic scenarios for the well-studied model system of the NV centre in diamond. We consider room-temperature charge state initialisation, room-temperature single-shot readout of a strongly coupled nuclear spin, and low-temperature resonant readout of the electron spin.
3.1. Electron spin readout at low temperature
We start by considering the case of low-temperature resonant electron spin readout. In this case, the switching rates follow γ0 ≫ γ1, so we can neglect γ1. As a result, the distributions simplify significantly: the distribution conditioned on the final state |0⟩ remains a pure Poissonian with mean λ0T, while the distribution conditioned on the final state |1⟩ becomes an integral over the jump time of Poissonians with mean λ0t + λ1(T − t).
We first calibrate the optical parameters in the experimental setup and extract the decay rate and emission parameters of the NV system (figure 5). The numerical simulation of the distributions conditioned on the initial and final states for a set of excitation laser powers and readout times is presented in figure 6. This plot shows how the conditional distributions transform as the time at which the state condition is taken moves from t = 0 to t = T. At t = 0, the distributions have the well-known shape [9], which we observe in our experiments by preparing a known initial state. The distribution transforms as the conditioning moves to the end of the measurement. If the final state is |0⟩, the distribution becomes purely Poissonian with average λ0T, since no jump occurred, while if the final state is |1⟩, the distribution is mixed and can be calculated as an integral over the jump time. To reach high fidelities, only the part of the distribution unambiguously associated with the bright state is read out (selected). We simulate the fidelity based on the formula F = B/(B + D), where B is the area under the distribution of the bright state above the threshold, and D is the corresponding area under the distribution of the dark state. Depending on the readout power and duration, we find the threshold that guarantees a target fidelity of 99%. For the readout of state 0, we see that a threshold of 0 is already enough in most cases to achieve the desired fidelity (figure 7). Next, we plot the average number of attempts, which is inversely proportional to the ratio between the selected area above the threshold and the overall area under the distributions. The average number of attempts determines the success rate of the readout and is used to find the optimal readout parameters that minimise the time necessary to measure a single data point with a target fidelity. We find that it is 2.55 ms for the optimal parameters, a laser intensity of 85 nW and a readout time of 4.26 µs, assuming one sequence is 1 ms on average.
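The two final-state-conditioned distributions described above (pure Poissonian if no jump occurred, an integral over the jump time otherwise) can be evaluated numerically. The rates below are illustrative placeholders, not the calibrated values of figure 5:

```python
from math import exp, log, lgamma

def poisson_pmf(k, mu):
    return exp(k * log(mu) - mu - lgamma(k + 1)) if mu > 0 else (1.0 if k == 0 else 0.0)

def p_final(n, T, lam0, lam1, gamma0, steps=2000):
    """P(n | final 0) and P(n | final 1) for a system starting in state 0
    that may jump once at rate gamma0 (no back-switching, gamma1 -> 0)."""
    # no jump: system stays in state 0 the whole time -> pure Poissonian
    p_stay = exp(-gamma0 * T)
    p_n_0 = poisson_pmf(n, lam0 * T)
    # jump at time t: mean count lam0*t + lam1*(T - t); midpoint rule over t
    dt = T / steps
    acc = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt
        acc += gamma0 * exp(-gamma0 * t) * poisson_pmf(n, lam0 * t + lam1 * (T - t)) * dt
    p_n_1 = acc / (1.0 - p_stay)    # normalise by P(a jump occurred)
    return p_n_0, p_n_1

# hypothetical numbers: bright 40 kHz, dark 1 kHz, decay 500 Hz, T = 1 ms
T, lam0, lam1, gamma0 = 1e-3, 40e3, 1e3, 500.0
tot0 = sum(p_final(n, T, lam0, lam1, gamma0)[0] for n in range(120))
tot1 = sum(p_final(n, T, lam0, lam1, gamma0)[1] for n in range(120))
print(round(tot0, 4), round(tot1, 4))   # both conditional distributions normalised
```

Areas of these two distributions above a chosen threshold give the quantities B and D entering the fidelity F = B/(B + D).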
Now we consider the case of preparing the desired state by measurement (figure 8). In this case, we again visualise the fidelity, but the drastic change is in the threshold that needs to be applied in order to achieve the desired fidelity. We note that the distributions conditioned on the final state are, in this case, weighted by the probabilities of the final states, which decay towards their steady-state values during the readout. The success rate of initialisation is then significantly reduced, and the desired fidelity is achieved in 9.22 µs with the optimal parameters of 6.89 nW and 0.5 µs, which differ from the case of readout.
3.2. NV charge state readout at room temperature
Next, we consider the case of the charge state readout of the NV centre at room temperature. We use orange laser (594 nm) excitation and a 650 nm long-pass filter to exclude the NV0 fluorescence, achieving high contrast between the NV− and NV0 states. We calibrate the fluorescence photon count rate and the state switching rate as a function of laser intensity (figure 9).
We consider the case of charge state initialisation. It is commonly done by applying a short green laser pulse followed by a weak orange or red probe readout pulse. Depending on the photon counts during the orange probe, the state can be assigned to NV−. A feedforward operation for on-demand state initialisation can also be applied [21]. In this section, we optimise the readout parameters with respect to the preparation time of the charge state. We consider several target fidelities for the preparation and present one representative case; the other cases can be estimated similarly.
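The repeat-until-success character of such feedforward initialisation means the number of attempts is geometrically distributed; a small simulation with a hypothetical per-attempt success probability illustrates the average cost of 1/p attempts:

```python
import random
random.seed(0)

def attempts_until_success(p_success):
    """Simulate the repeat-until-success loop: each cycle is one
    (green pump + orange probe) attempt that passes the photon-number
    threshold with probability p_success."""
    n = 1
    while random.random() > p_success:
        n += 1
    return n

p = 0.3    # hypothetical per-attempt success probability
mean_attempts = sum(attempts_until_success(p) for _ in range(20000)) / 20000
print(mean_attempts)   # geometric distribution: mean 1/p
```

Multiplying the mean attempt number by the duration of a single pump-probe cycle gives the average preparation time optimised in the next paragraph.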
Using the formulas and the model of the defect charge switching and photon emission, for each value of the orange laser power and readout duration we calculate the probability distribution function and estimate the fidelity, represented in figure 10(a). To reach a target fidelity, we apply the exclusion principle and increase the photon number threshold, thus reducing the efficiency of the readout, which leads to an increase in the number of attempts needed for a successful measurement. Accordingly, we plot the required increase in the threshold in figure 10(b). Then, using the success rate of a single measurement, we estimate the average number of attempts (figure 10(c)) and the required time (figure 10(d)) to initialise the state. We find that for the target fidelity, our method of estimating the fidelity favours short readout times and high laser power, while a method conditioned on the initial state would give a slightly different optimal time and underestimate the fidelity.
3.3. Nuclear spin readout at room temperature
We apply our approach to the strongly coupled nuclear spins near the NV centre at room temperature used as qubits. We consider two scenarios. In the first case, the initialisation is done by measurement with post-selection, and then the main sequence, e.g. for sensing, is used. In the second, the initialisation is done until success (on demand), followed by the execution of the main sequence (sensing or a quantum algorithm). We extract the decay rates of the nuclear spins under readout using the autocorrelation method on the nuclear single-shot readout time traces [7]. Similar to the case of the charge state, we optimise the number of repetitive readouts to reach the target initialisation fidelity and perform a single successful measurement in the shortest time. By varying the number of repetitions of the CNOT gate with a green laser pulse [7] and adjusting the threshold to reach the targeted fidelity of 99%, we analyse the required number of attempts and the preparation time for each nuclear spin in the register, shown in figure 11. We find that in the first case of post-selection, the optimal preparation times are 2.6, 1.6, and 0.5 s for the 14N and the two other nuclear spins of the register, while for the case of on-demand preparation the average preparation time is shorter by a factor of 2–3, at 0.85, 0.59, and 0.26 s correspondingly. The required time to generate one data point was estimated assuming 10 µs of microwave time, 50 µs of RF time, 10 ms of average sequence time, and 10 ms of readout time. Moreover, the optimum parameters for the nuclear spins differ between the dynamical and post-selection methods. We notice that the dynamical real-time on-demand preparation method requires a smaller number of SSR repetitions, indicating that the time cost of a single attempt is lower. We note that in figure 11 the curves rise again beyond the optimum: the measurement time of each attempt first increases, and the growing possibility of a flip during the measurement then reduces the success rate of each measurement.
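The factor of 2–3 between the two strategies follows from a simple time-budget argument: post-selection pays for the main sequence on every attempt, while on-demand preparation repeats only the initialisation block. A sketch with assumed timings (not the calibrated values behind figure 11):

```python
def time_postselect(t_init, t_seq, p):
    """Post-selection: run init + main sequence every shot, keep fraction p."""
    return (t_init + t_seq) / p

def time_on_demand(t_init, t_seq, p):
    """Real-time on-demand: repeat only the init block until success
    (on average 1/p times), then run the main sequence once."""
    return t_init / p + t_seq

# hypothetical numbers loosely based on the text: 10 ms initialisation block,
# 10 ms average main sequence, 30% per-attempt success probability
t_init, t_seq, p = 10e-3, 10e-3, 0.3
print(time_postselect(t_init, t_seq, p))   # time per accepted data point
print(time_on_demand(t_init, t_seq, p))
```

The smaller the success probability p, the larger the advantage of on-demand preparation, since the main sequence is executed only once per data point.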
4. Conclusion
Not only counting the number of photons but also considering the photons' arrival times and their correlations can potentially provide additional information, leading to better initialisation fidelities, as was already shown for readout [19, 20, 22, 23]. As opposed to the readout of the initial state, for initialisation the photons that arrive later carry more information about the final state. Linear and nonlinear filter methods with weights growing towards the end of the acquisition could therefore improve the fidelity of inferring the final state and could be studied further. Additionally, Bayesian schemes [22], active control of the switching and emission rates, and accounting for switching-rate cross-talk in multiqubit systems could be investigated [25]. Readout error mitigation techniques [26, 27] tackle the bias error in the expectation values of the final measurement by manipulating the state before the readout; our method instead allows one to fine-tune and optimise the parameters of the readout process to achieve the desired fidelity of state preparation and readout in the least time. Since these methods are complementary, they could be combined to further reduce systematic bias errors. In conclusion, we have formulated a method for accurately estimating and optimising the fidelity of initialisation of the system state by finite-demolition measurement. We considered three cases. We find that the optimal parameters for initialisation differ from those for readout when optimised for the required success time, and should therefore be optimised separately. We believe that our treatment is also applicable and of interest to other systems such as dopants in SiC or rare-earth ions [28–30].
Acknowledgments
The authors are thankful to Nikita Ratanov for fruitful discussions. M Z thanks the Max Planck School of Photonics for financial support. We acknowledge financial support from the European Union's Horizon 2020 research programme for the project QIA (Grant Agreement No. 820445, and Horizon Europe FPA 101080128), from the German Federal Ministry of Education and Research (BMBF) for the projects QR.X (Grant Agreement No. 16KISQ008), Spinning (Grant Agreement No. 13N16219), MiLiQuant and Quamapolis, as well as from the DFG (FOR 2724), the Baden-Württemberg Foundation for the project QC4BW (Grant Agreement No. 3-4332.62-IAF/7), the Max Planck Society, and the Volkswagen Foundation.
Data availability statement
All data that support the findings of this study are included within the article (and any supplementary files).
Appendix A: Derivation of the probability distributions for photon counts
We consider a two-level system (TLS) with states |0⟩ and |1⟩. Under asymmetric stationary decay with rates γ0 and γ1 between the two states 0 and 1, respectively, the system performs a sequence of transition (jump) events which forms a Poisson point process. The time intervals τ0 and τ1 spent in states 0 and 1 between the switches are then random variables with exponential distributions p(τi) = γi e^(−γiτi). We recall the following known properties related to exponential distributions.
Lemma 1. The sum of n exponentially distributed random variables xi with rate γ, x = x1 + … + xn, is a random variable with the Erlang distribution, whose probability density function is f(x; n, γ) = γ^n x^(n−1) e^(−γx)/(n − 1)!.
Additionally, we introduce a random variable Nx [31], defined as Nx = min{n : x1 + … + xn > x}, which represents the minimum number of elements from a given sample of random variables whose sum exceeds x.
Lemma 2. The variable Nx − 1 has a Poisson distribution with mean γx, and Nx has the probability mass function P(Nx = n) = e^(−γx)(γx)^(n−1)/(n − 1)!.
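Lemma 2 can be verified by direct sampling with illustrative parameters: for γx = 4 the mean of Nx should approach γx + 1 = 5.

```python
import random
random.seed(2)

gamma, x = 400.0, 0.01    # rate (Hz) and target time (s), so gamma*x = 4

def n_x():
    """Minimum number of exponential(gamma) intervals whose sum exceeds x."""
    s, n = 0.0, 0
    while s <= x:
        s += random.expovariate(gamma)
        n += 1
    return n

samples = [n_x() for _ in range(20000)]
mean = sum(samples) / len(samples)
print(mean)   # Lemma 2: N_x - 1 ~ Poisson(gamma*x), so E[N_x] = gamma*x + 1
```

The simulated mean converges to γx + 1, consistent with Nx − 1 being Poisson-distributed with mean γx.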
Similar to [12], we consider the cases of odd and even numbers of switching events separately. We introduce n as the number of intervals spent in state |0⟩. We can now estimate the probability that the system spends time τ in state |0⟩ during the measurement time T. For an odd number of switches, starting from state |0⟩, the intervals between switches are sets of random variables τ0 and τ1 with rates γ0 and γ1. The probability density that the system spends a total time τ in state |0⟩ is then a sum over n of products of the Erlang-n density that n intervals sum to τ with the probability that exactly n intervals of the complementary process occur, i.e. that the residual time T − τ is exceeded in n increments; hence:

f01(τ) = γ0 e^(−γ0τ − γ1(T − τ)) I0(2√(γ0γ1τ(T − τ))), (A7)

where I0 is the modified Bessel function of the first kind of order zero. For the case of an even number of switches, we take the opposite consideration: the total time in state |1⟩ has the fixed length T − τ, while the length τ has to be exceeded, because the system can remain in the final state |0⟩ after time T. Hence the probability density is a sum over n of the probability that the process exceeds τ in n increments (steps), times the probability that n − 1 intervals sum to T − τ; hence:

f00(τ) = e^(−γ0τ − γ1(T − τ)) √(γ0γ1τ/(T − τ)) I1(2√(γ0γ1τ(T − τ))), (A8)

where I1 is the modified Bessel function of the first kind of order one. Additionally, we consider the case where no switch happens, which simply reads P(no switch) = e^(−γ0T), with the whole time spent in state |0⟩, i.e. τ = T.
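As a numerical sanity check, the occupation-time densities for an odd and an even number of switches, together with the no-switch weight e^(−γ0T), must integrate to unity. The sketch below implements the Bessel-function densities of the form assumed in the derivation above, with a series expansion for I0 and I1 and the rates of the figure 2 scenario:

```python
from math import exp, sqrt, factorial

def bessel_i(nu, z, terms=30):
    """Modified Bessel function of the first kind, integer order nu (power series)."""
    return sum((z / 2) ** (2 * k + nu) / (factorial(k) * factorial(k + nu))
               for k in range(terms))

gamma0, gamma1, T = 500.0, 300.0, 1e-3   # switching rates and readout time

def f_odd(tau):
    """Density of time tau in |0>: start in 0, end in 1 (odd number of switches)."""
    z = 2 * sqrt(gamma0 * gamma1 * tau * (T - tau))
    return gamma0 * exp(-gamma0 * tau - gamma1 * (T - tau)) * bessel_i(0, z)

def f_even(tau):
    """Density of time tau in |0>: start in 0, end in 0 with >= 2 switches."""
    z = 2 * sqrt(gamma0 * gamma1 * tau * (T - tau))
    return (exp(-gamma0 * tau - gamma1 * (T - tau))
            * sqrt(gamma0 * gamma1 * tau / (T - tau)) * bessel_i(1, z))

# total probability: odd + even contributions + no-switch atom exp(-gamma0*T)
steps = 20000
dt = T / steps
total = exp(-gamma0 * T)
for i in range(steps):
    tau = (i + 0.5) * dt           # midpoint rule over (0, T)
    total += (f_odd(tau) + f_even(tau)) * dt
print(round(total, 4))
```

The three contributions sum to one, confirming that the case analysis (odd switches, even switches, no switch) is exhaustive and correctly normalised.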
The probability conditioned on the initial state |1⟩ can be obtained by substituting γ0 ↔ γ1 and τ → T − τ. Using the property of conditional probability, with p(τ | i) denoting the density conditioned on the initial state i, p(i) the probability of that initial state, and c denoting the complementary state, we conclude the derivation of the distribution of the time spent by the system in state |0⟩ conditioned on the initial state.
A.1. Photon counting statistics conditioned on initial state
We assume that the photons emitted from the system arrive at the photodetector at random times with constant rates λ0 and λ1, conditioned on the system state. The number of photon counts is then a Poisson random variable with mean λ0τ + λ1(T − τ), where T is the total counting time and τ is the total time spent in state |0⟩. Using the expressions for the probability density of τ, equations (A7) and (A8), combining them with equation (A9), and integrating τ over the interval [0, T], we obtain an expression for the photon counting statistics similar to [12].
Appendix B: Monte Carlo simulation of the photon distribution in figure 2
We model the time evolution of the stochastic process with small time steps dt. The system starts in an arbitrary state |0⟩ or |1⟩, and at each time step a possible jump occurs with probability γ(state)·dt. We start from −10 ms, so that the system can reach the steady state by time 0. A coin flip (binomial sampling) decides whether a jump occurred in the current step dt. In the end, we sort the obtained trajectories with respect to the final state of the system and plot the statistics of the total number of emitted photons depending on the final state 0 or 1, knowing the total time spent by the system in each state. The photon numbers are then sampled from a Poisson distribution with mean λ0τ + λ1(T − τ). We used switching rates of 500 Hz and 300 Hz, count rates of 5 kHz and 40 kHz (the scenario of figure 2), T = 1 ms, dt = 1 µs, and 10 000 trajectories. A simplified code for the simulation is below:
import numpy as np

rng = np.random.default_rng()
gamma0, gamma1 = 500.0, 300.0     # switching rates, Hz
lambda0, lambda1 = 5e3, 40e3      # photon count rates, Hz
T = 1e-3                          # readout time, s
dt = 0.001 * T                    # time step, s
steps_pre = int(10 * T / dt)      # pre-evolution from -10*T to reach steady state
steps_ro = int(T / dt)
taus0, ns0, taus1, ns1 = [], [], [], []

for i in range(10_000):
    state = 0
    uptime = 0.0                  # total time spent in state 0 during [0, T]
    for ti in range(-steps_pre, steps_ro):
        p = [gamma0, gamma1][state]
        if rng.binomial(n=1, p=p * dt):   # coin flip: did a jump occur?
            state = 1 - state
        if ti >= 0 and state == 0:
            uptime += dt
    n = rng.poisson(uptime * lambda0 + (T - uptime) * lambda1)
    if state == 0:
        taus0.append(uptime); ns0.append(n)
    else:
        taus1.append(uptime); ns1.append(n)