
A sequential choice model for multiple discrete demand


Abstract

Consumer demand in a marketplace is often characterized as multiple discrete in that discrete units of multiple products are chosen together. This paper develops a sequential choice model for such demand and a technique for its estimation. Facing an inherently high-dimensional problem, a consumer is assumed to simplify it to a sequence of one-unit choices, which eventually leads to a shopping basket exhibiting multiple discreteness. Our model and its estimation method are flexible enough to be extended to various contexts such as complementary demand, non-linear pricing, and multiple constraints. The sequential choice process generally finds an optimal solution of a convex problem (e.g., maximizing a concave utility function over a convex feasible set), while it may result in a sub-optimal solution for a non-convex problem. Therefore, in the case of a convex optimization problem, the proposed model can be viewed as an econometrician’s means of establishing the optimality of observed demand, offering a practical estimation algorithm for discrete optimization models of consumer demand. We demonstrate the strengths of our model in a variety of simulation studies and in an empirical application to consumer panel data on yogurt purchases.


Notes

  1. Solving a nonlinear integer program is generally known to be NP-hard, i.e., Non-deterministic Polynomial-time hard (Wolsey, 1998; Wolsey & Nemhauser, 2014), which implies that comparisons with all feasible points are required to guarantee the optimality of the observed purchase quantity.

  2. Due to demand discreteness, incremental utility is defined as the change in overall utility caused by a one-unit change (increase or decrease) in a good while holding the other inside goods constant.

  3. Lee and Allenby (2014) refer to an error domain whose shape is a hyperrectangle as a “regular” region of integration. Conversely, an “irregular” region of integration means that the error domain is not a hyperrectangle.

  4. https://cran.r-project.org/web/packages/mvtnorm/mvtnorm.pdf

  5. The joint probability density function is truncated by the ranges of error terms that are derived from the exit condition.

  6. Note that data simulation is very simple and computationally light in our model of sequential choices.

  7. The source code for this simulation study is publicly available at https://github.com/slee-bayes/SCM.

  8. The proposed sequential choice process (equivalent to steepest-ascent hill climbing) generally finds an optimal solution of a convex problem. However, optimality can only be guaranteed by comparing all feasible points, owing to the NP-hard nature of the discrete optimization problem (Wolsey, 1998; Wolsey & Nemhauser, 2014).

  9. In order to reduce dimensionality, the various flavors are grouped into four broader categories: (i) Berry (e.g., Strawberry, Blueberry, Raspberry), (ii) Vanilla (e.g., Vanilla, Plain, Creme), (iii) Non-berry Fruits (e.g., Peach, Cherry, Banana, Lemon), and (iv) Others (e.g., Coffee, Chocolate).

  10. We used Gelfand and Dey’s (1994) method for computing the LMD statistics.

  11. We first computed the expected purchase quantity vector for each person on each trip based on the full posterior distribution of the individual-level parameters (e.g., \(\left (\hat {x}_{i1tr},\cdots ,\hat {x}_{iJtr} \right )\) for consumer i, trip t, and iteration r). The MSE for the rth draw of the parameters was then calculated as \({MSE}_{r}=\frac {1}{\sum \limits _{i=1}^{N} T_{i}} \sum \limits _{i=1}^{N} \sum \limits _{t=1}^{T_{i}} \sum \limits _{j=1}^{J} \left (x_{ijt} - \hat {x}_{ijtr} \right )^{2}\). Reported in Table 12 is the mean of the MSE distribution (i.e., \(\frac {1}{R} \sum \limits _{r=1}^{R} {MSE}_{r}\)).

  12. In order to compare the proposed sequential choice model with an optimal demand model, we would need demand data from a non-convex optimization problem (e.g., complementary demand, or consumer demand under a volume discount). When handling a non-convex optimization problem, the sequential choice process often gets trapped in a local maximum instead of finding the global maximum, which would enable us to compare the performance of the sequential choice model with that of an optimal demand model. However, to the best of our knowledge, an estimation method for a discrete optimization model with a non-convex problem is yet to be developed. Therefore, we leave it for future research.

References

  • Allenby, G. M., Shively, T. S., Yang, S., & Garratt, M. J. (2004). A choice model for packaged goods: dealing with discrete quantities and quantity discounts. Marketing Science, 23(1), 95–108.

  • Andrieu, C., & Roberts, G. O. (2009). The pseudo-marginal approach for efficient Monte Carlo computations. The Annals of Statistics, 37(2), 697–725.

  • Bettman, J. R., Luce, M. F., & Payne, J. W. (1998). Constructive consumer choice processes. Journal of Consumer Research, 25(3), 187–217.

  • Bhat, C. R. (2008). The multiple discrete-continuous extreme value (MDCEV) model: role of utility function parameters, identification considerations, and model extensions. Transportation Research Part B: Methodological, 42(3), 274–303.

  • Bhat, C. R., Castro, M., & Pinjari, A. R. (2015). Allowing for complementarity and rich substitution patterns in multiple discrete–continuous models. Transportation Research Part B: Methodological, 81, 59–77.

  • Boyd, S., & Vandenberghe, L. (2004). Convex optimization. New York: Cambridge University Press.

  • Bronnenberg, B. J., Kruger, M. W., & Mela, C. F. (2008). Database paper: the IRI marketing data set. Marketing Science, 27(4), 745–748.

  • Castro, M., Bhat, C. R., Pendyala, R. M., & Jara-Díaz, S. R. (2012). Accommodating multiple constraints in the multiple discrete–continuous extreme value (MDCEV) choice model. Transportation Research Part B: Methodological, 46(6), 729–743.

  • Dillon, W. R., & Gupta, S. (1996). A segment-level model of category volume and brand choice. Marketing Science, 15(1), 38–59.

  • Dubé, J.-P. (2004). Multiple discreteness and product differentiation: demand for carbonated soft drinks. Marketing Science, 23(1), 66–81.

  • Foubert, B., & Gijsbrechts, E. (2007). Shopper response to bundle promotions for packaged goods. Journal of Marketing Research, 44(4), 647–662.

  • Gelfand, A. E., & Dey, D. K. (1994). Bayesian model choice: asymptotics and exact calculations. Journal of the Royal Statistical Society. Series B (Methodological), 56(3), 501–514.

  • Genz, A., & Bretz, F. (2009). Computation of multivariate normal and t probabilities. Berlin: Springer.

  • Gigerenzer, G., & Selten, R. (2001). Bounded rationality: the adaptive toolbox. Cambridge: MIT Press.

  • Gu, Z., & Yang, S. (2010). Quantity-discount-dependent consumer preferences and competitive nonlinear pricing. Journal of Marketing Research, 47(6), 1100–1113.

  • Gupta, S. (1988). Impact of sales promotions on when, what, and how much to buy. Journal of Marketing Research, 25(4), 342–355.

  • Harlam, B. A., & Lodish, L. M. (1995). Modeling consumers’ choices of multiple items. Journal of Marketing Research, 32(4), 404–418.

  • Hendel, I. (1999). Estimating multiple-discrete choice models: an application to computerization returns. The Review of Economic Studies, 66(2), 423–446.

  • Howell, J. R., Lee, S., & Allenby, G. M. (2016). Price promotions in choice models. Marketing Science, 35(2), 319–334.

  • Kahneman, D. (2003). Maps of bounded rationality: psychology for behavioral economics. The American Economic Review, 93(5), 1449–1475.

  • Kim, J., Allenby, G. M., & Rossi, P. E. (2002). Modeling consumer demand for variety. Marketing Science, 21(3), 229–250.

  • Lee, S., & Allenby, G. M. (2014). Modeling indivisible demand. Marketing Science, 33(3), 364–381.

  • Lee, S., Kim, J., & Allenby, G. M. (2013). A direct utility model for asymmetric complements. Marketing Science, 32(3), 454–470.

  • Manchanda, P., Ansari, A., & Gupta, S. (1999). The “shopping basket”: a model for multicategory purchase incidence decisions. Marketing Science, 18(2), 95–114.

  • Mehta, N., & Ma, Y. (2012). A multicategory model of consumers’ purchase incidence, quantity, and brand choice decisions: methodological issues and implications on promotional decisions. Journal of Marketing Research, 49(4), 435–451.

  • Niraj, R., Padmanabhan, V., & Seetharaman, P. B. (2008). A cross-category model of households’ incidence and quantity decisions. Marketing Science, 27(2), 225–235.

  • Phaneuf, D. J., Kling, C. L., & Herriges, J. A. (2000). Estimation and welfare calculations in a generalized corner solution model with an application to recreation demand. The Review of Economics and Statistics, 82(1), 83–92.

  • Rossi, P. E., Allenby, G. M., & McCulloch, R. (2012). Bayesian statistics and marketing. New York: Wiley.

  • Russell, S. J., & Norvig, P. (2010). Artificial intelligence: a modern approach. Englewood Cliffs: Prentice Hall.

  • Satomura, T., Kim, J., & Allenby, G. (2011). Multiple-constraint choice models with corner and interior solutions. Marketing Science, 30(3), 481–490.

  • Shah, A. K., & Oppenheimer, D. M. (2008). Heuristics made easy: an effort-reduction framework. Psychological Bulletin, 134(2), 207–222.

  • Simon, H. A. (1955). A behavioral model of rational choice. The Quarterly Journal of Economics, 69(1), 99–118.

  • Simon, H. A. (1990). Invariants of human behavior. Annual Review of Psychology, 41(1), 1–20. PMID: 18331187.

  • Song, I., & Chintagunta, P. K. (2007). A discrete-continuous model for multicategory purchase behavior of households. Journal of Marketing Research, 44, 595–612.

  • van der Lans, R. (2018). A simultaneous model of multiple-discrete choices of variety and quantity. International Journal of Research in Marketing, 35(2), 242–257.

  • Wolsey, L. A. (1998). Integer programming. Wiley Series in Discrete Mathematics and Optimization (Vol. 52). New York: Wiley.

  • Wolsey, L. A., & Nemhauser, G. L. (2014). Integer and combinatorial optimization. Wiley.


Author information


Corresponding author

Correspondence to Sanghak Lee.

Ethics declarations

Conflict of interest

The authors declare that there is no conflict of interest.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A: Choice of removing one unit

Presented here is another example of a sequential choice process that involves the removal of a chosen item. Suppose a consumer makes her purchase decision for two goods (A and B) whose unit prices are 2 and 4, respectively (i.e., \(p_{At}=2\), \(p_{Bt}=4\)), given the following utility function:

$$ \begin{array}{@{}rcl@{}} V\left( x_{At}, x_{Bt}\right) &=& \frac{\psi_{A} e^{\varepsilon_{At}}}{\gamma} \ln \left( \gamma x_{At}+1\right)+ \frac{\psi_{B} e^{\varepsilon_{Bt}}}{\gamma} \ln \left( \gamma x_{Bt}+1\right)\\ &&+ \ln \left( E - {p_{At}x_{At}} - {p_{Bt}x_{Bt}}+1\right) \end{array} $$

with the parameter values \(\psi_{A}=0.138\), \(\psi_{B}=0.272\), \(\psi_{0}=1\), \(\gamma=0.052\), and \(E=29.647\). Provided that \(\varepsilon_{At}=\varepsilon_{Bt}=0\), Table 15 describes how the proposed sequential choice process unfolds.

Table 15 Example of sequential choices with removal

We observe that one unit of B is removed from the cart at the sixth choice occasion. Note that two units of A (the 4th and 5th choices) are added between the latest addition of B in the third choice and its removal in the sixth, decreasing the remaining budget (i.e., the amount of the outside good). As the remaining budget shrinks, the incremental utility of removing B increases due to the concave sub-utility of the outside good, which leads to a situation where removing one unit of B is the best choice option.
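For concreteness, the following Python sketch implements the one-unit add/remove process described above, using the utility function and parameter values of this appendix (with \(\varepsilon_{At}=\varepsilon_{Bt}=0\)). It is an illustrative sketch only: the function names and loop structure are ours and do not correspond to the code released with the paper. A removal step of the kind shown in Table 15 occurs whenever dropping a unit yields the largest incremental utility.

```python
import numpy as np

# Parameter values from this appendix (epsilon_At = epsilon_Bt = 0)
psi = np.array([0.138, 0.272])     # psi_A, psi_B
gamma, E = 0.052, 29.647
p = np.array([2.0, 4.0])           # unit prices p_At, p_Bt

def utility(x):
    """Overall utility V(x_A, x_B) with a log sub-utility for the outside good."""
    inside = np.sum(psi / gamma * np.log(gamma * x + 1.0))
    outside = np.log(E - np.dot(p, x) + 1.0)
    return inside + outside

def sequential_choice(max_steps=100):
    """Greedy sequential choice: at each occasion, add or remove one unit of the good
    with the largest positive incremental utility; exit when no such move remains."""
    x = np.zeros(2)
    for occasion in range(1, max_steps + 1):
        best_move, best_gain = None, 0.0
        for j in range(2):
            unit = np.eye(2)[j]
            if np.dot(p, x) + p[j] <= E:                 # add one unit of good j if affordable
                gain = utility(x + unit) - utility(x)
                if gain > best_gain:
                    best_move, best_gain = (j, +1), gain
            if x[j] >= 1:                                # remove one unit of good j if held
                gain = utility(x - unit) - utility(x)
                if gain > best_gain:
                    best_move, best_gain = (j, -1), gain
        if best_move is None:                            # exit condition: no improving move
            break
        j, delta = best_move
        x[j] += delta
        print(f"occasion {occasion}: {'add' if delta > 0 else 'remove'} {'AB'[j]}, basket = {x}")
    return x

print("final basket:", sequential_choice())
```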

Appendix B: Bayesian MCMC procedure

The joint posterior distribution of all the model parameters is proportional to the product of three components: (i) the prior distribution for the hyperparameters, (ii) the upper-level distribution for heterogeneity, and (iii) the model likelihood of the individual-level data.

$$ \begin{array}{@{}rcl@{}} &&p\left( \bar{\theta}, V_{\theta}, \{\theta_{i}\}_{i=1}^{N} | \{\{x_{i1t},x_{i2t},p_{i1t},p_{i2t}\}_{t=1}^{T}\}_{i=1}^{N} \right) \\ && \propto p\left( \bar{\theta}, V_{\theta} \right) \times p\left( \{\theta_{i}\}_{i=1}^{N} | \bar{\theta}, V_{\theta} \right) \times \ell \left( \{\theta_{i}\}_{i=1}^{N} | \{\{x_{i1t},x_{i2t},p_{i1t},p_{i2t}\}_{t=1}^{T}\}_{i=1}^{N}\right) \\ && \propto p\left( \bar{\theta}, V_{\theta} \right) \times \prod\limits_{i=1}^{N} \left[ p\left( \theta_{i} | \bar{\theta}, V_{\theta} \right) \times \ell \left( \theta_{i} | \{x_{i1t},x_{i2t},p_{i1t},p_{i2t}\}_{t=1}^{T}\right) \right] \end{array} $$

Adapting the standard procedure for a Bayesian multivariate linear regression, we employ a natural conjugate prior for the hyperparameters and make it diffuse so that the influence of the prior distribution is minimal (e.g., \(\nu_{0}=7\), \(V_{0}=7I_{4}\), \(\theta_{0}=(0,0,0,0)\), \(A=0.01\)).

$$ \begin{array}{@{}rcl@{}} &&p\left( \bar{\theta}, V_{\theta} \right) = p\left( V_{\theta} \right) \times p\left( \bar{\theta} | V_{\theta} \right) \\ &&\text{where} \quad p\left( V_{\theta} \right) \sim IW \left( \nu_{0}, V_{0} \right) \\ &&\quad\qquad~ p\left( \bar{\theta} | V_{\theta} \right) \sim N \left( \theta_{0}, V_{\theta} \otimes A^{-1}\right) \end{array} $$

Then the joint posterior distribution is decomposed into a series of conditional distributions as follows:

$$ \begin{array}{@{}rcl@{}} && p\left( \theta_{i} | \bar{\theta}, V_{\theta}, \{x_{i1t},x_{i2t},p_{i1t},p_{i2t}\}_{t=1}^{T} \right) \propto p\left( \theta_{i} | \bar{\theta}, V_{\theta} \right) \times \ell \left( \theta_{i} | \{x_{i1t},x_{i2t},p_{i1t},p_{i2t}\}_{t=1}^{T}\right),\\ && \forall i \in \{1,\cdots,N\} \\ && p\left( V_{\theta} | \{\theta_{i}\}_{i=1}^{N} \right) \sim IW \left( \nu_{0}+N, V_{0} +\tilde{S}\right) \\ && p\left( \bar{\theta} | V_{\theta}, \{\theta_{i}\}_{i=1}^{N} \right) \sim N \left( \tilde{ \theta}, V_{\theta} \otimes (N+A)^{-1}\right) \\ && \text{where} \quad \tilde{ \theta}=\text{vec}(\tilde{\Theta}), \quad \tilde{\Theta}=(N+A)^{-1}(N\bar{\Theta}+A {\Theta}_{0}) \\ && \quad \quad \quad \tilde{S}=({\Theta}-\tilde{\Theta})^{T}({\Theta}-\tilde{\Theta})+(\tilde{\Theta}-{\Theta}_{0})^{T}A(\tilde{\Theta}-{\Theta}_{0}) \end{array} $$
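For readers implementing these draws, the sketch below codes the two conjugate updates using the standard multivariate-regression results (Rossi et al., 2012). The function name, the use of NumPy/SciPy, and the intercept-only formulation are illustrative assumptions rather than the authors' released code.

```python
import numpy as np
from scipy.stats import invwishart

def draw_hyperparameters(Theta, theta0, nu0, V0, A):
    """One Gibbs draw of (theta_bar, V_theta) given the N x k matrix Theta whose
    rows are the current individual-level draws theta_i (intercept-only case)."""
    N, k = Theta.shape
    theta_mle = Theta.mean(axis=0)                         # sample mean of the theta_i's
    theta_tilde = (N * theta_mle + A * theta0) / (N + A)   # mean shrunk toward the prior mean theta0
    resid = Theta - theta_tilde
    S_tilde = resid.T @ resid + A * np.outer(theta_tilde - theta0, theta_tilde - theta0)
    V_theta = invwishart.rvs(df=nu0 + N, scale=V0 + S_tilde)                       # IW draw
    theta_bar = np.random.multivariate_normal(theta_tilde, V_theta / (N + A))      # Normal draw
    return theta_bar, V_theta

# Diffuse prior of Appendix B: nu0 = 7, V0 = 7 * np.eye(4), theta0 = np.zeros(4), A = 0.01
```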

While the hyperparameters (\(\bar {\theta }, V_{\theta }\)) can be directly drawn from the standard distributions due to the conjugacy, we use the Metropolis–Hastings algorithm for drawing the individual-level parameters. Our Bayesian MCMC estimation algorithm follows the steps below:

(Step 1):

(i) Set initial values for the model parameters (\(\bar {\theta }^{(0)}, V_{\theta }^{(0)},\{\theta _{i}^{(0)}\}_{i=1}^{N}\)). (ii) Compute the log-likelihood of the model for each individual.

$$ \begin{array}{@{}rcl@{}} {ll}_{i}^{(0)} = \ln \ell \left( \theta_{i}^{(0)} | \{x_{i1t},x_{i2t},p_{i1t},p_{i2t}\}_{t=1}^{T}\right), \quad \forall i \in \{1,\cdots,N\} \end{array} $$
(Step 2):

Draw the individual-level parameters (\(\{\theta _{i}^{(r)}\}_{i=1}^{N}\)) for the rth iteration. (i) Given \(\{ \theta _{i}^{(r-1)}, {ll}_{i}^{(r-1)}, \bar {\theta }^{(r-1)},V_{\theta }^{(r-1)} \}\), draw \(\theta _{i}^{(r)}\) by the random-walk MH algorithm. (ii) Update the stored log-likelihood (\({ll}_{i}^{(r)}\)) as well when the new parameter is accepted. (iii) Repeat (Step 2)-(i)\(\sim \)(Step 2)-(ii) for every individual (∀i ∈{1,⋯ ,N}).

(Step 3):

Draw the hyperparameters (\(\bar {\theta }^{(r)},V_{\theta }^{(r)}\)) for the rth iteration. (i) Given \(\{ \{\theta _{i}^{(r)}\}_{i=1}^{N},\nu _{0},V_{0} \}\), draw \(V_{\theta }^{(r)}\) from the Inverse Wishart distribution. (ii) Given \(\{ V_{\theta }^{(r)}, \{\theta _{i}^{(r)}\}_{i=1}^{N},\theta _{0},A \}\), draw \(\bar {\theta }^{(r)}\) from the multivariate Normal distribution.

(Step 4):

Repeat (Step 2)\(\sim \)(Step 3) for R iterations.

It is worth noting that we reuse the log-likelihood value of the previous iteration (\({ll}_{i}^{(r-1)}\)) when implementing the MH algorithm for \(\theta _{i}^{(r)}\) in (Step 2). Because our two-step approach approximates the model likelihood by Monte Carlo integration rather than computing its exact value, the MCMC converges to the exact posterior only when the integration dummies used to simulate the likelihood become a part of the MCMC state space (Andrieu & Roberts, 2009).
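The following Python sketch lays out the loop structure of (Step 1)–(Step 4), including the reuse of the stored log-likelihood in the MH acceptance ratio. Here `log_likelihood` stands in for the simulated likelihood of the sequential choice model (not reproduced in this appendix), `draw_hyperparameters` is the conjugate update sketched above, and all names and tuning choices (e.g., the random-walk step size) are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def run_mcmc(data, log_likelihood, draw_hyperparameters,
             theta0, nu0, V0, A, R=10000, step_size=0.1, seed=0):
    """Hybrid Gibbs / random-walk MH sampler following (Step 1)-(Step 4).
    log_likelihood(theta_i, data_i) returns the simulated log-likelihood of
    consumer i's purchase histories; data is a list with one entry per consumer."""
    rng = np.random.default_rng(seed)
    theta0, V0 = np.asarray(theta0, dtype=float), np.asarray(V0, dtype=float)
    N, k = len(data), len(theta0)

    # (Step 1): initial values and initial log-likelihoods
    theta = np.tile(theta0, (N, 1))
    ll = np.array([log_likelihood(theta[i], data[i]) for i in range(N)])
    theta_bar, V_theta = theta0.copy(), V0.copy()

    draws = []
    for r in range(R):
        # (Step 2): random-walk MH for each individual-level theta_i,
        # reusing the stored log-likelihood ll[i] from the previous iteration
        V_inv = np.linalg.inv(V_theta)
        for i in range(N):
            cand = theta[i] + step_size * rng.multivariate_normal(np.zeros(k), V_theta)
            ll_cand = log_likelihood(cand, data[i])
            d_old, d_new = theta[i] - theta_bar, cand - theta_bar
            log_prior_diff = -0.5 * (d_new @ V_inv @ d_new - d_old @ V_inv @ d_old)
            if np.log(rng.uniform()) < ll_cand - ll[i] + log_prior_diff:
                theta[i], ll[i] = cand, ll_cand   # accept: update draw and stored log-likelihood

        # (Step 3): conjugate Gibbs draws of the hyperparameters
        theta_bar, V_theta = draw_hyperparameters(theta, theta0, nu0, V0, A)

        draws.append((theta_bar.copy(), V_theta.copy(), theta.copy()))

    # (Step 4): after R iterations, return the retained draws
    return draws
```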

Appendix C: Model specifications

  1. Benchmark1: \(\theta \equiv (\alpha _{2},\cdots ,\alpha _{J},\beta ,\ln \lambda )^{\prime }\) is estimated with \(\alpha_{1}=0\) (see the simulation sketch after this list)

    $$ \begin{array}{@{}rcl@{}} && Q_{t} \sim \text{Poisson} \left( \lambda \right) \\ && x_{1t},\cdots,x_{Jt} | Q_{t} \sim \text{Multinomial} \left( \pi_{1},\cdots,\pi_{J} \right), \quad \text{where} \quad \pi_{j}=\frac{e^{\alpha_{j}+\beta \times p_{jt}}}{\sum \limits_{k=1}^{J} e^{\alpha_{k}+\beta \times p_{kt}}} \end{array} $$
  2. Benchmark2: \(\theta \equiv (\ln \psi _{1},\cdots ,\ln \psi _{J}, \ln \gamma ,\ln E)^{\prime }\) is estimated with \(\psi_{0}=1\)

    $$ \begin{array}{@{}rcl@{}} &\text{Maximize } \quad & V\left( x_{1t}, \cdots, x_{Jt}\right) = \sum\limits_{j = 1}^{J} \frac{\psi_{j} e^{\varepsilon_{jt}}}{\gamma} \ln \left( \gamma x_{jt}+1\right) + \psi_{0} \ln \left( E - \sum\limits_{j = 1}^{J} {{p_{jt}}{x_{jt}}} \right) \\ &\text{Subject to} \quad & x_{0t} = E - \sum\limits_{j = 1}^{J} {{p_{jt}}{x_{jt}}} \ge 0, x_{jt} \ge 0, \forall j \in \left\{ {1, {\cdots} ,J} \right\} \end{array} $$
  3. Benchmark3: \(\theta \equiv (\ln \psi _{1},\cdots ,\ln \psi _{J}, \ln \gamma )^{\prime }\) is estimated with \(\psi_{0}=1\)

    $$ \begin{array}{@{}rcl@{}} &\text{Maximize } \quad & V\left( x_{1t}, \cdots, x_{Jt}\right) = \sum\limits_{j = 1}^{J} \frac{\psi_{j} e^{\varepsilon_{jt}}}{\gamma} \ln \left( \gamma x_{jt}+1\right) + \psi_{0} \left( E - \sum\limits_{j = 1}^{J} {{p_{jt}}{x_{jt}}} \right) \\ &\text{Subject to} \quad & x_{0t} = E - \sum\limits_{j = 1}^{J} {{p_{jt}}{x_{jt}}} \ge 0, x_{jt} \in \left\{ {0,1,2, {\cdots} } \right\}, \forall j \in \left\{ {1, {\cdots} ,J} \right\} \end{array} $$
  4. Proposed1: \(\theta \equiv (\ln \psi _{1},\cdots ,\ln \psi _{J}, \ln \gamma ,\ln E)^{\prime }\) is estimated with \(\psi_{0}=1\)

    $$ \begin{array}{@{}rcl@{}} &\text{Maximize } \quad & V\left( x_{1t}, \cdots, x_{Jt}\right) = \sum\limits_{j = 1}^{J} \frac{\psi_{j} e^{\varepsilon_{jt}}}{\gamma} \ln \left( \gamma x_{jt}+1\right) + \psi_{0} \ln \left( E - \sum\limits_{j = 1}^{J} {{p_{jt}}{x_{jt}}} \right) \\ &\text{Subject to} \quad & x_{0t} = E - \sum\limits_{j = 1}^{J} {{p_{jt}}{x_{jt}}} \ge 0, x_{jt} \in \left\{ {0,1,2, {\cdots} } \right\}, \forall j \in \left\{ {1, {\cdots} ,J} \right\} \end{array} $$
  5. Proposed2: \(\theta \equiv (\ln \psi _{1},\cdots ,\ln \psi _{J}, \ln \gamma ,\ln E,\ln S)^{\prime }\) is estimated with \({\psi _{0}^{m}}={\psi _{0}^{s}}=1\)

    $$ \begin{array}{@{}rcl@{}} \text{Maximize } &&\quad V\left( x_{1t}, \cdots, x_{Jt}\right)\\ &=& \sum\limits_{j = 1}^{J} \frac{\psi_{j} e^{\varepsilon_{jt}}}{\gamma} \ln \left( \gamma x_{jt}+1\right) + {\psi_{0}^{m}} \ln \left( E - \sum\limits_{j = 1}^{J} {{p_{jt}}{x_{jt}}} \right)\\ &&+ {\psi_{0}^{s}} \ln \left( S - \sum\limits_{j = 1}^{J} {{q_{j}}{x_{jt}}} \right) \\ \text{Subject to} \quad x_{0t}^{m} &=& E - \sum\limits_{j = 1}^{J} {{p_{jt}}{x_{jt}}} \ge 0, x_{0t}^{s} = S - \sum\limits_{j = 1}^{J} {{q_{j}}{x_{jt}}}\\ &\ge& 0, x_{jt} \in \left\{ {0,1,2, {\cdots} } \right\}, \forall j \in \left\{ {1, {\cdots} ,J} \right\} \end{array} $$
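As referenced in the Benchmark1 item above, the Python sketch below simulates one shopping trip from that specification: a Poisson draw of the total quantity, allocated across alternatives with multinomial-logit shares. The parameter values and the function name are hypothetical illustrations, not values used in the paper.

```python
import numpy as np

def simulate_benchmark1(prices, alpha, beta, lam, rng=np.random.default_rng(0)):
    """Simulate one shopping trip from Benchmark1: total quantity Q_t ~ Poisson(lambda),
    allocated across the J alternatives with multinomial-logit shares pi_j."""
    util = alpha + beta * prices                 # alpha_1 is normalized to 0
    pi = np.exp(util) / np.exp(util).sum()       # logit shares
    Q = rng.poisson(lam)                         # total basket size
    return rng.multinomial(Q, pi)                # (x_1t, ..., x_Jt)

# Illustrative call with hypothetical values for J = 3 goods
print(simulate_benchmark1(prices=np.array([2.0, 3.0, 4.0]),
                          alpha=np.array([0.0, 0.2, -0.1]),
                          beta=-0.5, lam=3.0))
```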

Appendix D: Parameter estimates of the models

Table 16 Parameter estimates
Table 17 Covariance matrix

Appendix E: Application of the proposed model to optimal demand of a non-convex problem

This section provides an additional simulation study that shows the limitation of the proposed sequential choice process in estimating optimal demand models of non-convex problems. Given the same utility function as in Eq. 13, we simulate optimal demand for two complementary goods by comparing the attainable utilities over the entire feasible set. We then apply our sequential choice model to the simulated optimal-demand data to estimate the model parameters. Table 18 shows how our estimation algorithm performs when it is applied to optimal demand for complementary goods. Because the sequential choice process embedded in the estimation algorithm does not guarantee an optimal solution of a non-convex problem, the parameter estimates fail to recover the true parameter values.

Table 18 Estimation result—optimal demand of complementary goods
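Because Eq. 13 is not reproduced in this appendix, the following Python sketch uses a stylized utility with an explicit complementarity term as an assumed stand-in, purely to illustrate the mechanism: exhaustive enumeration of the feasible integer bundles finds the global optimum, while the one-unit greedy ascent used by the sequential choice process can stop at a local maximum. All parameter values and function names are illustrative.

```python
import itertools
import numpy as np

# Stylized utility with a complementarity term between goods A and B;
# an assumed stand-in for Eq. 13, NOT the paper's exact specification.
psi, gamma, comp, E = np.array([0.05, 0.05]), 0.5, 1.2, 10.0
p = np.array([2.0, 3.0])

def utility(x):
    x = np.asarray(x, dtype=float)
    inside = np.sum(psi / gamma * np.log(gamma * x + 1.0))
    complement = comp * np.log(x.min() + 1.0)     # extra utility only when both goods are bought
    outside = np.log(E - np.dot(p, x) + 1.0)
    return inside + complement + outside

def optimal_demand():
    """Exhaustive search over the feasible integer bundles (guarantees the global maximum)."""
    feasible = [x for x in itertools.product(range(6), repeat=2) if np.dot(p, x) <= E]
    return max(feasible, key=utility)

def greedy_demand():
    """One-unit greedy ascent (the sequential choice process); may stop at a local maximum."""
    x = np.zeros(2)
    while True:
        moves = [x + np.eye(2)[j] for j in range(2) if np.dot(p, x) + p[j] <= E]
        moves += [x - np.eye(2)[j] for j in range(2) if x[j] >= 1]
        gains = [utility(m) - utility(x) for m in moves]
        if not moves or max(gains) <= 0:
            return x
        x = moves[int(np.argmax(gains))]

# With these illustrative values the greedy process stops at the empty basket (0, 0),
# while exhaustive enumeration finds the global optimum (1, 1).
print("global optimum:", optimal_demand(), "greedy outcome:", greedy_demand())
```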

About this article


Cite this article

Lee, S., Kim, S. & Park, S. A sequential choice model for multiple discrete demand. Quant Mark Econ 20, 141–178 (2022). https://doi.org/10.1007/s11129-022-09250-9

