Similar Documents
1.
The likelihood functions of multinomial probit (MNP)-based choice models entail the evaluation of analytically-intractable integrals. As a result, such models are usually estimated using maximum simulated likelihood (MSL) techniques. Unfortunately, for many practical situations, the computational cost to ensure good asymptotic MSL estimator properties can be prohibitive and practically infeasible as the number of dimensions of integration rises. In this paper, we introduce a maximum approximate composite marginal likelihood (MACML) estimation approach for MNP models that can be applied using simple optimization software for likelihood estimation. It also represents a conceptually and pedagogically simpler procedure relative to simulation techniques, and has the advantage of substantial computational time efficiency relative to the MSL approach. The paper provides a “blueprint” for MACML estimation of a wide variety of MNP models.
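The computational appeal of MACML comes from replacing simulated multivariate normal orthant probabilities with an analytic approximation built only from univariate and bivariate normal CDFs. The Python sketch below illustrates one such sequential-conditioning approximation of this general type; the function name, test inputs, and clipping tolerance are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def mvn_cdf_approx(a, R):
    """Approximate P(X_1 < a_1, ..., X_K < a_K) for X ~ N(0, R) using only
    univariate and bivariate normal CDFs, by sequentially approximating the
    conditional probability of each event given the earlier ones (an
    illustrative sketch of the kind of analytic approximation MACML-style
    estimators rely on)."""
    K = len(a)
    p = norm.cdf(a)                                   # P(I_k = 1), with I_k = 1{X_k < a_k}
    P2 = np.empty((K, K))                             # pairwise joint probabilities
    for i in range(K):
        for j in range(K):
            if i == j:
                P2[i, j] = p[i]
            else:
                cov = [[1.0, R[i, j]], [R[i, j], 1.0]]
                P2[i, j] = multivariate_normal([0.0, 0.0], cov).cdf([a[i], a[j]])
    prob = p[0]
    for k in range(1, K):
        idx = np.arange(k)
        Omega = P2[np.ix_(idx, idx)] - np.outer(p[idx], p[idx])  # Cov of earlier indicators
        c = P2[k, idx] - p[k] * p[idx]                           # Cov(I_k, earlier indicators)
        # Linear projection of E[I_k | I_1 = ... = I_{k-1} = 1]
        cond = p[k] + c @ np.linalg.solve(Omega, 1.0 - p[idx])
        prob *= np.clip(cond, 1e-10, 1.0)
    return prob

# Hypothetical 4-dimensional example: no simulation draws are needed
R = np.full((4, 4), 0.4)
np.fill_diagonal(R, 1.0)
print(mvn_cdf_approx(np.array([0.5, 0.2, -0.1, 0.8]), R))
```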

2.
This paper develops a blueprint (complete with matrix notation) to apply Bhat’s (2011) Maximum Approximate Composite Marginal Likelihood (MACML) inference approach for the estimation of cross-sectional as well as panel multiple discrete–continuous probit (MDCP) models. A simulation exercise is undertaken to evaluate the ability of the proposed approach to recover parameters from a cross-sectional MDCP model. The results show that the MACML approach recovers the parameters very well and appears to accurately capture the curvature of the Hessian of the log-likelihood function. The paper also demonstrates the application of the proposed approach through a study of individuals’ recreational (i.e., long-distance leisure) choice among alternative destination locations and the number of trips to each recreational destination location, using data drawn from the 2004–2005 Michigan statewide household travel survey.

3.
The integrated choice and latent variable (ICLV) model incorporates latent factors into the standard discrete choice model with the aim of providing greater explanatory power. Using simulated datasets, this study compares three estimation approaches, namely the sequential approach and two simultaneous approaches (maximum simulated likelihood with the GHK estimator, and the maximum approximate composite marginal likelihood (MACML) approach), to evaluate their ability to recover the underlying parameters of a multinomial probit-kernel ICLV model. The results show that both simultaneous approaches outperform the sequential approach in terms of estimation accuracy and efficiency irrespective of sample size, and that the MACML approach is the most preferable because it best recovers the true parameter values with relatively small standard errors, especially when the sample size is large.

4.
This paper formulates a generalized heterogeneous data model (GHDM) that jointly handles mixed types of dependent variables (multiple nominal outcomes, multiple ordinal variables, multiple count variables, as well as multiple continuous variables) by representing the covariance relationships among them through a reduced number of latent factors. Sufficient conditions for identification of the GHDM parameters are presented. The maximum approximate composite marginal likelihood (MACML) method is proposed to estimate this jointly mixed model system. This estimation method provides computational time advantages because the dimensionality of integration in the likelihood function is independent of the number of latent factors. The study undertakes a simulation experiment, within the virtual context of integrating residential location choice and travel behavior, to evaluate the ability of the MACML approach to recover parameters. The simulation results show that the MACML approach effectively recovers the underlying parameters, and also that ignoring the multi-dimensional nature of the relationship among mixed types of dependent variables can not only lead to inconsistent parameter estimation but also have important implications for policy analysis.

5.
The multinomial probit model is a statistical tool that is well suited to analyzing several transportation problems. Modal split, gap acceptance, and route choice are some examples of application contexts. This paper presents an in-depth analysis of its statistical properties and an estimation method for the trinomial case. In the statistical part of the paper it is shown that, for multinomial probit models with specifications that are linear in the parameters, the global maximizer of the log-likelihood function is a consistent estimator provided the data do not exhibit multicollinearity as defined in the text. For the special case with three alternatives, lack of multicollinearity is also shown to guarantee asymptotic efficiency and normality, and the uniqueness of any root of the likelihood equations. In addition, it is shown that for the trinomial probit model certain goodness-of-fit measures and test statistics can be easily calculated. The methods part of the paper introduces an estimation procedure that solves the likelihood equations using a special-purpose table of the bivariate normal distribution and analytical derivatives of the log-likelihood function. The method is very accurate, can be applied to nonlinear specifications, and is considerably faster than current computer programs. For linear specifications, the method can be mathematically proven to converge if the log-likelihood equations have a root.
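For three alternatives, the probit choice probability reduces to a single bivariate normal CDF of the two utility differences, which is exactly the kind of quantity a bivariate normal table can supply. The Python sketch below illustrates this reduction with made-up utilities and error covariance; it is not the paper's estimation code.

```python
import numpy as np
from scipy.stats import multivariate_normal

def trinomial_probit_prob(V, Sigma, chosen=0):
    """Probability that alternative `chosen` has the highest utility
    U_j = V_j + eps_j, with eps ~ N(0, Sigma), among three alternatives.
    Differencing against the chosen alternative reduces the problem to a
    single bivariate normal CDF evaluation."""
    others = [j for j in range(3) if j != chosen]
    # Difference operator: rows are (U_chosen - U_j) for the two other alternatives
    D = np.zeros((2, 3))
    for r, j in enumerate(others):
        D[r, chosen], D[r, j] = 1.0, -1.0
    mean_diff = D @ V            # E[U_chosen - U_j]
    cov_diff = D @ Sigma @ D.T   # covariance of the two utility differences
    # The alternative is chosen when both differences are positive, which by
    # symmetry of the zero-mean normal equals the bivariate CDF at mean_diff
    return multivariate_normal(mean=[0.0, 0.0], cov=cov_diff).cdf(mean_diff)

V = np.array([0.5, 0.0, -0.2])            # systematic utilities (illustrative)
Sigma = np.array([[1.0, 0.3, 0.0],
                  [0.3, 1.0, 0.2],
                  [0.0, 0.2, 1.0]])        # error covariance (illustrative)
print([trinomial_probit_prob(V, Sigma, j) for j in range(3)])  # sums to ~1
```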

6.
In the current paper, we propose the use of a multivariate skew-normal (MSN) distribution function for the latent psychological constructs within the context of an integrated choice and latent variable (ICLV) model system. The multivariate skew-normal (MSN) distribution that we use is tractable, parsimonious in parameters that regulate the distribution and its skewness, and includes the normal distribution as a special interior point case (this allows for testing with the traditional ICLV model). Our procedure to accommodate non-normality in the psychological constructs exploits the latent factor structure of the ICLV model, and is a flexible, yet very efficient approach (through dimension-reduction) to accommodate a multivariate non-normal structure across all indicator and outcome variables in a multivariate system through the specification of a much lower-dimensional multivariate skew-normal distribution for the structural errors. Taste variations (i.e., heterogeneity in sensitivity to response variables) can also be introduced efficiently and in a non-normal fashion through interactions of explanatory variables with the latent variables. The resulting model we develop is suitable for estimation using Bhat’s (2011) maximum approximate composite marginal likelihood (MACML) inference approach. The proposed model is applied to model bicyclists’ route choice behavior using a web-based survey of Texas bicyclists. The results reveal evidence for non-normality in the latent constructs. From a substantive point of view, the results suggest that the most unattractive features of a bicycle route are long travel times (for commuters), heavy motorized traffic volume, absence of a continuous bicycle facility, and high parking occupancy rates and long lengths of parking zones along the route.

7.
This paper studies link travel time estimation using entry/exit time stamps of trips on a steady-state transportation network. We propose two inference methods based on the likelihood principle, assuming each link is associated with a random travel time. The first method considers independent, Gaussian-distributed link travel times, using the additive property that trip time has a closed-form distribution as the sum of link travel times. We particularly analyze the mean estimates when the variances of the trip time estimates are known with a high degree of precision, and examine the uniqueness of solutions. Two cases are discussed in detail: one with known paths for all trips and the other with unknown paths for some trips. We apply the Gaussian mixture model and the Expectation–Maximization (EM) algorithm to deal with the latter. The second method splits trip time proportionally among the links traversed in order to handle more general link travel time distributions such as the log-normal. This approach builds upon an expected log-likelihood function that naturally leads to an iterative procedure analogous to the EM algorithm. Simulation tests on a simple nine-link network and on the Sioux Falls network indicate that both methods perform well. The second method (i.e., trip splitting approximation) generally runs faster but yields larger errors in the estimated standard deviations of link travel times.
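In the known-paths Gaussian case, trip time is the sum of the traversed links' times, so with an (assumed) common link variance the ML estimates of the mean link times reduce to least squares on the trip-link incidence matrix. The Python sketch below illustrates that reduction on synthetic data; the network size, noise level, and variable names are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Known-paths case: trip time = sum of traversed link times, links independent
# Gaussian. Assuming equal link variances, ML estimation of the mean link times
# is ordinary least squares on the trip-link incidence matrix.
rng = np.random.default_rng(0)

n_links, n_trips = 9, 500
true_mu = rng.uniform(30, 120, n_links)                     # true mean link times (s)
A = (rng.random((n_trips, n_links)) < 0.4).astype(float)    # which links each trip used
A[A.sum(axis=1) == 0, 0] = 1.0                              # every trip uses at least one link
trip_times = A @ true_mu + rng.normal(0, 10, n_trips)       # observed entry/exit trip times

mu_hat, *_ = np.linalg.lstsq(A, trip_times, rcond=None)     # ML = least squares here
print(np.round(np.c_[true_mu, mu_hat], 1))                  # true vs. estimated link means
```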

8.
The estimation of discrete choice models requires measuring the attributes describing the alternatives within each individual’s choice set. Even though some attributes are intrinsically stochastic (e.g. travel times) or are subject to non-negligible measurement errors (e.g. waiting times), they are usually assumed fixed and deterministic. Indeed, even an accurate measurement can be biased, as it might differ from the original (experienced) value perceived by the individual. Experimental evidence suggests that discrepancies between the values measured by the modeller and those experienced by the individuals can lead to incorrect parameter estimates. On the other hand, there is an important trade-off between data quality and collection costs. This paper explores the inclusion of stochastic variables in discrete choice models through an econometric analysis that allows identifying the most suitable specifications. Various model specifications were tested experimentally using synthetic data; the comparisons included tests for unbiased parameter estimation and computation of marginal rates of substitution. Model specifications were also tested using a real-case databank featuring two travel time measurements associated with different levels of accuracy. Results show that in most cases an error components model can effectively deal with stochastic variables. A random coefficients model can only deal effectively with stochastic variables when their randomness is directly proportional to the value of the attribute. Another interesting result is the presence of confounding effects that are very difficult, if not impossible, to isolate when more flexible models are used to capture stochastic variations. Due to the presence of confounding effects when estimating flexible models, the estimated parameters should be analysed carefully to avoid misinterpretations. Also, as in previous misspecification tests reported in the literature, the Multinomial Logit model proves to be quite robust for estimating marginal rates of substitution, especially when models are estimated with large samples.

9.
This paper proposes an integrated Bayesian statistical inference framework to characterize the passenger flow assignment model in a complex metro network. In doing so, we combine network cost attribute estimation and passenger route choice modeling using Bayesian inference. We build the posterior density by taking the likelihood of observing passenger travel times provided by smart card data together with our prior knowledge about the studied metro network. Given the high-dimensional nature of the parameters in this framework, we apply the variable-at-a-time Metropolis sampling algorithm to estimate the mean and Bayesian confidence interval of each parameter in turn. As a numerical example, this integrated approach is applied to the metro network in Singapore. Our results show that link travel time exhibits a considerable coefficient of variation of about 0.17, suggesting that travel time reliability is of high importance to metro operations. The estimates of the route choice parameters are consistent with previous survey-based studies, showing that the disutility of transfer time is about twice that of in-vehicle travel time in the Singapore metro system.
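Variable-at-a-time (single-component) Metropolis sampling updates one parameter per accept/reject step, which keeps proposals one-dimensional even when the full parameter vector is large. The Python sketch below shows the generic scheme on a toy two-parameter log-posterior; the target density, step size, and iteration counts are illustrative assumptions, not the paper's model.

```python
import numpy as np

def metropolis_one_at_a_time(log_post, theta0, n_iter=5000, step=0.8, seed=0):
    """Variable-at-a-time random-walk Metropolis: each parameter is updated in
    turn with its own accept/reject step, avoiding a joint high-dimensional
    proposal."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    draws = np.empty((n_iter, theta.size))
    lp = log_post(theta)
    for it in range(n_iter):
        for k in range(theta.size):
            prop = theta.copy()
            prop[k] += step * rng.normal()
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:   # accept with prob min(1, ratio)
                theta, lp = prop, lp_prop
        draws[it] = theta
    return draws

# Toy target: bivariate normal with unit variances and correlation 0.8 (illustrative)
def log_post(th):
    x, y = th
    return -0.5 * (x**2 + y**2 - 1.6 * x * y) / (1.0 - 0.8**2)

draws = metropolis_one_at_a_time(log_post, [0.0, 0.0])
print(draws[1000:].mean(axis=0), np.corrcoef(draws[1000:].T)[0, 1])
```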

10.
Count models are used to analyze outcomes that can only take non-negative integer values, with or without a pre-specified large upper limit. However, count models are typically considered to be distinct from random utility models such as the multinomial logit (MNL) model. This paper develops Generalized Extreme Value (GEV) models that are consistent with the Random Utility Maximization (RUM) framework and that subsume standard count models (Poisson, Geometric, Negative Binomial, Binomial, and Logarithmic) as special cases. The ability of the maximum likelihood (ML) inference approach to retrieve the parameters of the resulting GEV count models is examined using synthetic data. The simulation results indicate that the ML estimation technique performs quite well in recovering the true parameters of the proposed GEV count models. The models developed are also used to analyze the monthly telecommuting frequency decisions of workers. Overall, the empirical results demonstrate superior data fit and better predictive performance of the GEV models compared to standard count models.
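As a point of reference for the ML results above, the Poisson special case can be estimated by directly maximizing its log-likelihood. The short Python sketch below does this on simulated data; the covariates, sample size, and "telecommuting frequency" framing are illustrative assumptions rather than the paper's dataset.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch of ML estimation for a standard Poisson count model (one of the
# special cases nested in the GEV count framework). Data are simulated.
rng = np.random.default_rng(1)
n = 2000
X = np.c_[np.ones(n), rng.normal(size=(n, 2))]   # intercept + 2 covariates
beta_true = np.array([0.5, 0.8, -0.4])
y = rng.poisson(np.exp(X @ beta_true))           # e.g. monthly telecommuting frequency

def neg_loglik(beta):
    lam = np.exp(X @ beta)
    return -(y * np.log(lam) - lam).sum()        # Poisson log-likelihood up to a constant

res = minimize(neg_loglik, np.zeros(3), method="BFGS")
print(res.x)                                     # should be close to beta_true
```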

11.
With subway systems around the world experiencing increasing demand, measures such as passengers left behind are becoming increasingly important. This paper proposes a methodology for inferring the probability distribution of the number of times a passenger is left behind at stations in congested metro systems using automated data. Maximum likelihood estimation (MLE) and Bayesian inference methods are used to estimate the left behind probability mass function (LBPMF) for a given station and time period. The model is applied using actual and synthetic data. The results show that the model is able to estimate the probability of being left behind fairly accurately.

12.
Welfare in random utility models has usually been analysed on the basis of only the expectation of the compensating variation. De Palma and Kilani (De Palma, A., Kilani, K., 2011. Transition choice probabilities and welfare analysis in additive random utility models. Economic Theory 46(3), 427–454) have developed a framework for conditional welfare analysis which provides analytic expressions for transition choice probabilities and associated welfare measures. The contribution is of practical relevance in transportation because it allows the shares of shifters and non-shifters to be computed and benefits to be attributed to them in a rigorous way. In De Palma and Kilani (2011) the usual assumption of unchanged random terms before and after is made. The present paper generalises the framework for conditional welfare analysis to cases of imperfect before–after association of the random terms. The joint before–after distribution of the random terms is introduced, with postulated properties in terms of marginal distributions and covariance matrix. Analytic expressions, based on the probability density function and the cumulative distribution function of the joint before–after distribution, and simulation procedures for computing the transition choice probabilities and the conditional expectations of the compensating variation are provided. Results are specialised for the multinomial logit and probit. In the case without income effects, it is proved that the unconditional expectation of the compensating variation depends only on the marginal distributions. The theory is illustrated by a numerical example which refers to a multinomial logit applied to the choice of transport mode, with two specifications, one without and one with income effects. Results show that transition probabilities and conditional welfare measures are significantly affected by the assumption on the before–after correlation. The variability in the transition probabilities across transitions tends to decrease as the before–after correlation decreases. In the extreme case of independent random terms, the conditional expectations of the compensating variation tend to be close to the unconditional expectation.
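For orientation, in the multinomial logit case without income effects the unconditional expectation of the compensating variation referred to above is the familiar logsum difference; this standard expression is given here for reference and is not a result specific to this paper.

```latex
E[CV] \;=\; \frac{1}{\alpha}\left[\ln\sum_{j \in C}\exp\!\big(V_j^{\text{after}}\big)
\;-\; \ln\sum_{j \in C}\exp\!\big(V_j^{\text{before}}\big)\right]
```

Here C is the choice set, V_j are the systematic utilities before and after the change, and alpha is the marginal utility of income.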

13.
The multinomial probit model of travel demand is considerably more general but much less tractable than the better-known multinomial logit model. In an effort to determine the effects of using the relatively simple logit model in situations where the assumptions of probit modeling are satisfied but those of logit modeling are not, the accuracy of the multinomial logit model as an approximation to a variety of three-alternative probit models has been evaluated. Multinomial logit can give highly erroneous estimates of the choice probabilities of multinomial probit models. However, logit models appear to give asymptotically accurate estimates of the ratios of the coefficients of the systematic components of probit utility functions, even when the logit choice probabilities differ greatly from the probit ones. Large estimation data sets are not necessarily needed to enable likelihood ratio tests to distinguish three-alternative probit models from logit models that give seriously erroneous estimates of the probit choice probabilities. Inclusion of alternative-specific dummy variables in logit utility functions cannot be relied upon to reduce significantly the errors of logit approximations to the choice probabilities of probit models whose utility functions do not contain the dummies.

14.
Hensher, David A. Transportation, 2001, 28(2): 101–118
The empirical valuation of travel time savings is derived from the ratio of parameter estimates in a discrete choice model. The most common formulation (multinomial logit) imposes strong restrictions on the profile of the unobserved influences on choice, as represented by the random component of a preference function. As our ability to relax these restrictions progresses, we open up opportunities to benchmark the values derived from simple (albeit relatively restrictive) models. In this paper we contrast the values of travel time savings derived from multinomial logit models and alternative specifications of mixed (or random parameter) logit models. The empirical setting is urban car commuting in six locations in New Zealand. The evidence suggests that less restrictive choice model specifications tend to produce higher estimates of the value of time savings than the multinomial logit model; however, the degree of under-estimation by the multinomial logit model remains quite variable, depending on the context.
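The ratio referred to in the first sentence is the standard value of travel time savings (VTTS); the expression below is the textbook definition, included for reference. In a mixed (random parameter) logit, one or both coefficients vary over the population, so the VTTS itself becomes a random quantity whose distribution depends on the assumed mixing distributions.

```latex
\mathrm{VTTS} \;=\; \frac{\partial V/\partial(\text{travel time})}{\partial V/\partial(\text{travel cost})}
\;=\; \frac{\beta_{\text{time}}}{\beta_{\text{cost}}}
```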

15.
Kim, Yeonbae; Kim, Tai-Yoo; Heo, Eunnyeong. Transportation, 2003, 30(3): 351–365
In this paper, we estimate a multinomial probit model of work trip mode choice in Seoul, Korea, using the Bayesian approach with Gibbs sampling. This method constructs a Markov chain Gibbs sampler that can be used to draw directly from the exact posterior distribution and perform finite sample likelihood inference. We estimate direct and cross-elasticities with respect to travel cost and the value of time. Our results show that travel demands are more sensitive to travel time than travel cost. The cross-elasticity results show that the bus has a greater substitute relation to the subway than the auto (and vice versa) and that an increase in the cost of an auto will increase the demand for bus transport more so than that of the subway.
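Gibbs sampling for probit choice models works by data augmentation: latent utilities are drawn from truncated normals given the current coefficients, and the coefficients are then drawn from their conditional normal posterior. A full multinomial probit sampler with identification restrictions is lengthy, so the Python sketch below shows the two-alternative (binary probit) special case with a flat prior; the data, prior, and function name are illustrative assumptions, not the authors' Seoul mode-choice implementation.

```python
import numpy as np
from scipy.stats import truncnorm

def probit_gibbs(X, y, n_iter=2000, seed=0):
    """Data-augmentation Gibbs sampler for a binary probit (flat prior assumed):
    draw latent utilities from truncated normals, then draw the coefficients
    from their conditional normal posterior."""
    rng = np.random.default_rng(seed)
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = np.zeros(k)
    draws = np.empty((n_iter, k))
    for it in range(n_iter):
        mu = X @ beta
        # Latent utility z_i ~ N(mu_i, 1), truncated to (0, inf) if y_i = 1, else (-inf, 0)
        lo = np.where(y == 1, -mu, -np.inf)
        hi = np.where(y == 1, np.inf, -mu)
        z = mu + truncnorm.rvs(lo, hi, size=n, random_state=rng)
        # beta | z ~ N((X'X)^{-1} X'z, (X'X)^{-1}) since Var(z_i) = 1
        mean = XtX_inv @ X.T @ z
        beta = rng.multivariate_normal(mean, XtX_inv)
        draws[it] = beta
    return draws

# Illustrative synthetic binary-choice data
rng = np.random.default_rng(42)
X = np.c_[np.ones(500), rng.normal(size=(500, 2))]
y = (X @ np.array([-0.3, 1.0, -0.7]) + rng.normal(size=500) > 0).astype(int)
print(probit_gibbs(X, y)[500:].mean(axis=0))   # posterior means after burn-in
```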

16.
Most applications of discrete choice models in transportation now utilise a random coefficient specification, such as mixed logit, to represent taste heterogeneity. However, little is known about the ability of these models to capture the heterogeneity in finite samples (as opposed to asymptotically). Also, due to the computational intensity of the standard estimation procedures, several alternative, less demanding methods have been proposed, and yet the relative accuracy of these methods has not been investigated. This is especially true in the context of work looking at joint inter-respondent and intra-respondent variation. This paper presents an overview of the various different estimators, gives insights into some of the theoretical properties, and analyses their performance in a large scale study on simulated data. In particular, we specify 31 different forms of heterogeneity, with multiple versions of each dataset, and with results from over 16,000 mixed logit estimation runs. The findings suggest that variation in tastes over consumers is captured by all the methods, including the simpler versions, at least when sample size is sufficiently large. When tastes vary over choice situations for each consumer, as well as over consumers, the ability of the methods to capture and differentiate the two sources of heterogeneity becomes more tenuous. Only the most computationally intensive approach is able to capture adequately the two sources of variation, but at the cost of very high run times. Our results highlight the difficulty of retrieving taste heterogeneity with only cross-sectional data, providing further evidence of the benefits of repeated choice data. Our findings also suggest that the data requirements of random coefficients models may be more substantial than is commonly assumed, further reinforcing concerns about small sample issues.
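The standard (computationally intensive) estimator that such comparisons benchmark against is maximum simulated likelihood: the logit probability is averaged over draws of the random coefficients for each person, and the simulated log-likelihood is maximized. The Python sketch below shows a one-coefficient version with inter-respondent variation only; the data-generating values, number of draws, and optimizer are illustrative assumptions, not one of the paper's 31 specifications.

```python
import numpy as np
from scipy.optimize import minimize

# Maximum simulated likelihood for a mixed logit with one normally distributed
# coefficient (inter-respondent heterogeneity only). Data are simulated.
rng = np.random.default_rng(3)
N, J, R = 1000, 3, 200                        # people, alternatives, simulation draws
x = rng.normal(size=(N, J))                   # a single attribute, e.g. travel time
beta_i = rng.normal(-1.0, 0.5, size=N)        # true random coefficient (mean -1, sd 0.5)
u = beta_i[:, None] * x + rng.gumbel(size=(N, J))
y = u.argmax(axis=1)                          # observed choices

draws = rng.normal(size=(N, R))               # fixed standard normal draws per person

def neg_sim_loglik(theta):
    mu, log_sd = theta
    b = mu + np.exp(log_sd) * draws                       # (N, R) coefficient draws
    v = b[:, :, None] * x[:, None, :]                     # utilities (N, R, J)
    p = np.exp(v - v.max(axis=2, keepdims=True))
    p /= p.sum(axis=2, keepdims=True)                     # logit probabilities per draw
    p_chosen = p[np.arange(N), :, y].mean(axis=1)         # simulated choice probability
    return -np.log(p_chosen).sum()

res = minimize(neg_sim_loglik, np.array([0.0, np.log(0.3)]), method="Nelder-Mead")
print(res.x[0], np.exp(res.x[1]))             # estimated mean and s.d. of the coefficient
```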

17.
This paper proposes a reformulation of count models as a special case of generalized ordered-response models in which a single latent continuous variable is partitioned into mutually exclusive intervals. Using this equivalent latent variable-based generalized ordered response framework for count data models, we are then able to gainfully and efficiently introduce temporal and spatial dependencies through the latent continuous variables. Our formulation also allows handling excess zeros in correlated count data, a phenomenon that is commonly found in practice. A composite marginal likelihood inference approach is used to estimate model parameters. The modeling framework is applied to predict crash frequency at urban intersections in Arlington, Texas. The sample is drawn from the Texas Department of Transportation (TxDOT) crash incident files between 2003 and 2009, resulting in 1190 intersection-year observations. The results reveal the presence of intersection-specific time-invariant unobserved components influencing crash propensity and a spatial lag structure to characterize spatial dependence. Roadway configuration, approach roadway functional types, traffic control type, total daily entering traffic volumes and the split of volumes between approaches are all important variables in determining crash frequency at intersections.
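The recasting described above can be seen in miniature for the Poisson case: a Poisson count variable is reproduced exactly by a standard-normal latent variable partitioned at thresholds set to the normal quantiles of the Poisson CDF. The short Python check below illustrates this equivalence; the rate parameter is arbitrary and the snippet is not the paper's composite marginal likelihood estimation code.

```python
import numpy as np
from scipy.stats import norm, poisson

# Numerical check: a Poisson count model equals a standard-normal latent variable
# partitioned at thresholds psi_k = Phi^{-1}(F_Poisson(k)).
lam = 2.3
k = np.arange(0, 15)
psi = norm.ppf(poisson.cdf(k, lam))            # upper threshold for each count k
psi_lo = np.concatenate(([-np.inf], psi[:-1]))

p_ordered = norm.cdf(psi) - norm.cdf(psi_lo)   # P(psi_{k-1} < z <= psi_k), z ~ N(0,1)
p_poisson = poisson.pmf(k, lam)
print(np.allclose(p_ordered, p_poisson))       # True: the two representations coincide
```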

18.
Estimation of urban network link travel times from sparse floating car data (FCD) usually needs pre-processing, mainly map-matching and path inference for finding the most likely vehicle paths that are consistent with reported locations. Path inference requires a priori assumptions about link travel times; using unrealistic initial link travel times can bias the travel time estimation and subsequent identification of shortest paths. Thus, the combination of path inference and travel time estimation is a joint problem. This paper investigates the sensitivity of estimated travel times, and proposes a fixed point formulation of the simultaneous path inference and travel time estimation problem. The methodology is applied in a case study to estimate travel times from taxi FCD in Stockholm, Sweden. The results show that standard fixed point iterations converge quickly to a solution where input and output travel times are consistent. The solution is robust under different initial travel times assumptions and data sizes. Validation against actual path travel time measurements from the Google API and an instrumented vehicle deployed for this purpose shows that the fixed point algorithm improves shortest path finding. The results highlight the importance of the joint solution of the path inference and travel time estimation problem, in particular for accurate path finding and route optimization.
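The fixed-point idea alternates between inferring paths under the current link travel times and re-estimating link times from the observed trip times, until the two are mutually consistent. The Python sketch below illustrates that loop on a small synthetic grid network using networkx shortest paths and a least-squares link-time update; the network, trip generation, convergence tolerance, and update rule are illustrative assumptions, not the Stockholm taxi-FCD implementation.

```python
import numpy as np
import networkx as nx

# Sketch: infer paths with the current link times, re-estimate link times from
# observed trip times, repeat until input and output travel times agree.
rng = np.random.default_rng(7)
G = nx.grid_2d_graph(4, 4)                                   # toy street grid
edges = [frozenset(e) for e in G.edges()]
idx = {e: i for i, e in enumerate(edges)}
true_t = rng.uniform(30.0, 120.0, len(edges))                # true link times (s)
nodes = list(G.nodes)

def set_times(t):
    nx.set_edge_attributes(G, {tuple(e): w for e, w in zip(edges, t)}, "time")

def path_links(path):
    return [idx[frozenset(pair)] for pair in zip(path, path[1:])]

# Simulate FCD-like observations: origin, destination, noisy total trip time
set_times(true_t)
trips = []
for _ in range(300):
    o, d = rng.choice(len(nodes), 2, replace=False)
    links = path_links(nx.shortest_path(G, nodes[o], nodes[d], weight="time"))
    trips.append((nodes[o], nodes[d], true_t[links].sum() + rng.normal(0.0, 15.0)))

# Fixed-point iteration: path inference <-> least-squares link time estimation
t_hat = np.full(len(edges), 60.0)                            # flat initial guess
for _ in range(20):
    set_times(t_hat)
    A = np.zeros((len(trips), len(edges)))
    y = np.empty(len(trips))
    for r, (o, d, tt) in enumerate(trips):
        A[r, path_links(nx.shortest_path(G, o, d, weight="time"))] = 1.0
        y[r] = tt
    t_new, *_ = np.linalg.lstsq(A, y, rcond=None)
    t_new = np.clip(t_new, 1.0, None)                        # keep times positive
    converged = np.max(np.abs(t_new - t_hat)) < 1e-3
    t_hat = t_new
    if converged:
        break

print(np.round(np.c_[true_t, t_hat], 1)[:8])                 # true vs. estimated link times
```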

19.
This study investigates the influence of adaptive expectations on the purchase of automobiles by income quintile. Through maximum likelihood estimation, it is found that the coefficients of adaptation to income exhibit a trend towards faster rates in the upper quintiles. The effects of a “truncation remainder” and of serial correlation are also noted.
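For reference, the "coefficient of adaptation" mentioned above is the lambda in the textbook adaptive-expectations updating rule shown below; a larger lambda means expected income adjusts more quickly to observed income. This is the standard formulation, not an expression taken from the study itself.

```latex
E_t \;=\; E_{t-1} + \lambda\,\big(Y_t - E_{t-1}\big), \qquad 0 < \lambda \le 1
```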
