Similar Documents
Found 20 similar documents (search time: 218 ms)
1.
Recent empirical studies have found widespread inaccuracies in traffic forecasts even though travel demand forecasting models have improved significantly over the past few decades. We suspect that an intrinsic selection bias may exist in the competitive project appraisal process, in addition to the many other factors that contribute to inaccurate traffic forecasts. In this paper, we examine the potential for selection bias in the governmental appraisal process for Build-Operate-Transfer (BOT) transportation projects. Although multiple criteria are typically considered simultaneously in practice, the traffic flow estimate is usually a key criterion in these appraisals. For the purposes of this paper, we focus on the selection bias associated with the highest-flow-estimate criterion. We develop two approaches to quantify the level and likelihood of inaccuracy caused by selection bias: the expected value approach, which addresses the question “to what extent is inaccuracy caused by selection bias?”, and the probability approach, which addresses the question “what is the chance of inaccuracy due to selection bias?”. The results of this analysis confirm the existence of selection bias when a government uses the highest traffic forecast as the priority criterion for BOT project selection. In addition, we offer some insights into the relationship between the extent/chance of inaccuracy and other related factors. We do not argue that selection bias is the only reason for inaccurate traffic forecasts in BOT projects; however, it does appear to be an intrinsic factor worthy of further attention and investigation.
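The mechanism behind this selection bias can be reproduced in a few lines. The sketch below is illustrative only (the Gaussian forecasts, the five competing proposals, and all parameter values are assumptions, not the paper's model): it draws unbiased forecasts for competing BOT proposals and shows that always selecting the highest one overestimates the true flow on average.

```python
import random

def selection_bias_demo(n_bidders=5, true_flow=100.0, sd=20.0,
                        n_trials=20000, seed=42):
    """Monte Carlo sketch: each trial draws unbiased forecasts for
    n_bidders competing proposals; the appraiser picks the highest.
    Returns the average selected forecast minus the true flow."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        forecasts = [rng.gauss(true_flow, sd) for _ in range(n_bidders)]
        total += max(forecasts)          # highest-flow-estimate criterion
    return total / n_trials - true_flow  # expected overestimation

bias = selection_bias_demo()
```

With five unbiased forecasters, the selected forecast overshoots the true flow by roughly one standard deviation on average; with a single bidder the bias disappears, even though no individual forecast is biased.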

2.
This study proposes a Reinforcement Learning (RL) based algorithm for finding optimal signal timings in Coordinated Signalized Networks (CSN) for a fixed set of link flows. For this purpose, the MOdified REinforcement Learning algorithm with TRANSYT-7F (MORELTRANS) model is proposed, combining an RL algorithm with TRANSYT-7F. The modified RL differs from other RL algorithms in that it exploits the best solution obtained from the previous learning episode by generating, at each episode, a sub-environment of the same size as the original environment. The TRANSYT-7F traffic model is used to determine the network performance index, namely the disutility index. A numerical application is conducted on a medium-sized coordinated signalized road network. The results indicate that MORELTRANS produced slightly better objective function values than the genetic algorithm (GA) in signal timing optimization, and outperformed hill climbing (HC). To show the capability of the proposed model under heavy demand, two cases in which link flows are increased by 20% and 50% with respect to the base case are considered. MORELTRANS is found to reach good signal timing solutions even as demand increases.
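The learn-a-green-split loop described above can be caricatured generically. The sketch below is not MORELTRANS and does not use TRANSYT-7F; it is a plain epsilon-greedy action-value learner on a toy two-approach delay index, included only to make the idea concrete (the cycle length, demand splits, and delay function are all hypothetical):

```python
import random

def delay(green, cycle=90.0, q1=0.6, q2=0.4):
    """Toy performance index: delay grows as either approach's green
    share shrinks (a stand-in for the TRANSYT disutility index)."""
    g2 = cycle - green
    return q1 * cycle / green + q2 * cycle / g2

def learn_green_split(episodes=3000, eps=0.2, seed=1):
    """Epsilon-greedy action-value learning over candidate green times."""
    rng = random.Random(seed)
    actions = list(range(20, 71, 5))        # candidate green times (s)
    q = {a: 0.0 for a in actions}           # action-value estimates
    n = {a: 0 for a in actions}
    for _ in range(episodes):
        a = rng.choice(actions) if rng.random() < eps else max(q, key=q.get)
        r = -delay(a)                       # reward = negative delay
        n[a] += 1
        q[a] += (r - q[a]) / n[a]           # incremental mean update
    return max(q, key=q.get)
```

Because the toy reward is deterministic, the value estimates converge after one visit per action and the learner settles on the green time (here 50 s of a 90 s cycle) that balances the two approaches' delays.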

3.
The current state of practice for predicting travel times assumes that the speeds along the various roadway segments remain constant over the duration of the trip. This approach produces large prediction errors, especially when segment speeds vary temporally. In this paper, we develop a data clustering and genetic programming approach for modeling and predicting the expected, lower, and upper bounds of dynamic travel times along freeways. The models obtained from the genetic programming approach are algebraic expressions that provide insights into the spatiotemporal interactions. The use of an algebraic equation also means that the approach is computationally efficient and suitable for real-time applications. Our algorithm is tested on a 37-mile freeway section encompassing several bottlenecks. The prediction error is demonstrated to be significantly lower than that produced by the instantaneous algorithm and the historical average, averaged over seven weekdays (p-value < 0.0001). Specifically, on congested days the proposed algorithm achieves more than a 25% and 76% reduction in prediction error over the instantaneous and historical average methods, respectively. When bagging is used in addition to genetic programming, the results show that the mean width of the travel time interval is less than 5 min for a 60–80 min trip.

4.
In this paper we present a continuous-time network loading procedure based on the Lighthill–Whitham–Richards (LWR) model (Lighthill and Whitham, 1955; Richards, 1956). A system of differential algebraic equations (DAEs) is proposed for describing traffic flow propagation, travel delay, and route choices. We employ a novel numerical apparatus to reformulate the scalar conservation law as a flow-based partial differential equation (PDE), which is then solved semi-analytically with the Lax–Hopf formula. This approach allows for an efficient computational scheme for large-scale networks. We embed this network loading procedure into the dynamic user equilibrium (DUE) model proposed by Friesz et al. (1993). The DUE model is solved as a differential variational inequality (DVI) using a fixed-point algorithm. Several numerical examples of DUE on networks of varying sizes are presented, including the Sioux Falls network with a significant number of paths and origin–destination (OD) pairs. The DUE model presented in this article can be formulated as a variational inequality (VI), as reported in Friesz et al. (1993). We present the Kuhn–Tucker (KT) conditions for that VI, which form a linear system for any given feasible solution, and use them to check whether a DUE solution has been attained. To solve for the KT multipliers, we present a decomposition of the linear system that allows efficient computation of the dual variables. The numerical solutions of DUE obtained from fixed-point iterations are tested against the KT conditions and validated as legitimate solutions.
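For reference, the scalar conservation law that the network loading procedure reformulates is the LWR model; written for the cumulative vehicle count, it takes the Hamilton-Jacobi form that the Lax–Hopf formula solves. A standard statement (notation assumed, not copied from the paper):

```latex
% LWR conservation law for density \rho(t,x), with flux q = \rho\, v(\rho):
\frac{\partial \rho}{\partial t}
  + \frac{\partial}{\partial x}\bigl(\rho\, v(\rho)\bigr) = 0
% Equivalent Hamilton--Jacobi form for the cumulative count N(t,x),
% where \rho = -\partial N/\partial x, flow = \partial N/\partial t,
% and \Psi is the fundamental diagram:
\frac{\partial N}{\partial t}
  - \Psi\!\left(-\frac{\partial N}{\partial x}\right) = 0
```

The Lax–Hopf formula then expresses N(t, x) as an infimum over initial and boundary conditions, which is what makes the semi-analytical network loading efficient on large networks.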

5.
Despite widespread growth in on-road public transport priority schemes, road management authorities have few tools to evaluate the impacts of these schemes on all road users. This paper describes a methodology developed in Melbourne, Australia to assist the road management authority, VicRoads, in evaluating trade-offs in the use of its limited road-space for new bus and tram priority projects. The approach employs traffic micro-simulation modelling to assess road-space re-allocation impacts, travel behaviour modelling to assess changes in travel patterns, and a social cost-benefit framework to evaluate impacts. The evaluation considers a comprehensive range of impacts, including the environmental benefits of improved public transport services and improvements in public transport reliability. Although improved bus and tram reliability is a major rationale for traffic priority, its use in previous evaluations is rare. The paper critiques previous approaches, describes the proposed method, and explores some of the results found in its application. A major finding is that, despite a more comprehensive approach to measuring the benefits of bus and tram priority, road-space reallocation is difficult to justify economically in road networks where public transport usage is low and car usage high. Strategies involving the balanced deployment of bus and tram priority measures, where the allocation of time and space to PT minimises negative traffic impacts, are shown to improve the overall management of road-space. A discussion of the approach is also provided, including suggestions for further methodological development.

6.
The use of multi-agent systems to model and to simulate real systems consisting of intelligent entities capable of autonomously co-operating with each other has emerged as an important field of research. This has been applied to a variety of areas, such as social sciences, engineering, and mathematical and physical theories. In this work, we address the complex task of modelling drivers’ behaviour through the use of agent-based techniques. Contemporary traffic systems have experienced considerable changes in the last few years, and the rapid growth of urban areas has challenged scientific and technical communities. Influencing drivers’ behaviour appears as an alternative to traditional approaches to cope with the potential problem of traffic congestion, such as the physical modification of road infrastructures and the improvement of control systems. It arises as one of the underlying ideas of intelligent transportation systems. In order to offer a good means to evaluate the impact that exogenous information may exert on drivers’ decision making, we propose an extension to an existing microscopic simulation model called Dynamic Route Assignment Combining User Learning and microsimulAtion (DRACULA). In this extension, the traffic domain is viewed as a multi-agent world and drivers are endowed with mental attitudes, which allow rational decisions about route choice and departure time. This work is divided into two main parts. The first part describes the original DRACULA framework and the extension proposed to support our agent-based traffic model. The second part is concerned with the reasoning mechanism of drivers modelled by means of a Beliefs, Desires, and Intentions (BDI) architecture. In this part, we use AgentSpeak(L) to specify commuter scenarios and special emphasis is given to departure time and route choices. 
This paper contributes in that respect by showing a practical way of representing and assessing drivers’ behaviour, and by demonstrating the adequacy of AgentSpeak(L) as a modelling language, as it provides clear and elegant specifications of BDI agents.

7.
This paper is a think piece on variations in the structure of stated preference studies when modelling the joint preferences of interacting agents who have the power to influence the attribute levels on offer. The approach proposed is an extension of standard stated choice methods, known as ‘stated endogenous attribute level’ (SEAL) analysis. It allows for interactive agents to adjust attribute levels off a base stated choice specification that are within their control, in an effort to reach agreement in an experimental setting. This accomplishes three goals: (1) the ability to place respondents in an environment that more closely matches interactive settings in which some attribute levels are endogenous to a specific agent, should the modeller wish to capture such behaviour; (2) the improved ability of the modeller to capture the behaviour in such settings, including a greater wealth of information on the related interaction processes, rather than simply outcomes; and (3) the expansion of the set of situations that the modeller can investigate using experimental data.

8.
Traditionally, researchers studying transportation choice have used data acquired either from household surveys or from broad, region-wide aggregates. At the disaggregate level, researchers often lack access to important variables or observations. This study investigates the potential usefulness of a proxy approach to modeling discrete-choice vehicle ownership: substituting narrow, area-based aggregate proxies drawn from large, publicly maintained datasets for missing micro-level explanatory variables. We use data from the 2000 Bay Area Travel Survey (BATS) and the contemporaneous U.S. Census file to compare three models of vehicle ownership, drawing area-wide proxies from increasing levels of aggregation. The models with proxies are compared with a parallel model that uses only survey data. The results indicate that the proxy models are preferred in terms of model selection criteria, and predict vehicle ownership as well as or better than the survey model. Parameter values produced by the proxy method effectively approximate those returned by household survey models in terms of coefficient sign and significance, particularly when the aggregate variables are representative of their household-level counterparts. The proxy model with the narrowest level of aggregation achieved the best fit, coefficient precision, and percentage of correct predictions.

9.
Employing a strategy of sampling alternatives is necessary for various transportation models that must deal with large choice sets. In this article, we propose a method to obtain consistent, asymptotically normal, and relatively efficient estimators for Logit Mixture models while sampling alternatives. Our method extends previous results for Logit and MEV models. We show that the practical application of the proposed method for Logit Mixture can reduce to a Naïve approach, in which the kernel is replaced by the usual sampling correction for Logit. We give theoretical support for previous applications of the Naïve approach, showing not only that it yields consistent estimators, but also providing its asymptotic distribution for proper hypothesis testing. We illustrate the proposed method using Monte Carlo experimentation and real data. The results provide further evidence that the Naïve approach is suitable and practical. The article concludes by summarizing the findings of this research, assessing their potential impact, and suggesting extensions of work in this area.
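For concreteness, the "usual sampling correction for Logit" that the Naïve approach re-uses adds the log of each sampled alternative's expansion factor to its systematic utility (McFadden's result). A minimal, generic sketch of the corrected choice probabilities over a sampled set, not the paper's estimator:

```python
import math

def corrected_logit_probs(v_sampled, expansion):
    """Logit probabilities over a sampled choice set D: each systematic
    utility v_i gets ln(expansion_i) added, where expansion_i corrects
    for the probability of drawing set D given that i was chosen."""
    adj = [v + math.log(e) for v, e in zip(v_sampled, expansion)]
    m = max(adj)                              # stabilize the exponentials
    exps = [math.exp(a - m) for a in adj]
    s = sum(exps)
    return [e / s for e in exps]
```

When all alternatives are sampled with equal probability, the correction cancels and the ordinary logit probabilities over the sampled set are recovered.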

10.
The effect of complex models of externalities on estimated optimal tolls
Transport externalities such as the costs of emissions and accidents are increasingly being used within appraisal and optimisation frameworks, alongside the more traditional congestion analysis, to set optimal transport policies. Models of externalities and their costs may be implemented via a simple constant cost per vehicle-km or via more complex flow- and speed-dependent approaches. This paper investigates the impact of using both simple and more complex models of CO2 emissions and accident costs on the optimal toll for car use and on the resulting welfare levels. The approach adopted uses a single-link model with a technical representation of the speed-flow relationship, reflecting common modelling practice. It is shown that using a more complex model of CO2 emitted increases the optimal toll significantly compared with a fixed-cost approach, while reducing the CO2 emitted only marginally. A number of accident models are used, and the impact on tolls is shown to depend upon the assumptions made. Where speed effects are included in the accident model, accident costs can increase compared to the no-toll equilibrium, so tolls should in this case be reduced compared with the congestion-optimal toll. Finally, it is shown that adding variable CO2 emission models along with a fixed accident cost per vehicle-km can increase the optimal toll by 44% while increasing the true welfare gain by only 8%. The results clearly demonstrate that model assumptions for externalities can have a significant impact on the resulting policies, and in the case of accidents the policies can be reversed.
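As a reference point, the "congestion-optimal toll" baseline in single-link analyses of this kind is the first-best marginal external cost toll. The sketch below uses a BPR-type speed-flow relation with hypothetical parameter values and omits the emission and accident cost terms that the paper studies:

```python
def bpr_time(flow, free_time=10.0, capacity=1000.0, alpha=0.15, beta=4):
    """BPR link travel time (min) as a function of flow (veh/h)."""
    return free_time * (1 + alpha * (flow / capacity) ** beta)

def marginal_cost_toll(flow, vot=0.2, **kw):
    """First-best congestion toll = flow * d(time)/d(flow) * value of
    time (vot in $/min); the derivative is taken numerically."""
    h = 1e-3
    dt = (bpr_time(flow + h, **kw) - bpr_time(flow - h, **kw)) / (2 * h)
    return flow * dt * vot
```

At 1000 veh/h (flow equal to capacity) this toy link yields a toll of flow times the marginal delay imposed on others times the value of time, about $1.20 per trip here.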
Simon Peter Shepherd has been at the Institute for Transport Studies since 1989; he gained his doctorate in 1994, applying state-space methods to the problem of traffic-responsive signal control in over-saturated conditions. His expertise lies in modelling and policy optimisation, ranging from detailed simulation models through assignment to strategic land use transport models. He is currently working on optimal cordon design and system dynamics approaches to strategic modelling.

11.
With the recent increase in the deployment of ITS technologies in urban areas throughout the world, traffic management centers have the ability to obtain and archive large amounts of data on the traffic system. These data can be used to estimate current conditions and predict future conditions on the roadway network. A general solution methodology for identifying the optimal aggregation interval sizes for four scenarios is proposed in this article: (1) link travel time estimation, (2) corridor/route travel time estimation, (3) link travel time forecasting, and (4) corridor/route travel time forecasting. The methodology explicitly considers traffic dynamics and frequency of observations. A formulation based on mean square error (MSE) is developed for each of the scenarios and interpreted from a traffic flow perspective. The methodology for estimating the optimal aggregation size is based on (1) the tradeoff between the estimated mean square error of prediction and the variance of the predictor, (2) the differences between estimation and forecasting, and (3) the direct consideration of the correlation between link travel time for corridor/route estimation and forecasting. The proposed methods are demonstrated using travel time data from Houston, Texas, that were collected as part of the automatic vehicle identification (AVI) system of the Houston Transtar system. It was found that the optimal aggregation size is a function of the application and traffic condition.
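The estimation-side tradeoff driving the optimal aggregation size can be caricatured as bias against variance: longer intervals average more observations (lower variance of the mean) but smear traffic dynamics (higher bias). The linear-trend bias term and every number below are assumptions for illustration, not the article's MSE formulation:

```python
def mse_tradeoff(interval, obs_per_min=6, noise_var=9.0, trend=0.5):
    """Sketch of the tradeoff: averaging over `interval` minutes cuts
    the variance of the mean but adds bias when travel time trends
    within the interval (trend in min of travel time per min)."""
    n = interval * obs_per_min          # observations in the interval
    variance = noise_var / n            # variance of the interval mean
    bias = trend * interval / 2         # the mean lags a linear trend
    return bias ** 2 + variance

best = min(range(1, 16), key=mse_tradeoff)  # optimal interval (min)
```

Here the minimum MSE lands at a 2-minute interval; a faster-changing trend pushes the optimum toward shorter intervals, matching the finding that the optimal size depends on traffic conditions.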

12.
This paper develops an agent-based modeling approach to predict multi-step-ahead experienced travel times using real-time and historical spatiotemporal traffic data. At the microscopic level, each agent represents an expert in a decision-making system. Each expert predicts the travel time for each time interval according to experiences from a historical dataset. A set of agent interactions is developed to preserve agents that correspond to traffic patterns similar to the real-time measurements, and to replace invalid agents, or agents with negligible weights, with new agents. The aggregation of each agent’s recommendation (a predicted travel time with an associated weight) provides the macroscopic-level output, namely the predicted travel time distribution. Probe vehicle data from a 95-mile freeway stretch along I-64 and I-264 are used to test the different predictors. The results show that the agent-based modeling approach produces the lowest prediction error compared with other state-of-the-practice and state-of-the-art methods (instantaneous travel time, historical average, and k-nearest neighbor), and maintains less than a 9% prediction error for trip departures up to 60 min into the future for a two-hour trip. Moreover, the confidence boundaries of the predicted travel times demonstrate that the proposed approach also predicts travel time confidence intervals with high accuracy. Finally, the proposed approach does not require offline training, making it easily transferable to other locations, and its fast computation allows it to be implemented in real-time applications in Traffic Management Centers.
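The aggregation step, with each expert recommending a travel time and the weights tracking how well each expert has matched recent measurements, can be sketched generically (the weighting rule below is an assumption, not the paper's interaction protocol):

```python
def ensemble_predict(experts, weights):
    """Weighted mix of the experts' recommended travel times: the
    macroscopic prediction from the microscopic agents."""
    s = sum(weights)
    return sum(t * w for t, w in zip(experts, weights)) / s

def reweight(experts, observed, weights, decay=0.9):
    """Shrink the weight of experts whose recommendation missed the
    observed travel time; a stand-in for the agent interactions."""
    return [w * decay ** abs(t - observed)
            for t, w in zip(experts, weights)]
```

After one observation near the first expert's recommendation, the second expert's weight decays and the ensemble prediction moves toward the better expert.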

13.
Recent experience with the design of bus services in Santiago, Chile, seems to confirm Jansson's (1980) assertion that observed planned bus frequencies are too low and bus sizes too large. We offer an explanation based upon the relation between cost coverage, pricing, and optimal design variables. We recall that average social cost decreases with patronage, which generates an optimal monetary fare below the operators' average cost, inducing an optimal subsidy. We then compare the optimal frequency and bus size (those that minimize total social costs) with those that minimize operators' costs only. We show that an active constraint on operators' expenses is equivalent to diminishing the value of users' time in the optimal design problem. Inserting this property back into the optimal pricing scheme, we conclude that a self-financing constraint, if active, always yields an inferior solution: a lower frequency and, under some circumstances, larger-than-optimal buses.
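Jansson's point (planned frequencies too low, buses too large when operators' costs dominate) drops out of the classic square-root rule for total social cost. A minimal sketch with hypothetical cost numbers, not the paper's design problem:

```python
import math

def optimal_frequency(patronage, wait_value=10.0, bus_cost=50.0):
    """Frequency minimizing operator cost + user waiting cost:
    C(f) = bus_cost * f + wait_value * patronage / (2 f)
    => f* = sqrt(wait_value * patronage / (2 * bus_cost)),
    the square-root rule (wait ~ half the headway)."""
    return math.sqrt(wait_value * patronage / (2 * bus_cost))

def operator_only_frequency(patronage, bus_capacity=150.0):
    """Ignoring users' time: just enough buses to carry the demand,
    which is what a binding budget constraint pushes toward."""
    return patronage / bus_capacity
```

With these hypothetical numbers the social optimum runs 10 buses/h while a purely operator-cost, capacity-covering rule runs fewer, mirroring the observed too-low frequencies (and correspondingly larger buses).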

14.
The purpose of our study is to develop a "corrected average emission model," i.e., an improved average speed model that accurately calculates CO2 emissions on the road. When emissions from the central roads of a city are calculated, the existing average speed model does not fully reflect the driving behavior of a vehicle that accelerates and decelerates due to signals and traffic. We therefore verified the accuracy of the average speed model, analyzed the causes of errors using an instantaneous model with second-by-second data from driving in a city center, and then developed a corrected model with improved accuracy. We collected GPS data from probe vehicles, and calculated and analyzed the average emissions and instantaneous emissions per link unit. Our results showed that the average speed model underestimated CO2 emissions as acceleration and idle time increased in the speed range of 20 km/h and below, the range typical of traffic congestion. Based on these results, we analyzed the relationship between average emissions and instantaneous emissions according to the average speed per link unit, and developed a model that calculates CO2 emissions more accurately at 20 km/h and below.

15.
To accurately estimate real-world vehicle emissions at 1 Hz, the road grade for each second of data must be quantified. Failure to incorporate road grade can result in over- or underestimation of a vehicle's power output and hence inaccuracy in the instantaneous emission estimate. This study proposes a simple LiDAR (Light Detection And Ranging)-GIS (Geographic Information System) road grade estimation methodology, using GIS software to interpolate the elevation for each second of data from a Digital Terrain Map (DTM). On-road carbon dioxide (CO2) emissions from a passenger car were recorded by a Portable Emission Measurement System (PEMS) over 48 test laps through an urban traffic network. The test lap was divided into 8 sections for micro-scale analysis. The PHEM instantaneous emission model (Hausberger, 2003) was employed to estimate the total CO2 emission for each lap and section. Adding the LiDAR-GIS road grade to the PHEM modelling improved the accuracy of the CO2 emission predictions: the average PHEM estimate (with road grade) was 93% of the PEMS-measured section total CO2 emission (n = 288), with 90% of the PHEM estimates between 80% and 110% of the PEMS-recorded value. The research suggests that instantaneous emission modelling with LiDAR-GIS calculated road grade is a viable method for generating accurate real-world micro-scale CO2 emission estimates. The sensitivity of the CO2 emission predictions to road grade was also tested by lessening and exaggerating the gradient profiles, demonstrating that assuming a flat profile could cause considerable error in real-world CO2 emission estimation.
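The reason road grade matters at 1 Hz is the power term it adds to the vehicle model. A back-of-envelope sketch (the vehicle mass, speed, and small-angle approximation are assumptions for illustration):

```python
def road_grade(elev_prev, elev_next, dist):
    """Grade between two interpolated DTM elevations (m) over the
    distance travelled (m); positive = uphill."""
    return (elev_next - elev_prev) / dist

def grade_power_w(mass_kg, speed_ms, grade):
    """Extra power demand (W) from climbing: m * g * v * sin(theta),
    with sin(theta) ~ grade for small angles."""
    return mass_kg * 9.81 * speed_ms * grade
```

A 2% grade at 10 m/s adds roughly 2.9 kW of power demand for a 1500 kg car, which is why a flat-profile assumption can distort second-by-second CO2 estimates considerably.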

16.
For uninterrupted traffic flow, it is well known that the fundamental diagram (FD) describes the relationship between traffic flow and density under steady state. For interrupted traffic flow on a signalized road, it has been recognized that the arterial fundamental diagram (AFD) is significantly affected by signal operations, but little research to date has discussed in detail how signal operations shape the AFD. In this paper, based upon empirical observations from high-resolution, event-based traffic signal data collected on a major arterial in the Twin Cities area, we study the impacts of the g/C ratio, signal coordination, and turning movements on the cycle-based AFD, which describes the relationship between traffic flow and occupancy within a signal cycle. By microscopically investigating individual vehicle trajectories from the event-based data, we demonstrate that not only does the g/C ratio constrain the capacity of a signalized approach, but poor signal coordination and turning movements from upstream intersections also significantly reduce it. We show that an arterial link may not be congested even with high occupancy values; such high values can result from queues building up during the red phase and occupying the detector, i.e. the Queue-Over-Detector (QOD) phenomenon discussed in this paper. More importantly, by removing the impact of QOD, a stable form of the AFD is revealed, which can be used to identify three regimes: under-saturation, saturation, and over-saturation with queue spillovers. We believe the stable form of the AFD is of great importance for traffic signal control because of its ability to identify traffic states on a signalized link.

17.
This paper presents a comprehensive econometric modelling framework for daily activity program generation, producing day-specific activity programs over a week-long time span. The activity types considered are 15 generic categories of non-skeletal, flexible activities. Under daily time budget and non-negative participation rate constraints, the models predict optimal sets of frequencies for the activities under consideration (given the average duration of each activity type). The daily time budget treats at-home basic needs and night sleep together as a composite activity. The concept of a composite activity preserves the dynamics and continuity of time allocation and activity/travel behaviour by encapsulating the activity types that are not of direct interest in travel demand modelling. Workers' total working hours (a skeletal activity, and not part of the non-skeletal activity time budget) are included as a variable to accommodate scheduling effects within the generation model of non-skeletal activities. Incorporating the previous day's total executed activities as variables introduces day-to-day dynamics into the activity program generation models. The possibility of zero frequency for any specific activity is ensured by the Kuhn-Tucker optimality conditions used to formulate the model structure. The models derive the activity program set using a random utility maximization approach. The empirical models are estimated using the 2002–2003 CHASE survey data set collected in Toronto.

18.
This paper considers the problem of short- to mid-term aircraft trajectory prediction, that is, the estimation of where an aircraft will be located over a 10–30 min time horizon. Such a problem is central to decision support tools, especially conflict detection and resolution algorithms. It also arises when an air traffic controller observes traffic on the radar screen and tries to identify convergent aircraft that may be in conflict in the near future. An innovative approach for aircraft trajectory prediction is presented, based on local linear functional regression, which combines data preprocessing, localizing, and solving the linear regression using wavelet decomposition. The algorithm takes into account only past radar tracks and does not use any physical or aeronautical parameters. The approach has been successfully applied to aircraft trajectories between several airports using a data set covering one year of air traffic over France. The method is intrinsic and independent of airspace structure.

19.
This paper describes the development of a probabilistic formulation that provides globally optimal selection and allocation of a fleet of buses in the private transportation system of an organization, where a third party is hired to provide transportation for its employees and their dependents. In this system, a fleet of buses must be selected and allocated to serve employees and their dependents on different prescheduled trips along different routes from the organization’s headquarters and residential compound, where the round-trip times of scheduled trips are subject to uncertainty due to random delays. We propose a probabilistic approach based on 0-1 integer programming to determine the optimal number and size of buses assigned to a set of prescheduled trips in a particular time interval. Examples and a case study illustrate the applicability and suitability of the proposed approach.
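Stripped of the uncertainty in round-trip times, the core selection-and-allocation decision is a small 0-1/integer program. The brute-force sketch below assigns one bus type to each concurrent trip at minimum cost, subject to capacity covering demand; the paper's probabilistic delay treatment is omitted and all numbers are hypothetical:

```python
from itertools import product

def allocate_buses(trip_demands, bus_types):
    """Brute-force stand-in for the 0-1 integer program: choose one bus
    type per concurrent trip so capacity covers demand at minimum cost.
    bus_types = [(capacity, cost), ...]; returns (best_cost, assignment)."""
    best_cost, best = float("inf"), None
    for combo in product(range(len(bus_types)), repeat=len(trip_demands)):
        # feasibility: each trip's assigned bus must seat its demand
        if all(bus_types[b][0] >= d for b, d in zip(combo, trip_demands)):
            cost = sum(bus_types[b][1] for b in combo)
            if cost < best_cost:
                best_cost, best = cost, combo
    return best_cost, best
```

For two trips with 30 and 55 riders and bus types of (40 seats, cost 100) and (60 seats, cost 150), the cheapest feasible mix costs 250: the small bus covers the first trip and the large one the second. A real solver (branch-and-bound or an MIP package) replaces the enumeration at scale.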

20.
Stated choice experiments have proven to be a powerful tool for eliciting preferences across a broad range of choice settings. This paper outlines the elements of a group-based experiment designed for interdependent urban freight stakeholders, along with the procedure for administering the questionnaire sequentially. The focus is on the design of a computer-assisted personal survey instrument and the value of disseminating the details of a new approach to designing and collecting stated choice data for interacting agents. The paper also discusses how to specify a reference alternative, and how to recruit appropriate real-market or representative decision-making group members to participate in a subsequent phase of the survey, which incorporates the reference alternative and contextual information from the initial phase. The empirical strategy, set out in some detail, provides a new framework within which to understand more fully the role that specific attributes influencing freight distribution chains, such as variable user charges, might play, and who in the supply chain is affected by those attributes in terms of willingness to pay for gains in distribution efficiency.
