Similar Literature
20 similar records found
1.
The development and calibration of complex traffic models demand parsimonious techniques, because such models often involve hundreds of thousands of unknown parameters. The Weighted Simultaneous Perturbation Stochastic Approximation (W-SPSA) algorithm has been proven more efficient than its predecessor SPSA (Spall, 1998), particularly in situations where the correlation structure of the variables is not homogeneous. This is crucial in traffic simulation models, where some variables (e.g. readings from certain sensors) are strongly correlated, both in time and space, with other variables (e.g. certain OD flows). For reasonably sized traffic networks, this difference matters given computational constraints. However, W-SPSA relies on determining a proper weight matrix (W) that represents those correlations; this has so far been an open problem, and only heuristic approaches to obtain it have been considered. This paper presents W-SPSA in a formally comprehensive way, in which SPSA becomes an instance of W-SPSA, and explores alternative approaches for determining the matrix W. We demonstrate that, relying on a few simplifications that marginally affect the final solution, we can obtain W matrices that considerably outperform SPSA. We analyse the performance of the proposed algorithm in two applications on motorway networks in Singapore and Portugal, using a dynamic traffic assignment model and a microscopic traffic simulator, respectively.
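For readers unfamiliar with the baseline method, a minimal sketch of one plain SPSA iteration (not the authors' W-SPSA, which additionally weights the gradient components using the matrix W) might look like this, with a toy quadratic loss standing in for an expensive simulation objective:

```python
import numpy as np

def spsa_step(theta, loss, a, c, rng):
    """One SPSA iteration: approximate the whole gradient from only two
    loss evaluations under a simultaneous random perturbation."""
    delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Bernoulli +/-1 perturbation
    g_hat = (loss(theta + c * delta) - loss(theta - c * delta)) / (2 * c * delta)
    # W-SPSA would additionally weight the measurement contributions via W here.
    return theta - a * g_hat

# Toy quadratic objective standing in for a simulation loss.
loss = lambda th: float(np.sum(th ** 2))
theta = np.array([2.0, -3.0])
rng = np.random.default_rng(0)
for _ in range(200):
    theta = spsa_step(theta, loss, a=0.05, c=0.01, rng=rng)
```

Only two loss evaluations are needed per step regardless of dimension, which is why SPSA-type methods scale to the hundreds of thousands of parameters mentioned above.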

2.
Simultaneous perturbation stochastic approximation (SPSA) is an efficient and well established optimization method that approximates gradients from successive objective function evaluations. It is especially attractive for high-dimensional problems and has been successfully applied to the calibration of Dynamic Traffic Assignment (DTA) models. This paper presents an enhanced SPSA algorithm, called Weighted SPSA (W-SPSA), which incorporates information on spatial and temporal correlation in a traffic network to limit the impact of noise and improve convergence and robustness. W-SPSA outperforms the original SPSA algorithm by reducing the noise generated by uncorrelated measurements in the gradient approximation, especially for DTA models of sparsely correlated large-scale networks with a large number of time intervals. Comparisons between SPSA and W-SPSA have been performed through rigorous synthetic tests, and the application of W-SPSA to the calibration of real-world DTA networks is demonstrated with a case study of the entire expressway network in Singapore.

3.
Use of traffic simulation has increased in recent decades, and this high-fidelity modelling, along with moving-vehicle animation, has allowed transportation decisions to be made with greater confidence. During this time, traffic engineers have been encouraged to embrace the process of calibration, in which steps are taken to reconcile simulated and field-observed performance. According to international surveys, experts, and conventional wisdom, existing (non-automated) methods of calibration have been difficult or inadequate. There has been extensive research on improved calibration methods, but many of these efforts have not delivered the flexibility and practicality required by real-world engineers. With this in mind, a patent-pending (US 61/859,819) architecture for software-assisted calibration was developed to maximize practicality, flexibility, and ease of use. This architecture is called SASCO (Sensitivity Analysis, Self-Calibration, and Optimization). The original optimization method within SASCO was based on "directed brute force" (DBF) searching: exhaustive evaluation of alternatives in a discrete, user-defined search space. Simultaneous Perturbation Stochastic Approximation (SPSA) has also gained favor as an efficient method for optimizing computationally expensive, "black-box" traffic simulations, and was likewise implemented within SASCO. This paper uses synthetic and real-world case studies to assess the qualities of DBF and SPSA, so that each can be applied in the right situations. SPSA was found to be the faster method, which is important when calibrating numerous inputs, but DBF was more reliable. Additionally, DBF was better than SPSA for sensitivity analysis and for calibrating complex inputs. Regardless of which optimization method is selected, the SASCO architecture appears to offer a new and practice-ready level of calibration efficiency.

4.
We develop theoretical and computational tools which can appraise traffic flow models and optimize their performance against current time-series traffic data and prevailing conditions. The proposed methodology perturbs the parameter space and undertakes path-wise analysis of the resulting time series. Most importantly, the approach is valid even under non-equilibrium conditions and is based on procuring path-space (time-series) information. More generally, we propose a mathematical methodology which quantifies traffic information loss. In particular, the method undertakes sensitivity analysis on available traffic data and optimizes the traffic flow model based on two information-theoretic tools which we develop. One of them, the relative entropy rate, can adjust and optimize model parameter values in order to reduce the information loss. More precisely, we use the relative entropy rate as an information metric between time-series data and parameterized stochastic dynamics describing a microscopic traffic model. On the other hand, the path-space Fisher Information Matrix (pFIM) reduces model complexity and can even be used to control fidelity. This is achieved by eliminating unimportant model parameters or their combinations, which results in easier regression of parametric models with a smaller number of parameters. The method reconstructs the Markov chain and emulates the traffic dynamics through Monte Carlo simulations. We use the microscopic interaction model from Sopasakis and Katsoulakis (2006) as a representative traffic flow model to illustrate this parameterization methodology. During the comparisons we use both synthetic and real rush-hour traffic data from highway US-101 in Los Angeles, California.
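For finite-state Markov chains, the relative entropy rate used as the information metric has a closed form, H(P‖Q) = Σᵢ μᵢ Σⱼ Pᵢⱼ log(Pᵢⱼ/Qᵢⱼ), where μ is the stationary distribution of P. A small illustrative implementation (not the paper's code; the matrices are hypothetical) is:

```python
import numpy as np

def stationary(P):
    """Stationary distribution of a row-stochastic matrix P
    (left eigenvector of the eigenvalue closest to 1)."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

def relative_entropy_rate(P, Q):
    """H(P || Q) = sum_i mu_i sum_j P_ij log(P_ij / Q_ij)."""
    mu = stationary(P)
    with np.errstate(divide="ignore", invalid="ignore"):
        term = np.where(P > 0, P * np.log(P / Q), 0.0)
    return float(mu @ term.sum(axis=1))

# Two hypothetical transition matrices (e.g. data-driven vs. model dynamics).
P = np.array([[0.9, 0.1], [0.2, 0.8]])
Q = np.array([[0.8, 0.2], [0.3, 0.7]])
```

The rate is zero exactly when the two chains share transition probabilities, which is why minimizing it over model parameters reduces information loss.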

5.
The predictions of a well-calibrated traffic simulation model are much more valid if made for various conditions. Variation in traffic can arise from many factors, such as time of day, work zones and weather. Calibrating traffic simulation models across traffic conditions requires larger datasets to capture the stochasticity of those conditions. In this study we use datasets spanning long time periods to incorporate variability in traffic flow and speed. However, large data poses a challenge in terms of computational effort: as the number of stochastic factors increases, numerical methods suffer from the curse of dimensionality. We therefore propose a novel methodology to address the computational complexity of calibrating simulation models under highly stochastic traffic conditions. The methodology is based on sparse grid stochastic collocation, which treats each stochastic factor as a separate dimension and uses a limited number of points at which simulation and calibration are performed. A computationally efficient interpolant is constructed to generate the full distribution of the simulated flow output. We use real-world examples to calibrate for different times of day and conditions, and show that this methodology is much more efficient than traditional Monte Carlo-type sampling. We validate the model using a hold-out dataset and also show the drawback of using limited data for the calibration of a macroscopic simulation model. We further discuss the limits of the predictive ability of a single calibrated model across all conditions.
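The collocation idea is easiest to see in one dimension: the expensive simulator is evaluated only at a handful of Chebyshev collocation points, and a cheap interpolant answers all other queries. A sketch under that simplification (the paper's sparse-grid construction extends this to many stochastic dimensions via Smolyak-type grids; the "simulator" below is a hypothetical stand-in function):

```python
import numpy as np

def collocation_surrogate(simulator, a, b, n_points=7):
    """Fit a cheap polynomial surrogate to an expensive simulator by
    evaluating it only at Chebyshev collocation points on [a, b]."""
    k = np.arange(n_points)
    nodes = 0.5 * (a + b) + 0.5 * (b - a) * np.cos((2 * k + 1) * np.pi / (2 * n_points))
    values = np.array([simulator(x) for x in nodes])  # the only expensive calls
    coeffs = np.polyfit(nodes, values, n_points - 1)  # interpolating polynomial
    return lambda x: np.polyval(coeffs, x)

# Stand-in "simulator": a smooth flow response to a demand scaling factor.
sim = lambda d: 1800.0 * d / (1.0 + 0.4 * d ** 2)
surrogate = collocation_surrogate(sim, 0.5, 2.0)
```

After the seven simulator calls, `surrogate` can be queried thousands of times at negligible cost, which is what makes generating full output distributions tractable.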

6.
Roadside trees in Singapore are regularly trimmed for traffic safety, and roadside tree-trimming is a typical short-term work zone project. To implement such a project, contractors usually divide the entire work zone into multiple subwork zones of uniform length. This paper aims to determine an optimal subwork zone strategy for short-term work zone projects on four-lane two-way freeways with time-window and uniform subwork-zone-length constraints. A deterministic queuing model is employed to estimate the total user delay caused by the work zone project, taking variable traffic speeds into account. Based on the user delay estimates, the paper builds a minimization model, subject to the time-window and uniform-length constraints, for the optimal subwork zone strategy problem. It also presents a variant of the minimization model to examine the impact of an unequal subwork zone length constraint. Since these minimization models are mixed-integer non-differentiable optimization problems, an iterative algorithm embedding the genetic simulated annealing method is proposed to solve them. Finally, a numerical example investigates the effectiveness of the proposed models. Copyright © 2010 John Wiley & Sons, Ltd.
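The deterministic queuing estimate of user delay follows from cumulative counts: a queue grows whenever arrival flow exceeds work-zone capacity, and total delay is the queue accumulated over time. A minimal sketch with hypothetical demand figures (not the paper's data):

```python
def workzone_delay(arrivals, capacity, dt=1.0):
    """Total user delay (veh·h) from deterministic queuing: the queue
    grows when arrival flow exceeds the work-zone capacity."""
    queue, delay = 0.0, 0.0
    for lam in arrivals:                      # arrival flow (veh/h) per interval
        queue = max(0.0, queue + (lam - capacity) * dt)
        delay += queue * dt                   # veh·h accumulated this interval
    return delay

# Hypothetical hourly demand profile against a 1200 veh/h work-zone capacity.
demand = [1000, 1500, 1500, 1000, 800]
total = workzone_delay(demand, capacity=1200)  # → 1300.0 veh·h
```

The optimization in the paper then trades this delay against setup costs when choosing how to slice the work zone into subzones.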

7.
Work zones on motorways necessitate the closure of one or more lanes, which may significantly reduce traffic flow capacity and efficiency, disrupt traffic flow, create congestion, and increase accident risk. Real-time traffic control by means of green-red traffic signals on the motorway mainstream is proposed in order to achieve safer merging of vehicles entering the work zone and, at the same time, maximize throughput and reduce travel delays. A significant issue neglected in previous research is the impact of the distance between the merge area and the traffic lights, which determines, in combination with the employed real-time control strategy, how efficiently vehicles merge. The control strategy applied for real-time signal operation is based on an ALINEA-like proportional-integral (PI-type) feedback regulator. To achieve maximum performance of the control strategy, some calibration of the regulator's parameters may be necessary. The calibration is first conducted manually, via a typical trial-and-error procedure. In an additional investigation, the recently proposed learning/adaptive fine-tuning (AFT) algorithm is employed to fine-tune the regulator parameters automatically. Experiments conducted with a microscopic simulator for a hypothetical work zone infrastructure demonstrate the potentially high benefits of the control scheme.
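The core of an ALINEA-like regulator is a simple integral feedback law: the release rate is nudged in proportion to the gap between target and measured occupancy downstream of the merge. A sketch with hypothetical gain and rate bounds (the paper's PI-type variant adds a proportional term to this integral part):

```python
def alinea_step(r_prev, occ_meas, occ_target, K_R=70.0, r_min=200.0, r_max=1800.0):
    """One step of an ALINEA-style integral regulator: adjust the signal's
    release rate (veh/h) toward the target occupancy at the merge."""
    r = r_prev + K_R * (occ_target - occ_meas)
    return min(r_max, max(r_min, r))  # saturate to feasible signal rates

r = 1000.0
for occ in [0.30, 0.28, 0.25, 0.22]:   # measured occupancies over time
    r = alinea_step(r, occ, occ_target=0.20)
```

Because measured occupancy stays above the 0.20 target throughout, the regulator steadily lowers the release rate; the gain `K_R` is exactly the kind of parameter the AFT algorithm would fine-tune.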

8.
Simulating driving behavior with high accuracy allows short-term prediction of traffic parameters, such as speeds and travel times, which are basic components of Advanced Traveler Information Systems (ATIS). Models with static parameters are often unable to respond to varying traffic conditions and to simulate the corresponding driving behavior effectively. It is therefore widely accepted that model parameters vary in multiple dimensions: across individual drivers, but also spatially across the network and temporally. While on-line predictive models have typically been macroscopic or mesoscopic, for computational and data reasons, microscopic models are now becoming practical for dynamic applications. In this research, we develop a methodology for the online calibration of microscopic traffic simulation models for dynamic multi-step prediction of traffic measures, and apply it to car-following models, one of the key components of microscopic traffic simulation. The methodology is illustrated using real trajectory data from an experiment conducted in Naples, with a well-established car-following model. The application with dynamic model parameters consistently outperforms the corresponding statically calibrated model, and leads to less than 10% error in speed prediction even ten steps into the future, in all considered datasets.

9.
To connect microscopic driving behaviors with their macroscopic correspondence (i.e., the fundamental diagram), this study proposes a flexible traffic stream model derived from a novel car-following model under steady-state conditions. Its four driving-behavior-related parameters, i.e., reaction time, calmness parameter, and speed- and spacing-related sensitivities, have a clear effect in shaping the fundamental diagram. Its boundary conditions and homogeneous case are analyzed in detail and compared with two other models (the Longitudinal Control Model and the Intelligent Driver Model). In particular, the model formulations and properties under Lagrangian coordinates provide a new perspective from which to revisit traffic flow, and complement those under Eulerian coordinates. A calibration methodology that incorporates the monkey algorithm with dynamic adaptation is employed to calibrate the model, based on real-field data from a wide range of locations. Results show that the model is flexible enough to fit these traffic data and performs better than nine other models. Finally, a concrete transportation application is designed, in which the impact of three critical parameters on vehicle trajectories and shock waves is tested under three representations (defined in x-t, n-t and x-n coordinates, respectively), and the macro- and micro-solutions for shock waves agree well with each other. In summary, this traffic stream model, with its flexibility and efficiency, has good potential for level-of-service analysis and transportation planning.

10.
Fuel consumption and pollutant emissions can be assessed by coupling a microscopic traffic flow model with an instantaneous emission model. Traffic models are usually calibrated using goodness-of-fit indicators related to traffic behavior. This paper therefore investigates how such a calibration influences the accuracy of fuel consumption, NOx and PM estimates. Two traffic models are investigated: Newell and Gipps. The Gipps model provides the closest simulated trajectories when compared to real ones. Interestingly, the ranking is reversed for fuel consumption, NOx and PM emissions. For both models, the emissions of single vehicles are very sensitive to the calibration. This is confirmed by a global sensitivity analysis of the Gipps model, which shows that non-optimal parameters significantly increase the variance of the outputs. Fortunately, this is no longer the case when emissions are calculated for a group of many vehicles: the mean errors for platoons are close to 10% for the Gipps model and always lower than 4% for the Newell model. Another interesting property is that the optimal parameters for each vehicle can be replaced by their mean values with no discrepancy for the Newell model, and only low discrepancies for the Gipps model, when calculating the different emission outputs. Finally, this study presents preliminary results showing that multi-objective calibration methods are a promising direction for future work on the Gipps model: the accuracy of vehicle emissions can be greatly improved with a negligible penalty in traffic-model accuracy.
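Newell's simplified car-following model, one of the two models compared, has an especially compact form: the follower reproduces the leader's trajectory shifted backwards by a time lag and a space gap. A sketch with illustrative numbers (discretized to one-second steps; the lag and gap are the calibrated parameters):

```python
def newell_follower(leader_x, tau_steps, d):
    """Newell's simplified car-following: the follower repeats the leader's
    trajectory shifted tau_steps back in time and d metres back in space."""
    return [leader_x[max(0, t - tau_steps)] - d for t in range(len(leader_x))]

leader = [0, 20, 40, 60, 80, 100]                    # leader position each second (m)
follower = newell_follower(leader, tau_steps=1, d=10)
# → [-10, -10, 10, 30, 50, 70]
```

The two-parameter simplicity is one plausible reason mean parameter values transfer so well across vehicles for the Newell model in the study's platoon-level results.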

11.
Despite its importance in macroscopic traffic flow modeling, comprehensive methods for calibrating the fundamental diagram are very limited. Conventional empirical methods adopt a steady-state analysis of aggregate traffic data collected from measurement devices installed at a particular site, without considering traffic dynamics, so the simulation may not adapt to the variability of the data. Moreover, determining the fundamental diagram for each detection site is often infeasible. To remedy this, this study presents an automatic calibration method that estimates the parameters of a fundamental diagram through a dynamic approach. Simulated flow from the cell transmission model is compared against measured flow, and an optimization is conducted to minimize the discrepancy between model-generated data and real data. The empirical results show that the proposed automatic calibration algorithm can significantly improve the accuracy of traffic state estimation, by adapting to the variability of traffic data, when compared with several existing methods under both recurrent and abnormal traffic conditions. Results also highlight the robustness of the proposed algorithm. The automatic calibration algorithm provides a powerful tool for model calibration when freeways are equipped with sparse detectors, when new traffic surveillance systems lack comprehensive traffic data, or when many detectors in aging systems have lost their effectiveness. Furthermore, the proposed method is useful for off-line model calibration under abnormal traffic conditions, for example incident scenarios. Copyright © 2015 John Wiley & Sons, Ltd.
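For reference, one update of the cell transmission model whose fundamental-diagram parameters (capacity Q and jam storage N, plus the backward-wave ratio delta) are the calibration targets; a minimal single-link sketch with free outflow at the downstream end and hypothetical numbers:

```python
def ctm_step(n, Q, N, delta=1.0):
    """One cell-transmission-model update: flow between cells is the
    minimum of upstream sending and downstream receiving capacity."""
    cells = len(n)
    y = [0.0] * (cells + 1)
    for i in range(1, cells):
        y[i] = min(n[i - 1], Q, delta * (N - n[i]))  # veh advanced per step
    y[cells] = min(n[-1], Q)                          # free exit downstream
    return [n[i] + y[i] - y[i + 1] for i in range(cells)]

n = [10.0, 5.0, 0.0]              # vehicles currently stored in each cell
n = ctm_step(n, Q=6.0, N=12.0)    # → [4.0, 6.0, 5.0]
```

Calibration in the paper amounts to choosing Q, N and delta so that the flows `y` generated by repeated updates track the detector measurements.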

12.
13.
This paper shows that the behavior of driver models, either individually or interacting in stochastic traffic simulation, is affected by the accuracy of empirical vehicle trajectories. To this end, a "traffic-informed" methodology is proposed to restore the physical and platoon integrity of trajectories in a finite time-space domain, and it is applied to one NGSIM I80 dataset. However, as the actual trajectories are unknown, it is not possible to verify directly whether the reconstructed trajectories are really "nearer" to the actual unknowns than the original measurements. Therefore, a simulation-based validation framework is proposed that is also able to verify the efficacy of the reconstruction methodology indirectly. The framework exploits the main feature of NGSIM-like data: the concurrent view of individual driving behaviors and emerging macroscopic traffic patterns. It shows that, at the scale of individual models, the accuracy of trajectories affects the distribution and the correlation structure of lane-changing model parameters (i.e. drivers' heterogeneity), while it has very little impact on car-following calibration. At the scale of traffic simulation, when models interact in trace-driven simulation of the I80 scenario (multi-lane heterogeneous traffic), their ability to reproduce the observed macroscopic congested patterns is noticeably higher when model parameters from reconstructed trajectories are applied. These results are mainly due to lane changing, and they also constitute the sought indirect validation of the proposed data reconstruction methodology.

14.
The use of microscopic simulation models to assess the likely effects of new traffic management applications and changes in vehicle technology is becoming increasingly popular. However, the validity of the models is a topic of increasing concern, as the quality of the presentation often exceeds a model's ability to predict what is likely to happen. Traditionally, model validity has been ascertained by comparing outputs aggregated at a macroscopic level, such as speed, flow and lane use, against real data. Little microscopic comparison is generally possible and, where it is done, there is often no separation of the calibration and validation processes. This paper demonstrates how microscopic validation may be undertaken when suitable data are available, in this case time-series data collected by an instrumented vehicle, and its use in validating the car-following performance of a fuzzy-logic-based car-following model. Good agreement was attained between the simulated model and observed data, primarily using a root mean square error indicator. Lastly, a brief comparison of the new model with a number of existing formulations has also been undertaken.

15.
A large number of cellular automata (CA) based traffic flow models have been proposed in the recent past. Often, the speed-flow-density relations obtained from these models are simply presented, and their apparent similarity to observed relations is cited as the reason for considering them valid models of traffic flow. Hardly any attempt has been made to comprehensively study the microscopic properties (like time-headway distribution, acceleration noise, stability in car-following situations, etc.) of the simulated streams. This article proposes a framework for such evaluations and presents results from the evaluation of six existing CA-based models. The results show that none of them satisfies all the properties. A new model proposed by the authors to overcome these shortcomings is briefly presented, together with results supporting its improved performance.
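The canonical example of such a CA model is the Nagel-Schreckenberg update rule, and its simulated streams are exactly the kind a framework like this would probe for time-headway and stability properties. A single-lane ring-road sketch with illustrative parameters:

```python
import random

def nasch_step(pos, vel, L, vmax=5, p=0.3, rng=None):
    """One Nagel-Schreckenberg update on a ring of L cells:
    accelerate, brake to the gap ahead, randomize, then move (parallel)."""
    rng = rng or random.Random()
    order = sorted(range(len(pos)), key=lambda i: pos[i])
    for k, i in enumerate(order):
        j = order[(k + 1) % len(order)]          # vehicle ahead on the ring
        gap = (pos[j] - pos[i] - 1) % L          # empty cells in between
        vel[i] = min(vel[i] + 1, vmax, gap)      # accelerate, then brake
        if vel[i] > 0 and rng.random() < p:
            vel[i] -= 1                          # random slowdown
    for i in range(len(pos)):                    # move all vehicles at once
        pos[i] = (pos[i] + vel[i]) % L
    return pos, vel

pos, vel = [0, 10, 20, 30], [0, 0, 0, 0]
rng = random.Random(1)
for _ in range(50):
    pos, vel = nasch_step(pos, vel, L=40, rng=rng)
```

Because velocity is capped at the gap before anyone moves, the update is collision-free by construction; recording the times at which vehicles pass a fixed cell yields the time-headway distributions the article evaluates.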

16.
Macroscopic pedestrian models for bidirectional flow analysis are limited in describing microscopic dynamics at crosswalks. Pedestrian behavior at crosswalks is typically characterized by an evasive effect with conflicting pedestrians and vehicles and a following effect with leading pedestrians. This study proposes a hybrid approach (i.e., a route-search and social-force-based approach) for modeling pedestrian movement at signalized crosswalks. The key influential factors, i.e., leading pedestrians, conflict with opposing pedestrians, collision avoidance with vehicles, and compliance with traffic lights, are considered. Aerial video data collected at an intersection in Beijing, China were recorded and extracted. A new calibration approach based on a genetic algorithm is proposed that optimizes the relative error of pedestrian trajectories in two dimensions, i.e., moving distance and angle. Model validation is conducted by comparison with observed trajectories in five typical cases of pedestrian crossing, with and without conflict between pedestrians and vehicles. The characteristics of pedestrian flow, speed, acceleration, pedestrian-vehicle conflict, and the lane formation phenomenon were compared with those from two competing models, demonstrating the advantage of the proposed model.

17.
The primary focus of this research is to develop an approach to capture the effect of travel time information on travelers’ route switching behavior in real-time, based on on-line traffic surveillance data. It also presents a freeway Origin–Destination demand prediction algorithm using an adaptive Kalman Filtering technique, where the effect of travel time information on users’ route diversion behavior has been explicitly modeled using a dynamic, aggregate, route diversion model. The inherent dynamic nature of the traffic flow characteristics is captured using a Kalman Filter modeling framework. Changes in drivers’ perceptions, as well as other randomness in the route diversion behavior, have been modeled using an adaptive, aggregate, dynamic linear model where the model parameters are updated on-line using a Bayesian updating approach. The impact of route diversion on freeway Origin–Destination demands has been integrated in the estimation framework. The proposed methodology is evaluated using data obtained from a microscopic traffic simulator, INTEGRATION. Experimental results on a freeway corridor in northwest Indiana establish that significant improvement in Origin–Destination demand prediction can be achieved by explicitly accounting for route diversion behavior.
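A single predict/update cycle of the Kalman filter underlying such OD demand prediction can be sketched as follows, with a random-walk demand transition and a hypothetical OD-to-link-count mapping H (the paper's adaptive formulation additionally updates the diversion-model parameters on-line, which is not shown):

```python
import numpy as np

def kalman_update(x, P, z, H, R, Q):
    """One predict/update cycle for a random-walk OD-demand state x
    observed through link counts z = H x + noise."""
    P = P + Q                                  # predict: uncertainty grows by Q
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ (z - H @ x)                    # correct state with measurement
    P = (np.eye(len(x)) - K @ H) @ P           # update covariance
    return x, P

# Two OD flows observed through two link counts (hypothetical mapping).
x = np.array([100.0, 200.0]); P = np.eye(2) * 50.0
H = np.array([[1.0, 0.0], [1.0, 1.0]]); R = np.eye(2) * 10.0; Q = np.eye(2) * 5.0
x, P = kalman_update(x, P, np.array([120.0, 330.0]), H, R, Q)
```

The counts imply OD flows near (120, 210), so one update pulls the prior (100, 200) most of the way toward those values while shrinking P.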

18.
This paper proposes the combined use of microscopic traffic simulation and Extreme Value Theory (EVT) for safety evaluation. Ten urban intersections in Fengxian District, Shanghai were selected, and three calibration strategies were applied to develop simulation models for each intersection: a base strategy with fundamental data input; a semi-calibration strategy adjusting driver behavior parameters based on Measures of Effectiveness (MOE); and a full-calibration strategy altering driver behavior parameters by both MOE and Measures of Safety (MOS). SSAM was used to extract simulated conflict data from VISSIM vehicle trajectory files, and video-based data collection allowed trained observers to collect field conflict data. EVT-based methods were then employed to model both the simulated and the field conflict data and to derive the Estimated Annual Crash Frequency (EACF), used as a Surrogate Safety Measure (SSM). PET was used as the EVT measurement for three conflict types: crossing, rear-end, and lane change. EACFs based on the three calibration strategies were compared with field-based EACF, conventional SSM based on Traffic Conflict Techniques (TCT), and actual crash frequency, in terms of direct correlation, rank correlation, and prediction accuracy. The results showed that MOS should be considered during simulation model calibration, and that EACF based on the full-calibration strategy appeared to be the better choice for simulation-based safety evaluation compared with the other candidate safety measures. In general, the combined use of microscopic traffic simulation and EVT is a promising tool for safety evaluation.

19.
Improper mandatory lane change (MLC) maneuvers in the vicinity of a highway off-ramp jeopardize traffic efficiency and safety. Providing an advance warning of lane change necessity is an efficient form of systematic lane change management: it encourages smooth MLC maneuvers at proper locations and mitigates the negative effects of MLC maneuvers on traffic flow near the off-ramp. However, the state of the art lacks rigorous methods for locating this advance warning optimally so that the maximum benefit is obtained. This research addresses that gap. Specifically, the proposed approach considers that the area downstream of the advance warning comprises two zones: (i) the green zone, in which traffic allows safe and smooth lane changes without deceleration (S-MLC), and whose start point corresponds to the location of the advance warning; and (ii) the yellow zone, in which traffic leads to rushed lane change maneuvers with deceleration (D-MLC). An optimization model is proposed to search for the optimal green and yellow zones. Traffic flow theory, namely the Greenshields model and shock wave analysis, is used to analyze the impacts of the S-MLC and D-MLC maneuvers on traffic delay. A grid search algorithm is applied to solve the optimization model. Numerical experiments on a simulation model developed in Paramics 6.9.3 indicate that the proposed optimization model can identify the optimal location for the advance MLC warning near an off-ramp, such that the traffic delay resulting from lane change maneuvers is minimized and the corresponding capacity drop and traffic oscillation are efficiently mitigated. Moreover, the experiments confirm that, for a given optimally located warning, the green and yellow zones observed in the simulated traffic flow are consistent with those obtained from the optimization model under various traffic regimes.
The proposed approach can be implemented via roadside mobile warning facilities or on-board GPS for human-driven vehicles, or embedded into lane change aid systems to serve connected and automated vehicles. It thus contributes to both the literature and engineering practice in lane change management.
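The shock-wave reasoning above rests on two textbook relations: the Greenshields speed-density law and the flow/density jump condition for wave speed. A sketch with illustrative free-flow speed and jam density (not the paper's calibrated values):

```python
def greenshields_flow(k, vf=100.0, kj=120.0):
    """Greenshields model: v = vf * (1 - k/kj), so flow q = k * v."""
    return k * vf * (1.0 - k / kj)

def shockwave_speed(k1, k2, vf=100.0, kj=120.0):
    """Shock-wave speed between two traffic states: w = (q2 - q1) / (k2 - k1)."""
    q1 = greenshields_flow(k1, vf, kj)
    q2 = greenshields_flow(k2, vf, kj)
    return (q2 - q1) / (k2 - k1)

# Wave between free flow (20 veh/km) and a dense D-MLC state (110 veh/km):
w = shockwave_speed(20.0, 110.0)   # negative → the queue front moves upstream
```

The sign of `w` tells whether a queue triggered by rushed lane changes grows upstream, which is what links the D-MLC zone to the delay term in the optimization model.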

20.
Due to the noticeable environmental and economic problems caused by traffic congestion and by the emissions produced by traffic, analysis and control of traffic are essential. One of the various traffic analysis approaches is the model-based approach, in which a mathematical model of the traffic system is developed or used based on the governing physical rules of the system. In this paper, we propose a framework to interface and integrate macroscopic flow models and microscopic emission models. The result is a new mesoscopic integrated flow-emission model that provides a balanced trade-off between high accuracy and low computation time. The proposed approach considers the aggregated behavior of groups of vehicles (mesoscopic) instead of the behavior of individual vehicles (microscopic) or of the entire vehicle population (macroscopic). A case study is carried out to evaluate the proposed framework, considering the performance of the resulting mesoscopic integrated flow-emission model; the traffic simulation software SUMO combined with the microscopic emission model VT-micro is used as the comparison platform. The results of the case study show that the proposed approach delivers high accuracy, while the mesoscopic nature of the integrated flow-emission model guarantees a low CPU time, making the framework suitable for real-time model-based applications.
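The microscopic side of such an interface typically evaluates a VT-micro-style polynomial-in-log form, ln E = Σᵢⱼ Kᵢⱼ vⁱ aʲ, for a given speed and acceleration. The sketch below uses a made-up 2×2 coefficient matrix purely to show the functional shape; real VT-micro tables are larger and are regime- and pollutant-specific:

```python
import numpy as np

def vt_micro_rate(speed, accel, K):
    """VT-micro-style emission rate: ln(E) is a polynomial in speed and
    acceleration, i.e. E = exp(sum_ij K[i][j] * v**i * a**j)."""
    v_pow = np.array([speed ** i for i in range(K.shape[0])])
    a_pow = np.array([accel ** j for j in range(K.shape[1])])
    return float(np.exp(v_pow @ K @ a_pow))

# Hypothetical coefficients, for illustration of the functional form only.
K = np.array([[-1.0, 0.1], [0.02, 0.001]])
rate = vt_micro_rate(speed=15.0, accel=0.5, K=K)
```

A mesoscopic integration would evaluate this at the representative speed and acceleration of each vehicle group rather than per vehicle, which is where the computation-time saving comes from.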
