Similar Documents
A total of 20 similar documents were found (search time: 62 ms).
1.
Addressing the clearing and settlement problems in the Guangxi expressway networked toll collection management system, in particular the difficulties of data transmission and layered business-logic processing, this article develops and implements the networked toll collection management system using a B/S (browser/server) architecture and message-middleware-based data transmission technology. The developed system has run stably and reliably in the management of the Guangxi expressway network and offers considerable practical and technical reference value.

2.
Implementing organization: Shanghai Shentong Metro Group Co., Ltd. Shanghai urban rail transit has established an energy consumption monitoring and management system built on a three-level station/line/network architecture, with data transmitted between the levels over a dedicated communication network. The system can monitor the energy consumption of each rail transit line and station and generate various energy consumption reports, energy data curves, pie charts, and bar charts…

3.
Qiu Min. 《西部交通科技》, 2011, (3): 60-61, 69
This article describes the main forms of toll evasion by vehicles on expressways, discusses prevention and control measures and recommendations against toll-evasion behavior, and provides a reference for further improving the software and hardware facilities used in toll collection.

4.
The Guangxi Guiliu Expressway Administration Office was established in December 1996 as a second-level legal-entity public institution under the Guangxi Expressway Administration Bureau. Its headquarters is located at the Liunan Expressway entrance east of Jinglan Bridge in Liuzhou, and it currently has 1,034 employees. The office comprises eight functional departments, including toll collection, maintenance, and operations, and three directly subordinate units, including the road administration detachment, and it has a Party committee office, a labor union, and a Communist Youth League organization. It administers seven grassroots management offices, including the Guilin and Liuzhou offices, and 21 toll stations. It is responsible for managing six expressways (sections) totaling 495 km: Guilin-Liuzhou, the Guilin ring road, Yizhou-Liuzhou, Nanning-Liuzhou (Liuzhou-Wangling section), Quanzhou-Huangshahe, and Pingle-Zhongshan. Its main functions are to carry out road administration, maintenance, and toll collection in accordance with the relevant regulations on Guangxi expressway management, and to exercise industry oversight of the nine expressway service areas within its jurisdiction. The office passed ISO 9001 quality system certification in 2003.

5.
This paper describes the drawbacks of the existing manual ticketing management system in rail transit and the advantages of adopting an electronic ticketing management system. The new electronic ticketing management system, built on many years of ticketing management experience, has three major functions: account management (covering actual ticket revenue, ticket receivables, and bank funds), ticket-card and invoice management (ticket inventory, allocation, and recovery, as well as invoice issuance, cancellation, and allocation), and report management (revenue, ticket-card, and passenger-flow reports at the station, line, and operating-company levels)…

6.
Applying massive-data processing techniques such as a range-partitioning strategy on an Oracle database, this article designs and refines the toll-evasion governance application of the expressway networked toll collection system. Functions related to toll recovery are designed, including lane tolling, toll auditing, statistics and clearing, and receipt management, making it easier for toll management staff to identify suspect vehicles and recover unpaid tolls. The system can effectively prevent, combat, and curb toll evasion on expressways and safeguard normal toll collection and management order.
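The range-partitioning strategy mentioned above is typically applied to the very large lane-transaction tables so that evasion-audit and statistics queries only scan the date ranges they need. Below is a minimal sketch under assumed table and column names (not taken from the article), issued through the python-oracledb driver:

```python
import oracledb  # Oracle's Python driver; connection details below are placeholders

# Hypothetical DDL: partition toll transactions by month on the exit timestamp,
# so audit queries can prune to the relevant partitions instead of a full scan.
DDL = """
CREATE TABLE lane_transaction (
    trans_id     NUMBER        PRIMARY KEY,
    plate_no     VARCHAR2(16),
    exit_station VARCHAR2(32),
    toll_due     NUMBER(10, 2),
    toll_paid    NUMBER(10, 2),
    exit_time    DATE
)
PARTITION BY RANGE (exit_time) (
    PARTITION p_2023_01 VALUES LESS THAN (DATE '2023-02-01'),
    PARTITION p_2023_02 VALUES LESS THAN (DATE '2023-03-01'),
    PARTITION p_max     VALUES LESS THAN (MAXVALUE)
)
"""

with oracledb.connect(user="toll", password="change_me", dsn="dbhost/orclpdb") as conn:
    conn.cursor().execute(DDL)
```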

7.
After provincial-boundary toll stations were removed from expressways, trip metering and toll collection became unattended, so the operating status of roadside electromechanical equipment and of the toll collection system must be monitored in real time. This article introduces the functional architecture of an expressway operation monitoring system and explains how gantry and lane "heartbeat" data are received through the RabbitMQ message middleware, and how transaction data are synchronized from Oracle business tables into a Solr database to support data analysis, statistics, and interface display.
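As a rough illustration of the heartbeat path described above, a consumer built on the pika RabbitMQ client might look like the sketch below; the queue name, message fields, and monitoring logic are assumptions for illustration, not details taken from the article.

```python
import json
import pika  # RabbitMQ client library

last_seen = {}  # device_id -> timestamp of the most recent heartbeat

def on_heartbeat(channel, method, properties, body):
    """Record the latest heartbeat per gantry/lane device; a separate monitor can
    flag devices whose last heartbeat is too old as possibly offline."""
    msg = json.loads(body)  # assumed format: {"device_id": "...", "ts": 1690000000}
    last_seen[msg["device_id"]] = msg["ts"]
    channel.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters(host="mq-host"))
channel = connection.channel()
channel.queue_declare(queue="gantry_heartbeat", durable=True)  # assumed queue name
channel.basic_consume(queue="gantry_heartbeat", on_message_callback=on_heartbeat)
channel.start_consuming()
```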

8.
Building on an analysis of expressway electromechanical equipment maintenance, this paper focuses on the architecture of an expressway electromechanical management system and discusses its four parts: system configuration, maintenance management, system management, and an overview. The electromechanical management system discussed here is intended to contribute to the informatization of expressways.

9.
This article analyzes typical risk factors in expressway subgrade construction and, based on the functional requirements of a risk management system, designs the architecture of an expressway subgrade construction risk management system and builds a risk pre-control model. Using an actual expressway subgrade construction project and comparative construction risk-control experiments, it demonstrates the correctness and feasibility of the risk management system.

10.
Through a discussion of expressway safety management systems, this paper analyzes the main factors affecting expressway traffic safety and, in view of these factors, puts forward some recommendations for expressway safety management.

11.
Through a comparative analysis of EPS data and S-57 standard data, this article proposes a conversion model and a database-loading workflow from EPS data to IENC (inland electronic navigational chart) data. It discusses data preprocessing methods such as layer design, coding design, establishment of data mapping relationships, attribute design and value assignment, and data generalization, selection, and revision, and on this basis studies data conversion and database loading, so as to achieve a complete conversion from EPS data to inland electronic navigational chart data.

12.
Standard network data are generally used in estimation of mode choice models. These data are inaccurate in several ways, but the cost of correcting the inaccuracies is great. This paper analyzes the effects that correcting some of these inaccuracies has on the estimated parameters of mode choice models. Models are estimated on the standard network data and on data that have been adjusted to correct the problems in the standard network data. It is found that, for analysis of policies affecting transfer wait times or distances to bus stops, correction of the standard network data is advisable. For other policy analyses, however, it seems that the extra expense of correcting the standard data is unnecessary.

13.
Research on using high-resolution event-based data for traffic modeling and control is still at an early stage. In this paper, we provide a comprehensive overview of what has been achieved and also think ahead on what can be achieved in the future. It is our opinion that using high-resolution event data, instead of conventional aggregate data, could bring significant improvements to current research and practices in traffic engineering. Event data record the times when a vehicle arrives at and departs from a vehicle detector. From that, each vehicle's on-detector time and the time gap between two consecutive vehicles can be derived. Such detailed information is of great importance for traffic modeling and control. As reviewed in this paper, current research has demonstrated that event data are extremely helpful in the fields of detector error diagnosis, vehicle classification, freeway travel time estimation, arterial performance measurement, signal control optimization, traffic safety, traffic flow theory, and environmental studies. In addition, the cost of event data collection is low compared to other data collection techniques, since event data can be collected directly from existing controller cabinets without any changes to the infrastructure and can be collected continuously, 24/7. This brings many research opportunities, as suggested in the paper.
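To make the derivation concrete: from the arrival and departure timestamps recorded for each actuation, the per-vehicle on-detector time and the time gap to the preceding vehicle follow directly. A minimal sketch (the input layout is illustrative, not a specific detector format):

```python
def derive_vehicle_measures(events):
    """events: list of (arrival_time, departure_time) pairs in seconds for one
    detector, ordered by arrival. Returns, per vehicle, the on-detector time and
    the time gap from the previous vehicle's departure to this vehicle's arrival."""
    measures = []
    prev_departure = None
    for arrival, departure in events:
        measures.append({
            "on_detector_time": departure - arrival,
            "gap": arrival - prev_departure if prev_departure is not None else None,
        })
        prev_departure = departure
    return measures

# Example: three actuations at one loop detector
print(derive_vehicle_measures([(0.0, 0.4), (2.1, 2.6), (5.0, 5.3)]))
```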

14.
Cities are promoting bicycling for transportation as an antidote to increased traffic congestion, obesity and related health issues, and air pollution. However, both research and practice have been stalled by a lack of data on bicycling volumes, safety, infrastructure, and public attitudes. New technologies such as GPS-enabled smartphones, crowdsourcing tools, and social media are changing the potential sources for bicycling data. However, many of the developments are coming from data science, and it can be difficult to evaluate the strengths and limitations of crowdsourced data. In this narrative review we provide an overview and critique of crowdsourced data that are being used to fill gaps and advance bicycling behaviour and safety knowledge. We assess crowdsourced data used to map ridership (fitness, bike share, and GPS/accelerometer data), assess safety (web-map tools), map infrastructure (OpenStreetMap), and track attitudes (social media). For each category of data, we discuss the challenges and opportunities they offer for researchers and practitioners. Fitness app data can be used to model spatial variation in bicycling ridership volumes, and GPS/accelerometer data offer new potential to characterise route choice and origin-destination patterns of bicycling trips; however, working with these data requires a high level of training in data science. New sources of safety and near-miss data can be used to address underreporting and increase predictive capacity, but they require grassroots promotion and are often best used in combination with official reports. Crowdsourced bicycling infrastructure data can be timely and facilitate comparisons across multiple cities; however, such data must be assessed for consistency in route-type labels. Using social media, it is possible to track reactions to bicycle policy and infrastructure changes, yet linking attitudes expressed on social media platforms with broader populations is a challenge. New data present opportunities for improving our understanding of bicycling and supporting decision making towards transportation options that are healthy and safe for all. However, there are challenges, such as who has data access and how crowdsourced data tools are funded, protection of individual privacy, representativeness of the data and the impact of biased data on equity in decision making, and stakeholder capacity to use data given the requirement for advanced data science skills. If cities are to benefit from these new data, methodological developments and tools and training for end-users will need to keep pace with the momentum of crowdsourced data.

15.
This paper investigates the nature and impact of the reporting bias associated with police-reported crash data on inferences made using these data. In doing so, we merge detailed emergency room data and police-reported crash data for a specific region in Denmark. To disentangle potentially common observable and unobservable factors that affect drivers' injury severity risk and their crash reporting behavior, we formulate a bivariate ordered-response probit model of injury severity risk and crash reporting propensity. To empirically identify the reporting bias in this joint model, we exploit an exogenous police reform that particularly affects some specific municipalities of the region under consideration. The empirical analysis reveals substantial reporting bias in the commonly used police-reported road crash data. This non-random sample selection associated with the police-reported crash data leads to biased estimates of the effect of some of the explanatory variables in injury severity analysis. For instance, estimates based on the police-reported crash data substantially underestimate the effectiveness of seat belt use in reducing drivers' injury severity risk.
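For reference, a compact way to write down the kind of joint model named in the abstract (a sketch of a standard bivariate ordered-response probit; the notation is mine, not the authors'):

$$
S_i^* = \boldsymbol{\beta}'\mathbf{x}_i + \varepsilon_i, \qquad
R_i^* = \boldsymbol{\gamma}'\mathbf{z}_i + \eta_i, \qquad
(\varepsilon_i,\eta_i) \sim N_2\!\left(\mathbf{0},\begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}\right),
$$

where the observed injury severity is $S_i = k$ if $\tau_{k-1} < S_i^* \le \tau_k$ and the crash appears in the police data only if $R_i^* > 0$. The correlation $\rho$ captures unobservables common to severity and reporting, and an indicator for the exogenous police reform included in $\mathbf{z}_i$ serves as the exclusion restriction that identifies the reporting equation.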

16.
Trajectories drawn in a common reference system by all the vehicles on a road are the ultimate empirical data for investigating traffic dynamics. The vast amount of such data made freely available by the Next Generation SIMulation (NGSIM) program is therefore opening up new horizons in studying traffic flow theory. Yet the quality of trajectory data and its impact on the reliability of related studies was a vastly underestimated problem in the traffic literature even before the availability of NGSIM data. The absence of established methods to assess data accuracy, and even of a common understanding of the problem, makes it hard to speak of reproducibility of experiments and objective comparison of results, in particular in a research field where the complexity of human behaviour is an intrinsic challenge to the scientific method. Therefore this paper intends to design quantitative methods to inspect trajectory data. To this aim, first the structure of the error in point measurements and its propagation over the space travelled are investigated. Analytical evidence of the bias propagated in the vehicle trajectory functions and a related consistency requirement are given. The literature on estimation/filtering techniques is then reviewed in light of this requirement, and a number of error statistics suitable for inspecting trajectory data are proposed. The designed methodology, involving jerk analysis, consistency analysis and spectral analysis, is then applied to the complete set of NGSIM databases.
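One way to operationalize the consistency requirement and the jerk analysis described above (my sketch, not the authors' code): check that positions implied by integrating the recorded speeds agree with the recorded positions, and measure how often the jerk exceeds a physically plausible bound.

```python
import numpy as np

def inspect_trajectory(t, x, v, jerk_limit=15.0):
    """t: timestamps (s); x: recorded positions (m); v: recorded speeds (m/s)
    for one vehicle, e.g. one NGSIM trajectory. Returns the maximum gap between
    recorded positions and positions implied by integrating the recorded speeds
    (consistency check), and the share of jerk samples whose magnitude exceeds
    jerk_limit (m/s^3, an assumed plausibility threshold)."""
    dt = np.diff(t)
    # trapezoidal integration of the speed series, anchored at the first position
    x_from_v = x[0] + np.concatenate(([0.0], np.cumsum(dt * (v[1:] + v[:-1]) / 2.0)))
    inconsistency = float(np.max(np.abs(x - x_from_v)))
    jerk = np.gradient(np.gradient(v, t), t)
    implausible_share = float(np.mean(np.abs(jerk) > jerk_limit))
    return inconsistency, implausible_share
```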

17.
This paper presents a study that examines two waves of travel survey data through a pooled model structure. The pooled model structure provides a means to take advantage of multiple data sources, which leads to a better estimate and understanding of travel behavior. In particular, it accounts for the difference in data variance and therefore allows for the comparison of the true impacts of the model parameters on travelers' tour-making behavior. Larger variance is found in the 1998 data than in the 2010 data. Comparison between model parameters reveals significant behavioral changes among several socio-economic and demographic groups. In terms of common variables, the magnitude of the coefficient values has generally decreased, which conforms to the overall decreasing trend in traveling. Overall, the model equality tests indicate that the models developed from the two data sources do not have equal taste parameters; thus the transferability hypothesis is rejected. The results of this study are expected to have implications for the application of models based on cross-sectional data, especially over long time periods.

18.
The possibility of and procedure for pooling RP and SP data have been discussed in recent research work. In that literature, the RP data has been viewed as the yardstick against which the SP data must be compared. In this paper we take a fresh look at the two data types. Based on the peculiar strengths and weaknesses of each, we propose a new, sequential approach to exploiting the strengths and avoiding the weaknesses of each data source. This approach is based on the premise that SP data, characterized by a well-conditioned design matrix and a less constrained decision environment than the real world, is able to capture respondents' tradeoffs more robustly than is possible in RP data. (This, in turn, results in more robust estimates of share changes due to changes in independent variables.) The RP data, however, represent the current market situation better than the SP data, and hence should be used to establish the aggregate equilibrium level represented by the final model. The approach fixes the RP parameters for independent variables at the estimated SP parameters but uses the RP data to establish alternative-specific constants. Simultaneously, the RP data are rescaled to correct for error-in-variables problems in the RP design matrix vis-à-vis the SP design matrix. All specifications tested are Multinomial Logit (MNL) models. The approach is tested with freight shippers' choice of carrier in three major North American cities. It is shown that the proposed sequential approach to using SP and RP data has predictive power as good as or better than that of the model calibrated solely on the RP data (which is the best possible model for that data in terms of goodness-of-fit figures of merit), when measured in terms of Pearson's chi-squared ratio and the percent-correctly-predicted statistic. The sequential approach is also shown to produce predictions with lower error than that produced by the more usual method of pooling the RP and SP data.
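In equation form, the sequential step can be sketched as follows (notation mine, MNL utilities): with the slope parameters fixed at the SP estimates $\hat{\boldsymbol{\beta}}^{SP}$, only the alternative-specific constants $\alpha_j$ and a relative scale $\mu$ are estimated from the RP observations,

$$
U_{jn}^{RP} = \alpha_j + \mu\,\hat{\boldsymbol{\beta}}^{SP\prime}\mathbf{x}_{jn}^{RP} + \varepsilon_{jn},
\qquad \varepsilon_{jn} \sim \text{i.i.d. Gumbel},
$$

so the tradeoffs come from the SP experiment while the constants reproduce the current RP market shares, and $\mu$ absorbs the larger error variance of the RP design matrix.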

19.
The collection of big data, as an alternative to traditional resource-intensive manual data collection approaches, has become significantly more feasible over the past decade. The availability of such data, coupled with more sophisticated predictive statistical techniques, has contributed to an increase in attention towards the application of these data, particularly for transportation analysis. Within the transportation literature, there is a growing emphasis on developing sources of commonly collected public transportation data into more powerful analytical tools. A commonly held belief is that application of big data to transportation problems will yield new insights previously unattainable through traditional transportation data sets. However, there exist many ambiguities related to what constitutes big data, the ethical implications of big data collection and application, and how to best utilize the emerging data sets. The existing literature exploring big data provides no clear and consistent definition. While the collection of big data has grown and its application in both research and practice continues to expand, there is a significant disparity between methods of analysis applied to such data. This paper summarizes the recent literature on sources of big data and commonly applied methods used in its application to public transportation problems. We assess predominant big data sources, most frequently studied topics, and methodologies employed. The literature suggests smart card and automated data are the two big data sources most frequently used by researchers to conduct public transit analyses. The studies reviewed indicate that big data has largely been used to understand transit users' travel behavior and to assess public transit service quality. The techniques reported in the literature largely mirror those used with smaller data sets. The application of more advanced statistical methods, commonly associated with big data, has been limited to a small number of studies. In order to fully capture the value of big data, new approaches to analysis will be necessary.

20.
The origin–destination matrix is an important source of information describing transport demand in a region. Most commonly used methods for matrix estimation use link volumes collected on a subset of links in order to update an existing matrix. Traditional volume data collection methods have significant shortcomings because of the high costs involved and the fact that detectors only provide status information at specified locations in the network. Better matrix estimates can be obtained when information is available about the overall distribution of traffic through time and space. Other existing technologies are not used in matrix estimation methods because they collect volume data aggregated over groups of links rather than on single links. That is the case with mobile systems. Mobile phones sometimes cannot provide sufficient location accuracy for estimating flows on single links but can do so on groups of links; in contrast, data can be acquired over a much wider coverage area without additional costs. This paper presents a methodology adapted to the concept of volumes aggregated over groups of links, so that any available volume data source can be used in traditional matrix estimation methodologies. To calculate volume data, we have used a model that has had promising results in transforming phone call data into traffic movement data. The proposed methodology, using vehicle volumes obtained by such a model, is applied over a large real network as a case study. The experimental results reveal the efficiency and consistency of the proposed solution, making the alternative attractive for practical applications.
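A toy illustration of the core idea, estimating an origin-destination vector from volumes observed only on groups of links rather than on single links; the assignment matrix, grouping, and regularization weight here are hypothetical, not the authors' formulation:

```python
import numpy as np

# Assignment matrix A: share of each OD pair's demand using each link (4 links x 3 OD pairs)
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.5],
              [1.0, 0.0, 0.5],
              [0.0, 1.0, 1.0]])
# G aggregates link volumes into the two groups that are actually observed
# (e.g. the coverage areas for which mobile-phone-derived counts are available)
G = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])
y = np.array([240.0, 360.0])               # observed group volumes
x_prior = np.array([90.0, 90.0, 110.0])    # existing (outdated) OD matrix, flattened

# Minimize ||G A x - y||^2 + w ||x - x_prior||^2: stay close to the prior matrix
# while reproducing the group-level counts (normal equations, solved directly).
GA = G @ A
w = 0.1
x_hat = np.linalg.solve(GA.T @ GA + w * np.eye(3), GA.T @ y + w * x_prior)
print(x_hat)  # updated OD estimate consistent with the aggregated volumes
```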
