141.
Starting from an analysis of the internal and external factors that affect the digital preservation of cultural heritage, this paper identifies two main internal factors, namely the heritage's own natural attributes and cultural attributes; seven main external factors, namely technological, policy, management, dissemination, research, economic and community factors; and one class of secondary miscellaneous factors. On this basis, a formula for the efficacy of digital cultural heritage preservation is derived, and it is concluded that the policy and economic factors, as primary variables with a veto effect, determine whether the overall efficacy is high or low, while the combined value of the remaining variable factors also largely determines the final efficacy rating.
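The abstract states only the structure of the derived efficacy formula, not the formula itself. As a hedged illustration of that structure (two multiplicative veto-type factors scaling an additive sum of the remaining factors), one plausible form is:

```latex
% Hypothetical form only; the symbols are illustrative, not the paper's notation.
% E        : overall digital-preservation efficacy
% P, C     : policy and economic factors, normalized to [0, 1]; either near zero suppresses E
% f_1..f_n : scores of the remaining factors (technological, management, dissemination, research, community, ...)
% w_1..w_n : their weights
E = P \cdot C \cdot \sum_{i=1}^{n} w_i f_i
```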
142.
To address the problems that the wavelet transform cannot represent image edge information well and that the NSCT loses some image detail information, this paper proposes an improved image fusion method based on the NSCT. First, the preprocessed and registered infrared and visible-light images are decomposed with the NSCT to obtain the low-frequency and high-frequency coefficients of each source image. The low-frequency coefficients are then fused with a wavelet-transform-based fusion rule, while the high-frequency coefficients are fused with a feature-based regional-energy rule. Finally, the inverse NSCT is applied to the fused coefficients to obtain the fused image. Simulation experiments show that the improved NSCT fusion method performs well on infrared and visible-light images, yielding clearer images with more complete information.
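NSCT decomposition itself has no standard Python implementation, so the sketch below assumes the high-frequency sub-bands are already available and only illustrates the regional-energy selection rule described above; the window size and the uniform energy window are assumptions, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_highpass_by_region_energy(coef_ir, coef_vis, window=3):
    """Fuse two high-frequency sub-bands by keeping, at each pixel,
    the coefficient whose local region energy is larger.

    coef_ir, coef_vis : 2-D arrays of high-frequency coefficients for the
                        infrared and visible images (same shape).
    window            : side length of the local energy window.
    """
    # Local region energy = windowed mean of the squared coefficients.
    energy_ir = uniform_filter(coef_ir ** 2, size=window)
    energy_vis = uniform_filter(coef_vis ** 2, size=window)
    # Take the coefficient from whichever source has higher local energy.
    return np.where(energy_ir >= energy_vis, coef_ir, coef_vis)

# Example with synthetic sub-bands (real ones would come from an NSCT decomposition):
rng = np.random.default_rng(0)
a, b = rng.normal(size=(64, 64)), rng.normal(size=(64, 64))
print(fuse_highpass_by_region_energy(a, b).shape)
```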
143.
Establishing a digitized standard vehicle information-source system is an effective way to solve highway traffic management problems in China. A platform for such a system can be built with passive RFID technology in the 800/900 MHz band. On this platform, an application system for electronic (non-stop) toll collection and route identification can be established; besides toll collection and route identification, the system also supports a series of extended applications that raise the level of highway traffic management in China.
144.
In the construction technology management of highway engineering, testing and inspection is an important link, and it is also an indispensable step in construction quality control and in the acceptance of completed works. Focusing on this key link, this paper analyzes the processing of highway engineering test and inspection data from two aspects, data expression and error representation, for reference.
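The abstract does not spell out the error-representation rules it analyzes; as a hedged, generic sketch, repeated test results are often reported as a mean together with the sample standard deviation and relative error:

```python
import math

def summarize_measurements(values, digits=3):
    """Express repeated test results as mean, sample standard deviation and
    relative error (a common generic convention; the paper's exact rules for
    significant figures are not given in the abstract)."""
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    rel_err = std / mean if mean else float("nan")
    return round(mean, digits), round(std, digits), round(rel_err, digits)

# Hypothetical repeated readings from one test item:
print(summarize_measurements([2.31, 2.29, 2.33, 2.30]))
```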
145.
CuxS thin films were prepared on glass substrates by a new two-step evaporation technique: at substrate temperatures of 160 °C to 200 °C, sulfur reacts directly with the copper deposited on the substrate to form CuxS films. The experiments show that the resulting CuxS films depend strongly on substrate temperature: the CuxS formed at 160 °C is yellow-green, while the polycrystalline CuxS films formed at about 190 °C are dark green. The structure of the samples was studied and discussed by XRD, SEM and TS.
146.
In the production of automotive standard parts the die plays the leading role, and only by extending die life can twice the result be achieved with half the effort. To effectively extend the life of standard-part dies, besides selecting suitable die materials, designing the die structure properly, and doing the heat treatment, machining and polishing well, it is also necessary to choose appropriate lubricants and high-precision equipment, strengthen management, and raise the skill level of the operators.
147.
Construction monitoring measurement of tunnels is of great significance for ensuring engineering quality and safety. During construction of the Fantun Tunnel on the Jianxing Expressway, ground surface settlement, crown settlement and surrounding-rock convergence were monitored. This paper describes the contents and methods of the construction monitoring of the Fantun Tunnel and presents a simulation analysis of the monitoring data. The results show that ground surface settlement, crown settlement and surrounding-rock convergence deform rapidly in the first 15 d after excavation and gradually stabilize over the following month, and that the regression models built from the observations have large coefficients of determination R², giving the models good stability and predictive ability.
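The abstract does not state which regression form was fitted to the monitoring curves; the sketch below assumes a simple exponential convergence model, one common choice for settlement-versus-time data, and shows how the coefficient of determination R² is obtained. The data values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical settlement-vs-time data (mm vs days); not the paper's measurements.
t = np.array([1, 3, 5, 8, 12, 15, 20, 30, 45], dtype=float)
u = np.array([2.1, 5.0, 7.2, 9.8, 11.9, 12.8, 13.6, 14.2, 14.4])

def convergence(t, A, B):
    """Exponential convergence model u(t) = A * (1 - exp(-t / B))."""
    return A * (1.0 - np.exp(-t / B))

params, _ = curve_fit(convergence, t, u, p0=(15.0, 5.0))
u_hat = convergence(t, *params)

# Coefficient of determination R^2 of the fitted model.
ss_res = np.sum((u - u_hat) ** 2)
ss_tot = np.sum((u - u.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"A = {params[0]:.2f} mm, B = {params[1]:.2f} d, R^2 = {r2:.3f}")
```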
148.
Drawing on concrete examples, this paper discusses process methods for machining slender shafts on a CNC lathe. It elaborates on the handling methods and techniques for the key problems in machining slender shafts on a CNC lathe, including adjusting lathe accuracy, increasing workpiece rigidity, reducing cutting force and cutting heat, and selecting reasonable tool angles.
149.
The effectiveness of traditional incident detection is often limited by sparse sensor coverage, and reporting incidents to emergency response systems is labor-intensive. We propose to mine tweet texts to extract incident information on both highways and arterials as an efficient and cost-effective alternative to existing data sources. This paper presents a methodology to crawl, process and filter tweets that are publicly accessible for free. Tweets are acquired from Twitter using the REST API in real time. The adaptive data-acquisition process establishes a dictionary of important keywords and keyword combinations that can imply traffic incidents (TI). A tweet is then mapped into a high-dimensional binary vector in the feature space formed by the dictionary and classified as TI-related or not. All TI tweets are then geocoded to determine their locations and further classified into one of five incident categories. We apply the methodology in two regions, the Pittsburgh and Philadelphia Metropolitan Areas. Overall, mining tweets holds great potential to complement existing traffic incident data at very low cost. A small sample of tweets acquired from the Twitter API covers most of the incidents reported in the existing data set, and additional incidents can be identified by analyzing tweet text. Twitter also provides ample additional information with reasonable coverage on arterials. Tweets that are TI-related and geocodable account for approximately 5% of all acquired tweets. Of those geocodable TI tweets, 60–70% are posted by influential users (IU), namely public Twitter accounts mostly owned by public agencies and media, while the rest are contributed by individual users. Twitter provides more incident information on weekends than on weekdays. Within the same day, both individuals and IUs tend to report incidents more frequently during the daytime than at night, especially during peak traffic hours. Individual tweets are more likely to report incidents near the center of a city, and the volume of information decays significantly outward from the center.
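A minimal sketch of the keyword-dictionary feature mapping and TI/non-TI classification step described above. The dictionary entries, example tweets and the choice of logistic regression are illustrative assumptions; the paper builds its dictionary adaptively from acquired tweets and does not specify this exact classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative keyword dictionary; the paper derives its own adaptively.
DICTIONARY = ["accident", "crash", "collision", "lane closed", "disabled vehicle",
              "road work", "congestion", "injury"]

def to_binary_vector(tweet: str) -> np.ndarray:
    """Map a tweet into a binary vector over the keyword dictionary."""
    text = tweet.lower()
    return np.array([1 if kw in text else 0 for kw in DICTIONARY])

# Tiny hypothetical training set: 1 = traffic-incident (TI) related, 0 = not.
tweets = ["Crash on I-376 eastbound, left lane closed",
          "Road work causing congestion near downtown",
          "Great coffee this morning!",
          "Happy birthday to my best friend"]
labels = [1, 1, 0, 0]

X = np.vstack([to_binary_vector(t) for t in tweets])
clf = LogisticRegression().fit(X, labels)

# Classify a new tweet as TI-related (1) or not (0).
print(clf.predict([to_binary_vector("Two-car collision blocking the right lane")]))
```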
150.
This paper aims at demonstrating the usefulness of integrating virtual 3D models in vehicle localization systems. Vehicle localization algorithms are usually based on multi-sensor data fusion. Global Navigation Satellite Systems (GNSS), such as the Global Positioning System (GPS), are used to provide measurements of the geographic location. Nevertheless, GNSS solutions suffer from signal attenuation and masking, multipath phenomena and lack of visibility, especially in urban areas, which leads to degradation or even total loss of the positioning information and hence unsatisfactory performance. Dead-reckoning and inertial sensors are therefore often added to back up GPS when measurements are inaccurate or unavailable, or when high-frequency location estimation is required. However, dead-reckoning localization may drift in the long term due to error accumulation. To back up GPS and compensate for the drift of dead-reckoning-based localization, two approaches that integrate a virtual 3D model are proposed; the virtual model is registered with respect to the scene perceived by an on-board sensor. From the matching of the real and virtual scenes, the transformation (rotation and translation) between the real sensor and the virtual sensor (whose position and orientation are known) can be computed. These two approaches thus determine the pose of the real sensor embedded on the vehicle. In the first approach the perception sensor is a camera, and in the second it is a laser scanner. The first approach is based on image matching between the virtual image extracted from the 3D city model and the real image acquired by the camera. Its two major parts are: 1. detection and matching of feature points in the real and virtual images (three feature detectors are compared: the Harris corner detector, SIFT and SURF); 2. pose computation using the POSIT algorithm. The second approach is based on an on-board horizontal laser scanner that provides a set of distances between the scanner and the environment. This set of distances is matched with depth information (virtual laser scan data) provided by the virtual 3D city model. The pose estimates provided by these two approaches can be integrated in a data fusion formalism. In this paper the result of the first approach is integrated in an IMM-UKF data fusion formalism. Experimental results obtained with real data illustrate the feasibility and performance of the proposed approaches.
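A hedged sketch of the first approach (real/virtual image matching followed by pose computation), assuming OpenCV. The paper uses POSIT; since POSIT is not exposed in modern OpenCV, `cv2.solvePnP` stands in here, and `virtual_depth_to_3d` is a hypothetical helper that returns the 3-D model point behind a pixel of the rendered virtual image.

```python
import cv2
import numpy as np

def estimate_pose(real_img, virtual_img, virtual_depth_to_3d, K):
    """Match features between the real camera image and the image rendered
    from the 3D city model, then compute the real camera's pose.

    virtual_depth_to_3d : assumed callable mapping a virtual-image pixel (x, y)
                          to its 3-D point in model coordinates.
    K                   : 3x3 camera intrinsic matrix.
    """
    sift = cv2.SIFT_create()  # SIFT features; Harris or SURF could be used instead
    kp_r, des_r = sift.detectAndCompute(real_img, None)
    kp_v, des_v = sift.detectAndCompute(virtual_img, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_r, des_v, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test

    # 2-D points in the real image and their 3-D counterparts from the virtual model.
    pts_2d = np.float32([kp_r[m.queryIdx].pt for m in good])
    pts_3d = np.float32([virtual_depth_to_3d(kp_v[m.trainIdx].pt) for m in good])

    ok, rvec, tvec = cv2.solvePnP(pts_3d, pts_2d, K, None)
    return rvec, tvec  # rotation / translation of the real camera w.r.t. the model
```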