31.
32.
Based on the idea of work-process systematization, this paper presents a design for the course Automotive Body Repair Technology. It describes the work-process-based course design approach, the design of learning situations, the teaching methods, and the design of assessment methods.
33.
Considering the cross-district travel demand of high-volume urban commuters, and drawing on characteristics of urban bus networks such as dense passenger travel and regular passenger-flow patterns, a cross-district customized bus riding scheme is proposed. Bus routes are optimized with an improved Q-learning model to provide more convenient and efficient travel services for urban commuters. The reward and penalty function of the Q-learning reinforcement learning is defined by integrating road-segment congestion, passenger demand, and the locations of residential communities, improving the straight-line coefficient, load factor, and travel time of the customized regional bus routes. The results show that the proposed improved method reduces the travel time of commuters crossing districts and effectively improves the operating efficiency of the customized bus network during peak hours.
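To make the reward design described above concrete, the following minimal tabular Q-learning sketch shows one way a composite reward over link congestion, waiting demand, and a terminal stop could drive route selection; the stop graph, weights, and values are illustrative assumptions, not the paper's data or code.

# Minimal tabular Q-learning sketch for choosing the next stop of a
# customized bus route. The stop graph, congestion levels, demand counts
# and reward weights below are hypothetical, not the paper's data.
import random
from collections import defaultdict

neighbors = {"A": ["B", "C"], "B": ["C", "D"], "C": ["D"], "D": []}  # stop graph
congestion = {("A", "B"): 0.8, ("A", "C"): 0.3, ("B", "C"): 0.5,
              ("B", "D"): 0.2, ("C", "D"): 0.4}                      # 0..1
demand = {"B": 30, "C": 55, "D": 80}          # waiting passengers per stop
GOAL = "D"
ALPHA, GAMMA, EPS, EPISODES = 0.1, 0.9, 0.2, 2000

def reward(s, a):
    # Reward favors stops with high waiting demand, penalizes congested links,
    # and adds a bonus for reaching the terminal stop.
    return 0.1 * demand[a] - 5.0 * congestion[(s, a)] + (10.0 if a == GOAL else 0.0)

Q = defaultdict(float)
for _ in range(EPISODES):
    s = "A"
    while s != GOAL:
        acts = neighbors[s]
        a = random.choice(acts) if random.random() < EPS else \
            max(acts, key=lambda x: Q[(s, x)])
        best_next = max((Q[(a, x)] for x in neighbors[a]), default=0.0)
        Q[(s, a)] += ALPHA * (reward(s, a) + GAMMA * best_next - Q[(s, a)])
        s = a

# Greedy route read-off after training.
route, s = ["A"], "A"
while s != GOAL:
    s = max(neighbors[s], key=lambda x: Q[(s, x)])
    route.append(s)
print("learned route:", route)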
34.
[Objective] Track-tracking control of intelligent ships often faces problems such as complex control environments, low controller stability, and heavy computational load. To achieve precise track-tracking control, a course controller incorporating deep reinforcement learning is proposed. [Method] First, combined with line-of-sight (LOS) guidance and based on the ship's maneuvering characteristics and control requirements, the track-tracking problem is modeled as a Markov decision process, and its state space, action space, and reward function are designed. Then, the deep deterministic policy gradient (DDPG) algorithm is used to implement the controller, which is trained with an offline learning method. Finally, the trained controller is compared with a BP-PID controller to analyze the control performance. [Results] Simulation results show that the designed deep reinforcement learning controller converges quickly during training to meet the control requirements; compared with the BP-PID controller, the trained network tracks faster, with smaller heading error and a lower rudder-angle change frequency. [Conclusions] The research results can serve as a reference for the track-tracking control of intelligent ships.
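For illustration, the sketch below shows the kind of state and reward such an MDP formulation implies: an LOS-guidance state (cross-track error and course error) and a reward that penalizes tracking error and rudder chatter. The coefficients and state layout are assumptions made for this sketch; the paper's DDPG actor-critic itself is not reproduced.

# Sketch of the MDP ingredients described above: an LOS-style guidance state
# and a shaped reward. Coefficients and the state layout are hypothetical,
# not the paper's exact design; the DDPG actor-critic itself is omitted.
import math
import numpy as np

def los_state(ship_xy, heading, wp_prev, wp_next, lookahead=100.0):
    """Cross-track error and LOS course error for the current waypoint leg."""
    px, py = np.asarray(wp_prev, float)
    qx, qy = np.asarray(wp_next, float)
    alpha = math.atan2(qy - py, qx - px)                 # path tangential angle
    dx, dy = ship_xy[0] - px, ship_xy[1] - py
    e = -dx * math.sin(alpha) + dy * math.cos(alpha)     # cross-track error
    chi_d = alpha + math.atan2(-e, lookahead)            # LOS desired course
    psi_err = (chi_d - heading + math.pi) % (2 * math.pi) - math.pi
    return np.array([e, psi_err])

def reward(e, psi_err, rudder, prev_rudder, w_e=0.05, w_psi=1.0, w_rud=0.1):
    # Penalize cross-track error, course error, and rudder chatter.
    return -(w_e * abs(e) + w_psi * abs(psi_err) + w_rud * abs(rudder - prev_rudder))

# Example: ship slightly off a north-going leg, currently heading along it.
s = los_state((5.0, 50.0), heading=math.pi / 2, wp_prev=(0, 0), wp_next=(0, 200))
print("state [e, psi_err]:", s, "reward:", reward(*s, rudder=0.1, prev_rudder=0.0))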
35.
In this research, a Bayesian network (BN) approach is proposed to model the car use behavior of drivers by time of day and to analyze its relationship with driver and car characteristics. The proposed BN model can be categorized as a tree-augmented naive (TAN) Bayesian network. A latent class variable is included in this model to describe the unobserved heterogeneity of drivers. Both the structure and the parameters are learned from the dataset, which is extracted from GPS data collected in Toyota City, Japan. Based on inferences and evidence sensitivity analysis using the estimated TAN model, the effects of each single observed characteristic on car use measures are tested and found to be significant. The features of each category of the latent class are also analyzed. By testing the effect of each car use measure on every other measure, it is found that the correlations between car use measures are significant and should be considered in modeling car use behavior.
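As a rough illustration of the TAN construction, the snippet below sketches structure and parameter learning with pgmpy, treating the latent class as an already-assigned column (the actual model learns it as an unobserved variable). The file name, column names, and the choice of pgmpy are assumptions for this sketch, and class names may differ slightly across pgmpy versions.

# Minimal TAN sketch with pgmpy (not the authors' code). The latent class is
# assumed to be pre-assigned and stored as an ordinary discrete column.
import pandas as pd
from pgmpy.estimators import TreeSearch, BayesianEstimator
from pgmpy.models import BayesianNetwork
from pgmpy.inference import VariableElimination

# Hypothetical discretized dataset: one row per driver, columns for driver/car
# attributes, time-of-day car-use measures, and an assigned latent class.
df = pd.read_csv("car_use_gps_features.csv")          # hypothetical file

# Learn a tree over the feature nodes, each also conditioned on the class
# node ("latent_class"), i.e. a tree-augmented naive Bayes structure.
dag = TreeSearch(df, root_node="morning_car_use").estimate(
    estimator_type="tan", class_node="latent_class")

# Fit conditional probability tables with a Bayesian (BDeu) estimator.
model = BayesianNetwork(dag.edges())
model.fit(df, estimator=BayesianEstimator, prior_type="BDeu")

# Evidence-sensitivity-style query: how does commuting distance shift the
# probability of evening car use?
infer = VariableElimination(model)
print(infer.query(["evening_car_use"], evidence={"commute_distance": "long"}))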
36.
Based on measurements and statistical analysis of the learning motivation and self-efficacy of 107 higher vocational college students, and of the relationship between the two, it was found that both the learning motivation and the intrinsic motivation of higher vocational students are significantly and positively correlated with academic self-efficacy and with general self-efficacy. On this basis, ideas and methods for raising learning interest by enhancing self-efficacy are explored and proposed.
37.
In construction areas of the lower Yangtze River, the terrain is extremely complex, and the water-depth and flow-velocity conditions are very unfavorable for forming riprap dike bodies. Relying on Section II of the Kou'anzhi waterway regulation works in Phase II of the 12.5 m deep-water channel project on the Yangtze River downstream of Nanjing, the influence of external conditions on the landing points of dumped stone was reduced by modifying the stone-dumping equipment and adding a chute-tube confinement structure to lower the dumping height and shorten the drift distance of the stones, thereby controlling the landing points of the rock and achieving precise positioning of underwater riprap. The method performed well in application, improving construction quality while reducing cost, and provides a reference for similar projects.
38.
To meet the production and operation needs of the Yangshan Deepwater Port, the relevant authorities wish to preserve the Kezhushan–Jianggongzhu tidal channel, which is inconsistent with the overall planning layout. A study on adjusting the functional planning of the shoreline in the western port area of the deep-water port is therefore required. The tidal flow on the Kezhushan side of the channel has the character of flow around a headland, while the Jianggongzhu side has the character of flow embraced by a headland; on the basis of this understanding, experiments were carried out on a variety of shoreline schemes. The results show that, with the optimized shoreline scheme, the tidal-flow conditions inside the channel and in the existing port area can be maintained or even improved. From the hydraulic point of view, with a reasonable shoreline layout and appropriate engineering measures, it is feasible to preserve, develop, and utilize the Kezhushan–Jianggongzhu tidal channel.
39.
This study proposes a framework for human-like autonomous car-following planning based on deep reinforcement learning (deep RL). Historical driving data are fed into a simulation environment where an RL agent learns from trial-and-error interactions based on a reward function that signals how much the agent deviates from the empirical data. Through these interactions, an optimal policy, i.e., a car-following model that maps speed, the relative speed between the lead and following vehicles, and inter-vehicle spacing to the following vehicle's acceleration in a human-like way, is finally obtained. The model can be continuously updated as more data are fed in. Two thousand car-following periods extracted from the 2015 Shanghai Naturalistic Driving Study were used to train the model and compare its performance with that of traditional and recent data-driven car-following models. As shown by this study's results, a deep deterministic policy gradient car-following model that uses the disparity between simulated and observed speed as the reward function and considers a reaction delay of 1 s, denoted DDPGvRT, can reproduce human-like car-following behavior with higher accuracy than traditional and recent data-driven car-following models. Specifically, the DDPGvRT model has a spacing validation error of 18% and a speed validation error of 5%, which are lower than those of other models, including the intelligent driver model, models based on locally weighted regression, and conventional neural-network-based models. Moreover, the DDPGvRT demonstrates good capability of generalization to various driving situations and can adapt to different drivers through continuous learning. This study demonstrates that reinforcement learning methodology can offer insight into driver behavior and can contribute to the development of human-like autonomous driving algorithms and traffic-flow models.
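The following sketch illustrates the environment logic that the DDPGvRT description implies: a simulation step with a 1 s action delay and a reward driven by the disparity between simulated and observed following-vehicle speed. The time step, delay handling, and reward scaling are illustrative assumptions rather than the study's code.

# Sketch of a car-following simulation step with a 1 s reaction delay and a
# reward based on the gap between simulated and observed following-vehicle
# speed. Time step, delay handling and scaling are illustrative assumptions.
from collections import deque
import numpy as np

DT = 0.1                       # simulation step (s)
DELAY_STEPS = int(1.0 / DT)    # 1 s reaction delay

class CarFollowingEnv:
    def __init__(self, lead_speed, obs_follow_speed, init_spacing):
        self.lead_speed = lead_speed          # observed lead-vehicle speed trace
        self.obs_speed = obs_follow_speed     # empirical following-vehicle speed
        self.v = obs_follow_speed[0]
        self.spacing = init_spacing
        self.t = 0
        # Actions are buffered so the acceleration applied now was chosen 1 s ago.
        self.action_buf = deque([0.0] * DELAY_STEPS, maxlen=DELAY_STEPS)

    def step(self, accel_cmd):
        self.action_buf.append(accel_cmd)
        a = self.action_buf[0]                # delayed action takes effect
        self.v = max(0.0, self.v + a * DT)
        rel_speed = self.lead_speed[self.t] - self.v
        self.spacing += rel_speed * DT
        self.t += 1
        # Reward: small disparity between simulated and observed speed.
        err = self.v - self.obs_speed[self.t]
        reward = -abs(err) / max(self.obs_speed[self.t], 1.0)
        state = np.array([self.v, rel_speed, self.spacing])
        done = self.t >= len(self.obs_speed) - 1
        return state, reward, done

# One hypothetical step with constant synthetic speed traces.
env = CarFollowingEnv(np.full(300, 15.0), np.full(300, 14.0), init_spacing=20.0)
state, r, done = env.step(0.5)
print(state, r, done)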
40.
[Objective] To meet the requirements of remotely controlling ships in curved, narrow, and crowded inland waterways, this paper proposes an approach that combines CNN-based algorithms and knowledge-based models under ship-shore cooperation conditions. [Method] On the basis of analyzing the characteristics of ship-shore cooperation, the proposed approach realizes autonomous perception of the environment with visual simulation at its core and navigation decision-making control based on deep reinforcement learning, and finally constructs an artificial intelligence system composed of deep-learning image processing, navigation situation cognition, route steady-state control, and other functions. Remote control and short-time autonomous navigation of operating ships are realized under inland waterway conditions, and remote control of container ships and ferries is carried out. [Results] The proposed approach is capable of replacing manual work through remote orders or independent decision-making, as well as realizing independent obstacle avoidance, with a consistent deviation of less than 20 meters. [Conclusions] The developed prototype system carries out remote control operation demonstrations of the above ship types in waterways such as the Changhu Canal Shenzhou line and the Yangtze River, proving that a complete set of algorithms with a CNN and reinforcement learning at the core can independently extract key navigation information, construct obstacle avoidance and control awareness, and lay the foundation for inland-river intelligent navigation systems.
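A shape-only sketch of such a perception-to-decision pipeline is given below: a small CNN encodes a camera frame into a situation feature that, together with a few navigation variables, feeds a policy head producing rudder and throttle commands. Layer sizes, inputs, and the use of PyTorch are assumptions made for illustration, not the deployed system.

# Shape-only sketch of the perception-to-decision pipeline described above:
# a CNN encodes a camera frame into a navigation-situation feature, and a
# policy head maps it (plus a small navigation state) to control commands.
# Layer sizes and inputs are illustrative assumptions, not the real system.
import torch
import torch.nn as nn

class ShipPilotNet(nn.Module):
    def __init__(self, n_nav_features=4):
        super().__init__()
        self.encoder = nn.Sequential(                 # image -> situation feature
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.policy = nn.Sequential(                  # feature + nav state -> action
            nn.Linear(64 + n_nav_features, 128), nn.ReLU(),
            nn.Linear(128, 2), nn.Tanh())             # [rudder, throttle] in [-1, 1]

    def forward(self, image, nav_state):
        z = self.encoder(image)
        return self.policy(torch.cat([z, nav_state], dim=1))

# One hypothetical decision step: a 128x128 camera frame plus
# [cross-track error, heading error, speed, distance to nearest obstacle].
net = ShipPilotNet()
action = net(torch.zeros(1, 3, 128, 128), torch.zeros(1, 4))
print(action.shape)   # torch.Size([1, 2])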