Full-text access type
Paid full text | 686 articles |
Free | 87 articles |
Subject classification
Highway transport | 161 articles |
General | 431 articles |
Waterway transport | 75 articles |
Rail transport | 49 articles |
Integrated transport | 57 articles |
Publication year
2024 | 6 articles |
2023 | 31 articles |
2022 | 68 articles |
2021 | 74 articles |
2020 | 47 articles |
2019 | 19 articles |
2018 | 32 articles |
2017 | 20 articles |
2016 | 17 articles |
2015 | 20 articles |
2014 | 41 articles |
2013 | 32 articles |
2012 | 38 articles |
2011 | 52 articles |
2010 | 43 articles |
2009 | 45 articles |
2008 | 25 articles |
2007 | 35 articles |
2006 | 43 articles |
2005 | 28 articles |
2004 | 8 articles |
2003 | 10 articles |
2002 | 8 articles |
2001 | 17 articles |
2000 | 4 articles |
1999 | 6 articles |
1998 | 1 article |
1994 | 2 articles |
1993 | 1 article |
Sort order: 773 results found, search time 15 ms
11.
This study proposes a framework for human-like autonomous car-following planning based on deep reinforcement learning (deep RL). Historical driving data are fed into a simulation environment where an RL agent learns from trial-and-error interactions, guided by a reward function that signals how much the agent deviates from the empirical data. Through these interactions, an optimal policy, that is, a car-following model that maps speed, relative speed between the lead and following vehicles, and inter-vehicle spacing to the acceleration of the following vehicle in a human-like way, is obtained. The model can be continuously updated as more data are fed in. Two thousand car-following periods extracted from the 2015 Shanghai Naturalistic Driving Study were used to train the model and compare its performance with that of traditional and recent data-driven car-following models. The results show that a deep deterministic policy gradient car-following model that uses the disparity between simulated and observed speed as the reward function and considers a reaction delay of 1 s, denoted DDPGvRT, reproduces human-like car-following behavior more accurately than traditional and recent data-driven car-following models. Specifically, the DDPGvRT model has a spacing validation error of 18% and a speed validation error of 5%, both lower than those of the other models, including the intelligent driver model, models based on locally weighted regression, and conventional neural-network-based models. Moreover, DDPGvRT generalizes well to various driving situations and can adapt to different drivers through continual learning. This study demonstrates that reinforcement learning can offer insight into driver behavior and contribute to the development of human-like autonomous driving algorithms and traffic-flow models.
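The training loop described above can be sketched in a few lines: a speed-disparity reward and a kinematic update of the following vehicle. The function names, time step, and the exact form of the reward are illustrative assumptions, not the paper's implementation:

```python
def speed_disparity_reward(v_sim, v_obs):
    # Hypothetical reward: negative absolute gap between simulated and
    # observed following-vehicle speed, so zero deviation gives zero penalty.
    return -abs(v_sim - v_obs)

def simulate_step(spacing, v_follow, v_lead, accel, dt=0.1):
    # One kinematic step of the following vehicle: integrate its speed,
    # then update inter-vehicle spacing using the trapezoidal average speed.
    v_next = max(0.0, v_follow + accel * dt)
    spacing_next = spacing + (v_lead - 0.5 * (v_follow + v_next)) * dt
    return spacing_next, v_next
```

A DDPG agent would observe (speed, relative speed, spacing), output an acceleration, and be trained to maximize the accumulated reward against the empirical trajectory, with the agent's observation delayed to model the 1 s reaction time.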
12.
13.
[Objective] To meet the requirements of remotely controlling ships in curved, narrow, and crowded inland waterways, this paper proposes an approach combining CNN-based algorithms with knowledge-based models under ship-shore cooperation conditions. [Method] Based on an analysis of the characteristics of ship-shore cooperation, the proposed approach realizes autonomous perception of the environment with visual simulation at its core and navigation decision-making control based on deep reinforcement learning, and constructs an artificial intelligence system comprising deep-learning image processing, navigation-situation cognition, steady-state route control, and other functions. Remote control and short-duration autonomous navigation of operating ships are realized under inland-waterway conditions, and remote control of container ships and ferries is carried out. [Results] The proposed approach can replace manual operation through remote orders or independent decision-making, and achieves autonomous obstacle avoidance with a track-keeping deviation consistently below 20 m. [Conclusions] The developed prototype system demonstrated remote-control operation of the above ship types in waterways such as the Changhu Canal Shenzhou line and the Yangtze River, proving that a complete set of algorithms with a CNN and reinforcement learning at its core can independently extract key navigation information, construct obstacle-avoidance and control awareness, and lay the foundation for inland-waterway intelligent navigation systems. © 2022 Journal of Clinical Hepatology. All rights reserved.
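As a minimal illustration of the track-keeping check implied by the reported sub-20 m deviation, the cross-track error of the ship against a planned route leg can be computed as below. The function names and corridor width are assumptions for illustration, not the system's actual code:

```python
import math

def cross_track_error(pos, wp_a, wp_b):
    # Distance (m) from the ship's position to the route leg wp_a -> wp_b,
    # clamped to the segment so positions past an endpoint measure to it.
    ax, ay = wp_a
    bx, by = wp_b
    px, py = pos
    dx, dy = bx - ax, by - ay
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def within_corridor(err_m, limit_m=20.0):
    # The prototype's reported track-keeping deviation stays under 20 m.
    return err_m < limit_m
```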
14.
范利东 《南通航运职业技术学院学报》2008,7(4):110-112
Diagnostic testing can effectively reveal weaknesses in English teaching and is an effective means of evaluating it. Building on an introduction to the concept of diagnostic testing, this paper analyzes a diagnostic test that was actually administered, identifies problems from the test results, and uses them to improve the model of English teaching in higher vocational education.
15.
张美娜 《辽宁省交通高等专科学校学报》2015,(2)
Taking the development of a teaching-resource management platform for higher vocational education as its subject, and building on an analysis of the state of teaching-resource platforms at home and abroad, this paper uses the platform developed at 辽宁省交通高等专科学校 (Liaoning Provincial College of Communications) as its basis to systematically describe the platform's top-level design, its 1+1+N operating model, its five-tier user management model, its nine modules with self-service content addition, and its handling of non-public information. The platform's main functions are briefly introduced, providing a useful reference for the development and optimization of teaching-resource management platforms at higher vocational colleges.
16.
贾豁然 《辽宁省交通高等专科学校学报》2015,(2)
Since its introduction, the flipped classroom has sparked wide debate in education, brought unprecedented changes to teaching, and achieved good results. This paper analyzes the origin of the flipped classroom and the changes it brings to teaching, discusses the demands its implementation places on teachers and students, and finally examines the factors constraining its large-scale adoption in China.
17.
To address the problem of compiling shunting plans for marshaling pickup-and-delivery freight trains, this paper proposes an optimization method based on reinforcement learning and the Q-learning algorithm. Building on the tabular shunting method, the shunting plan is divided into two parts: dropping cuts of cars onto tracks, and reassembling the cars. A reinforcement learning model of the shunting problem is built from the three elements of actions, states, and rewards: the shunting locomotive is the agent, the index of the track onto which a cut is dropped is the action, and the drop status of the train being assembled is the state. Concrete conditions for coupling and uncoupling cuts and a car-reassembly procedure are formulated, and the reward function is designed from the connection status of dropped cuts and the total number of shunting moves produced by reassembly. An improved Q-learning algorithm solves the model with the objective of minimizing shunting moves, establishing a mapping between the train to be assembled and the optimal shunting plan; once the agent has learned sufficiently, the optimal shunting plan is obtained. Three sets of test cases were used to verify the method. The results show that, compared with the coordinated matching method and the sorted binary tree method, this method uses fewer tracks and produces better shunting plans; compared with branch and bound, it obtains plans of similar quality in less time. The method therefore helps raise the level of intelligent decision-making in the compilation of station shunting plans.
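At the core of the method is the standard tabular Q-learning update. A generic sketch follows; the state and action encodings for the shunting problem are omitted, and the hyperparameters are illustrative:

```python
from collections import defaultdict

def q_update(Q, s, a, r, s_next, actions, alpha=0.5, gamma=0.9):
    # Tabular Q-learning update:
    #   Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
    return Q[(s, a)]
```

In the paper's formulation, the state would encode which cuts of the train have already been dropped, the action is the track index chosen for the next cut, and the reward penalizes extra shunting moves, so that maximizing return minimizes the total shunting work.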
18.
19.
20.
To predict the energy consumption of electric-vehicle traction batteries accurately and ease drivers' range anxiety, this paper proposes a data-driven SOC prediction model for electric-vehicle batteries. The composition of EV energy consumption is first analyzed and the influencing factors are extracted; then, based on vehicle operating data collected from the CAN bus of an electric taxi, a machine-learning energy-consumption model stratified by temperature is proposed, with errors reduced by fusing macro-level and micro-level data; finally, the model is applied to the on-board BM...
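The temperature-stratified idea can be sketched as fitting one linear energy model per temperature bin; the bin edges, feature layout, and function names below are assumptions for illustration, not the paper's model:

```python
import numpy as np

EDGES = (-10.0, 0.0, 10.0, 20.0, 30.0)  # hypothetical stratum boundaries (deg C)

def temperature_bin(temp_c):
    # Index of the temperature stratum the sample falls into.
    return int(np.searchsorted(EDGES, temp_c))

def fit_stratified(X, temps, y):
    # Fit an ordinary least-squares model (with intercept) per stratum.
    models = {}
    for b in set(temperature_bin(t) for t in temps):
        mask = np.array([temperature_bin(t) == b for t in temps])
        A = np.hstack([X[mask], np.ones((int(mask.sum()), 1))])
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        models[b] = coef
    return models

def predict(models, x, temp_c):
    # Route the sample to its stratum's model before predicting.
    coef = models[temperature_bin(temp_c)]
    return float(np.hstack([x, 1.0]) @ coef)
```

Fusing macro-level statistics (e.g. trip-level averages) with micro-level CAN signals would then amount to extending the feature matrix X with both kinds of columns.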