Reinforcement learning applied to dilute combustion control for increased fuel efficiency
International Journal of Engine Research (IF 2.5), Pub Date: 2024-01-31, DOI: 10.1177/14680874241226580
Bryan P Maldonado, Brian C Kaul, Catherine D Schuman, Steven R Young

To reduce the modeling burden for control of spark-ignition engines, reinforcement learning (RL) has been applied to solve the dilute combustion limit problem. Q-learning was used to identify an optimal control policy that adjusts the fuel injection quantity in each combustion cycle. A physics-based model was used to determine the relevant system states for training the control policy in a data-efficient manner. The cost function was chosen to minimize the high cycle-to-cycle variability (CCV) at the dilute limit while maintaining stoichiometric combustion as much as possible. Experimental results demonstrated a reduction in CCV after the training period, albeit with slightly lean combustion, contributing to a net increase in fuel conversion efficiency of 1.33%. To ensure stoichiometric combustion for three-way catalyst compatibility, a second feedback loop based on an exhaust oxygen sensor was incorporated into the fuel quantity controller using a slow proportional-integral (PI) controller. Closed-loop experiments showed that the two feedback loops cooperate effectively, maintaining stoichiometric combustion while reducing combustion CCV and increasing fuel conversion efficiency by 1.09%. Finally, a modified cost function was proposed to ensure stoichiometric combustion with a single controller. In addition, the learning period was shortened by half to evaluate the RL algorithm's performance under limited training time. Experimental results showed that the modified cost function could achieve the desired CCV targets; however, with the learning time halved, the fuel conversion efficiency increased by only 0.30%.
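The control structure described above lends itself to a compact sketch. The following is a minimal, hypothetical illustration (not the authors' implementation) of how a tabular Q-learning loop adjusting the per-cycle fuel quantity could be combined with a slow PI trim on the exhaust oxygen (lambda) signal; the toy engine response, state discretization, gains, and cost weights are all assumptions made for readability.

```python
# Hypothetical sketch of the two-loop controller: tabular Q-learning adjusts
# the per-cycle fuel quantity to reduce cycle-to-cycle variability (CCV),
# while a slow proportional-integral (PI) loop on the exhaust oxygen (lambda)
# signal trims the command back toward stoichiometry. The toy engine model,
# state binning, gains, and cost weights are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

N_STATES = 10                            # discretized combustion states (binned IMEP)
ACTIONS = np.array([-0.02, 0.0, 0.02])   # per-cycle fuel increments (normalized)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1        # Q-learning hyperparameters
W_CCV, W_LAMBDA = 1.0, 0.5               # cost weights: CCV vs. stoichiometry
KP, KI = 0.05, 0.01                      # slow PI gains on the lambda error

Q = np.zeros((N_STATES, len(ACTIONS)))

def toy_engine(fuel):
    """Stand-in for one combustion cycle: returns (IMEP, lambda)."""
    ccv_scale = 0.02 + 0.2 * max(0.0, 1.0 - fuel)    # leaner -> higher CCV
    imep = 5.0 * fuel * (1.0 + ccv_scale * rng.standard_normal())
    lam = 1.0 / fuel                                  # lambda > 1 means lean
    return imep, lam

def discretize(imep):
    return int(np.clip(imep, 0.0, N_STATES - 1))      # crude integer binning

fuel_rl, pi_int = 1.0, 0.0               # RL fuel command and PI integrator
imep, lam = toy_engine(fuel_rl)
state = discretize(imep)

for cycle in range(5000):
    # epsilon-greedy action selection over fuel increments
    a = rng.integers(len(ACTIONS)) if rng.random() < EPS else int(np.argmax(Q[state]))
    fuel_rl = float(np.clip(fuel_rl + ACTIONS[a], 0.7, 1.3))

    # slow PI trim toward stoichiometric exhaust (lambda = 1):
    # lean exhaust (lam > 1) produces a positive error and adds fuel
    err = lam - 1.0
    pi_int += KI * err
    fuel_cmd = float(np.clip(fuel_rl + KP * err + pi_int, 0.7, 1.3))

    imep, lam = toy_engine(fuel_cmd)
    next_state = discretize(imep)

    # cost: cycle-to-cycle IMEP variation plus deviation from stoichiometry
    cost = W_CCV * abs(next_state - state) + W_LAMBDA * abs(lam - 1.0)

    # standard Q-learning update with reward = -cost
    Q[state, a] += ALPHA * (-cost + GAMMA * Q[next_state].max() - Q[state, a])
    state = next_state
```

In this reading, the lambda penalty appears both in the cost and as a PI trim, loosely mirroring the two-loop experiment; the single-controller variant described in the abstract would correspond to dropping the PI trim and relying on the (modified) cost function alone.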

Updated: 2024-01-31