Research on Medical Dialogue Generation of External Knowledge Cover
By: Na Liu, Xiaohui Su, and Feng Huang
Open Access | Mar 2024

References

  1. QIN Libo, LI Zhouyang, LOU Jieming, YU Qiying, CHI Wanxiang. Review of research progress on natural language generation in task-based dialogue systems [J]. Journal of Chinese Information Processing, 2022.
  2. ZHANG Xiaoyu, LI Dongdong, REN Pengjie, CHEN Zhumin, MA Jun, REN Zhaochun. Knowledge-aware medical dialogue generation based on memory network [J]. Computer Research and Development, 2022.
  3. Wen T H, Gasic M, Kim D, et al. Stochastic language generation in dialogue using recurrent neural networks with convolutional sentence reranking [J]. 2015.
  4. Wen T H, Gasic M, Mrksic N, et al. Semantically conditioned LSTM-based natural language generation for spoken dialogue systems [J]. arXiv preprint arXiv:1508.01745, 2015.
  5. Dušek O, Jurčíček F. Sequence-to-sequence generation for spoken dialogue via deep syntax trees and strings [J]. 2016.
  6. Dušek O, Jurčíček F. A context-aware natural language generator for dialogue systems [J]. 2016.
  7. Tran V K, Nguyen L M. Neural-based natural language generation in dialogue using RNN encoder-decoder with semantic aggregation [J]. 2017.
  8. Wei Z, Liu Q, Peng B, et al. Task-oriented dialogue system for automatic diagnosis[C]//Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). 2018: 201–207.
  9. Su S Y, Huang C W, Chen Y N. Dual supervised learning for natural language understanding and generation [J]. 2019.
  10. Peng B, Zhu C, Li C, et al. Few-shot natural language generation for task-oriented dialog [J]. 2020.
  11. Li Y, Yao K. Interpretable NLG for task-oriented dialogue systems with heterogeneous rendering machines[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2021, 35(15): 13306–13314.
  12. Sutskever I, Vinyals O, Le Q V. Sequence to sequence learning with neural networks [J]. Advances in neural information processing systems, 2014, 27.
  13. Radford A, Narasimhan K, Salimans T, et al. Improving language understanding by generative pre-training [J]. 2018.
  14. Radford A, Wu J, Child R, et al. Language models are unsupervised multitask learners [J]. OpenAI Blog, 2019, 1(8): 9.
Language: English
Page range: 26 - 34
Published on: Mar 15, 2024
Published by: Xi’an Technological University
In partnership with: Paradigm Publishing Services
Publication frequency: 4 issues per year

© 2024 Na Liu, Xiaohui Su, Feng Huang, published by Xi’an Technological University
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.