
Yile Wang (王祎乐)


Institute for AI Industry Research (AIR),
Tsinghua University, China

Email: wangyile AT air.tsinghua.edu.cn

Yile Wang is currently a postdoctoral researcher at the Institute for AI Industry Research (AIR), Tsinghua University, working with Prof. Yang Liu. He received his Ph.D. in Computer Science from Zhejiang University (2018-2022), advised by Prof. Yue Zhang. Before that, he received his B.S. and M.S. degrees from Zhejiang University and worked at Huawei Technologies Co., Ltd. His primary research interests include artificial intelligence, natural language processing, and large language models.


Publications

(* indicates equal contribution)

  • Speak It Out: Solving Symbol-Related Problems with Symbol-to-Language Conversion for Language Models. [arxiv] [bib]
         Yile Wang, Sijie Cheng, Zixin Sun, Peng Li, and Yang Liu.
         To appear in ICLR AGI Workshop 2024
  • DEEM: Dynamic Experienced Expert Modeling for Stance Detection. [arxiv] [bib]
         Xiaolong Wang*, Yile Wang*, Sijie Cheng, Peng Li, and Yang Liu.
         To appear in Proceedings of LREC-COLING 2024
  • Reasoning in Conversation: Solving Subjective Tasks through Dialogue Simulation for Large Language Models. [arxiv] [bib]
         Xiaolong Wang*, Yile Wang*, Yuanchi Zhang, Fuwen Luo, Peng Li, Maosong Sun, and Yang Liu.
         arXiv preprint 2024
  • Enhancing Multilingual Capabilities of Large Language Models through Self-Distillation from Resource-Rich Languages. [arxiv] [bib]
         Yuanchi Zhang, Yile Wang, Zijun Liu, Shuo Wang, Xiaolong Wang, Peng Li, Maosong Sun, and Yang Liu.
         arXiv preprint 2024
  • Towards Unified Alignment Between Agents, Humans, and Environment. [arxiv] [bib]
         Zonghan Yang, An Liu, Zijun Liu, Kaiming Liu, Fangzhou Xiong, Yile Wang, Zeyuan Yang, Qingyuan Hu, Xinrui Chen, Zhenhe Zhang, Fuwen Luo, Zhicheng Guo, Peng Li, and Yang Liu.
         To appear in ICLR LLM Agents Workshop 2024
  • Personal LLM Agents: Insights and Survey about the Capability, Efficiency and Security. [arxiv] [bib]
         Yuanchun Li, Hao Wen, Weijun Wang, Xiangyu Li, Yizhen Yuan, Guohong Liu, Jiacheng Liu, Wenxing Xu, Xiang Wang,
         Yi Sun, Rui Kong, Yile Wang, Hanfei Geng, Jian Luan, Xuefeng Jin, Zilong Ye, Guanjing Xiong, Fan Zhang, Xiang Li,
         Mengwei Xu, Zhijun Li, Peng Li, Yang Liu, Ya-Qin Zhang, and Yunxin Liu.
         arXiv preprint 2024
  • Self-Knowledge Guided Retrieval Augmentation for Large Language Models. [pdf] [bib]
         Yile Wang, Peng Li, Maosong Sun and Yang Liu.
         In Findings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP Findings 2023)
  • YATO: Yet Another deep learning based Text analysis Open toolkit. [pdf] [bib]
         Zeqiang Wang*, Yile Wang*, Jiageng Wu, Zhiyang Teng and Jie Yang.
         In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations (EMNLP System Demonstrations 2023)
  • CVT-SLR: Contrastive Visual-Textual Transformation for Sign Language Recognition with Variational Alignment. [pdf] [bib]
         Jiangbin Zheng, Yile Wang, Cheng Tan, Siyuan Li, Ge Wang, Jun Xia, Yidong Chen and Stan Z. Li.
         In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2023)
  • Prompt-Guided Retrieval Augmentation for Non-Knowledge-Intensive Tasks. [pdf] [bib]
         Zhicheng Guo, Sijie Cheng, Yile Wang, Peng Li and Yang Liu.
         In Findings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL Findings 2023)
  • Gradual Syntactic Label Replacement for Language Model Pre-Training. [pdf] [bib]
         Yile Wang, Yue Zhang, Peng Li and Yang Liu.
         In IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP 2023)
  • Lost in Context? On the Sense-wise Variance of Contextualized Word Embeddings. [pdf] [bib]
         Yile Wang and Yue Zhang.
         In IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP 2023)
  • Pre-Training a Graph Recurrent Network for Language Representation. [pdf] [video] [bib]
         Yile Wang, Linyi Yang, Zhiyang Teng, Ming Zhou and Yue Zhang.
         In the Second Workshop on Efficient Natural Language and Speech Processing (NeurIPS ENLSP Workshop 2022)
  • Using Context-to-Vector with Graph Retrofitting to Improve Word Embeddings. [pdf] [bib]
         Jiangbin Zheng*, Yile Wang*, Ge Wang, Jun Xia, Yufei Huang, Guojiang Zhao, Yue Zhang and Stan Z. Li.
         In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022)
  • Can Offline Reinforcement Learning Help Natural Language Understanding? [arxiv] [bib]
         Ziqi Zhang*, Yile Wang*, Yue Zhang and Donglin Wang.
         arXiv preprint 2022
  • Improving Skip-Gram Embeddings Using BERT. [pdf] [bib]
         Yile Wang, Leyang Cui and Yue Zhang.
         In IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP 2021)
  • Does Chinese BERT Encode Word Structure? [pdf] [bib]
         Yile Wang, Leyang Cui and Yue Zhang.
         In Proceedings of the 28th International Conference on Computational Linguistics (COLING 2020)
  • LogiQA: A Challenge Dataset for Machine Reading Comprehension with Logical Reasoning. [pdf] [bib]
         Jian Liu, Leyang Cui, Hanmeng Liu, Dandan Huang, Yile Wang and Yue Zhang.
         In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI 2020)
  • Lattice LSTM for Chinese Sentence Representation. [pdf] [bib]
         Yue Zhang*, Yile Wang* and Jie Yang*.
         In IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP 2020)
  • Deep reconstruction model for dynamic PET images. [pdf] [bib]
         Jianan Cui, Xin Liu, Yile Wang and Huafeng Liu.
         In PLOS ONE 2017

Professional Services

  • Area Chair: EMNLP (2022)
  • Reviewer: EMNLP (2023), COLING (2022, 2024), AAAI (2023, 2024), ENLSP (2022), ACL Rolling Review (2021, 2022, 2023)

Grants

  • National Natural Science Foundation of China Youth Program: Retrieval-augmented Methods based on Continuous Knowledge Base. Grant No. 62306161, 2024.01-2025.12.


(Last updated on Mar. 12, 2024)