Hi there! I am a Master Student at the Zhejiang University.
Currently, my research is centered on trustworthy LLMs, with a specific emphasis on improving the reliability and robustness of LLM agents, LLM-based multi-agent systems (LLM-MAS), and large reasoning models. I'm especially interested in mitigating risks such as prompt injection in these systems.
Feel free to contact me if you are interested in my research!
") does not match the recommended repository name for your site ("
").
", so that your site can be accessed directly at "http://
".
However, if the current repository name is intended, you can ignore this message by removing "{% include widgets/debug_repo_name.html %}
" in index.html
.
",
which does not match the baseurl
("
") configured in _config.yml
.
baseurl
in _config.yml
to "
".
Hengyu An, Jinghuai Zhang, Tianyu Du, Chunyi Zhou, Qingming Li, Tao Lin, Shouling Ji
Empirical Methods in Natural Language Processing (EMNLP) 2025 Poster
LLM agents face Indirect Prompt Injection (IPI) when using tools with untrusted data, as hidden instructions can make them perform malicious actions. Our new defense, IPIGuard, significantly enhances agent security against these attacks by separating action planning from external data interaction using a Tool Dependency Graph (TDG).
Hengyu An, Jinghuai Zhang, Tianyu Du, Chunyi Zhou, Qingming Li, Tao Lin, Shouling Ji
Empirical Methods in Natural Language Processing (EMNLP) 2025 Poster
LLM agents face Indirect Prompt Injection (IPI) when using tools with untrusted data, as hidden instructions can make them perform malicious actions. Our new defense, IPIGuard, significantly enhances agent security against these attacks by separating action planning from external data interaction using a Tool Dependency Graph (TDG).