All publications

2025
Yu, Z., Belinkov, Y. and Ananiadou, S., Back Attention: Understanding and Enhancing Multi-Hop Reasoning in Large Language Models, in: Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 11257–11272, 2025
[DOI]
[URL]
Kabir, M., Abrar, A. and Ananiadou, S., Break the Checkbox: Challenging Closed-Style Evaluations of Cultural Alignment in LLMs, in: Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 24–51, 2025
[DOI]
[URL]
Liu, Z., Thompson, P., Rong, J. and Ananiadou, S., ConspEmoLLM-v2: A robust and stable model to detect sentiment-transformed conspiracy theories, in: Proceedings of the 14th Conference on Prestigious Applications of Intelligent Systems (PAIS-2025), pages 5311–5318, 2025
[URL]
Yano, K., Luo, Z., Huang, J., Xie, Q., Asada, M., Yuan, C., Yang, K., Miwa, M., Ananiadou, S. and Tsujii, J., ELAINE-medLLM: Lightweight English Japanese Chinese Trilingual Large Language Model for Bio-medical Domain, in: Proceedings of the 31st International Conference on Computational Linguistics (COLING 2025), pages 4670–4688, 2025
[URL]
Luo, Z., Yuan, C., Xie, Q. and Ananiadou, S., EMPEC: A Comprehensive Benchmark for Evaluating Large Language Models Across Diverse Healthcare Professions, in: Findings of the Association for Computational Linguistics: ACL 2025, pages 9945–9958, 2025
[DOI]
[URL]
Soufleri, E. and Ananiadou, S., Enhancing Stress Detection on Social Media Through Multi-Modal Fusion of Text and Synthesized Visuals, in: Proceedings of the 24th Workshop on Biomedical Language Processing (BioNLP), pages 34–43, 2025
[DOI]
[URL]
Liu, Z., Wang, K., Bao, Z., Zhang, X., Dong, J., Yang, K., Kabir, M., Giannouris, P., Xing, R., Park, S., Kim, J., Li, D., Xie, Q. and Ananiadou, S., FinNLP-FNP-LLMFinLegal-2025 Shared Task: Financial Misinformation Detection Challenge Task, in: Proceedings of the Joint Workshop of the 9th Financial Technology and Natural Language Processing (FinNLP), the 6th Financial Narrative Processing (FNP), and the 1st Workshop on Large Language Models for Finance and Legal (LLMFinLegal), pages 271–276, 2025
[URL]
Liu, Z., Zhang, X., Yang, K., Xie, Q., Huang, J. and Ananiadou, S., FMDLlama: Financial Misinformation Detection Based on Large Language Models, in: Proceedings of the ACM on Web Conference 2025, pages 1153–1157, 2025
[DOI]
[URL]
Kabir, M., Tahsin, T. and Ananiadou, S., From n-gram to Attention: How Model Architectures Learn and Propagate Bias in Language Modeling, in: Findings of the Association for Computational Linguistics: EMNLP 2025, pages 18478–18498, 2025
[DOI]
[URL]
Yano, K., Miwa, M. and Ananiadou, S., IRIS: Rapid Curation Framework for Iterative Improvement of Noisy Named Entity Annotations, in: Proceedings of the International Conference on Applications of Natural Language to Information Systems, pages 58–69, 2025
[DOI]
[URL]
Yu, Z. and Ananiadou, S., Locate-then-Merge: Neuron-Level Parameter Fusion for Mitigating Catastrophic Forgetting in Multimodal LLMs, in: Findings of the Association for Computational Linguistics: EMNLP 2025, pages 7065–7078, 2025
[DOI]
[URL]
Peng, X., Papadopoulos, T., Soufleri, E., Giannouris, P., Xiang, R., Wang, Y., Qian, L., Huang, J., Xie, Q. and Ananiadou, S., Plutus: Benchmarking Large Language Models in Low-Resource Greek Finance, in: Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 30176–30202, 2025
[DOI]
[URL]