All publications

In Press
Yu, Z., Belinkov, Y. and Ananiadou, S., Back Attention: Understanding and Enhancing Multi-Hop Reasoning in Large Language Models, in: Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP), In Press
[URL]
Kabir, M., Abrar, A. and Ananiadou, S., Break the Checkbox: Challenging Closed-Style Evaluations of Cultural Alignment in LLMs, in: Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP), In Press
[URL]
Liu, Z., Thompson, P., Rong, J. and Ananiadou, S., ConspEmoLLM-v2: A robust and stable model to detect sentiment-transformed conspiracy theories, in: Proceedings of the 14th Conference on Prestigious Applications of Intelligent Systems (PAIS-2025), In Press
[URL]
Kabir, M., Tahsin, T. and Ananiadou, S., From n-gram to Attention: How Model Architectures Learn and Propagate Bias in Language Modeling, in: Findings of the Association for Computational Linguistics: EMNLP 2025, In Press
[URL]
Yu, Z. and Ananiadou, S., Locate-then-Merge: Neuron-Level Parameter Fusion for Mitigating Catastrophic Forgetting in Multimodal LLMs, in: Findings of the Association for Computational Linguistics: EMNLP 2025, In Press
[URL]
Peng, X., Papadopoulos, T., Soufleri, E., Giannouris, P., Xiang, R., Wang, Y., Qian, L., Huang, J., Xie, Q. and Ananiadou, S., Plutus: Benchmarking Large Language Models in Low-Resource Greek Finance, in: Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP), In Press
[URL]
Yang, K., Liu, Z., Xie, Q., Huang, J., Min, E. and Ananiadou, S., Selective Preference Optimization via Token-Level Reward Function Estimation, in: Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP), In Press
[URL]
Zhang, X., Wei, Q., Zhu, Y., Wu, F. and Ananiadou, S., THCM-CAL: Temporal-Hierarchical Causal Modelling with Conformal Calibration for Clinical Risk Prediction, in: Findings of the Association for Computational Linguistics: EMNLP 2025, In Press
[URL]
2025
Yano, K., Luo, Z., Huang, J., Xie, Q., Asada, M., Yuan, C., Yang, K., Miwa, M., Ananiadou, S. and Tsujii, J., ELAINE-medLLM: Lightweight English Japanese Chinese Trilingual Large Language Model for Bio-medical Domain, in: Proceedings of the 31st International Conference on Computational Linguistics (COLING 2025), pages 4670–4688, 2025
[URL]
Luo, Z., Yuan, C., Xie, Q. and Ananiadou, S., EMPEC: A Comprehensive Benchmark for Evaluating Large Language Models Across Diverse Healthcare Professions, in: Findings of the Association for Computational Linguistics: ACL 2025, pages 9945–9958, 2025
[DOI]
[URL]
Soufleri, E. and Ananiadou, S., Enhancing Stress Detection on Social Media Through Multi-Modal Fusion of Text and Synthesized Visuals, in: Proceedings of the 24th Workshop on Biomedical Language Processing (BioNLP), pages 34–43, 2025
[DOI]
[URL]
Liu, Z., Wang, K., Bao, Z., Zhang, X., Dong, J., Yang, K., Kabir, M., Giannouris, P., Xing, R., Park, S., Kim, J., Li, D., Xie, Q. and Ananiadou, S., FinNLP-FNP-LLMFinLegal-2025 Shared Task: Financial Misinformation Detection Challenge Task, in: Proceedings of the Joint Workshop of the 9th Financial Technology and Natural Language Processing (FinNLP), the 6th Financial Narrative Processing (FNP), and the 1st Workshop on Large Language Models for Finance and Legal (LLMFinLegal), pages 271–276, 2025
[URL]
Zhang, X., Wei, Q., Zhu, Y., Zhang, L., Zhou, D. and Ananiadou, S., SynGraph: A Dynamic Graph-LLM Synthesis Framework for Sparse Streaming User Sentiment Modeling, in: Findings of the Association for Computational Linguistics: ACL 2025, pages 16338–16356, 2025
[DOI]
[URL]
2024