Ananiadou, S.

Publications of Ananiadou, S. sorted by first author


Y

Yang, K., Liu, Z., Xie, Q., Huang, J., Min, E. and Ananiadou, S., Selective Preference Optimization via Token-Level Reward Function Estimation, in: Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7032–7056, 2025
Yang, K., Liu, Z., Xie, Q., Huang, J., Zhang, T. and Ananiadou, S., MetaAligner: Towards Generalizable Multi-Objective Alignment of Language Models, in: Proceedings of the Thirty-Eighth Annual Conference on Neural Information Processing Systems (NeurIPS), 2024
Yang, K., Zhang, T. and Ananiadou, S., Disentangled Variational Autoencoder for Emotion Recognition in Conversations, in: IEEE Transactions on Affective Computing, pages 1–12, 2023
Yano, K., Luo, Z., Huang, J., Xie, Q., Asada, M., Yuan, C., Yang, K., Miwa, M., Ananiadou, S. and Tsujii, J., ELAINE-medLLM: Lightweight English Japanese Chinese Trilingual Large Language Model for Bio-medical Domain, in: Proceedings of the 31st International Conference on Computational Linguistics (COLING 2025), pages 4670–4688, 2025
Yano, K., Miwa, M. and Ananiadou, S., IRIS: Rapid Curation Framework for Iterative Improvement of Noisy Named Entity Annotations, in: Proceedings of the International Conference on Applications of Natural Language to Information Systems, pages 58–69, 2025
Yu, Z. and Ananiadou, S., Locate-then-Merge: Neuron-Level Parameter Fusion for Mitigating Catastrophic Forgetting in Multimodal LLMs, in: Findings of the Association for Computational Linguistics: EMNLP 2025, pages 7065–7078, 2025
Yu, Z. and Ananiadou, S., How do Large Language Models Learn In-Context? Query and Key Matrices of In-Context Heads are Two Towers for Metric Learning, in: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 3281–3292, 2024
Yu, Z. and Ananiadou, S., Interpreting Arithmetic Mechanism in Large Language Models through Comparative Neuron Analysis, in: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 3293–3306, 2024
Yu, Z. and Ananiadou, S., Neuron-Level Knowledge Attribution in Large Language Models, in: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 3267–3280, 2024
Yu, Z., Belinkov, Y. and Ananiadou, S., Back Attention: Understanding and Enhancing Multi-Hop Reasoning in Large Language Models, in: Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 11257–11272, 2025
Yuan, C., Xie, Q. and Ananiadou, S., Zero-shot Temporal Relation Extraction with ChatGPT, in: Proceedings of BioNLP 2023, pages 92–102, 2023
Yuan, C., Xie, Q., Huang, J. and Ananiadou, S., Back to the Future: Towards Explainable Temporal Reasoning with Large Language Models, in: Proceedings of the ACM on Web Conference 2024 (WWW '24), pages 1963–1974, 2024

Z

Zerva, C. and Ananiadou, S., Paths for uncertainty: Exploring the intricacies of uncertainty identification for news, in: Proceedings of the NAACL Workshop on Computational Semantics Beyond Events and Roles (SemBEaR), pages 6–20, 2018