Ananiadou, S.
First name(s): S.
Last name(s): Ananiadou

Publications by Ananiadou, S., sorted by first author


Y

Yang, K., Liu, Z., Xie, Q., Huang, J., Min, E. and Ananiadou, S., Selective Preference Optimization via Token-Level Reward Function Estimation, in: Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7032–7056, 2025
[DOI]
[URL]
Yang, K., Liu, Z., Xie, Q., Huang, J., Zhang, T. and Ananiadou, S., MetaAligner: Towards Generalizable Multi-Objective Alignment of Language Models, in: Proceedings of the Thirty-Eighth Annual Conference on Neural Information Processing Systems (NeurIPS), 2024
[URL]
Yang, K., Zhang, T. and Ananiadou, S., Disentangled Variational Autoencoder for Emotion Recognition in Conversations, in: IEEE Transactions on Affective Computing, pages 1–12, 2023
[DOI]
[URL]
Yano, K., Luo, Z., Huang, J., Xie, Q., Asada, M., Yuan, C., Yang, K., Miwa, M., Ananiadou, S. and Tsujii, J., ELAINE-medLLM: Lightweight English Japanese Chinese Trilingual Large Language Model for Bio-medical Domain, in: Proceedings of the 31st International Conference on Computational Linguistics (COLING 2025), pages 4670–4688, 2025
[URL]
Yano, K., Miwa, M. and Ananiadou, S., IRIS: Rapid Curation Framework for Iterative Improvement of Noisy Named Entity Annotations, in: Proceedings of the International Conference on Applications of Natural Language to Information Systems, pages 58–69, 2025
[DOI]
[URL]
Yu, Z. and Ananiadou, S., Locate-then-Merge: Neuron-Level Parameter Fusion for Mitigating Catastrophic Forgetting in Multimodal LLMs, in: Findings of the Association for Computational Linguistics: EMNLP 2025, pages 7065–7078, 2025
[DOI]
[URL]
Yu, Z. and Ananiadou, S., Neuron-Level Knowledge Attribution in Large Language Models, in: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 3267–3280, 2024
[DOI]
[URL]
Yu, Z. and Ananiadou, S., Interpreting Arithmetic Mechanism in Large Language Models through Comparative Neuron Analysis, in: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 3293–3306, 2024
[DOI]
[URL]
Yu, Z. and Ananiadou, S., How do Large Language Models Learn In-Context? Query and Key Matrices of In-Context Heads are Two Towers for Metric Learning, in: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 3281–3292, 2024
[DOI]
[URL]
Yu, Z., Belinkov, Y. and Ananiadou, S., Back Attention: Understanding and Enhancing Multi-Hop Reasoning in Large Language Models, in: Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 11257–11272, 2025
[DOI]
[URL]
Yuan, C., Xie, Q. and Ananiadou, S., Zero-shot Temporal Relation Extraction with ChatGPT, in: Proceedings of BioNLP 2023, pages 92–102, 2023
[URL]
Yuan, C., Xie, Q., Huang, J. and Ananiadou, S., Back to the Future: Towards Explainable Temporal Reasoning with Large Language Models, in: Proceedings of the ACM on Web Conference 2024 (WWW '24), pages 1963–1974, 2024
[URL]

Z

Zerva, C. and Ananiadou, S., Paths for uncertainty: Exploring the intricacies of uncertainty identification for news, in: Proceedings of the NAACL Workshop on Computational Semantics Beyond Events and Roles (SemBEaR), pages 6–20, 2018
[URL]