Ananiadou, S.
First name(s): S.
Last name(s): Ananiadou

Publications by Ananiadou, S., sorted by recency

Yang, K., Liu, Z., Xie, Q., Huang, J., Min, E. and Ananiadou, S., Selective Preference Optimization via Token-Level Reward Function Estimation, in: Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP), In Press
[URL]
Zhang, X., Wei, Q., Zhu, Y., Wu, F. and Ananiadou, S., THCM-CAL: Temporal-Hierarchical Causal Modelling with Conformal Calibration for Clinical Risk Prediction, in: Findings of the Association for Computational Linguistics: EMNLP 2025, In Press
[URL]
Soufleri, E. and Ananiadou, S., Enhancing Stress Detection on Social Media Through Multi-Modal Fusion of Text and Synthesized Visuals, in: Proceedings of the 24th Workshop on Biomedical Language Processing (BioNLP), pages 34–43, 2025
[DOI]
[URL]
Yu, Z. and Ananiadou, S., Locate-then-Merge: Neuron-Level Parameter Fusion for Mitigating Catastrophic Forgetting in Multimodal LLMs, in: Findings of the Association for Computational Linguistics: EMNLP 2025, In Press
[URL]
Liu, Z., Thompson, P., Rong, J. and Ananiadou, S., ConspEmoLLM-v2: A robust and stable model to detect sentiment-transformed conspiracy theories, in: Proceedings of the 14th Conference on Prestigious Applications of Intelligent Systems (PAIS-2025), In Press
[URL]
Kabir, M., Tahsin, T. and Ananiadou, S., From n-gram to Attention: How Model Architectures Learn and Propagate Bias in Language Modeling, in: Findings of the Association for Computational Linguistics: EMNLP 2025, In Press
[URL]
Zhang, X., Wei, Q., Zhu, Y., Zhang, L., Zhou, D. and Ananiadou, S., SynGraph: A Dynamic Graph-LLM Synthesis Framework for Sparse Streaming User Sentiment Modeling, in: Findings of the Association for Computational Linguistics: ACL 2025, pages 16338–16356, 2025
[DOI]
[URL]
Peng, X., Papadopoulos, T., Soufleri, E., Giannouris, P., Xiang, R., Wang, Y., Qian, L., Huang, J., Xie, Q. and Ananiadou, S., Plutus: Benchmarking Large Language Models in Low-Resource Greek Finance, in: Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP), In Press
[URL]
Yu, Z., Belinkov, Y. and Ananiadou, S., Back Attention: Understanding and Enhancing Multi-Hop Reasoning in Large Language Models, in: Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP), In Press
[URL]
Kabir, M., Abrar, A. and Ananiadou, S., Break the Checkbox: Challenging Closed-Style Evaluations of Cultural Alignment in LLMs, in: Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP), In Press
[URL]
Luo, Z., Yuan, C., Xie, Q. and Ananiadou, S., EMPEC: A Comprehensive Benchmark for Evaluating Large Language Models Across Diverse Healthcare Professions, in: Findings of the Association for Computational Linguistics: ACL 2025, pages 9945–9958, 2025
[DOI]
[URL]
Liu, Z., Wang, K., Bao, Z., Zhang, X., Dong, J., Yang, K., Kabir, M., Giannouris, P., Xing, R., Park, S., Kim, J., Li, D., Xie, Q. and Ananiadou, S., FinNLP-FNP-LLMFinLegal-2025 Shared Task: Financial Misinformation Detection Challenge Task, in: Proceedings of the Joint Workshop of the 9th Financial Technology and Natural Language Processing (FinNLP), the 6th Financial Narrative Processing (FNP), and the 1st Workshop on Large Language Models for Finance and Legal (LLMFinLegal), pages 271–276, 2025
[URL]
Yano, K., Luo, Z., Huang, J., Xie, Q., Asada, M., Yuan, C., Yang, K., Miwa, M., Ananiadou, S. and Tsujii, J., ELAINE-medLLM: Lightweight English Japanese Chinese Trilingual Large Language Model for Bio-medical Domain, in: Proceedings of the 31st International Conference on Computational Linguistics (COLING 2025), pages 4670–4688, 2025
[URL]
Yu, Z. and Ananiadou, S., Interpreting Arithmetic Mechanism in Large Language Models through Comparative Neuron Analysis, in: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 3293–3306, 2024
[DOI]
[URL]