BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//pretalx//conference-hub.linguistic-society.com//athens-2026//talk
 //WKVNAE
BEGIN:VEVENT
UID:pretalx-athens-2026-WKVNAE@conference-hub.linguistic-society.com
DTSTART:20260424T124500Z
DTEND:20260424T130000Z
DESCRIPTION:Abstract\nThis study explores the extent to which Large Languag
 e Models (LLMs) can accurately analyze and compare\ninflectional morphosyn
 tactic features and derivational patterns in English and Greek — two typ
 ologically\ndivergent systems. While English is predominantly analytic\, G
 reek exhibits a rich fusional inventory of\nmorphosyntactic features encod
 ed inflectionally\, including tense\, aspect\, number\, case\, gender\, an
 d person.\nThe central research question is whether LLMs can reliably iden
 tify\, categorize\, and disambiguate these\nfeatures across the two system
 s and how their performance on inflectional paradigms interacts with their
 \nhandling of derivational morphology. A secondary focus concerns the degr
 ee to which morphosyntactic\nfeature encoding constrains or assists LLMs i
 n recognizing derivational processes and their productivity.\nThe study hy
 pothesizes that the typological mismatch between English and Greek exposes
  systematic gaps\nin LLM morphological competence\, particularly in the pr
 ocessing of inflectionally dense paradigms and\nderivationally complex lex
 emes.\nMethodology\nThe study adopts a mixed-methods design integrating th
 eoretical morphological analysis with\ncomputational modeling and empirica
 l evaluation. Two annotated corpora — one for English and one for\nGreek
  — are constructed from diverse text sources\, with words tagged for mor
 phosyntactic feature values\,\nderivational patterns (prefixation\, suffix
 ation)\, and morphological complexity ranging from transparent to\nopaque 
 forms. Complex phenomena receiving special attention include syncretism\, 
 allomorphy\, suppletion\,\nand morphosemantic ambiguity — all of which p
 ose well- documented challenges for both human parsers\nand computational 
 models. State-of-the-art LLMs (GPT-5.1\, Gemini 3\, Claude 4.6\, and Perpl
 exity 4.5) are\nevaluated on the annotated datasets using standard metrics
  (precision\, recall\, F1-score)\, complemented by\nnovel morphology-sens
 itive metrics developed specifically for LLM morphological evaluation: a
 \nMorphosyntactic Context Sensitivity metric\, a Morphological Complexity S
 core\, and a Morpheme Accuracy\nMetric\, among others. Supervised fine-tun
 ing\, Reinforcement Learning from Human Feedback (RLHF)\, and prompt engin
 eering are further explored as strategies for improving model performance.\
 nResults\nEvaluation results reveal that LLMs perform inconsistently acros
 s morphosyntactic feature categories\, with\nthe greatest difficulties eme
 rging in Greek paradigms characterized by high degrees of syncretism and\n
 morphophonological alternation (e.g.\, inactive phonological phenomena\, a
 llomorphy). In derivational\nanalysis\, models tend to rely on surface ana
 logical patterns rather than rule-governed morphological\noperations\, lea
 ding to systematic errors in disambiguating derivation from inflection and
  in correctly\nidentifying the base and affix structure of complex words. 
 Cross-linguistic comparison further confirms\nthat morphosyntactic typolo
 gy significantly affects LLM generalization\, with Greek consistently yiel
 ding\nlower accuracy scores than English across all evaluation metrics.\nC
 onclusions\nThe findings demonstrate that current LLMs lack robust morphos
 yntactic feature representations and that\ntheir handling of inflection an
 d derivation falls short of linguistically informed analysis. Crucially\, 
 the study\nshows that targeted fine-tuning on morphologically annotated da
 ta — particularly for feature-rich languages\nlike Greek — can meaning
 fully improve performance. These results have direct implications for the 
 design\nof NLP tools in morphologically complex languages and call for evalu
 ation frameworks that foreground\nmorphosyntactic adequacy rather than sur
 face fluency alone.
DTSTAMP:20260419T083025Z
LOCATION:Online Session
SUMMARY:Decoding Morphosyntax: Can LLMs Handle Inflection and Derivation in
  English and Greek? - Athanasios Karasimos
URL:https://conference-hub.linguistic-society.com/athens-2026/talk/WKVNAE/
END:VEVENT
END:VCALENDAR
