Tutorial Outline
This is a half-day tutorial combining formal exposition and interactive discussion.
Part I: Distributing Meaning and Compositionality (90 min)
1. Motivation (15 min): Composition as the cornerstone of semantics. Why a distributed view is needed in light of neural models (Pavlick, 2022; Boleda and Korhonen, 2025).
2. Historical and Formal Background (20 min): From Frege and Montague to dynamic and frame-based semantics. How event structure, argument typing, and qualia distribute semantic information.
3. Current Computational Landscape (25 min): Overview of neural and distributional models; LLMs as emergent cognitive architectures (Pavlick, 2022; Piantadosi, 2024). Comparative discussion of composition across architectures.
4. Conceptual Synthesis (20 min): From distributed representation to distributed composition. Identify the theoretical gap and motivate the Generative Lexicon (GL) approach as a unifying formalism.
Part II: Dual-Aspect Theory in Practice (90 min)
1. Introducing Dual-Aspect Semantics (25 min): Define the dual-aspect pair ⟨S_sit, S_obj⟩, separating situation-level (eventive) from object-level meaning. Show how updates to the two components interact during composition.
2. Distributed Compositionality Mechanisms (30 min): Work through examples of coercion and co-composition, illustrating redistribution across argument, event, and qualia structures; connect to implicit neural updates.
3. Computational Demonstration (25 min): Role-filler binding with vector operations (circular convolution in the style of Holographic Reduced Representations, HRR). Provide notebooks showing how GL rules generate distributed vectors.
4. Applications and Discussion (10 min): Relate GL to natural language inference (NLI), event and state tracking, and the broader symbolic-distributed synthesis.
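The role-filler binding mechanism named in Part II.3 can be sketched in a few lines of NumPy. This is a minimal illustration, not the tutorial's actual notebook code; the role and filler names (AGENT, PATIENT, mary, book) are hypothetical examples. It uses the standard HRR recipe: bind a role to a filler by circular convolution, superpose bindings by addition, and decode with the approximate (involution) inverse.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024  # high dimensionality keeps decoding noise small

def vec():
    # Random HRR vector with elements ~ N(0, 1/n), so expected norm ~ 1.
    return rng.normal(0, 1 / np.sqrt(n), n)

def bind(a, b):
    # Circular convolution, computed efficiently via FFT.
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

def inv(a):
    # Approximate inverse (involution): inv(a)[i] = a[-i mod n].
    return np.concatenate(([a[0]], a[:0:-1]))

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Encode "mary reads a book" as a superposition of role-filler bindings.
agent, patient, mary, book = vec(), vec(), vec(), vec()
sentence = bind(agent, mary) + bind(patient, book)

# Decoding the AGENT role recovers a noisy copy of `mary`.
decoded = bind(sentence, inv(agent))
print(cosine(decoded, mary))  # high similarity
print(cosine(decoded, book))  # near zero
```

Because binding distributes the filler across every dimension of the result, no single unit carries "mary": the recovered structure lives in the similarity geometry, which is the point of contact with the tutorial's distributed-composition theme.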