The proposed project addresses the development of an accurate probabilistic model of syntactic ambiguity resolution in natural language processing. The new model departs from current practice in performance modeling: it starts from corpora annotated according to a weak linguistic theory and aims to induce, by machine learning techniques, further syntactic constraints that the theory does not anticipate. Modern, advanced linguistic theories serve as a source of inspiration for the new model. More specifically, the project focuses on how to model, by automatic means, some of the implicitly available lexical constraints over phrase-structure annotations within a memory-based performance model of human syntactic processing. The proposed model uses highly context- and lexically sensitive representations (called Graph-Symbols) and distributional similarity-based algorithms to approximate the influence of these implicit constraints. The research lies at the interdisciplinary meeting point of linguistic theory, probabilistic modeling, and machine learning. It has an important empirical component and touches upon technological issues.
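
The distributional-similarity component mentioned above can be illustrated with a minimal sketch: words are represented as co-occurrence count vectors over their neighbors in a corpus, and similarity is measured as the cosine between those vectors. This is a generic illustration of the standard technique, not the project's actual Graph-Symbol representation or algorithm; the window size, toy corpus, and function names are assumptions for the example.

```python
from collections import Counter
from math import sqrt

def cooccurrence_vectors(sentences, window=2):
    """Build a sparse co-occurrence count vector for each word.

    Each word is mapped to a Counter over the words appearing
    within `window` positions of it in any sentence.
    """
    vecs = {}
    for sent in sentences:
        for i, word in enumerate(sent):
            context = sent[max(0, i - window):i] + sent[i + 1:i + 1 + window]
            vecs.setdefault(word, Counter()).update(context)
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm_u = sqrt(sum(c * c for c in u.values()))
    norm_v = sqrt(sum(c * c for c in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# Toy corpus (hypothetical): "cat" and "dog" occur in similar contexts,
# so their co-occurrence vectors should be close in cosine terms.
sentences = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "a cat chased a dog".split(),
]
vecs = cooccurrence_vectors(sentences)
sim = cosine(vecs["cat"], vecs["dog"])
```

In a model of the kind proposed, similarities of this sort would let constraints observed for one lexical item be generalized to distributionally similar items, approximating lexical constraints that the annotation scheme leaves implicit.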