Max Louwerse, Zhiqiang Cai, Xiangen Hu, Mathew Ventura, and Patrick Jeuniaux, University of Memphis
Latent semantic analysis (LSA) is a statistical, corpus-based technique for representing knowledge. It has been used successfully in a variety of applications, including intelligent tutoring systems, essay grading, and coherence metrics. From its very introduction, LSA has been claimed to simulate aspects of human knowledge representation. This amodal, symbolic view immediately drew a rebuttal from cognitive scientists, who argued that LSA can never come close to human knowledge representation because it lacks the grounding in perceptual experience and action that characterizes human cognition. The ultimate test of the embodiment of LSA is to evaluate how well it operates on typically embodied dimensions such as spatiality and temporality. The results of such a test would have an impact on theories of meaning and knowledge representation, and would aid the development of natural language processing modules for applications that use these embodied dimensions. Six multidimensional scaling (MDS) representations are derived from the LSA cosines between spatial and temporal words. Results show that combining LSA with MDS reveals spatial and temporal relations that were previously hidden.
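The pipeline described above, deriving a spatial layout from pairwise LSA cosines, can be sketched with classical (Torgerson) MDS: cosine similarities are converted to dissimilarities, double-centered, and eigendecomposed. This is an illustrative sketch, not the authors' implementation; the word list and similarity values below are hypothetical stand-ins for real LSA cosines.

```python
import numpy as np

def classical_mds(similarity, n_dims=2):
    """Embed items in n_dims dimensions from a symmetric similarity matrix."""
    # Convert cosine similarities (1 = identical) to dissimilarities.
    d = 1.0 - similarity
    d2 = d ** 2
    n = d.shape[0]
    # Double-centering: B = -1/2 * J * D^2 * J, with J the centering matrix.
    j = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * j @ d2 @ j
    # Coordinates come from the top eigenvectors scaled by sqrt(eigenvalues).
    eigvals, eigvecs = np.linalg.eigh(b)
    top = np.argsort(eigvals)[::-1][:n_dims]
    scale = np.sqrt(np.maximum(eigvals[top], 0.0))
    return eigvecs[:, top] * scale

# Hypothetical cosine matrix for four spatial words (values invented).
words = ["up", "down", "left", "right"]
sim = np.array([
    [1.0, 0.6, 0.3, 0.3],
    [0.6, 1.0, 0.3, 0.3],
    [0.3, 0.3, 1.0, 0.6],
    [0.3, 0.3, 0.6, 1.0],
])
coords = classical_mds(sim, n_dims=2)  # one 2-D point per word
```

In the resulting configuration, word pairs with higher LSA cosines (e.g., the hypothetical "up"/"down" pair) end up closer together than pairs with lower cosines, which is how previously implicit spatial or temporal structure becomes visible in the MDS plot.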