So what is stopping computers from becoming as capable as humans at understanding natural language? Part of the problem lies in the variability and complexity of our sentences. Consider this excerpt from a movie review: “I was lured to see this on the promise of a smart, witty slice of old-fashioned fun and intrigue. I was conned.” Although it starts with some potentially positive words, it turns out to be a strongly negative review. Sentences like this might be somewhat entertaining for us, but computers tend to make mistakes when trying to analyze them.

But there is a bigger challenge that makes NLP harder than you think. Take a look at this sentence: “The sofa didn’t fit through the door because it was too narrow.” What does “it” refer to? Clearly, “it” refers to the door. Now consider a slight variation of this sentence: “The sofa didn’t fit through the door because it was too wide.” What does “it” refer to in this case? Here it’s the sofa. Think about it: to understand the proper meaning, or semantics, of the sentence, you implicitly applied your knowledge about the physical world, namely that wide things don’t fit through narrow openings. You may have experienced a similar situation before. You can imagine that there are countless other scenarios in which some knowledge or context is indispensable for correctly understanding what is being said.
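To see why surface cues alone are not enough, here is a minimal, deliberately naive sketch in Python: a heuristic that resolves “it” to the nearest preceding noun. The `resolve_it_naively` function and the tiny hand-picked noun list are illustrative assumptions, not a real coreference system. The point is that the heuristic gives the same answer for both sentences, because nothing in the surface form encodes the world knowledge that wide things do not fit through narrow openings.

```python
# Toy heuristic: resolve the pronoun "it" to the nearest preceding noun.
# The noun list is hand-picked for these two sentences, for illustration only.
NOUNS = {"sofa", "door"}

def resolve_it_naively(sentence: str) -> str:
    """Scan leftward from 'it' and return the first known noun found."""
    tokens = [t.strip(".,").lower() for t in sentence.split()]
    it_index = tokens.index("it")
    for token in reversed(tokens[:it_index]):
        if token in NOUNS:
            return token
    return "unknown"

narrow = "The sofa didn't fit through the door because it was too narrow."
wide = "The sofa didn't fit through the door because it was too wide."

print(resolve_it_naively(narrow))  # -> 'door' (happens to be right)
print(resolve_it_naively(wide))    # -> 'door' (wrong; should be 'sofa')
```

Both calls print “door”: the heuristic gets the first sentence right only by accident and cannot distinguish the second, which is exactly the gap that world knowledge or broader context has to fill.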