It’s not clear to me exactly what people in AI do, but one thing that is talked about a lot is how important knowledge is. One of the things knowledge is supposed to be good for is solving hard problems. The idea, as I understand it, is that one characteristic of a large class of problems is that their solutions are radically contingent on the peculiarities of the various situations in which the problems are instantiated. To anyone who is ignorant of most, or even many, of those peculiarities, such problems appear hard. It is only to an agent who has knowledge of almost all of the relevant peculiarities that the problems appear straightforward.