Search engines consist of three main components; the spider (also called a crawler) is the most important. Spiders visit websites, index them, and follow the links to a site's other pages - this is what is meant by 'spidering' or 'crawling' a site. Spiders return at regular intervals - every second month, for example - to look for changes on a website.
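The link-following behaviour described above can be sketched as a simple breadth-first traversal. The code below is only an illustration: instead of fetching real pages over the network, it uses a hypothetical in-memory site where each page lists the pages it links to.

```python
from collections import deque

# Hypothetical in-memory "website": each page maps to the pages it links to.
SITE = {
    "/": ["/about", "/products"],
    "/about": ["/"],
    "/products": ["/products/a", "/products/b"],
    "/products/a": [],
    "/products/b": ["/"],
}

def crawl(start):
    """Breadth-first crawl: visit a page, record it, then follow its links."""
    seen = set()
    queue = deque([start])
    while queue:
        page = queue.popleft()
        if page in seen:
            continue          # already spidered this page
        seen.add(page)
        for link in SITE.get(page, []):
            if link not in seen:
                queue.append(link)
    return seen

crawl("/")  # visits every page reachable from the start page
```

A real spider would also respect robots.txt, throttle its requests, and revisit pages periodically, but the visit-record-follow loop is the same.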
Everything the spiders find is passed directly to the second component of an engine, the index. You can think of the index, sometimes also called the catalogue, as a huge book containing copies of all the websites the spider has found. When a website changes, the book is updated.
Sometimes it takes a while for changes found by the spider to be added to the index. A page may well have been spidered but not yet indexed. Until that has happened - that is, until the page has been added to the index - users cannot find it.
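In practice this 'book' is usually an inverted index: a lookup table mapping each word to the pages that contain it. The sketch below assumes a hypothetical set of pages with their text, as a spider might deliver them.

```python
# Hypothetical pages and their text, as delivered by the spider.
PAGES = {
    "/about": "we build search engines",
    "/products": "search products and crawler tools",
}

def build_index(pages):
    """Inverted index: map each word to the set of pages containing it."""
    index = {}
    for url, text in pages.items():
        for word in text.split():
            index.setdefault(word, set()).add(url)
    return index

index = build_index(PAGES)
index["search"]  # both pages contain the word "search"
```

When the spider reports a changed page, the engine updates the entries for that page's words - which is why there can be a delay between spidering and the page becoming findable.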
The third component of an engine is its software: a program that searches through the millions of indexed pages for matching results and sorts them by relevance.
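Real relevance ranking involves many signals, but the basic 'match and sort' step can be sketched with a simple term-frequency score. The pages and scoring rule below are illustrative assumptions, not how any particular engine works.

```python
def rank(pages, query):
    """Score each page by how often the query words appear; sort best first."""
    words = query.split()
    scores = {}
    for url, text in pages.items():
        tokens = text.split()
        scores[url] = sum(tokens.count(w) for w in words)
    # Pages with more query-word occurrences are considered more relevant.
    return sorted(scores, key=scores.get, reverse=True)

PAGES = {
    "/a": "search search engine",
    "/b": "search only once here",
}
rank(PAGES, "search")  # "/a" ranks above "/b"
```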
All engines contain the components described above. Nevertheless, they differ in how these components are configured, which is why the same search hardly ever produces identical results in different engines.