The next step is to create a bridge page that only search engines can reach. Engines identify themselves with an 'agent' name, just as browsers do, and by looking for those agent names in your server logs you can check whether your pages have been spidered.
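As a sketch of that log check, the snippet below scans access-log lines for well-known crawler agent names. The agent list and the sample log lines are illustrative assumptions, not an exhaustive roster of spiders.

```python
# Sketch: look for known crawler agent names in access-log lines.
# The agent names and sample lines below are illustrative assumptions.
CRAWLER_AGENTS = ["Googlebot", "Slurp", "Scooter"]

def spidered_by(log_line):
    """Return the crawler name found in a log line, or None."""
    for agent in CRAWLER_AGENTS:
        if agent.lower() in log_line.lower():
            return agent
    return None

sample_log = [
    '66.249.66.1 - - "GET /index.html HTTP/1.0" 200 2326 "-" "Googlebot/2.1"',
    '192.0.2.7 - - "GET /index.html HTTP/1.0" 200 2326 "-" "Mozilla/4.0"',
]
for line in sample_log:
    agent = spidered_by(line)
    if agent:
        print("spidered by", agent)
```

Any line that matches tells you a spider has visited that page; lines with ordinary browser agents are ignored.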
The advantage of this method is that you can build an ideal page for the engines while still sending human visitors to the content they actually want to see, which disposes of the whole bridge-page problem neatly. Your code is also shielded from curious users. Well, not entirely: a user could still connect to your web server with telnet and send the agent name of a search engine, and then see exactly what you are serving. Conversely, some engines do not always use the same agent name, partly to uncover exactly this kind of trick.
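The agent-name switch described above can be sketched as a small dispatch function. The spider names and page filenames here are assumptions for illustration; a real list would need to track the agent names each engine actually uses.

```python
# Sketch: serve an engine-optimized page to known spiders and the
# normal page to everyone else, keyed on the User-Agent header.
# Spider names and filenames are illustrative assumptions.
SPIDER_NAMES = ("googlebot", "slurp", "scooter")

def select_page(user_agent):
    """Pick which page to serve based on the visitor's agent name."""
    ua = (user_agent or "").lower()
    if any(name in ua for name in SPIDER_NAMES):
        return "engine_page.html"   # keyword-dense bridge page
    return "user_page.html"         # content human visitors should see
```

Note the weakness mentioned above: anyone who sends a spider's agent name by hand (for example with telnet) gets the engine page, and an engine that crawls under an unfamiliar agent name gets the user page, so the check cuts both ways.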