Publishers aim for some control of search results

Found on Reuters on Friday, 22 September 2006

Global publishers, fearing that Web search engines such as Google Inc. are encroaching on their ability to generate revenue, plan to launch an automated system for granting permissions on how their content may be used.

Buoyed by a Belgian court ruling this week that Google was infringing on the copyright of French- and German-language newspapers by reproducing article snippets in its search results, the publishers said on Friday they plan to start testing the service before the end of the year.

"Since search engine operators rely on robotic 'spiders' to manage their automated processes, publishers' Web sites need to start speaking a language which the operators can teach their robots to understand," according to a document seen by Reuters that outlines the publishers' plans.

That "language" is called robots.txt and already in use. Serious search engines respect those restrictions. A new "language" won't change much here; if it can be understood, it can be rejected and the website can be spidered anyway by rogue robots. Apart from all that, is it really a good idea to lock out those who bring visitors to your site? People use search engines; they don't browse a few hundred random URLs until they find what they are looking for.