Robots.txt, also known as the Robots Exclusion Protocol (REP), is a text file created to tell search engine robots how to crawl a site and which pages to index. It is therefore an important tool in SEO.
For the file to work, it must be placed in the top-level directory of the web server, for example: www.specimen.com/robots.txt
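As a minimal sketch, a robots.txt at that location could contain the following (the /private/ path is purely illustrative):

    User-agent: *
    Disallow: /private/

Here, User-agent: * addresses all robots, and the Disallow line asks them not to crawl anything under /private/.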
Applications of REPs
REP Tags
When REP tags are attached to a URI, they control how search engines index and serve it. REP tags can be delivered in the META elements of (X)HTML content or in the X-Robots-Tag HTTP header of any web object, and the contents of an X-Robots-Tag header can override incongruent directives present in META elements.
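For example, the same noindex, nofollow directive can be expressed either as a META element in the page or as an HTTP header:

    <meta name="robots" content="noindex, nofollow">

    X-Robots-Tag: noindex, nofollow

The header form is useful for non-HTML objects such as PDFs, which have no META elements.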
Micro-Formats
Micro-formats override page-wide settings for specific HTML elements; see the link example below. Although robots.txt lacks an index directive, it is possible to set indexing directives for clusters of URIs this way.
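A common example is the rel="nofollow" attribute, which overrides the page's follow behaviour for a single link (the URL below is hypothetical):

    <a href="http://www.specimen.com/untrusted" rel="nofollow">Untrusted link</a>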
Pattern Matching
Google and Bing allow two expressions to be used to identify pages or subfolders that an SEO wants excluded (see the example after this list):
- * is a wildcard representing any sequence of characters
- $ matches the end of a URL
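For instance, the following rules use both expressions; the /private/ path and the .pdf extension are only illustrative:

    User-agent: *
    Disallow: /private/*
    Disallow: /*.pdf$

The first Disallow blocks crawling of every URL under /private/, while the second blocks any URL that ends in .pdf.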
Robots.txt is Public
The file is publicly available, and anyone can see which sections of a site have been blocked. Therefore, if an SEO company in Chicago has private information that it does not want shared, it should use a more secure approach, such as password-protecting the directory.
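Anyone can confirm this by requesting the file directly, for example:

    curl https://www.specimen.com/robots.txt

Whatever paths are listed there are visible to every visitor, which is why robots.txt should never be treated as an access-control mechanism.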