robots.txt is a plain-text file placed in the root of a website. It tells crawlers which parts of the site they may or may not access; compliance is voluntary, but well-behaved crawlers honor it.
For example, the site admin can disallow crawlers from visiting a certain directory (and all the files it contains) or from crawling a specific file, usually to prevent those files from being indexed by search engines.
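As a minimal sketch of how such rules are checked, the snippet below parses a small hypothetical robots.txt (the domain and paths are made up) with Python's standard `urllib.robotparser` and asks whether a generic crawler may fetch two URLs:

```python
from urllib import robotparser

# A hypothetical robots.txt: block one directory and one file for all crawlers.
rules = """\
User-agent: *
Disallow: /private/
Disallow: /secret.html
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# Paths under /private/ and the named file are disallowed; everything else is allowed.
print(rp.can_fetch("*", "https://example.com/private/notes.txt"))  # False
print(rp.can_fetch("*", "https://example.com/index.html"))         # True
```

A polite crawler would run such a check before every request and skip any URL for which `can_fetch` returns `False`.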
- Robots.txt on Wikipedia
- Standard specification draft: https://datatracker.ietf.org/doc/html/draft-rep-wg-topic (later published as RFC 9309, the Robots Exclusion Protocol)