A robots.txt file lives at the root of your website. Using the Robots Exclusion Standard, it tells search engine crawlers which parts of your site they should not access. The protocol's directives can target specific types of web crawler (e.g., mobile vs. desktop user agents) and specific sections of the website.
Secondarily, it gives crawlers directions about which pages and sections of the site should not be crawled (and therefore not appear in search results). It is also the vehicle for indicating what you DO want crawled and for pointing crawlers to the location of your XML sitemap files.
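To make this concrete, here is a minimal illustrative robots.txt. The paths and sitemap URL are hypothetical examples, not values any site requires; the directive names (User-agent, Disallow, Allow, Sitemap) are the standard ones.

```
# Rules for all crawlers
User-agent: *
Disallow: /admin/          # keep crawlers out of this section (example path)
Allow: /admin/public/      # but permit this subsection (example path)

# Rules for a specific crawler, e.g. Google's mobile bot
User-agent: Googlebot-Mobile
Disallow: /desktop-only/   # example path

# Location of the XML sitemap (example URL)
Sitemap: https://www.example.com/sitemap.xml
```

Crawlers read the file top to bottom and apply the group of rules matching their user-agent string; the Sitemap line stands alone and applies regardless of user agent.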