Google uses software known as web crawlers that look at webpages and follow links on those pages to bring data about them back to Google’s servers. A robots.txt file tells these crawlers which pages or files the crawler can or can't request from your site.

You can learn more about robots.txt in the Google Search Central documentation.

Here's how you can access the robots.txt file for your site.

Accessing robots.txt

We generate the robots.txt file automatically. To access it, add /robots.txt to your domain name. For example:

https://weblium.com/robots.txt

If the site is open for indexing by search engines, the robots.txt file will look like this, with a link to the sitemap:



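For reference, an open-for-indexing robots.txt typically contains standard directives along these lines (the exact contents and the sitemap URL shown here are illustrative, using weblium.com as an example):

    User-agent: *
    Allow: /
    Sitemap: https://weblium.com/sitemap.xml

The "User-agent: *" line applies the rules to all crawlers, "Allow: /" permits crawling of the whole site, and the Sitemap line points crawlers to the sitemap file.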
If the site is blocked from being indexed by search engines, the file will look like this:



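For reference, a robots.txt that blocks indexing typically contains directives along these lines (the exact contents shown here are illustrative):

    User-agent: *
    Disallow: /

Here "Disallow: /" tells all crawlers not to request any page on the site.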
You can hide your site from search engines or make it visible again at any time. Go to your website's Settings, open the General info tab, and scroll to the Visibility in the search engines section:



Note: if you are looking for a way to hide a site or a page from search engines, check out the articles Hiding website from search engines and Hiding a page from search engines.