Accessing robots.txt

Google uses software known as web crawlers that look at web pages and follow links on those pages to bring data about them back to Google's servers. A robots.txt file tells these crawlers which pages or files they can or can't request from your site.

You can learn more about robots.txt files in the Google Search Central documentation.

Here's how you can access the robots.txt file for your site.

Accessing robots.txt

We generate the robots.txt file automatically. To access it, simply add /robots.txt to your domain name. For example:

https://weblium.com/robots.txt

If the site is open for indexing by search engines, the robots.txt file will include a link to your sitemap.
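The exact contents are generated by our system and may vary slightly, but a file that allows indexing typically looks something like this (the sitemap URL is a placeholder for your own domain):

User-agent: *
Disallow: /.sw_/_host_/_replacer_
Sitemap: https://yourdomain.com/sitemap.xml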


Note: /.sw_/_host_/_replacer_ is a technical record that our system adds by default.

If the site is blocked from being indexed by search engines, the file will look different.
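A robots.txt that blocks indexing typically disallows all crawlers from the entire site:

User-agent: *
Disallow: /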

You can hide the site from search engines or make it visible to them at any time. Go to the Settings of your website, open the General info tab, and find the Visibility in the search engines section.

Note: If you are looking for a way to hide a site or a page from the search engines, check out the Hiding website from search engines and Hiding a page from search engines articles.

Updated on: 06/30/2023
