Create robots.txt files for your website. Control how search engines and AI crawlers access your site with Allow and Disallow rules and sitemap references.
Place robots.txt in your website root directory (e.g., example.com/robots.txt)
| Directive | Description | Example |
|---|---|---|
| User-agent | Specifies which crawler the rules apply to | User-agent: Googlebot |
| Disallow | Blocks access to a path or pattern | Disallow: /admin/ |
| Allow | Explicitly permits access to a path | Allow: /api/public |
| Sitemap | Points to XML sitemap location | Sitemap: https://example.com/sitemap.xml |
| Crawl-delay | Seconds to wait between requests (ignored by Googlebot) | Crawl-delay: 10 |
| Host | Preferred domain (Yandex-specific, now deprecated) | Host: https://www.example.com |
```
User-agent: *
Disallow: /cart
Disallow: /checkout
Disallow: /account
Disallow: /admin/
Disallow: /api/
Disallow: /*?*

User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

Sitemap: https://example.com/sitemap.xml
```
Visual Rule Builder
Multiple User Agents
Quick Templates
Sitemap Integration
AI Bot Blocking
Advanced Options
Control which pages search engines can crawl. Block admin areas, duplicate content, and internal pages. (Note that robots.txt discourages crawling; it does not guarantee a page stays out of the index if other sites link to it.)
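A minimal sketch of such rules (the paths here are illustrative, not prescriptive):

```
User-agent: *
Disallow: /admin/          # block the admin area
Disallow: /internal/       # block internal-only pages
Disallow: /print/          # block printer-friendly duplicates
Allow: /admin/login        # Allow can re-open a path under a blocked prefix
```

The more specific rule wins when Allow and Disallow overlap, so a targeted Allow can carve an exception out of a broad Disallow.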
Prevent AI companies from using your content for training by blocking GPTBot, CCBot, and similar crawlers.
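A sketch of an AI-crawler block. GPTBot and CCBot are named above; Google-Extended is Google's opt-out token for AI training and is included here as one more common example:

```
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```

Each crawler needs its own User-agent group; a single `Disallow: /` under the matching token blocks that bot from the entire site.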
Block cart, checkout, and account pages from being crawled while keeping product pages open to crawlers.
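A sketch of an e-commerce setup under these assumptions (the `/products/` path is illustrative):

```
User-agent: *
Disallow: /cart
Disallow: /checkout
Disallow: /account
Allow: /products/

Sitemap: https://example.com/sitemap.xml
```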
Block all crawlers from staging or development environments to prevent them from appearing in search results.
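For a staging environment, a blanket block is the simplest sketch:

```
User-agent: *
Disallow: /
```

Since robots.txt is advisory, pair this with HTTP authentication or a `noindex` header on staging hosts if the environment must truly stay out of search results.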