
Robots.txt Generator

Generate and validate robots.txt rules.

Free • Unlimited (Beta)

Limits will be introduced later; early users will get benefits.

Default: * (all bots)
Note: Not all bots honor the Host directive

Related tools

Timestamp Converter

Convert between Unix timestamp and date.

Password Generator

Generate secure random passwords with custom rules.

Email Spam Score Checker

Analyze email headers and estimate deliverability / spam risk.


About Robots.txt Generator and Validator

Overview

A robots.txt file tells crawlers which paths they may or may not request. It uses directives such as User-agent, Disallow, Allow, and optionally Sitemap and Crawl-delay. This tool has two modes: the Generator builds a robots.txt from form inputs (user agents, paths), while the Validator parses existing robots.txt content and reports warnings (e.g. unknown directives or malformed lines). Combine it with the Sitemap Generator to create a sitemap and reference it in robots.txt, and use the SEO Analyzer to verify the crawlability of your pages.
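For reference, a minimal file using these directives could look like the following (the domain and paths are placeholders):

```text
# Rules for all crawlers
User-agent: *
Disallow: /admin/
Allow: /admin/help
Crawl-delay: 10

Sitemap: https://example.com/sitemap.xml
```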

When to use it

Use the Generator when you need a new robots.txt for a site, or when you want to add a Sitemap or Host directive. Use the Validator when you have an existing robots.txt and want to check its syntax and directives before deploying. After generating, add the sitemap URL so search engines can find your sitemap, and run the SEO Analyzer on key URLs to confirm they are not blocked. For structured data, the Schema.org tool helps with JSON-LD; keep in mind that robots.txt only controls crawling, not indexing, so a disallowed URL can still be indexed if other pages link to it.

How to use it

Generator tab: add User-agent lines (default *), Disallow and Allow paths, an optional Crawl-delay, a Sitemap URL, and a Host. Click Generate to produce the robots.txt text, then Copy or Download it.

Validator tab: paste your robots.txt content and click Validate. The tool shows parsed groups (User-agent blocks with their directives), warnings (unknown or malformed directives), and a normalized preview. Fix any issues and re-validate.

Download uses a POST request with CSRF protection, and the usage quota applies. The tool does not fetch any URLs; validation runs entirely offline from the text you paste.
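To illustrate the kind of checks an offline validator performs, here is a minimal sketch (not the tool's actual implementation) that flags unknown directives and lines missing a colon:

```python
# Directives this sketch treats as known; real validators may accept more.
KNOWN = {"user-agent", "disallow", "allow", "crawl-delay", "sitemap", "host"}

def validate(text):
    """Return a list of warning strings for unknown or malformed lines."""
    warnings = []
    for i, raw in enumerate(text.splitlines(), 1):
        line = raw.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue  # blank or comment-only lines are fine
        if ":" not in line:
            warnings.append(f"line {i}: malformed (missing ':')")
            continue
        directive = line.split(":", 1)[0].strip().lower()
        if directive not in KNOWN:
            warnings.append(f"line {i}: unknown directive '{directive}'")
    return warnings
```

For example, `validate("User-agent: *\nDisalow: /x")` would flag the misspelled `Disalow` line.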

Tips

Include a Sitemap line pointing to your sitemap URL (e.g. one produced by the Sitemap Generator). Use Disallow for admin, API, or duplicate-content paths. Allow can override Disallow within the same group for specific paths. Crawl-delay is non-standard and only some crawlers respect it; Googlebot ignores it. Test with the SEO Analyzer crawl or single-URL check to confirm that important pages are reachable. Keep robots.txt at the site root and use the exact lowercase filename robots.txt.
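You can check Allow/Disallow interaction with Python's standard-library parser. Note that `urllib.robotparser` applies rules in file order (first match wins), so the Allow line is placed before Disallow here; some crawlers, such as Googlebot, instead give precedence to the longest matching rule:

```python
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Allow: /admin/help
Disallow: /admin/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# /admin/help is explicitly allowed; the rest of /admin/ is blocked.
print(rp.can_fetch("*", "https://example.com/admin/help"))   # True
print(rp.can_fetch("*", "https://example.com/admin/users"))  # False
```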

Common mistakes

Blocking the whole site with Disallow: / while forgetting Allow rules for essential paths can hide everything from crawlers. Typos in directive names (e.g. Disalow) are silently ignored by crawlers and produce parse warnings in the Validator. A Sitemap URL that is not an absolute http(s) URL triggers a validator warning. Never list sensitive paths in robots.txt: the file is public, so it can act as a roadmap for attackers. For sitemap creation use the Sitemap Generator; for on-page and technical SEO use the SEO Analyzer; for structured data use the Schema.org tool.
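The full-site-block mistake is easy to demonstrate with the same standard-library parser (the URL is a placeholder): `Disallow: /` blocks every path unless a more permissive rule matches first.

```python
from urllib.robotparser import RobotFileParser

# Blanket block: everything is hidden from compliant crawlers.
blocked = RobotFileParser()
blocked.parse(["User-agent: *", "Disallow: /"])

# Same block, but with an Allow rule rescuing one section.
rescued = RobotFileParser()
rescued.parse(["User-agent: *", "Allow: /public/", "Disallow: /"])

print(blocked.can_fetch("*", "https://example.com/public/page"))  # False
print(rescued.can_fetch("*", "https://example.com/public/page"))  # True
```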


FAQ

What is robots.txt?
A file that tells crawlers which paths to request or avoid, placed at the site root.

Is the tool free?
Yes. No sign-up required.

Does the tool fetch my site?
No. The Generator and Validator work offline from your input.

Can I reference my sitemap?
Yes. Add a Sitemap: line with your sitemap URL (e.g. from the Sitemap Generator).

Why validate an existing robots.txt?
To catch typos, unknown directives, and malformed lines before deploying.