Robots.txt Analyzer

Enter a domain to fetch and parse its robots.txt file. The tool breaks down the rules by user agent, showing allowed and disallowed paths, crawl delay settings, and sitemap references. Useful for SEO auditing and verifying that search engines can access the pages you want indexed.
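
If you want to run similar checks locally, Python's standard library includes a robots.txt parser. The sketch below is illustrative only, not this tool's implementation: it covers access checks, crawl-delay, and sitemap extraction, but not a full per-agent rule listing, and the domain and user agent are placeholders.

from urllib.robotparser import RobotFileParser

# Placeholder values for illustration
robots_url = "https://example.com/robots.txt"
user_agent = "Googlebot"

parser = RobotFileParser()
parser.set_url(robots_url)
parser.read()  # fetches and parses the robots.txt file

# Can this agent crawl a given path?
print(parser.can_fetch(user_agent, "https://example.com/private/page"))

# Crawl-delay for this agent, or None if not set
print(parser.crawl_delay(user_agent))

# Sitemap URLs listed in the file, or None (Python 3.8+)
print(parser.site_maps())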

What this tool checks

  • Per-user-agent rule breakdown
  • Allow and Disallow directive listing
  • Crawl-delay detection
  • Sitemap URL extraction

Automate this with the API

Run this tool programmatically from your code. Get a free temporary API key with 200 requests — or register for unlimited access.

curl "https://apixies.io/api/v1/robots-txt?domain=..." \
  -H "X-API-Key: YOUR_API_KEY"
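
The same call can be made from code. The sketch below assumes the requests library; the endpoint and header come from the curl example above, while the example domain and the shape of the returned JSON are assumptions, so inspect the response to see the actual fields.

import requests

response = requests.get(
    "https://apixies.io/api/v1/robots-txt",
    params={"domain": "example.com"},      # placeholder domain
    headers={"X-API-Key": "YOUR_API_KEY"},
    timeout=10,
)
response.raise_for_status()
print(response.json())  # response field names are not documented here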

Frequently asked questions

Does robots.txt actually block crawlers?
It's an advisory standard, not an enforcement mechanism. Well-behaved bots (Googlebot, Bingbot) respect it. Malicious bots ignore it. Don't use robots.txt to hide sensitive content — use authentication instead.
Should I block /admin in robots.txt?
Ironically, listing /admin in robots.txt tells everyone where your admin panel is. If your admin area requires authentication (it should), there's no need to list it. Robots.txt is publicly readable.
