Robots.txt Parser

Fetch and parse a site's robots.txt rules with the scrape robots endpoint

GET /v1/scrape/robots

Description

Fetches a site's `robots.txt` file and parses it into structured user-agent sections and sitemap entries.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `url` | string | required | Site URL or direct robots.txt URL |
| `x-api-key` | string | optional | Your API key for authenticated requests |

How to Use

1. Send the site root or direct robots URL in the `url` query parameter.
2. The endpoint resolves and parses the file into user-agent sections.
3. Review `agents` and `sitemaps` in the response.
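The steps above can be sketched as a small helper that builds the GET request. This is a minimal sketch: the base URL below is a placeholder, not the documented host, and only the `url` parameter and `x-api-key` header come from this page.

```python
import urllib.parse

# Hypothetical base URL -- substitute your actual API host.
BASE = "https://api.example.com/v1/scrape/robots"

def build_robots_request(url: str, api_key: str) -> tuple[str, dict]:
    """Build the request URL and headers for the robots endpoint."""
    query = urllib.parse.urlencode({"url": url})  # URL-encodes the target site
    headers = {"x-api-key": api_key}
    return f"{BASE}?{query}", headers

request_url, headers = build_robots_request("https://example.com", "YOUR_KEY")
```

Any HTTP client can then issue the GET with those headers; the target URL is safely percent-encoded by `urlencode`.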

About This Tool

Use the robots endpoint to fetch and structure a site's `robots.txt` rules. This helps you understand crawl permissions, exclusions, and published sitemap locations before you automate scraping or crawling jobs.
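To illustrate the kind of structure the endpoint produces, here is a minimal local sketch that groups `robots.txt` directives into `agents` and `sitemaps`. Only those two field names appear in this page; everything else about the layout is an assumption for illustration.

```python
def parse_robots(text: str) -> dict:
    """Group robots.txt lines into per-user-agent rules plus sitemap URLs."""
    agents: dict[str, list[str]] = {}
    sitemaps: list[str] = []
    current: list[str] = []  # rules for the most recent User-agent line
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()  # drop comments and whitespace
        if ":" not in line:
            continue
        field, _, value = line.partition(":")
        field, value = field.strip().lower(), value.strip()
        if field == "user-agent":
            current = agents.setdefault(value, [])
        elif field == "sitemap":
            sitemaps.append(value)
        elif field in ("allow", "disallow"):
            current.append(f"{field}: {value}")
    return {"agents": agents, "sitemaps": sitemaps}
```

Note this sketch parses and structures only; as the FAQ below says, deciding what a crawler is actually permitted to do remains a policy call for the caller.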

Frequently Asked Questions

Does this validate the semantics of a robots file?
It parses and structures the file, but it does not make policy decisions for you.
What if the site has no robots file?
The endpoint returns an error if the file cannot be fetched.
Is this tied to the crawl endpoint?
Yes — it is a useful companion when planning crawl jobs or site discovery workflows.
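Since the endpoint returns an error when the file cannot be fetched, a caller can treat a non-200 response as "no usable robots file" and proceed accordingly. A minimal sketch, assuming a JSON response body (the exact error format is not documented here):

```python
import json

def parse_robots_response(status: int, body: str):
    """Interpret a robots-endpoint response (assumed JSON on success).

    Returns the parsed payload, or None when the site's robots.txt
    could not be fetched and the endpoint returned an error.
    """
    if status != 200:
        return None  # missing robots file or upstream fetch failure
    return json.loads(body)
```

A `None` result is a reasonable signal to fall back to default crawl behavior for that site.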

Start using Robots.txt Parser now

Get your free API key and make your first request in under a minute.