Everything you need to get started:

- Get Started - get up and running in minutes
- View API - complete API documentation
- See Examples - real-world use cases
Quick Start:

1. Install Node.js dependencies:
npm install
2. Install Python dependencies:
cd backend
pip install -r requirements.txt
Usage:

Generate both MCP server and dataset:
python backend/url_to_mcp.py https://example.com --both
Generate only MCP server (Python):
python backend/url_to_mcp.py https://example.com --mcp --language python
Generate only dataset:
python backend/url_to_mcp.py https://example.com --dataset
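The invocations above can also be driven from another script. A minimal sketch, assuming you run from the repository root (the helper name `convert_args` is ours, not part of the tool):

```python
import sys

# Hypothetical helper: assemble argv for the converter CLI using the
# flags shown above. Pass the result to subprocess.run(cmd, check=True).
def convert_args(url, mode="--both", language=None, output=None):
    cmd = [sys.executable, "backend/url_to_mcp.py", url, mode]
    if language:
        cmd += ["--language", language]
    if output:
        cmd += ["--output", output]
    return cmd

cmd = convert_args("https://example.com", mode="--mcp", language="typescript")
print(cmd)
```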
The tool fetches and analyzes the webpage to discover its structure: navigation links, content sections, data tables, and lists.

Based on the analysis, it generates production-ready MCP servers with tools for listing, retrieving, and searching the discovered content.

It also exports the discovered data in structured formats: JSON for articles, links, and metadata, and CSV for tables.
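To make the discovery step concrete, here is an illustrative stand-alone sketch (not the tool's actual implementation) that scans static HTML for the kinds of elements the converter looks for, using only the standard library:

```python
from html.parser import HTMLParser

class StructureAnalyzer(HTMLParser):
    """Count links, tables, and lists in static HTML (illustration only)."""
    def __init__(self):
        super().__init__()
        self.links = []   # href targets -> candidates for get_* tools
        self.tables = 0   # <table> elements -> candidates for CSV export
        self.lists = 0    # <ul>/<ol> elements -> candidates for JSON listings

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)
        elif tag == "table":
            self.tables += 1
        elif tag in ("ul", "ol"):
            self.lists += 1

sample = """
<html><body>
  <a href="/docs">Docs</a><a href="/blog">Blog</a>
  <table><tr><td>1</td></tr></table>
  <ul><li>item</li></ul>
</body></html>
"""
analyzer = StructureAnalyzer()
analyzer.feed(sample)
print(analyzer.links, analyzer.tables, analyzer.lists)
```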
API Reference:

Convert a URL to an MCP server and/or dataset.
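A minimal client sketch for this endpoint. The base URL is an assumption (adjust host, port, and path to wherever the API server is running); the body matches the request shape documented below:

```python
import json
import urllib.request

# Assumed base URL -- change this to match your deployment.
API_URL = "http://localhost:3000/converter"

def build_conversion_request(url, output_type="both", language="python"):
    """Build a POST request matching the documented body shape."""
    if output_type not in ("both", "mcp", "dataset"):
        raise ValueError(f"invalid outputType: {output_type}")
    if language not in ("python", "typescript"):
        raise ValueError(f"invalid language: {language}")
    body = {"url": url, "outputType": output_type, "language": language}
    return urllib.request.Request(
        API_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_conversion_request("https://example.com")
# Sending it requires the server to be running:
# with urllib.request.urlopen(req) as resp:
#     result = json.load(resp)
print(req.get_method(), req.full_url)
```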
Request Body:
{
"url": "https://example.com",
"outputType": "both" | "mcp" | "dataset",
"language": "python" | "typescript"
}

Response:
{
"success": true,
"message": "Conversion completed successfully",
"outputPath": "/path/to/output",
"mcpPython": true,
"mcpTypeScript": false,
"dataset": true,
"timestamp": 1234567890
}

Options:

--mcp                          Generate MCP server only
--dataset                      Generate dataset only
--both                         Generate both MCP server and dataset
--language python|typescript   MCP server language (default: python)
--output DIR                   Output directory (default: output)
--save-analysis                Save analysis JSON to output directory

Example: News Site

Command:
python backend/url_to_mcp.py https://news-site.com --both
Generated MCP Tools:
- list_articles() - List all articles
- get_article(url) - Get specific article content
- search_articles(query) - Search articles
- get_categories() - List article categories

Generated Dataset:
- articles.json - Article titles and previews
- links.json - Navigation structure
- metadata.json - Site metadata

Example: Documentation Site

Command:
python backend/url_to_mcp.py https://docs-site.com --mcp --language typescript
Generated MCP Tools:
- list_pages() - List documentation pages
- get_page(path) - Get page content
- search_docs(query) - Search documentation
- get_section(section) - Get specific section

Example: Data Portal

Command:
python backend/url_to_mcp.py https://data-portal.com --dataset
Extracted Dataset:
- table_1.csv - Main data table
- table_2.csv - Secondary data table
- lists.json - Category listings
- schema.json - Data schema

Using the Generated Python Server:

1. Install dependencies:
cd output/mcp_server_python
pip install -r requirements.txt
2. Run the server:
python server.py
3. Add to Claude Desktop:
Edit ~/Library/Application Support/Claude/claude_desktop_config.json (macOS path; on Windows it is %APPDATA%\Claude\claude_desktop_config.json):
{
"mcpServers": {
"my-site": {
"command": "python",
"args": ["/path/to/output/mcp_server_python/server.py"]
}
}
}

Using the Generated TypeScript Server:

1. Install dependencies:
cd output/mcp_server_typescript
npm install
2. Build and run:
npm run build
npm start
3. Add to Claude Desktop:
{
"mcpServers": {
"my-site": {
"command": "node",
"args": ["/path/to/output/mcp_server_typescript/dist/index.js"]
}
}
}

FAQ:

Which sites work best?

Well-structured sites with clear HTML semantics work best. Documentation sites, news sites, and data portals typically yield excellent results. JavaScript-heavy single-page applications may require additional rendering support.
Does the tool handle JavaScript-rendered content?

Currently, the tool analyzes static HTML only. For JavaScript-rendered content, you'll need to add Playwright or Puppeteer support. This is a planned feature for future releases.
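Until rendering support lands, a rough heuristic can flag pages that probably need it: heavy script usage with little visible text usually means a client-rendered app. A sketch using only the standard library (the 200-character threshold is an arbitrary choice for illustration):

```python
from html.parser import HTMLParser

class RenderHeuristic(HTMLParser):
    """Count <script> tags and visible text characters."""
    def __init__(self):
        super().__init__()
        self.scripts = 0
        self.text_chars = 0
        self._in_script = False

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.scripts += 1
            self._in_script = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_script = False

    def handle_data(self, data):
        if not self._in_script:
            self.text_chars += len(data.strip())

def likely_needs_js(html):
    h = RenderHeuristic()
    h.feed(html)
    return h.scripts > 0 and h.text_chars < 200

spa = '<html><body><div id="root"></div><script src="app.js"></script></body></html>'
print(likely_needs_js(spa))  # True: one script, almost no visible text
```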
Is the generated code ready to use as-is?

Yes! The generated code includes TODO comments marking where you should implement custom logic. The structure is production-ready, but you'll want to add authentication, error handling, and business logic specific to your use case.
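For example, a generated tool skeleton might look like this. The function and its stubs are hypothetical illustrations of the TODO pattern, not the generator's literal output:

```python
import re

def fetch(url):
    # Stub for illustration; the generated code performs a real HTTP GET.
    return "<html><body><p>Hello</p></body></html>"

def extract_main_text(html):
    # Stub: crude tag stripping; the generated code parses the content region.
    return re.sub(r"<[^>]+>", "", html).strip()

def get_article(url: str) -> dict:
    """Get specific article content (generated skeleton)."""
    # TODO: add authentication if the site requires it
    # TODO: add retries/error handling for production use
    html = fetch(url)
    return {"url": url, "content": extract_main_text(html)}

print(get_article("https://example.com/post"))
```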
What if a site blocks automated requests?

Some sites may block automated requests. You can add custom headers, user agents, or authentication to the generated MCP server code. For production use, consider implementing rate limiting and respecting robots.txt.
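For instance, the fetch path can send a custom User-Agent and consult robots.txt first. A standard-library sketch (the rules are supplied inline here; in practice you would load the site's real robots.txt with rp.set_url(...) and rp.read()):

```python
import urllib.request
from urllib import robotparser

# Inline rules for illustration only.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

def polite_request(url, user_agent="my-mcp-server/1.0"):
    """Build a request with a custom User-Agent, honoring robots.txt."""
    if not rp.can_fetch(user_agent, url):
        raise PermissionError(f"robots.txt disallows {url}")
    return urllib.request.Request(url, headers={"User-Agent": user_agent})

req = polite_request("https://example.com/articles")
print(req.full_url)
```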
Does it extract all of a site's data?

The current implementation extracts sample data to demonstrate structure. For full extraction, you'll need to implement pagination handling in the generated MCP server code. This is a common customization for production deployments.
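One common shape for that customization is a loop that follows "next" links until they run out. A sketch with simulated pages (the page format and helper names are invented for illustration; real code would fetch each URL and parse the next link from the HTML):

```python
def fetch_all_items(fetch_page, start_url, max_pages=100):
    """Follow 'next' links, collecting items, with a safety cap."""
    items, url, seen = [], start_url, set()
    while url and url not in seen and len(seen) < max_pages:
        seen.add(url)
        page = fetch_page(url)  # returns {"items": [...], "next": url or None}
        items.extend(page["items"])
        url = page.get("next")
    return items

# Simulated three-page listing standing in for real HTTP fetches:
PAGES = {
    "/articles?page=1": {"items": ["a", "b"], "next": "/articles?page=2"},
    "/articles?page=2": {"items": ["c"], "next": "/articles?page=3"},
    "/articles?page=3": {"items": ["d"], "next": None},
}
print(fetch_all_items(PAGES.__getitem__, "/articles?page=1"))  # ['a', 'b', 'c', 'd']
```

The seen-set guards against sites whose last page links back to itself, which would otherwise loop forever.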
Resources:

- Official Model Context Protocol documentation
- Python SDK for building MCP servers (GitHub)
- TypeScript SDK for building MCP servers (GitHub)
- HTML parsing library documentation

Turn any website into tools your AI agents can use.