Compose a robots.txt for your site root: User-agent rules, quick presets, an optional Crawl-delay, and one or more Sitemap: lines. The text is built in your browser — nothing is uploaded to DroidXP, the same local-only approach as APK Analyzer and APK String Extractor. You still deploy the file to /robots.txt on your host. Works alongside Meta Tag Generator and Sitemap Generator.
Note that Googlebot ignores the Crawl-delay directive.
Deploy to https://your-domain/robots.txt. Test with your search console after publishing.
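As a concrete sketch, a minimal allow-all file with one sitemap might look like this (the domain is a placeholder — substitute your own):

```txt
# An empty Disallow value permits every compliant crawler to fetch everything
User-agent: *
Disallow:

# Absolute HTTPS URL; repeat the line for additional sitemaps
Sitemap: https://your-domain.example/sitemap.xml
```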
The file tells compliant crawlers which paths they may fetch. It does not hide HTML from determined scrapers and is not authentication — treat it as a polite signal, not a vault lock. Building the text locally matches how APK Analyzer keeps APK inspection in the browser: DroidXP never stores your rules on our servers.
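To see how a compliant crawler interprets a draft, Python's standard urllib.robotparser can evaluate rules locally — the rules and URLs below are purely illustrative:

```python
from urllib.robotparser import RobotFileParser

# An illustrative draft: block /admin/ for every user agent
draft = """\
User-agent: *
Disallow: /admin/
"""

rp = RobotFileParser()
rp.parse(draft.splitlines())

# A compliant crawler skips the blocked prefix but fetches the rest
print(rp.can_fetch("*", "https://example.com/admin/settings"))  # False
print(rp.can_fetch("*", "https://example.com/blog/post"))       # True
```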
Allow all is the usual default for public marketing sites. Disallow all can discourage well-behaved bots on staging (pair it with a login). Common paths blocks a few sensitive prefixes — adjust the list for your stack. Custom is for explicit Allow/Disallow lines (don't repeat User-agent unless you know you need multiple groups).
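A "common paths" preset might read like the sketch below — these prefixes are only examples to adapt to your stack:

```txt
User-agent: *
Disallow: /admin/
Disallow: /cart/
Disallow: /search
Disallow: /tmp/
```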
Listing Sitemap: URLs helps discovery; use absolute HTTPS links. Crawl-delay is honored by some crawlers but not relied on by Google — tune crawl budget in Search Console for Google.
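Because Google ignores Crawl-delay, it only makes sense in a group aimed at a crawler that honors it — the bot name and delay here are placeholders:

```txt
# Hypothetical bot that honors Crawl-delay (seconds between fetches)
User-agent: ExampleBot
Crawl-delay: 10

Sitemap: https://your-domain.example/sitemap.xml
```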
Drafts can persist in localStorage for convenience. Your robots rules are not sent to DroidXP when you edit — similar to the local parsing in APK String Extractor.
You provide a User-agent (usually *) plus a preset or custom directives. Download robots.txt, upload it to the site root, and validate it in webmaster tools. No content is stored by DroidXP: the file is composed in your browser and you copy it to your own server root — the same local-only stance as APK Analyzer and APK String Extractor.
At the root of the host: https://yourdomain.com/robots.txt — not in a subfolder. Each subdomain needs its own file if you use subdomains.
robots.txt is a voluntary convention — well-behaved bots respect it, but bad actors may ignore it. Do not rely on it for secrets; use auth and server rules too.
robots.txt gives host-wide crawl hints; meta robots (and HTTP headers) can refine indexing per URL. They work together — conflicts should be avoided.
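The per-URL controls are an HTML meta tag or an HTTP header, sketched below; note that a page must remain crawlable for either signal to be seen:

```txt
<!-- In the page's <head>: keep this page out of the index -->
<meta name="robots" content="noindex, follow">

HTTP response header equivalent (useful for PDFs and other non-HTML):
X-Robots-Tag: noindex
```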
Google generally ignores Crawl-delay in robots.txt; use Search Console crawl settings and server performance instead. Some other bots may read it.
Multiple sitemaps are supported — repeat the Sitemap: line, each with a full URL. Splitting large sites across index sitemaps is common.
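Repeating the directive, with placeholder URLs, looks like:

```txt
Sitemap: https://your-domain.example/sitemap-pages.xml
Sitemap: https://your-domain.example/sitemap-posts.xml
```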
Google supports limited pattern syntax in robots.txt paths. Test changes in Search Console URL inspection and robots testing tools after deploy.
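Google's documented pattern support covers * (any sequence of characters) and $ (end of URL). For example:

```txt
User-agent: Googlebot
# Block every URL containing a query string
Disallow: /*?
# Block PDFs anywhere on the site ($ anchors the end of the URL)
Disallow: /*.pdf$
```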
Blocking CSS and JavaScript is usually a mistake — modern Google rendering needs resources that mirror what users see. Blocking critical assets can harm how your site is understood.
Everything stays local — drafts stay in your tab (localStorage optional); only normal static assets load from DroidXP.
Staging, mirrors, or pre-launch sites sometimes want to block crawlers — pair with authentication because robots.txt is not a security control.
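A disallow-all file for a staging host — remember it is only a hint, so keep authentication in front — is simply:

```txt
User-agent: *
Disallow: /
```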