WebmasterID
Data import

Import data the Agent can act on

The Agent already uses your WebmasterID Core data out of the box. Import support adds workspace-scoped operational datasets — sitemap data, repository mappings, execution history — that strengthen the recommendation and task-generation pipelines. Honest disclosure: import support is expanding, not exhaustive.

Supported today

What you can bring to the Agent now

Each dataset below is wired end-to-end: import surface, storage, and consumption by the recommendation pipeline.

  • Sitemap data

    Last-modified per URL, change-frequency, priority. Used by the Agent to highlight stale or unreachable pages.

  • Repository mapping data

    Org / repo / framework / branch + likely editable surfaces. The single most useful import — every Claude prompt benefits from it.

  • Crawler-derived data

    Aggregates over per-pathname crawl frequency from Core's own bot-visits table. No separate import needed.

  • Execution history

    When an operator records a deploy outcome (verifyBefore + verification + notes), the lifecycle is stored and feeds the rule brain.
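The sitemap fields listed above (last-modified, change-frequency, priority) match the standard sitemaps.org protocol, so a sitemap import reduces to extracting those per-URL fields. A minimal parsing sketch, assuming a standards-conformant sitemap (the record shape here is illustrative, not WebmasterID's internal format):

```python
# Extract the per-URL fields a sitemap import carries:
# loc, lastmod, changefreq, priority (per the sitemaps.org protocol).
import xml.etree.ElementTree as ET

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def parse_sitemap(xml_text: str) -> list[dict]:
    """Return one record per <url> entry; absent optional fields stay None."""
    root = ET.fromstring(xml_text)
    records = []
    for url in root.findall("sm:url", NS):
        def text(tag):
            el = url.find(f"sm:{tag}", NS)
            return el.text.strip() if el is not None and el.text else None
        records.append({
            "loc": text("loc"),
            "lastmod": text("lastmod"),
            "changefreq": text("changefreq"),
            "priority": text("priority"),
        })
    return records

SAMPLE = """<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/</loc>
    <lastmod>2024-01-15</lastmod>
    <changefreq>weekly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>"""

print(parse_sitemap(SAMPLE))
```

A record whose `lastmod` is far in the past is exactly the signal the Agent uses to flag stale pages.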

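As a rough illustration of the execution-history entry described above, a deploy-outcome record might look like the following. The field names are assumptions inferred from the prose (verifyBefore + verification + notes), not a documented WebmasterID schema:

```python
# Hypothetical shape of an execution-history record. Field names are
# illustrative, inferred from the description (verifyBefore + verification
# + notes), not WebmasterID's actual schema.
from dataclasses import dataclass, asdict

@dataclass
class DeployOutcome:
    recommendation_id: str   # which recommendation was acted on
    verify_before: str       # state captured before the deploy (verifyBefore)
    verification: str        # observed result after the deploy
    notes: str = ""          # free-form operator notes
    workspace_id: str = ""   # history, like imports, is workspace-scoped

outcome = DeployOutcome(
    recommendation_id="rec-123",
    verify_before="page /pricing returned 404",
    verification="page /pricing returns 200 after redeploy",
    notes="redirect added in next.config.js",
    workspace_id="ws-1",
)
print(asdict(outcome))
```

Storing both the before-state and the verified after-state is what lets the rule brain learn which kinds of recommendations actually pay off.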
Expanding

On the roadmap

Honest about timeline: these are areas where support is still being built out. If your team needs one of them now, let us know; concrete use cases move them up the queue.

  • Search Console-style exports (planned, not available today)
  • Manual CSV operational data (limited support today, expanding)
  • Per-site canonical history
  • Per-page reading-traffic timeseries from third-party sources

Import support is expanding. The Agent already uses WebmasterID Core and crawler-derived data — useful from day one even without imports.

How it works

Imported data flows through the same pipeline

Imports go into workspace-scoped storage. The recommendation engine reads them alongside Core data; when a recommendation reaches the daily brief, the Claude prompt carries lineage to the imported source.

  1. Operator uploads or wires the import (workspace-scoped).
  2. The Agent's recommendation engine reads imports alongside Core data — no preference for either source.
  3. Recommendations grounded in imported data appear in the daily brief with provenance.
  4. The operator prepares a Claude prompt; the prompt carries the imported-data lineage so Claude knows what it's acting on.
  5. The operator reviews, deploys, and verifies as usual.
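The provenance idea running through steps 2 through 4 can be sketched as follows: a recommendation keeps a pointer to the data source it was grounded in, and that lineage travels into the prompt context. All names here are illustrative assumptions, not the product's API:

```python
# Sketch of data lineage through the pipeline: a recommendation records
# which source grounded it, and the prompt context carries that lineage
# forward. Names are hypothetical, not WebmasterID's actual API.
from dataclasses import dataclass

@dataclass
class Recommendation:
    summary: str
    source: str        # e.g. "core:bot-visits" or "import:sitemap"
    workspace_id: str

def prompt_context(rec: Recommendation) -> str:
    # The prompt carries the imported-data lineage so the model
    # knows what evidence the recommendation rests on.
    return f"[{rec.workspace_id}] {rec.summary} (grounded in: {rec.source})"

rec = Recommendation(
    summary="3 sitemap URLs have lastmod older than 18 months",
    source="import:sitemap",
    workspace_id="ws-1",
)
print(prompt_context(rec))
```

Because Core data and imports flow through the same structure, step 2's "no preference for either source" falls out naturally: only the `source` label differs.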

Privacy + scope

Imports are workspace-scoped

Imported data follows the same rules as Core events: workspace-scoped, on infrastructure you control, never shared across workspaces, never sent to a third-party processor.

FAQ

Frequently asked

Do I have to import data to use the Agent?
No. The Agent already uses WebmasterID Core events, bot visits, AI referrals, and crawler-derived data out of the box. Import support is additive — useful when you have operational data that lives outside Core (sitemaps, repo metadata, manual operational notes).
Where does imported data live?
Workspace-scoped, on infrastructure you (or our managed deployment) own. No third-party processor sees the imports. The same privacy posture as Core events.
What formats are supported today?
Sitemap data and repository mapping data have first-class import surfaces. CSV / manual operational data has limited support — useful for specific workflows the operator already runs. We expand support as concrete use cases land.
Will Search Console exports be supported?
Search Console-style exports are on the roadmap; support is not in place today. If you have a specific use case, contact us at sales@webmasterid.com.
Can imported data trigger Claude prompts?
Yes. Imported data feeds the same recommendation pipeline the rest of the Agent uses. When a recommendation reaches the daily brief, the operator can prepare a Claude prompt as usual; the prompt carries lineage to the underlying data.

Ecosystem

Built for operators

Imports + Agent + Claude compose with the rest of the operator stack.
