What ScanLytics checks automatically today.

From public HTML and indexability to early AI-readiness signals and CMS context for Shopify and WordPress.

  • robots.txt presence
  • Sitemap present, missing, or unusable
  • Canonical present or missing
  • Noindex signals on public pages
Example finding

"3 public pages are sending a noindex signal"
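A noindex signal like the one above can arrive via a meta robots tag or an X-Robots-Tag response header. A minimal sketch of such a check, assuming the page HTML and response header are already fetched (the function name `has_noindex` is illustrative, not ScanLytics' actual implementation):

```python
import re

def has_noindex(html: str, x_robots_header: str = "") -> bool:
    """Illustrative check: does a public page send a noindex signal?

    Looks for a <meta name="robots"> tag in the HTML and falls back
    to the X-Robots-Tag header value passed by the caller.
    """
    meta = re.search(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']*)["\']',
        html,
        re.IGNORECASE,
    )
    if meta and "noindex" in meta.group(1).lower():
        return True
    return "noindex" in x_robots_header.lower()
```

Running this over every crawled public page and counting the `True` results would produce the "3 public pages are sending a noindex signal" style of finding.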

  • Title present or missing
  • Meta description present or missing
  • H1 structure: none, one or multiple H1 tags
  • Images without alt text
Example finding

"8 pages are missing a meta description"
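The on-page basics above (title, meta description, H1 count, images without alt text) can all be collected in a single pass over the HTML. A sketch using Python's standard-library parser; the class name and counting rules are assumptions, e.g. here an image with no `alt` attribute at all counts as missing:

```python
from html.parser import HTMLParser

class PageBasics(HTMLParser):
    """Illustrative one-pass collector for the on-page checks above."""

    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.meta_description = ""
        self.h1_count = 0
        self.images_missing_alt = 0

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "title":
            self.in_title = True
        elif tag == "meta" and (a.get("name") or "").lower() == "description":
            self.meta_description = a.get("content") or ""
        elif tag == "h1":
            self.h1_count += 1
        elif tag == "img" and "alt" not in a:
            # No alt attribute at all; an empty alt="" is treated as
            # intentional (decorative image) in this sketch.
            self.images_missing_alt += 1

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data
```

Aggregating `meta_description == ""` across pages would yield the "8 pages are missing a meta description" style of finding; `h1_count` distinguishes the none / one / multiple cases.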

  • JSON-LD present or missing
  • Sample URLs where structured data is missing
  • Signal per crawled page in the current scope
Example finding

"Structured data is missing on 6 crawled pages"
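Detecting structured data comes down to finding `<script type="application/ld+json">` blocks and checking that they actually parse. A hedged sketch (the function name and the choice to treat malformed JSON as missing are assumptions):

```python
import json
import re

def find_json_ld(html: str) -> list:
    """Illustrative extractor: return the parseable JSON-LD blocks
    on a page. A page returning an empty list would contribute to a
    'structured data missing' finding."""
    blocks = []
    pattern = r'<script[^>]+type=["\']application/ld\+json["\'][^>]*>(.*?)</script>'
    for raw in re.findall(pattern, html, re.DOTALL | re.IGNORECASE):
        try:
            blocks.append(json.loads(raw))
        except json.JSONDecodeError:
            pass  # malformed JSON-LD counts as missing in this sketch
    return blocks
```

Pages where this returns an empty list become the sample URLs reported in the finding above.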

  • robots.txt and sitemap as discoverability basics
  • llms.txt and llms-full.txt presence
  • Open Graph, feeds, hreflang and machine-readable hints
  • A first indication of whether your site is easier for agents and AI systems to read
Example finding

"No llms.txt found and few machine-readable routes visible"
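The AI-readiness signal boils down to checking which well-known machine-readable routes a crawl actually found. A minimal sketch over an already-discovered set of paths; the route list and the simple ratio score are illustrative, not ScanLytics' actual weighting:

```python
# Routes checked in this sketch; mirrors the discoverability basics above.
AI_READINESS_ROUTES = (
    "/robots.txt",
    "/sitemap.xml",
    "/llms.txt",
    "/llms-full.txt",
)

def ai_readiness(discovered_paths: set) -> dict:
    """Illustrative signal: which machine-readable routes are present,
    which are missing, and a naive present/total score."""
    present = [p for p in AI_READINESS_ROUTES if p in discovered_paths]
    missing = [p for p in AI_READINESS_ROUTES if p not in discovered_paths]
    return {
        "present": present,
        "missing": missing,
        "score": len(present) / len(AI_READINESS_ROUTES),
    }
```

A site exposing only `/robots.txt` and `/sitemap.xml` would score 0.5 here, matching the "No llms.txt found and few machine-readable routes visible" style of finding.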

  • Detection of WordPress, Shopify or custom
  • CMS label per audit result
  • Fix direction based on detected CMS context
Example finding

"Detected CMS context: Shopify"
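CMS detection is typically fingerprinting: characteristic asset paths and hostnames in the HTML. A simplified sketch; real detection would also inspect response headers, cookies, and generator tags, and these particular markers are assumptions:

```python
def detect_cms(html: str) -> str:
    """Illustrative fingerprinting for the WordPress / Shopify / custom
    split. Markers here are common but not exhaustive."""
    h = html.lower()
    # wp-content asset paths and the wp-json REST root are typical
    # WordPress markers.
    if "wp-content" in h or "wp-json" in h:
        return "WordPress"
    # Shopify storefronts usually reference the Shopify CDN.
    if "cdn.shopify.com" in h:
        return "Shopify"
    return "custom"
```

The returned label is what would be attached to each audit result as the CMS context and used to pick a fix direction.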

  • Severity per finding
  • Impact score per finding
  • Estimated fix time per finding
  • Sorted by what deserves attention first
Example finding

"Noindex on public pages ranks above a missing alt finding"
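The prioritization above can be sketched as a multi-key sort: severity first, then impact, then the quickest fix. The severity labels, scales, and tie-breaking order below are assumptions for illustration:

```python
from dataclasses import dataclass

# Assumed severity labels; lower rank sorts first.
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

@dataclass
class Finding:
    title: str
    severity: str     # one of the SEVERITY_RANK keys
    impact: int       # higher = more impact (illustrative scale)
    fix_minutes: int  # estimated fix time

def prioritize(findings):
    """Sort findings so what deserves attention first comes first:
    by severity, then impact (descending), then quickest fix."""
    return sorted(
        findings,
        key=lambda f: (SEVERITY_RANK[f.severity], -f.impact, f.fix_minutes),
    )
```

Under this ordering, a critical noindex finding naturally ranks above a low-severity missing-alt finding, as in the example above.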

  • Quick Audit: compact first scope
  • Deep Audit: broader scope for larger sites
  • Results directly in your dashboard
  • Monitoring and Agency tiers available as recurring plans
Example finding

"Deep Audit uses the same base layer, but with more pages in scope"