Screaming Frog SEO Spider 10.3
The Screaming Frog SEO Spider is a website crawler that allows you to crawl websites' URLs and fetch key onsite elements to analyse onsite SEO. Download it for free, or purchase a licence for additional advanced features. The SEO Spider is lightweight and flexible, and can crawl extremely quickly, allowing you to analyse the results in real time. It gathers key onsite data to allow SEOs to make informed decisions.
Find Broken Links
Crawl a website instantly and find broken links (404s) and server errors. Bulk export the errors and source URLs to fix, or send to a developer.
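The idea behind broken-link detection can be sketched as follows — this is a hypothetical illustration of classifying crawled links by HTTP status, not the SEO Spider's internals:

```python
# Sketch of broken-link classification: bucket crawled links whose HTTP
# status is missing (no response), a 4XX client error or a 5XX server
# error. The crawl results below are hypothetical.
def classify_errors(crawl_results):
    """crawl_results: iterable of (source_url, linked_url, status) tuples.
    Returns the tuples whose status indicates a broken link."""
    broken = []
    for source, target, status in crawl_results:
        if status is None or 400 <= status < 600:  # no response, 4XX or 5XX
            broken.append((source, target, status))
    return broken

results = [
    ("https://example.com/", "https://example.com/about", 200),
    ("https://example.com/", "https://example.com/old-page", 404),
    ("https://example.com/blog", "https://example.com/api", 500),
]
print(classify_errors(results))
```

The output pairs each error with the source URL that links to it, which is exactly what a developer needs to fix the link.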
Audit Redirects
Find temporary and permanent redirects, identify redirect chains and loops, or upload a list of URLs to audit in a site migration.
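Chain and loop detection over already-crawled redirect data can be sketched like this — the redirect mapping is hypothetical, standing in for 3XX responses gathered during a crawl:

```python
# Sketch of redirect-chain auditing: redirects is a hypothetical mapping
# of URL -> redirect target collected from 3XX responses during a crawl.
def trace_redirect(url, redirects, max_hops=10):
    """Follow a chain of redirects, returning (chain, is_loop)."""
    chain = [url]
    seen = {url}
    while url in redirects and len(chain) <= max_hops:
        url = redirects[url]
        if url in seen:          # revisiting a URL means a redirect loop
            return chain + [url], True
        seen.add(url)
        chain.append(url)
    return chain, False

redirects = {
    "http://example.com/a": "http://example.com/b",  # e.g. a 301
    "http://example.com/b": "http://example.com/c",  # e.g. a 302
    "http://example.com/x": "http://example.com/y",
    "http://example.com/y": "http://example.com/x",  # loop back to /x
}
print(trace_redirect("http://example.com/a", redirects))  # chain of 3 hops, no loop
print(trace_redirect("http://example.com/x", redirects))  # loop detected
```

Any chain longer than one hop is worth flattening during a migration, since each extra hop costs latency and dilutes link equity.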
Analyse Page Titles & Meta Data
Analyse page titles and meta descriptions during a crawl and identify those that are too long, short, missing, or duplicated across your site.
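These checks boil down to simple length and uniqueness rules. A minimal sketch, using the character limits mentioned elsewhere in this document (65 for titles, 156 for descriptions — real truncation is pixel-width based) and hypothetical page data:

```python
# Illustrative audit of page titles and meta descriptions: flag missing,
# too-long and duplicate titles, and missing or too-long descriptions.
def audit_elements(pages, title_max=65, desc_max=156):
    issues = {}
    titles_seen = {}  # title text -> first URL that used it
    for url, title, desc in pages:
        page_issues = []
        if not title:
            page_issues.append("missing title")
        elif len(title) > title_max:
            page_issues.append("title too long")
        if title in titles_seen:
            page_issues.append("duplicate title of " + titles_seen[title])
        else:
            titles_seen[title] = url
        if not desc:
            page_issues.append("missing description")
        elif len(desc) > desc_max:
            page_issues.append("description too long")
        if page_issues:
            issues[url] = page_issues
    return issues

pages = [
    ("/a", "Home", "Welcome to our site."),
    ("/b", "Home", ""),  # duplicate title, missing description
]
print(audit_elements(pages))
```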
Discover Duplicate Content
Discover exact duplicate pages using an MD5 algorithmic check, find partially duplicated elements such as page titles, descriptions or headings, and identify low-content pages.
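The exact-duplicate check works by hashing page content: identical HTML produces an identical MD5 digest, so duplicates can be grouped without comparing full pages against each other. A minimal sketch with hypothetical pages:

```python
import hashlib

# Sketch of exact-duplicate detection via MD5: group URLs by the digest
# of their HTML. The page contents below are hypothetical.
def md5_of(html):
    return hashlib.md5(html.encode("utf-8")).hexdigest()

pages = {
    "/a": "<html><body>Hello</body></html>",
    "/b": "<html><body>Hello</body></html>",     # exact duplicate of /a
    "/c": "<html><body>Different</body></html>",
}
groups = {}
for url, html in pages.items():
    groups.setdefault(md5_of(html), []).append(url)

duplicates = [urls for urls in groups.values() if len(urls) > 1]
print(duplicates)  # [['/a', '/b']]
```

Note this only catches byte-for-byte duplicates; near-duplicates (a shared template with different body text) need the element-level checks instead.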
Extract Data with XPath
Collect any data from the HTML of a web page using CSS Path, XPath or regex. This might include social meta tags, additional headings, prices, SKUs and more!
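The two extraction styles can be illustrated with the standard library — ElementTree supports a subset of XPath, and the snippet below is hypothetical, well-formed XHTML (the SEO Spider itself handles real-world HTML):

```python
import re
import xml.etree.ElementTree as ET

# Illustration of custom extraction: an XPath-style query for a meta tag
# and a regex for a price. The page source is a hypothetical example.
html = """<html><head>
<meta name="description" content="Acme widgets shop"/>
</head><body>
<h2>Blue Widget</h2><span class="price">$19.99</span>
</body></html>"""

root = ET.fromstring(html)
# XPath-style extraction: the content attribute of the description meta tag
description = root.find(".//meta[@name='description']").get("content")
# Regex extraction: the first price pattern in the raw source
price = re.search(r"\$\d+\.\d{2}", html).group()
print(description, price)  # Acme widgets shop $19.99
```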
Review Robots & Directives
View URLs blocked by robots.txt, meta robots or X-Robots-Tag directives such as 'noindex' or 'nofollow', as well as canonicals and rel="next" and rel="prev".
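Checking URLs against robots.txt rules can be sketched with the standard library parser — the rules and URLs below are hypothetical (note that Python's parser applies the first matching rule, so the more specific Allow line is listed first):

```python
from urllib.robotparser import RobotFileParser

# Sketch of auditing URLs against robots.txt directives; rules and URLs
# are hypothetical. No network request is made: the rules are parsed
# directly from a string.
rules = """User-agent: *
Allow: /private/public-page
Disallow: /private/
"""
rp = RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("*", "https://example.com/about"))                # allowed
print(rp.can_fetch("*", "https://example.com/private/secret"))       # blocked
print(rp.can_fetch("*", "https://example.com/private/public-page"))  # allowed
```

Meta robots and X-Robots-Tag directives are a separate layer on top of this: a URL can be crawlable under robots.txt yet still carry 'noindex'.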
Generate XML Sitemaps
Quickly create XML Sitemaps and Image XML Sitemaps, with advanced configuration over URLs to include, last modified, priority and change frequency.
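The sitemap format itself is simple XML in the sitemaps.org namespace. A minimal generation sketch with hypothetical URLs, lastmod dates, change frequencies and priorities:

```python
import xml.etree.ElementTree as ET

# Sketch of building a minimal XML sitemap with the standard library.
# All URL entries below are hypothetical.
NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
urlset = ET.Element("urlset", xmlns=NS)
entries = [
    ("https://example.com/", "2018-10-01", "daily", "1.0"),
    ("https://example.com/about", "2018-09-15", "monthly", "0.5"),
]
for loc, lastmod, changefreq, priority in entries:
    url = ET.SubElement(urlset, "url")
    ET.SubElement(url, "loc").text = loc
    ET.SubElement(url, "lastmod").text = lastmod
    ET.SubElement(url, "changefreq").text = changefreq
    ET.SubElement(url, "priority").text = priority

sitemap = ET.tostring(urlset, encoding="unicode")
print(sitemap)
```

The advanced configuration the Spider offers is essentially control over which crawled URLs become entries and what values these four child elements take.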
Integrate with Google Analytics
Connect to the Google Analytics API and fetch user data such as sessions, bounce rate, conversions, goals, transactions and revenue for landing pages against the crawl.
The Screaming Frog SEO Spider is an SEO auditing tool, built by real SEOs with thousands of users worldwide. A quick summary of some of the data collected in a crawl includes:
Errors – Client errors such as broken links & server errors (no responses, 4XX, 5XX).
Redirects – Permanent and temporary redirects (3XX responses) & JS redirects.
Blocked URLs – View & audit URLs disallowed by the robots.txt protocol.
Blocked Resources – View & audit blocked resources in rendering mode.
External Links – All external links and their status codes.
Protocol – Whether the URLs are secure (HTTPS) or insecure (HTTP).
URI Issues – Non-ASCII characters, underscores, uppercase characters, parameters, or long URLs.
Duplicate Pages – Hash value / MD5 checksum algorithmic check for exact duplicate pages.
Page Titles – Missing, duplicate, over 65 characters, short, pixel width truncation, same as h1, or multiple.
Meta Descriptions – Missing, duplicate, over 156 characters, short, pixel width truncation or multiple.
Meta Keywords – Mainly for reference, as they are not used by Google, Bing or Yahoo.
File Size – Size of URLs & images.
Page Depth Level.
H1 – Missing, duplicate, over 70 characters, multiple.
H2 – Missing, duplicate, over 70 characters, multiple.
Meta Robots – Index, noindex, follow, nofollow, noarchive, nosnippet, noodp, noydir etc.
Meta Refresh – Including target page and time delay.
Canonical link element & canonical HTTP headers.
rel="next" and rel="prev".
Follow & Nofollow – At page and link level (true/false).
hreflang Attributes – Audit missing confirmation links, inconsistent & incorrect language codes, non-canonical hreflang and more.
Rendering – Crawl JavaScript frameworks like AngularJS and React by crawling the rendered HTML after JavaScript has executed.
AJAX – Select to obey Google's now-deprecated AJAX Crawling Scheme.
Inlinks – All pages linking to a URI.
Outlinks – All pages a URI links out to.
Anchor Text – All link text. Alt text from images with links.
Images – All URIs with the image link & all images from a given page. Images over 100kb, missing alt text, alt text over 100 characters.
User-Agent Switcher – Crawl as Googlebot, Bingbot, Yahoo! Slurp, mobile user-agents or your own custom UA.
Configurable Accept-Language Header – Supply an Accept-Language HTTP header to crawl locale-adaptive content.
Redirect Chains – Discover redirect chains and loops.
Custom Source Code Search – Find anything you want in the source code of a website, whether that's Google Analytics code, specific text or code snippets.
Custom Extraction – Scrape any data from the HTML of a URL using XPath, CSS Path selectors or regex.
Google Analytics Integration – Connect to the Google Analytics API and pull in user and conversion data directly during a crawl.
Google Search Console Integration – Connect to the Google Search Analytics API and collect impression, click and average position data against URLs.
External Link Metrics – Pull external link metrics from the Majestic, Ahrefs and Moz APIs into a crawl to perform content audits or profile links.
XML Sitemap Generator – Create an XML sitemap and an image sitemap using the SEO Spider.
Custom robots.txt – Download, edit and test a site's robots.txt using the custom robots.txt feature.
Rendered Screenshots – Fetch, view and analyse the rendered pages crawled.
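The user-agent switching and Accept-Language features in the list above both come down to HTTP request headers. A sketch of the headers involved, using Googlebot's published user-agent token and a hypothetical URL (no request is actually sent):

```python
from urllib.request import Request

# Sketch of the request headers behind user-agent switching and
# locale-adaptive crawling. The target URL is hypothetical; the
# user-agent string is Googlebot's published desktop token.
req = Request(
    "https://example.com/",
    headers={
        "User-Agent": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
        "Accept-Language": "de-DE,de;q=0.9",  # request German locale-adaptive content
    },
)
# urllib normalizes header names to capitalized form internally
print(req.get_header("User-agent"))
print(req.get_header("Accept-language"))
```

Crawling the same site with different values for these two headers is how you verify that a server returns the right variant to search engine bots and to users in different locales.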