
Top 5 Crawl Stats Insights in Google Search Console

Google Search Console’s Crawl Stats report is hidden away from the main interface, but it’s one of the most useful reports available. Here’s why you need it.

For those just getting started with SEO, one report in Google Search Console is both extremely helpful and surprisingly hard to find.

Even though you can’t access it through Google Search Console’s main interface, it’s one of the most useful tools for any SEO practitioner.

This article covers what the report is for, how to access it, and how to use it for SEO.

How is Your Website Crawled?

It’s critical for SEO, particularly for big sites, to keep an eye on the number of pages Googlebot can and wants to crawl.

Google may not index some of your most important pages if your website’s crawl budget is too low.

And if a page isn’t indexed, it can’t appear in search results, so there’s nothing for searchers to find.

Every day, Googlebot visits a certain number of pages on your website.

This information can help you spot anomalies that may be contributing to your SEO troubles.

Diving Into Your Crawl Stats: 5 Key Insights

To access the Crawl Stats report, log into your Google Search Console account and go to Settings > Crawl stats.

The Crawl Stats report lets you break the data down along the following dimensions:

1. Host

Say you run an e-commerce store on shop.website.com and a blog on a separate subdomain:

The Crawl Stats report lets you examine the crawl statistics for each subdomain of your website separately.

Note that this breakdown currently works only for subdomains, not subfolders.

2. HTTP Status Code

The Crawl Stats report also lets you examine the status codes of crawled URLs.

Ideally, Googlebot shouldn’t spend resources crawling URLs that don’t return HTTP 200 OK; otherwise, your crawl budget is wasted.

Settings > Crawl Stats > Crawl requests breakdown provides a breakdown of crawled URLs by status code.

In this example, redirected URLs accounted for 16% of all crawl requests.

If you see numbers like these, investigate redirect chains and other potential problems.
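You can reproduce a similar breakdown yourself if you have a list of crawled URLs with their status codes (from your own crawler or a log export). A minimal sketch with made-up sample data:

```python
# Summarize crawled URLs by status-code class, the way the
# Crawl Stats breakdown does. The (url, status) pairs are made up.
from collections import Counter

crawled = [
    ("https://www.example.com/", 200),
    ("https://www.example.com/old-page", 301),
    ("https://www.example.com/missing", 404),
    ("https://www.example.com/api/slow", 500),
    ("https://www.example.com/products", 200),
]

# Group 200 -> "2xx", 301 -> "3xx", and so on.
classes = Counter(f"{status // 100}xx" for _, status in crawled)
total = sum(classes.values())

for cls, count in sorted(classes.items()):
    print(f"{cls}: {count / total:.0%}")
```

Anything beyond a small share of 3xx/4xx/5xx responses is worth investigating, since each of those requests spends crawl budget without serving content.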

A large number of 5xx errors is, in my view, one of the most serious problems you can see here.

If a site slows down or starts returning server errors, “the limit falls lower and Googlebot crawls less,” according to Google’s documentation.

To learn more about Google Search Console’s 5xx issues, check out Roger Montti’s post.

3. Purpose

The Crawl Stats report distinguishes two crawl purposes:

  • Refresh: a recrawl of already-known pages (e.g., Googlebot revisiting your homepage to discover new links and content).
  • Discovery: URLs crawled for the first time.

Here’s an illustration of how valuable this breakdown is:

I recently came across a website with about a million pages labeled “Discovered – currently not indexed.”

Ninety percent of the pages on that site were affected by this problem.


“Discovered – currently not indexed” means that Google found a page but hasn’t visited it yet. It’s as if you heard about a new restaurant in your area but never tried it.


One option was to wait and see whether Google would eventually index these pages.

Another was to dig into the data and diagnose the problem.

I chose the latter: I logged into Google Search Console, went to Settings, and opened the Crawl Stats report.

According to the report, Google was crawling only 7,460 pages per day.

However, there’s one more item to consider.

The Crawl Stats report showed that only 35% of those 7,460 requests were for discovery purposes.

In other words, Google was discovering just 2,611 new pages per day.

2,611 out of more than a million pages.

At that rate, Google would take 382 days to index the whole website.
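The arithmetic behind that estimate is simple enough to sketch in a few lines. The figures come from the example above; the assumption that the crawl rate and discovery share stay constant is, of course, a simplification:

```python
# Back-of-envelope crawl budget estimate using the figures above.
total_unindexed = 1_000_000   # pages stuck in "Discovered - currently not indexed"
crawled_per_day = 7_460       # daily crawl requests from the Crawl Stats report
discovery_share = 0.35        # share of requests with purpose "Discovery"

new_pages_per_day = crawled_per_day * discovery_share   # 2,611 pages/day
days_to_discover = total_unindexed / new_pages_per_day  # roughly a year

print(f"{new_pages_per_day:.0f} new pages/day, "
      f"~{days_to_discover:.0f} days to get through the backlog")
```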

This discovery was a game-changer: all other search improvements were put on the back burner so we could concentrate on crawl budget optimization.

4. File Type

The Crawl Stats report is also useful for JavaScript-heavy websites. You can easily check how often Googlebot fetches the JS files that are essential for proper rendering.

And if you have a lot of images on your site and image search is an important part of your SEO strategy, this report shows how successfully Googlebot crawls your images.

5. Googlebot Type

Lastly, the Crawl Stats report details which type of Googlebot crawled your site.

Each Googlebot type (Mobile, Desktop, Image, and Video) is shown as a share of total crawl requests.

Various Other Valuable Details

If you don’t have access to your server logs, the Crawl Stats report is also a great place to look for other vital information, such as:

  • DNS errors, often the most common issue.
  • Page timeouts.
  • Problems fetching your robots.txt file.
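On the robots.txt point, one check you can run yourself is whether your robots.txt rules actually let Googlebot reach your key sections. A minimal sketch using Python’s standard-library parser on a made-up robots.txt (the paths are hypothetical):

```python
import urllib.robotparser

# A made-up robots.txt; substitute your own file's contents.
robots_txt = """\
User-agent: *
Disallow: /admin/

User-agent: Googlebot
Disallow: /search/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# Key pages should be fetchable; internal search results should not be.
print(rp.can_fetch("Googlebot", "https://www.example.com/category/shoes"))
print(rp.can_fetch("Googlebot", "https://www.example.com/search/q"))
```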

Using Crawl Stats in the URL Inspection Tool

In addition to the Crawl Stats report, you can find more detailed crawl information in the URL Inspection Tool.

Recently, I worked on a huge e-commerce website and, after some initial analysis, identified two urgent problems:

Many product pages were missing from Google’s search results.

The site had almost no internal linking between products, so sitemaps and category pages were Google’s only means of discovering fresh content.

The logical next step was to check the server logs and see whether Google was crawling the paginated category pages.

However, getting access to server logs can be a challenge, particularly when you’re working with a large company.
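If you do manage to get the logs, the check itself is simple: filter for Googlebot and list its hits on the paginated URLs. A minimal sketch, assuming the common combined log format and hypothetical `/category/...?page=N` URLs (a production version should also verify the requesting IP, since user-agent strings can be spoofed):

```python
import re

GOOGLEBOT = re.compile(r"Googlebot", re.IGNORECASE)
PAGINATED = re.compile(r'"GET (/category/[^ ]*\?page=\d+) HTTP')

# Two made-up log lines: one Googlebot hit, one ordinary visitor.
sample_log = [
    '66.249.66.1 - - [10/May/2022:06:25:24 +0000] "GET /category/shoes?page=2 HTTP/1.1" 200 5123 "-" '
    '"Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '203.0.113.7 - - [10/May/2022:06:26:02 +0000] "GET /category/shoes?page=3 HTTP/1.1" 200 4990 "-" "Mozilla/5.0"',
]

hits = [
    PAGINATED.search(line).group(1)
    for line in sample_log
    if GOOGLEBOT.search(line) and PAGINATED.search(line)
]
print(hits)  # paginated URLs that Googlebot actually requested
```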

Fortunately, Search Console’s crawl data saved the day.

Take a look at my approach to solving this problem and see if you can apply it to solve your own.

First, inspect a URL with the URL Inspection tool. I picked a paginated page from one of the site’s key categories.

Next, open the Coverage section and look at the Crawl details.

The URL was last crawled three months ago in this instance.

Keep in mind that this was one of the website’s primary category pages, and it hadn’t been crawled in three months!

I checked other category pages as well.

A few primary category pages had never been crawled by Googlebot at all; Google may not even know some of them exist.

The importance of this information cannot be overstated when you’re trying to improve a website’s visibility.

With Search Console, you can dig up information like this in minutes.

Final Thoughts

The Crawl Stats report is a valuable SEO tool, even though you could use Google Search Console for years without ever stumbling upon it.

For a large site, it’s vital that Google discovers and indexes new content quickly. This report can help you identify indexing problems and optimize your crawl budget.