Why do we care about website errors if everything looks like it's working?
Because Google has found all these errors and is downgrading the search ranking of our site accordingly.
Google Search Console makes identifying and fixing server and URL errors much easier than it would be otherwise.
Example:
We recently did a partial redesign of our agency site ACROGlobal.com. Looking at the pages of the finished product - over and over - we could see nothing wrong with it.
But then after allowing time for Google to crawl the revised site, we looked at Google Search Console.
Tip: The first thing to do after logging in to Search Console and selecting a domain is to make sure Search Console is showing data for the preferred version of your site's URLs - the one you chose when setting up Search Console: http://domain.com or http://www.domain.com. The data will differ between the two. If you haven't set a preferred version, look at the data from both versions; one may have no data, so pick the other one. Then, at the end of this exercise, choose a preferred version for future use.
At the top right of the Dashboard, you can see which version Search Console is choosing to present:
If that's not the preferred version, change it in the dropdown.
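If you're not sure how the two versions of your site behave, a quick check can show what each one returns. Here's a minimal sketch in Python using the third-party requests library; the domain names are just examples to replace with your own:

    # Check how the www and non-www versions of a domain respond.
    # Minimal sketch; substitute your own domain for the examples below.
    import requests

    for url in ("http://acroglobal.com/", "http://www.acroglobal.com/"):
        # Don't follow redirects, so we see each version's own answer.
        response = requests.get(url, allow_redirects=False, timeout=10)
        print(url, "->", response.status_code,
              response.headers.get("Location", "(no redirect)"))

If one version permanently redirects (301) to the other, the redirect target is the natural choice of preferred version.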
The left-side menu of Search Console looks like this:
To see what errors Google has found, click Crawl Errors.
Here's what we found there:
That screen tells us several things:
- Google detected no site errors, that is, Googlebot wasn't blocked from crawling any important parts of the site by our robots.txt file or otherwise.
- Google's Desktop bot found one URL - the subdirectory /services/, as indicated near the bottom of the screen - with a "soft 404" error. That means the page doesn't really exist, but the server returned a 200 (OK) response rather than a 404 when the bot requested it (see the sketch after this list).
- Google's Desktop bot also found 25 URLs returning 404 (Not Found) errors.
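The difference between a real 404 and a soft 404 is easy to see for yourself: request a page that shouldn't exist and look at the status code. A minimal sketch, again in Python with the requests library; the URL is a made-up example:

    # A soft 404 is a missing page that the server answers with 200 OK.
    # Minimal sketch: request a URL that should not exist and inspect
    # the status code. The path below is a made-up example.
    import requests

    response = requests.get("http://www.acroglobal.com/no-such-page.htm",
                            timeout=10)
    if response.status_code == 404:
        print("Real 404: the server correctly reports the page as missing.")
    elif response.status_code == 200:
        print("Soft 404 risk: missing page answered with 200 OK.")
    else:
        print("Status:", response.status_code)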
Dealing first with the soft 404 error:
The reason this is occurring: for security reasons, /services/ contains - along with several legitimate pages - an index file consisting of nothing but "<p><strong>ACCESS DENIED</strong></p>". Obviously nothing should link to this file, but Google found a link that does.
So where is this link?
To find out, we clicked on /services/ in the screen above, and on the Linked from tab of the following screen found this:
No wonder we didn't immediately spot this - it's on a different site! The fix is very straightforward: open the source code of those three pages and remove the offending link.
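If you're not sure which pages carry a bad link, a short script can find every file that contains it. A minimal sketch, assuming you have a local copy of the site's HTML files; the directory and the href below are placeholders:

    # Find every HTML file under a directory that links to a given URL.
    # Minimal sketch; the directory and href below are placeholders.
    from pathlib import Path

    SITE_ROOT = Path("/var/www/site")                  # local copy of the site
    BAD_HREF = "http://www.acroglobal.com/services/"   # link to remove

    for page in SITE_ROOT.rglob("*.htm*"):             # matches .htm and .html
        text = page.read_text(encoding="utf-8", errors="ignore")
        if BAD_HREF in text:
            print("Offending link found in:", page)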
Now for the 25 404 errors:
Clicking on the Not Found tab of the Crawl Errors screen produces the following (showing here only the first three 404s, for simplicity):
Each of those URLs (pages) existed at one time or another on old versions of ACROGlobal.com, but not on the new version. We checked each one to find out where the broken link is. Here's what we found by clicking on "analytics.htm" and going to the Linked From tab of the next screen:
So despite our fine-tooth-combing of our new site version, we missed this broken link on five of our own pages. Easy fix.
But sacré bleu! The same broken link also exists on the blog of the HEC Montreal Business School! We'll have to ask them to fix that!
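Once fixes like these are in place, it's worth re-checking all of the flagged URLs yourself rather than waiting for Google's next crawl. A minimal sketch that reads URLs from a plain text file (one per line; the filename is a placeholder, e.g. a list exported from Search Console) and reports each status:

    # Re-check a list of flagged URLs and report each HTTP status.
    # Minimal sketch; "crawl_errors.txt" is a placeholder file with
    # one URL per line. Some servers answer HEAD requests differently
    # from GET, so substitute requests.get if results look odd.
    import requests

    with open("crawl_errors.txt") as f:
        urls = [line.strip() for line in f if line.strip()]

    for url in urls:
        try:
            status = requests.head(url, allow_redirects=True,
                                   timeout=10).status_code
        except requests.RequestException as exc:
            status = exc.__class__.__name__
        print(status, url)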
Removing or changing all of the links that produce these crawl errors will make Google happier with ACROGlobal.com, and possibly with others of our sites as well.
That makes this sometimes tedious process well worthwhile.
More on the Search Console later.
In the meantime, if you have questions or comments, please send them along.