The indicators of site health defined below are adapted from the details provided by SEMrush’s Site Audit tool.

Site Health
ERRORS
4xx errors – A 4xx error means that a webpage cannot be accessed. This is usually the result of broken links. These errors prevent users and search engine robots from accessing your webpages, and can negatively affect both user experience and search engine crawlability. This will in turn lead to a drop in traffic driven to your website. Please be aware that the crawler may detect a working link as broken if your website blocks our crawler from accessing it. This usually happens due to the following reasons: – DDoS protection system. – Overloaded or misconfigured server. – “Disallow” entries in your robots.txt. HOW TO FIX IT – Please follow all links reported as 4xx. If a webpage returns an error, remove the link leading to the error page or replace it with another resource. If the links reported as 4xx do work when accessed with a browser, you can try either of the following: – Contact your web hosting support team. – Instruct search engine robots not to crawl your website too frequently by specifying the “crawl-delay” directive in your robots.txt.
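As a rough illustration, a robots.txt entry using the “crawl-delay” directive might look like the lines below (the 10-second value is only a placeholder to adjust to your server’s capacity; note that Googlebot ignores this directive, so its crawl rate has to be managed through Google Search Console instead):

# Ask crawlers that honor this directive to wait 10 seconds between requests
User-agent: *
Crawl-delay: 10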
5xx errors – These errors prevent users and search engine robots from accessing your webpages, and can negatively affect user experience and search engines’ crawlability. This will in turn lead to a drop in traffic driven to your website. HOW TO FIX IT – Investigate the causes of these errors and try to fix them.
AMP Pages with HTML Issues
AMP Pages with Style and Layout Issues
AMP Pages with Templating Issues
Broken canonical URLs – By setting a rel=”canonical” element on your page, you can inform search engines of which version of a page you want to show up in search results. When using canonical tags, it is important to make sure that the URL you include in your rel=”canonical” element leads to a page that actually exists. Canonical links that lead to non-existent webpages complicate the process of crawling and indexing your content and, as a result, decrease crawling efficiency and lead to unnecessary crawl budget waste. HOW TO FIX IT – Review all broken canonical links. If a canonical URL applies to a non-existent webpage, remove it or replace it with another resource.
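For reference, a valid canonical link element pointing to a live, preferred URL looks like the following (example.com is a placeholder domain):

<!-- Placed in the <head>; the href must resolve to an existing (200 OK) page -->
<link rel="canonical" href="https://www.example.com/preferred-page/" />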
Broken internal images – An internal broken image is an image that can’t be displayed because it no longer exists, its URL is misspelled, or because the file path is not valid. Broken images may jeopardize your search rankings because they provide a poor user experience and signal to search engines that your page is low quality. HOW TO FIX IT – Replace all broken images or delete them.
Broken internal links – Broken internal links can cause a webpage to return an error status. This can occur due to an incorrect or malformed URL, or because the page the link is leading to is broken or no longer exists, etc. Multiple broken internal links may discourage users from visiting other pages of your website. Also, broken links prevent crawlers from indexing your site properly. As a result, your website’s ranking may be downgraded. Please note that the crawler may detect a working link as broken if your website blocks our crawler from accessing it. This may happen due to the following reasons: – DDoS protection system. – Overloaded or misconfigured server. – “Disallow” entries in your robots.txt. HOW TO FIX IT – Please follow all the links reported as broken. If a webpage returns an error, remove the link leading to the error page or replace it with another resource. If the links reported as broken do work when accessed with a browser, you may try either of the following: – Contact your web hosting support team. – Instruct search engine robots not to crawl your website too frequently by specifying the “crawl-delay” directive in your robots.txt.
Certificate Expiration – If you allow your certificate to expire, users accessing your website will be presented with a warning message, which usually stops them from going further and may lead to a drop in your organic search traffic. HOW TO FIX IT – Ask your website administrator to renew the certificate and run periodic checks to avoid any future issues.
Certificate registered to incorrect name – If the domain name to which your SSL certificate is registered doesn’t match the name displayed in the address bar, web browsers will block users from visiting your website by showing them a name mismatch error, and this will in turn negatively affect your organic search traffic. HOW TO FIX IT – Contact your website administrator and ask them to install the correct certificate.
DNS resolution issue – A DNS resolution error is reported when the crawler can’t resolve the hostname when trying to access your webpage. HOW TO FIX IT – Please contact your web hosting technical support and ask them to investigate and fix the issue.
Duplicate content – Webpages are considered duplicate if they contain identical or nearly identical content. Excessive duplicate content may confuse search engines as to which page to index and which one to prioritize in search results. Using duplicated content across multiple pages may lead to traffic loss and poor placement in search results, and it may even provoke search engines to ban your page. Please note that the crawler may flag your webpages as duplicates if there is too much text in your website’s navigation compared to the amount of unique text on your page. HOW TO FIX IT – Here are a few ways to fix duplicate content: – Provide some unique content on the webpage. – Remove duplicate content. – Add a rel=”canonical” link to one of your duplicate pages to inform search engines which page to show in search results.
Duplicate meta descriptions – The crawler reports pages that have duplicate meta descriptions only if they are exact matches.
A meta description tag is a short summary of a webpage’s content that helps search engines understand what the page is about and can be shown to users in search results.
Duplicate meta descriptions on different pages mean a lost opportunity to use more relevant keywords. Also, duplicate meta descriptions make it difficult for search engines and users to differentiate between different webpages. It is better to have no meta description at all than to have a duplicate one. HOW TO FIX IT – Provide a unique, relevant meta description for each of your webpages.
Duplicate title tag – The crawler reports pages that have duplicate title tags only if they are exact matches. Duplicate <title> tags make it difficult for search engines to determine which of a website’s pages is relevant for a specific search query, and which one should be prioritized in search results. Pages with duplicate titles have a lower chance of ranking well and are at risk of being banned. Moreover, identical <title> tags confuse users as to which webpage they should follow. HOW TO FIX IT – Provide a unique and concise title for each of your pages that contains your most important keywords.
Hreflang conflicts within page source code – If you’re running a multilingual website, it is necessary to help users from other countries find your content in the language that is most appropriate for them. This is where the hreflang (rel=”alternate” hreflang=”x”) attribute comes in handy. This attribute helps search engines understand which page should be shown to visitors based on their location. It is very important to properly synchronize your hreflang attributes within your page’s source code, otherwise you may experience unexpected search engine behavior. For more information, see this article. HOW TO FIX IT To avoid any conflicts, we recommend that you review your hreflang attributes within your page’s source code and fix any of the following issues: – Conflicting hreflang and rel=canonical URLs – Conflicting hreflang URLs – No self-referencing hreflang URLs
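A minimal, conflict-free setup for an English page with a German alternate might look like the sketch below (example.com is a placeholder; note the self-referencing hreflang entry and the canonical URL matching the page’s own hreflang URL):

<!-- In the <head> of https://example.com/en/page/ -->
<link rel="canonical" href="https://example.com/en/page/" />
<!-- Self-referencing hreflang entry for the current page -->
<link rel="alternate" hreflang="en" href="https://example.com/en/page/" />
<!-- Alternate language version -->
<link rel="alternate" hreflang="de" href="https://example.com/de/page/" />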
Incorrect pages found in sitemap.xml – A sitemap.xml file makes it easier for crawlers to discover the pages on your website. Only good pages intended for your visitors should be included in your sitemap.xml file.
This error is triggered if your sitemap.xml contains URLs leading to webpages with the same content. Populating your file with such URLs will confuse search engine robots as to which URL they should index and prioritize in search results. Most likely, search engines will index only one of those URLs, and this URL may not be the one you’d like to be promoted in search results. HOW TO FIX IT – Review your sitemap.xml for any URLs pointing to copies of original webpages, and remove all of them except the one you’d like to be recognized by search engines as the preferred version.
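For reference, a sitemap.xml should list only the preferred version of each page; a minimal sketch with a placeholder domain looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- List only the canonical URL; omit duplicates such as parameterized or non-HTTPS copies -->
  <url>
    <loc>https://example.com/page/</loc>
  </url>
</urlset>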

Invalid robots.txt format – If your robots.txt file is poorly configured, it can lead to disastrous results. One mistake can damage your search rankings, ruining all your search engine optimization efforts. HOW TO FIX IT – Review your robots.txt file and fix all errors. For information on how to configure your robots.txt, please see this article.

Invalid sitemap.xml format – If your sitemap.xml file has any errors, search engines will not be able to process the data it contains, and they will ignore it. HOW TO FIX IT – Review your sitemap.xml file and fix all errors. For information on how to configure your sitemap.xml, please see this article.

Issues with hreflang values – A hreflang (rel=”alternate” hreflang=”x”) attribute helps search engines understand which page should be shown to visitors based on their location. Utilizing this attribute is necessary if you’re running a multilingual website and would like to help users from other countries find your content in the language that is most appropriate to them. It is very important to properly implement hreflang attributes, otherwise search engines will not be able to show the correct language version of your page to the relevant audience.
This issue is reported when incorrect hreflang links are detected. HOW TO FIX IT – Make sure that your hreflang attributes are used correctly. Here are a few ways to avoid hreflang implementation issues: – Specify the correct language code – Specify the correct country code – Use hyphens to separate language and country values – Precede a country code with a language code – Do not use a country code alone
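For example, a valid hreflang value is an ISO 639-1 language code, optionally followed by a hyphen and an ISO 3166-1 Alpha 2 country code (the URLs below are placeholders):

<!-- Language only -->
<link rel="alternate" hreflang="en" href="https://example.com/en/" />
<!-- Language plus country, separated by a hyphen -->
<link rel="alternate" hreflang="en-gb" href="https://example.com/en-gb/" />
<!-- Invalid examples: hreflang="gb" (country code alone), hreflang="en_GB" (underscore instead of hyphen) -->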

Issues with mixed content – If your website contains any elements that are not secured with HTTPS, this may lead to security issues. Moreover, browsers will warn users about loading unsecure content, and this may negatively affect user experience and reduce their confidence in your website. HOW TO FIX IT – Only embed HTTPS content on HTTPS pages.
Large HTML page size – A webpage’s HTML size is the size of all HTML code contained on it. A page size that is too large (i.e., exceeding 2 MB) leads to a slower page load time, resulting in a poor user experience and a lower search engine ranking. HOW TO FIX IT – Review your page’s HTML code and consider optimizing its structure and/or removing inline scripts and styles.

Meta refresh redirects – A meta refresh tag instructs a web browser to redirect a user to a different page after a given interval. Generally, it is recommended that you avoid using a meta refresh tag as it is considered a poor, slow and outdated technique that may lead to SEO and usability issues. HOW TO FIX IT – Review all pages with a meta refresh tag. If this tag is used to redirect an old page to a new one, replace it with a 301 redirect.
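For reference, the pattern to look for (and replace with a server-side 301 redirect when the move is permanent) is a tag like this in the page’s <head>; the URL and delay are placeholders:

<!-- Avoid: sends the visitor to the target URL after 5 seconds -->
<meta http-equiv="refresh" content="5; url=https://example.com/new-page/" />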

Missing canonical tags in AMP pages – This issue is triggered if your AMP page has no canonical tag. When creating AMP pages, several requirements should be met: – If you have both an AMP and a non-AMP version of the same page, you should place canonical tags on both versions to prevent duplicate content issues. – If you have only an AMP version of your webpage, it must have a self-referential canonical tag. For more information, please see this Google article. HOW TO FIX IT – Add a rel=”canonical” tag in the <head> section of each AMP page.
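Based on Google’s AMP guidelines, a paired setup might look like the sketch below (placeholder URLs); a standalone AMP page would instead point its canonical tag at its own URL:

<!-- In the <head> of the non-AMP page https://example.com/article/ -->
<link rel="amphtml" href="https://example.com/article/amp/" />

<!-- In the <head> of the AMP page https://example.com/article/amp/ -->
<link rel="canonical" href="https://example.com/article/" />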

Multiple canonical URLs – Multiple rel=”canonical” tags with different URLs specified for the same page confuse search engines and make it almost impossible for them to identify which URL is the actual canonical page. As a result, search engines will likely ignore all the canonical elements or pick the wrong one. That’s why it is recommended that you specify no more than one rel=”canonical” for a page. HOW TO FIX IT – Remove all canonical URLs except the one that you’d like to serve as the actual canonical page.

Neither canonical URL nor 301 redirect from HTTP homepage – If you’re running both HTTP and HTTPS versions of your homepage, it is very important to make sure that their coexistence doesn’t impede your SEO. Otherwise, search engines are not able to figure out which page to index and which one to prioritize in search results. As a result, you may experience a lot of problems, including pages competing with each other, traffic loss and poor placement in search results. To avoid these issues, you must instruct search engines to only index the HTTPS version. HOW TO FIX IT – Do either of the following: – Redirect your HTTP page to the HTTPS version via a 301 redirect – Mark up your HTTPS version as the preferred one by adding a rel=”canonical” to your HTTP pages
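If you choose the canonical option, the HTTP version of the homepage would carry a tag like the following (placeholder domain); the 301 alternative is configured on the web server rather than in the page markup:

<!-- In the <head> of http://example.com/ -->
<link rel="canonical" href="https://example.com/" />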

Non-secure pages – This issue is triggered if the crawler detects an HTTP page with an <input type=”password”> field. Using an <input type=”password”> field on your HTTP page is harmful to user security, as there is a high risk that user login credentials can be stolen. To protect users’ sensitive information from being compromised, Google Chrome has been informing users about the dangers of submitting their passwords on HTTP pages by labeling such pages as “non-secure” since January 2017. This could have a negative impact on your bounce rate, as users will most likely feel uncomfortable and leave your page as quickly as possible. HOW TO FIX IT – Move your HTTP webpages that contain a password field to HTTPS.

Old security protocol version – Running the outdated SSL protocol or an old TLS version (1.0) is a security risk, which is why it is strongly recommended that you implement the newest protocol versions. HOW TO FIX IT – Update your security protocol to the latest version.

Pages not crawled – This issue indicates that the crawler couldn’t access the webpage because the server either timed out or refused/closed the connection before our crawler could receive a response. HOW TO FIX IT – Please contact your web hosting technical support team and ask them to fix the issue.

Redirect chains and loops – Redirecting one URL to another is appropriate in many situations. However, if redirects are done incorrectly, it can lead to disastrous results. Two common examples of improper redirect usage are redirect chains and loops. Long redirect chains and infinite loops lead to a number of problems that can damage your SEO efforts. They make it difficult for search engines to crawl your site, which affects your crawl budget usage and how well your webpages are indexed, slows down your site’s load speed, and, as a result, may have a negative impact on your rankings and user experience. Please note that if you can’t spot a redirect chain with your browser, but it is reported in your Site Audit report, your website probably responds to crawlers’ and browsers’ requests differently, and you still need to fix the issue. HOW TO FIX IT – The best way to avoid any issues is to follow one general rule: do not use more than three redirects in a chain. If you are already experiencing issues with long redirect chains or loops, we recommend that you redirect each URL in the chain to your final destination page. We do not recommend that you simply remove redirects for intermediate pages as there can be other links pointing to your removed URLs, and, as a result, you may end up with 404 errors.
Slow page load speed – Page load speed is one of the most important ranking factors. The quicker your page loads, the higher the rankings it can receive. Moreover, fast-loading pages positively affect user experience and may increase your conversion rates. Please note that “page load speed” usually refers to the amount of time it takes for a webpage to be fully rendered by a browser. However, the crawler only measures the time it takes to load a webpage’s HTML code – load times for images, JavaScript and CSS are not factored in. HOW TO FIX IT – The main factors that negatively affect your HTML page generation time are your server’s performance and the density of your webpage’s HTML code. So, try to clean up your webpage’s HTML code. If the problem is with your web server, you should think about moving to a better hosting service with more resources.

Title tag is missing or empty – A <title> tag is a key on-page SEO element. It appears in browsers and search results, and helps both search engines and users understand what your page is about. HOW TO FIX IT – If you don’t want to miss the opportunity to rank high in search results and gain a higher click-through rate, you should ensure that each of your website’s pages has a unique and concise title containing your most important keywords.
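For example, a hypothetical product page might use a title along these lines:

<title>Blue Widget Pricing and Reviews | Example Company</title>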

Viewport not configured – The viewport meta tag is an HTML tag that allows you to control a page’s viewport size and scale on mobile devices. This tag is indispensable if you want to make your website accessible and optimized for mobile devices. HOW TO FIX IT – Set the viewport meta tag for each page.
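A commonly used configuration is:

<!-- In the <head>: match the layout viewport to the device width and set the initial zoom to 100% -->
<meta name="viewport" content="width=device-width, initial-scale=1" />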

Couldn’t open the page’s URL – This issue is reported when the crawler fails to access a page because of an invalid page URL. Common mistakes include the following: – Invalid URL syntax (e.g., no protocol or an invalid protocol is specified, or backslashes (\) are used). – Spelling mistakes. – Unnecessary additional characters. HOW TO FIX IT – Make sure your page’s URL conforms to a standard scheme and doesn’t have any unnecessary characters or typos.

www resolve issues – Normally, a webpage can be accessed with or without adding www to its domain name. If you haven’t specified which version should be prioritized, search engines will crawl both versions, and the link juice will be split between them. Therefore, none of your page versions will get high positions in search results. HOW TO FIX IT – Set your preferred version in Google Search Console. For details, please see this article.

WARNINGS
Broken external images – A broken external image is an image that can’t be displayed because it no longer exists or because its URL is misspelled. Having too many broken external images negatively affects user experience and may be a signal to search engines that your website is poorly coded or maintained. HOW TO FIX IT – Contact the owner of the website hosting the broken image and notify them about the issue.
Broken external links – Broken external links lead users from one website to another and bring them to non-existent webpages. Multiple broken links negatively affect user experience and may worsen your search engine rankings because crawlers may think that your website is poorly maintained or coded. Please note that the crawler may detect a working link as broken. Generally, this happens if the server hosting the website you’re referring to blocks our crawler from accessing this website. HOW TO FIX IT – Please follow all links reported as broken. If a target webpage returns an error, remove the link leading to the error page or replace it with another resource. If the links reported as broken do work when accessed with a browser, you should contact the website’s owner and inform them about the issue.
Doctype not declared – A webpage’s doctype instructs web browsers which version of HTML or XHTML is being used. Declaring a doctype is extremely important in order for a page’s content to load properly. If no doctype is specified, this may lead to various problems, such as messed up page content or slow page load speed, and, as a result, negatively affect user experience. HOW TO FIX IT – Specify a doctype for each of your pages by adding a <!DOCTYPE> declaration (e.g., “<!DOCTYPE html>” for HTML5) to the very top of every webpage source, right before the <html> tag.
Duplicate content in h1 and title – It is a bad idea to duplicate your title tag content in your first-level header. If your page’s <title> and <h1> tags match, the latter may appear over-optimized to search engines. Also, using the same content in titles and headers means a lost opportunity to incorporate other relevant keywords for your page. HOW TO FIX IT – Try to create different content for your <title> and <h1> tags.
Encoding not declared – Providing a character encoding tells web browsers which set of characters must be used to display a webpage’s content. If a character encoding is not specified, browsers may not render the page content properly, which may result in a negative user experience. Moreover, search engines may consider pages without a character encoding to be of little help to users and, therefore, place them lower in search results than those with a specified encoding. HOW TO FIX IT – Declare a character encoding either by specifying one in the charset parameter of the HTTP Content-Type header (Content-Type: text/html; charset=utf-8) or by using a meta charset attribute in your webpage HTML (<meta charset=”utf-8″/>). For more details, please see these articles: Character Encoding – HTTP header and Character Encoding – HTML
Flash content used – Although Flash-based pages may look nice, it is not recommended that you use Flash content for several reasons.
Most importantly, Flash content negatively impacts your website’s visibility because it cannot be properly indexed and crawled by search engines.
Secondly, using Flash content negatively affects your website’s performance. Search engines may consider it a signal that your website isn’t worth ranking.
And finally, Flash content doesn’t work well on mobile devices. HOW TO FIX IT – Try to avoid Flash content as much as possible.
Frames used – <frame> tags are considered to be one of the most significant search engine optimization issues. Not only is it difficult for search engines to index and crawl content within <frame> tags, which may in turn lead to your page being excluded from search results, but using these tags also negatively affects user experience. HOW TO FIX IT – Try to avoid using <frame> tags whenever possible.
HTTP encryption not used – Google considers a website’s security as a ranking factor. Websites that do not support HTTPS connections may be less prominent in Google’s search results, while HTTPS-protected sites tend to rank higher. HOW TO FIX IT – Switch your site to HTTPS.
HTTP URLs in sitemap.xml for HTTPS site – Your sitemap.xml should include the links that you want search engines to find and index. Using different URL versions in your sitemap could be misleading to search engines and may result in an incomplete crawling of your website. HOW TO FIX IT – Replace all HTTP URLs in your sitemap.xml with HTTPS URLs.
Links lead to HTTP pages for HTTPS site – If any link on your website points to the old HTTP version of the site, search engines can become confused as to which version of the page they should rank. HOW TO FIX IT – Replace all HTTP links with the new HTTPS versions.
Low text to HTML ratio – Your text to HTML ratio indicates the amount of actual text you have on your webpage compared to the amount of code. This warning is triggered when your text to HTML ratio is 10% or less. Search engines have begun focusing on pages that contain more content. That’s why a higher text to HTML ratio means your page has a better chance of getting a good position in search results. Less code increases your page’s load speed and also helps your rankings. It also helps search engine robots crawl your website faster. HOW TO FIX IT – Split your webpage’s text content and code into separate files and compare their size. If the size of your code file exceeds the size of the text file, review your page’s HTML code and consider optimizing its structure and removing embedded scripts and styles.
Low word count – This issue is triggered if the number of words on your webpage is less than 200. The amount of text placed on your webpage is a quality signal to search engines. Search engines prefer to provide as much information to users as possible, so pages with longer content tend to be placed higher in search results, as opposed to those with lower word counts. HOW TO FIX IT – Improve your on-page content and be sure to include more than 200 meaningful words.
Missing ALT attributes – Alt attributes within <img> tags are used by search engines to understand the contents of your images. If you neglect alt attributes, you may miss the chance to get a better placement in search results because alt attributes allow you to rank in image search results. Not using alt attributes also negatively affects the experience of visually impaired users and those who have disabled images in their browsers. HOW TO FIX IT – Specify a relevant alt attribute inside an <img> tag for each image on your website, e.g., “<img src=”mylogo.png” alt=”This is my company logo”>”.
Missing h1 – While less important than <title> tags, h1 headings still help define your page’s topic for search engines and users. If an <h1> tag is empty or missing, search engines may place your page lower than they would otherwise. Besides, a lack of an <h1> tag breaks your page’s heading hierarchy, which is not SEO friendly. HOW TO FIX IT – Provide a concise, relevant h1 heading for each of your pages.
Missing hreflang and lang attributes – This issue is reported if your page has neither a lang nor a hreflang attribute. When running a multilingual website, you should make sure that you’re doing it correctly. First, you should use a hreflang attribute to indicate to Google which pages should be shown to visitors based on their location. That way, you can rest assured that your users will always land on the correct language version of your website. You should also declare a language for your webpage’s content (i.e., the lang attribute). Otherwise, your web text might not be recognized by search engines. It also may not appear in search results, or may be displayed incorrectly. HOW TO FIX IT – Perform the following: – Add a lang attribute to the <html> tag, e.g., “<html lang=”en”>” – Add a hreflang attribute to your page’s <head> tag, e.g., <link rel=”alternate” href=”http://example.com/” hreflang=”en”/>
Missing meta description – Though meta descriptions don’t have a direct influence on rankings, they are used by search engines to display your page’s description in search results. A good description helps users know what your page is about and encourages them to click on it. If your page’s meta description tag is missing, search engines will usually display its first sentence, which may be irrelevant and unappealing to users. HOW TO FIX IT – In order to gain a higher click-through rate, you should ensure that all of your webpages have meta descriptions that contain relevant keywords.
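For example (the description text is a placeholder; roughly 150–160 characters is a common target so the snippet isn’t truncated in search results):

<!-- In the <head>; keep it unique for every page -->
<meta name="description" content="A short, relevant summary of this page that includes its most important keywords." />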
No SNI support – One of the common issues you may face when using HTTPS is when your web server doesn’t support Server Name Indication (SNI). Using SNI allows you to support multiple servers and host multiple certificates at the same IP address, which may improve security and trust. HOW TO FIX IT – Make sure that your web server supports SNI. Keep in mind that SNI is not supported by some older browsers, which is why you need to ensure that your audience uses browsers supporting SNI.
Nofollow attributes in internal links – The rel=”nofollow” attribute is an element in an <a> tag that tells crawlers not to follow the link (e.g., “<a href=”http://example.com/link” rel=”nofollow”>Nofollow link example</a>”). “Nofollow” links don’t pass any link juice to referred webpages. That’s why it is not recommended that you use nofollow attributes in internal links. You should let link juice flow freely throughout your website. Moreover, unintentional use of nofollow attributes may result in your webpage being ignored by search engine crawlers even if it contains valuable content. HOW TO FIX IT – Make sure not to use nofollow attributes by mistake. Remove them from <a> tags, if necessary.
Sitemap.xml not found – A sitemap.xml file is used to list all URLs available for crawling. It can also include additional data about each URL.
Using a sitemap.xml file is quite beneficial. Not only does it provide easier navigation and better visibility to search engines, it also quickly informs search engines about any new or updated content on your website. Therefore, your website will be crawled faster and more intelligently. HOW TO FIX IT – Consider generating a sitemap.xml file if you don’t already have one.
Sitemap.xml not specified in robots.txt – If you have both a sitemap.xml and a robots.txt file on your website, it is a good practice to place a link to your sitemap.xml in your robots.txt, which will allow search engines to better understand what content they should crawl. HOW TO FIX IT – Specify the location of your sitemap.xml in your robots.txt.
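For example, adding a single line like this to your robots.txt (with the absolute URL of your real sitemap in place of the placeholder) is sufficient:

Sitemap: https://example.com/sitemap.xml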
Temporary redirects – Temporary redirects (i.e., a 302 and a 307 redirect) mean that a page has been temporarily moved to a new location. Search engines will continue to index the redirected page, and no link juice is passed to the new page, which is why temporary redirects can damage your search rankings if used by mistake. HOW TO FIX IT – Review all pages to make sure the use of 302 and 307 redirects is justified. If so, don’t forget to remove them when they are no longer needed. However, if you permanently move any page, replace a 302/307 redirect with a 301/308 one.
Title element is too long – Most search engines truncate titles containing more than 75 characters. Incomplete and shortened titles look unappealing to users and won’t entice them to click on your page. HOW TO FIX IT – Try to rewrite your page titles to be 75 characters or less.
Title element is too short – Generally, using short titles on webpages is a recommended practice. However, keep in mind that titles containing 10 characters or less do not provide enough information about what your webpage is about and limit your page’s potential to show up in search results for different keywords. HOW TO FIX IT – Add more descriptive text inside your page’s <title> tag.
Too many on-page links – This issue is triggered if a webpage contains more than three thousand links. The crawler doesn’t crawl more than three thousand on-page links. As a rule, other search engines’ crawlers treat webpages with too many links the same way: they crawl the first 3000 links on a page and ignore all the links that are over the three-thousand limit. Placing tons of links on a webpage can make your page look low quality and even spammy to search engines, which may cause your page to drop in rankings or not to show up in search results at all. Having too many on-page links is also bad for user experience. HOW TO FIX IT – Try to keep the number of on-page links to under 3000.
Too many URL parameters – Using too many URL parameters is not an SEO-friendly approach. Multiple parameters make URLs less enticing for users to click and may cause search engines to fail to index some of your most important pages. HOW TO FIX IT – Try to use no more than four parameters in your URLs.
Uncompressed pages – This issue is triggered if the Content-Encoding header is not present in the response. Page compression is essential to the process of optimizing your website. Using uncompressed pages leads to a slower page load time, resulting in a poor user experience and a lower search engine ranking. HOW TO FIX IT – Enable compression on your webpages for a faster load time.
Underscores in URL – When it comes to URL structure, using underscores as word separators is not recommended because search engines may not interpret them correctly and may consider them to be a part of a word. Using hyphens instead of underscores makes it easier for search engines to understand what your page is about. Although using underscores doesn’t have a huge impact on webpage visibility, it decreases your page’s chances of appearing in search results, as opposed to when hyphens are used. HOW TO FIX IT – Replace underscores with hyphens. However, if your page ranks well, we do not recommend that you do this.
NOTICE
Blocked by X-Robots-Tag: noindex HTTP header – The x-robots-tag is an HTTP header that can be used to instruct search engines whether or not they can index or crawl a webpage. This tag supports the same directives as a regular meta robots tag and is typically used to control the crawling of non-HTML files. If a page is blocked from crawling with x-robots-tag, it will never appear in search results. HOW TO FIX IT – Make sure that pages with valuable content are not blocked from crawling by mistake.
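For reference, a simplified response from a page blocked this way would include a header like the following:

HTTP/1.1 200 OK
X-Robots-Tag: noindex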
Blocked from crawling – If a page cannot be accessed by search engines, it will never appear in search results. A page can be blocked from crawling either by a robots.txt file or a noindex meta tag.  HOW TO FIX IT – Make sure that pages with valuable content are not blocked from crawling by mistake.
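The two blocking mechanisms mentioned above look like this (the /private/ path is a placeholder):

# In robots.txt – prevents crawling of matching paths
User-agent: *
Disallow: /private/

<!-- In the page's <head> – keeps the page out of search results -->
<meta name="robots" content="noindex" />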
Hreflang language mismatch issues – This issue is triggered if a language value specified in a hreflang attribute doesn’t match your page’s language, which is determined based on semantic analysis. Any mistakes in hreflang attributes may confuse search engines, and your hreflang attributes will most likely be interpreted incorrectly. So it’s worth taking the time to make sure you don’t have any issues with hreflang attributes. HOW TO FIX IT – Review all pages reported to have this issue and fix all hreflang attributes. Please note that the crawler may report your webpage to have a “hreflang language mismatch” issue even if the hreflang value shows the correct language. This usually happens if your webpage is multilingual or has too little content.
Multiple h1 tags – Although multiple <h1> tags are allowed in HTML5, we still do not recommend that you use more than one <h1> tag per page. Including multiple <h1> tags may confuse users. HOW TO FIX IT – Use <h2>-<h6> tags instead of additional <h1> tags.
No HSTS support – HTTP Strict Transport Security (HSTS) informs web browsers that they can communicate with servers only through HTTPS connections. So, to ensure that you don’t serve unsecured content to your audience, we recommend that you implement HSTS support.  HOW TO FIX IT – Use a server that supports HSTS.
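Enabling HSTS means having your server send a response header like the one below over HTTPS connections (the max-age of one year is a common choice, not a requirement):

Strict-Transport-Security: max-age=31536000; includeSubDomains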
Nofollow attributes in external links – A nofollow attribute is an element in an <a> tag that tells crawlers not to follow the link. “Nofollow” links don’t pass any link juice or anchor texts to referred webpages. The unintentional use of nofollow attributes may have a negative impact on the crawling process and your rankings. HOW TO FIX IT – Make sure you haven’t used nofollow attributes by mistake. Remove them from <a> tags, if needed.
Orphaned pages (Google Analytics) – A webpage that is not linked to internally is called an orphaned page. It is very important to check your website for such pages. If a page has valuable content but is not linked to by another page on your website, it can miss out on the opportunity to receive enough link juice. Orphaned pages that no longer serve their purpose confuse your users and, as a result, negatively affect their experience. We identify orphaned pages on your website by comparing the number of pages we crawled to the number of pages in your Google Analytics account. That’s why, to check your website for any orphaned pages, you need to connect your Google Analytics account. HOW TO FIX IT – Review all orphaned pages on your website and do one of the following: – If a page is no longer needed, remove it – If a page has valuable content and brings traffic to your website, link to it from another page on your website – If a page serves a specific need and requires no internal linking, leave it as is
Orphaned sitemap pages – An orphaned page is a webpage that is not linked to internally. Including orphaned pages in your sitemap.xml files is considered to be a bad practice, as these pages will be crawled by search engines. Crawling outdated orphaned pages will waste your crawl budget. If an orphaned page in your sitemap.xml file has valuable content, we recommend that you link to it internally. HOW TO FIX IT – Review all orphaned pages in your sitemap.xml files and do one of the following: – If a page is no longer needed, remove it – If a page has valuable content and brings traffic to your website, link to it from another page on your website – If a page serves a specific need and requires no internal linking, leave it as is
Pages have high Document Interactive Time – We all know that slow page-load speed negatively affects user experience. However, if a user can start interacting with your webpage within 1 second, they are much less likely to click away from this page. That’s why it is important to keep a close eye on the time it takes your most important webpages to become usable, known as the Average Document Interactive Time. HOW TO FIX IT – Make sure that users can start interacting with your most important pages as quickly as possible.
Robots.txt not found – A robots.txt file has an important impact on your website’s overall SEO performance. This file helps search engines determine what content on your website they should crawl. Utilizing a robots.txt file can cut the time search engine robots spend crawling and indexing your website. HOW TO FIX IT – If you don’t want specific content on your website to be crawled, creating a robots.txt file is recommended.
Too long URLs – According to Google, URLs longer than 100 characters are not SEO friendly. Excessive URL length intimidates users and discourages them from clicking or sharing it, thus hurting your page’s click-through rate and usability. Besides, some browsers may have difficulties parsing extremely long URLs. HOW TO FIX IT – Rewrite your URLs to be fewer than 100 characters.