If Google cannot understand your page, it cannot rank it
Google needs a complete picture of each web page in order to understand it fully.
Test your site for blocked resources using the Google guidelines tool.
Google uses a web crawler named Googlebot to gather information about your website.
Every webmaster should know that a crawler like Googlebot must be able to “crawl” your site before it can appear in search results.
Which pages and files search engine crawlers may visit is controlled by a file called robots.txt.
Google must be able to fetch supporting resources such as CSS, JavaScript and image files in order to render and fully understand your webpage, but these files are often blocked by the robots.txt file.
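As a rough illustration (using a hypothetical /assets/ directory and Python's standard urllib.robotparser), the sketch below shows how a single Disallow rule can keep Googlebot away from the very stylesheet it needs to render your page:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt that blocks the directory holding CSS and JavaScript.
robots_txt = """\
User-agent: *
Disallow: /assets/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Googlebot is refused the stylesheet, so Google cannot see the page as visitors do.
print(parser.can_fetch("Googlebot", "https://www.example.com/assets/style.css"))  # False
print(parser.can_fetch("Googlebot", "https://www.example.com/index.html"))        # True
```

Removing the Disallow rule (or adding an Allow rule for those files) lets Googlebot fetch the resources again.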
How to check if your site is following this guideline
Use the Google guidelines tool to see what files (if any) are blocked from Googlebot.
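Google's own tool is the authoritative check, but as a quick supplementary sketch you can also test your live robots.txt locally with Python's urllib.robotparser; the site and URLs below are placeholders for your own important pages and resources:

```python
from urllib.robotparser import RobotFileParser

# Read the site's live robots.txt and ask whether Googlebot may fetch key URLs.
parser = RobotFileParser()
parser.set_url("https://www.example.com/robots.txt")
parser.read()

for url in (
    "https://www.example.com/",
    "https://www.example.com/assets/app.js",
    "https://www.example.com/assets/style.css",
):
    allowed = parser.can_fetch("Googlebot", url)
    print(f"{'allowed' if allowed else 'BLOCKED'}: {url}")
```

Any URL reported as BLOCKED here is worth confirming in Google's tool before you change your robots.txt.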
Ensuring that search engine spiders can see your site correctly is vital to earning better rankings.