SEO, in its most fundamental sense, relies on one thing above all others: search engine spiders crawling and indexing your site.

But nearly every website has pages that you don’t want to include in this exploration.

In a best-case scenario, these pages are doing nothing to actively drive traffic to your site, and in a worst-case, they could be diverting traffic away from more important pages.
Fortunately, Google allows webmasters to tell search engine bots which pages and content to crawl and which to ignore. There are several ways to do this, the most common being the use of a robots.txt file or the meta robots tag.
We have an excellent and in-depth explanation of the ins and outs of robots.txt, which you should definitely read.

But in high-level terms, it’s a plain text file that lives in your website’s root and follows the Robots Exclusion Protocol (REP).

Robots.txt provides crawlers with instructions about the site as a whole, while meta robots tags include directions for specific pages.
Some meta robots tags you might use include index, which tells search engines to add the page to their index; noindex, which tells them not to add a page to the index or include it in search results; follow, which instructs a search engine to follow the links on a page; nofollow, which tells it not to follow links; and a whole host of others.
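To make that concrete, here is what a meta robots tag combining two of these directives looks like. It goes in the head element of any page you want kept out of the index:

```html
<!-- Placed inside <head>: tells all crawlers not to index this page
     or follow any of its links -->
<meta name="robots" content="noindex, nofollow">
```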
Both robots.txt and meta robots tags are useful tools to keep in your toolbox, but there’s also another way to instruct search engine bots to noindex or nofollow: the X-Robots-Tag.
What Is The X-Robots-Tag?
The X-Robots-Tag is another way for you to control how your webpages are crawled and indexed by spiders. Sent as part of the HTTP header response for a URL, it controls indexing for an entire page, as well as for specific elements on that page.

And whereas using meta robots tags is fairly straightforward, the X-Robots-Tag is a bit more complicated.
But this, obviously, raises the question:
When Should You Use The X-Robots-Tag?
According to Google, “Any directive that can be used in a robots meta tag can also be specified as an X-Robots-Tag.”

While you can set robots.txt-related directives in the headers of an HTTP response with both the meta robots tag and the X-Robots-Tag, there are certain situations where you would want to use the X-Robots-Tag, the two most common being when:
- You want to control how your non-HTML files are being crawled and indexed.
- You want to serve directives site-wide instead of on a page level.
For example, if you want to block a specific image or video from being crawled, the HTTP response method makes this easy.

The X-Robots-Tag header is also useful because it allows you to combine multiple tags within an HTTP response, or use a comma-separated list of directives.

Maybe you don’t want a certain page to be cached and want it to be unavailable after a specific date. You can use a combination of the “noarchive” and “unavailable_after” tags to instruct search engine bots to follow these instructions.
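In raw HTTP terms, the response for such a page could carry headers like the following (the date here is purely illustrative):

```
HTTP/1.1 200 OK
X-Robots-Tag: noarchive
X-Robots-Tag: unavailable_after: 25 Jun 2023 15:00:00 PST
```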
Essentially, the power of the X-Robots-Tag is that it is much more flexible than the meta robots tag.

The advantage of using an X-Robots-Tag with HTTP responses is that it allows you to use regular expressions to apply crawl directives to non-HTML content, as well as apply directives on a larger, global level.
To help you understand the difference between these directives, it’s helpful to categorize them by type. That is, are they crawler directives or indexer directives?

Here’s a handy cheat sheet:
Crawler directives:

- Robots.txt: uses the user-agent, allow, disallow, and sitemap directives to specify where on-site search engine bots are allowed to crawl and not allowed to crawl.

Indexer directives:

- Meta robots tag: allows you to specify and prevent search engines from showing particular pages of a site in search results.
- Nofollow: allows you to specify links that should not pass on authority or PageRank.
- X-Robots-Tag: allows you to control how specified file types are indexed.
Where Do You Put The X-Robots-Tag?
Let’s say you want to block specific file types. An ideal approach would be to add the X-Robots-Tag to an Apache configuration or a .htaccess file.

The X-Robots-Tag can be added to a site’s HTTP responses in an Apache server configuration via the .htaccess file.
Real-World Examples And Uses Of The X-Robots-Tag
So that sounds great in theory, but what does it look like in the real world? Let’s take a look.
Let’s say we want search engines not to index .pdf file types. This configuration on Apache servers would look something like the below:
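One common pattern (a sketch, assuming the mod_headers module is enabled) is a Files block in the Apache configuration or .htaccess file:

```apache
# Match any file ending in .pdf and attach the header to its response
<Files ~ "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</Files>
```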
In Nginx, it would appear like the below:
```nginx
location ~* \.pdf$ {
  add_header X-Robots-Tag "noindex, nofollow";
}
```
Now, let’s look at a different scenario. Let’s say we want to use the X-Robots-Tag to block image files, such as .jpg, .gif, .png, etc., from being indexed. You could do this with an X-Robots-Tag that would look like the below:
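A sketch of that configuration on Apache (again assuming mod_headers is enabled; extend the extension list as your site requires) might be:

```apache
# Match common image extensions and mark them noindex
<Files ~ "\.(png|jpe?g|gif)$">
  Header set X-Robots-Tag "noindex"
</Files>
```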
Please note that understanding how these directives work, and the impact they have on one another, is crucial.

For example, what happens when crawler bots encounter a URL that has both an X-Robots-Tag and a meta robots tag?

If that URL is blocked from robots.txt, then indexing and serving directives cannot be discovered and will not be followed.

If directives are to be followed, then the URLs containing them cannot be disallowed from crawling.
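As a concrete illustration (the directory name here is hypothetical): if robots.txt blocks crawling of a path, the bot never requests those URLs, so any X-Robots-Tag header set on them is never seen, and a noindex there will have no effect:

```
# robots.txt
User-agent: *
# Crawling of this path is blocked, so response headers (including
# any X-Robots-Tag noindex on these URLs) are never fetched or obeyed
Disallow: /private-pdfs/
```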
Check For An X-Robots-Tag
There are a few different methods that can be used to check for an X-Robots-Tag on a site.

The easiest way to check is to install a browser extension that will show you X-Robots-Tag information about the URL.
Screenshot of Robots Exclusion Checker, December 2022
Another plugin you can use to determine whether an X-Robots-Tag is being used is the Web Developer plugin.

By clicking on the plugin in your browser and navigating to “View Response Headers,” you can see the various HTTP headers being used.
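If you prefer to check headers programmatically, a minimal sketch is below. The function name and parsing rules are illustrative, not an official API: the header value allows an optional user agent prefix before a colon (e.g. “googlebot: noindex”), followed by a comma-separated list of directives.

```python
def parse_x_robots_tag(header_value: str):
    """Split one X-Robots-Tag header value into (user_agent, directives).

    A single leading token followed by a colon is treated as a user
    agent name; "unavailable_after:" also contains a colon, so it is
    explicitly excluded from that rule.
    """
    value = header_value.strip()
    user_agent = None
    head, sep, tail = value.partition(":")
    token = head.strip()
    if sep and " " not in token and "," not in token \
            and token.lower() != "unavailable_after":
        user_agent = token
        value = tail
    directives = [d.strip() for d in value.split(",") if d.strip()]
    return user_agent, directives
```

For example, parse_x_robots_tag("googlebot: noindex, nofollow") returns ("googlebot", ["noindex", "nofollow"]).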
Another method that can be used to scale in order to pinpoint issues on websites with a million pages is Screaming Frog.

After running a site through Screaming Frog, you can navigate to the “X-Robots-Tag” column.

This will show you which sections of the site are using the tag, along with which specific directives.
Screenshot of Screaming Frog Report. X-Robots-Tag, December 2022

Using X-Robots-Tags On Your Site

Understanding and controlling how search engines interact with your website is the cornerstone of search engine optimization. And the X-Robots-Tag is a powerful tool you can use to do just that.

Just be aware: It’s not without its risks. It is very easy to make a mistake and deindex your entire site.

That said, if you’re reading this piece, you’re probably not an SEO beginner. So long as you use it wisely, take your time, and check your work, you’ll find the X-Robots-Tag to be a useful addition to your arsenal.

Featured Image: Song_about_summer/