Robot Meta directives

What are robots meta tags?

Robots meta directives (often called meta tags) are pieces of code that supply instructions to crawlers on how to crawl or index a webpage's content. While robots.txt directives give bots crawling suggestions for a website's pages, robots (HTML) meta tags supply firmer, page-level instructions on how to crawl or index a page's content.
Robots meta directives come in two types:
  1. The meta robots tag, which is part of the HTML page.
  2. The x-robots-tag, which the web server sends as an HTTP header.

The crawling and indexing parameters supplied by a meta tag can be used with both the meta robots tag and the x-robots-tag; the only difference is how the parameters are communicated to crawlers.
Meta directives give crawlers crawling and indexing instructions for a specific webpage. When bots discover these directives, the parameters serve as strong suggestions for the crawler's indexation behaviour.
Read the parameters below to get an idea of what search engine crawlers understand and follow. These parameters are not case-sensitive; also note that a search engine may follow only a subset of them.
Indexation-controlling parameters:
  • noindex: tells a search engine not to index the page.
  • index: the default; tells a search engine to index the page.
  • follow: even if the page is not indexed, the crawler should follow the links on the page and pass equity to the linked pages.
  • nofollow: tells the crawler not to follow the links on the page and not to pass any equity to the linked pages.
  • noimageindex: tells the crawler not to index any images on the page.
  • noarchive: search engines should not show a cached link to the page on a SERP.
  • nocache: same as noarchive, but used only by Internet Explorer and Firefox.
  • nosnippet: tells search engines not to show a meta description (snippet) of the page on a SERP.
  • noodp (obsolete): prevented search engines from using the page's DMOZ description as its SERP snippet.
  • unavailable_after: search engines should stop indexing the page after the specified date and time.

Robots meta directives: Kinds

The two kinds of robots meta directives are the meta robots tag and the x-robots-tag. Any parameter that can be used in a meta robots tag can also be specified in an x-robots-tag. Let's look at both kinds of directives below:

Meta robots tag

The meta robots tag, frequently called "meta robots" and informally known as the "robots tag", is part of the web page's HTML code and appears as a code element within the page's head section.
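For instance, the simplest form of the tag, telling all crawlers not to index the page, sits inside the page's head section:

```html
<head>
  <!-- Tells all crawlers not to index this page -->
  <meta name="robots" content="noindex">
</head>
```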

You can replace "robots" with a specific user-agent name to supply directives to a specific crawler. For example, to target a directive specifically at Googlebot, use the following code:
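A typical form of such a tag is:

```html
<!-- Applies only to Google's crawler; other bots ignore this tag -->
<meta name="googlebot" content="noindex">
```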

You can also use multiple directives in a single meta tag, as long as all of them target the same robot, so more than one directive can be applied to a page. See the example:
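For example, the following single tag combines three comma-separated directives for all crawlers:

```html
<!-- Multiple directives for the same robot, separated by commas -->
<meta name="robots" content="noimageindex, nofollow, nosnippet">
```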

A tag combining these three directives tells robots not to index any of the images on the page, not to follow any of its links, and not to show a snippet of the page when it appears on a SERP.
If you use different meta robots directives for different bots, a separate tag is required for each bot.
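For instance, to give Google and Bing different instructions, each crawler gets its own tag:

```html
<!-- Separate tags when the directives differ per bot -->
<meta name="googlebot" content="noindex">
<meta name="bingbot" content="nofollow">
```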


X-robots-tag

The meta robots tag lets you control indexing behaviour at the page level; the x-robots-tag, sent as part of the HTTP header, can control indexing of the whole page as well as of specific elements of a page. The x-robots-tag can execute the same indexation directives as the meta robots tag, but it is significant because it offers flexibility and functionality that the meta robots tag does not. In particular, it allows the use of regular expressions, the execution of crawl directives on non-HTML files, and the application of parameters at a global (site-wide) level. To use the x-robots-tag, you need access to your website's header .php, .htaccess, or server access file; there you add the x-robots-tag markup, along with any parameters, to the server configuration.
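As a sketch, assuming an Apache server with mod_headers enabled, the following .htaccess rule sends a noindex, nofollow directive in the HTTP header of every PDF file, something a meta robots tag cannot do for non-HTML files:

```apache
# Requires mod_headers; applies to every file ending in .pdf
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</FilesMatch>
```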
Below are some informative cases in which x-robots-tag markup is useful:
  • Controlling the indexation of content not written in HTML (such as Flash or video)
  • Blocking indexation of a particular element of a page (such as an image or video), but not of the entire page itself
  • Controlling indexation when you don't have access to a page's HTML (specifically, to its head section), or when your site uses a global header that can't be modified
  • Adding rules for whether or not a page should be indexed (e.g. if a user has commented more than twenty times, index their profile page)
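The second case above, blocking a single element rather than the whole page, can be sketched with a similar Apache rule (assuming mod_headers is enabled) that keeps the page indexable while excluding its images:

```apache
# Requires mod_headers; keeps pages indexable but excludes image files
<FilesMatch "\.(png|jpe?g|gif)$">
  Header set X-Robots-Tag "noindex"
</FilesMatch>
```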

Robots meta directives & SEO best practices

  • If the robots.txt file doesn't allow a URL to be crawled, the crawler never fetches the page, so any meta directives on it (HTTP header or HTML) will never be seen and will effectively be ignored.
  • In most cases, parameters such as noindex, follow should be used to restrict crawling or indexation, rather than robots.txt disallows.
  • Note that meta directives are not a secure way to keep private information out of public search, because malicious crawlers completely ignore them. Password protection is the secure approach that works here.
  • It is not necessary to use both the meta robots tag and the x-robots-tag on the same page.

Ready to make your career in Digital Marketing?

CTCDC gives you an edge in Internet marketing: become a pro in brand awareness, lead generation and online sales.
