Google, Yahoo!, and Microsoft all support XML Sitemaps.
Using the Sitemaps protocol you can supply the search engines with a list of all the URLs you would like them to crawl and index.
Adding a URL to a Sitemap file does not guarantee that it will be crawled or indexed.
However, it can result in the crawling and indexing of pages that the search engine would not otherwise discover. In addition, Sitemaps appear to help pages that have been relegated to Google’s supplemental index make their way into the main index.
XML Sitemaps should be used as a supplement to, not a replacement for, the search engines’ normal, link-based crawl. Some of the benefits of having a Sitemap include:
- For pages the search engines already know about through regular spidering activities, they use the metadata you supply, such as the last date the content was modified and the frequency at which the page changes, to improve how they crawl your site.
- For the pages the search engines don’t know about, they use the URLs you supply to increase their crawl coverage.
- For URLs that may have duplicates, the engines can use the XML Sitemaps data to help choose a canonical version.
- Verification/registration of XML Sitemaps may indicate positive trust/authority signals.
- The crawling/inclusion benefits of Sitemaps may have second-order positive effects, such as improved rankings or greater internal link popularity.
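To illustrate the metadata mentioned above, here is a minimal sketch of a Sitemap file following the sitemaps.org protocol. The URLs and the specific `lastmod`, `changefreq`, and `priority` values are placeholders for illustration; `changefreq` and `priority` are treated by the engines as hints, not directives.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- A Sitemap file lists URLs inside a single <urlset> element. -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <!-- The full, canonical URL you want crawled -->
    <loc>http://www.example.com/</loc>
    <!-- Date the content was last modified (W3C date format) -->
    <lastmod>2009-01-15</lastmod>
    <!-- How often the page is expected to change -->
    <changefreq>daily</changefreq>
    <!-- Relative importance within your own site (0.0 to 1.0) -->
    <priority>1.0</priority>
  </url>
  <url>
    <loc>http://www.example.com/products/widget.html</loc>
    <lastmod>2008-11-02</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.5</priority>
  </url>
</urlset>
```

The file is typically saved as sitemap.xml in the site’s root directory and then submitted to the engines (for example, through their webmaster tools interfaces) or referenced from robots.txt.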