What is a page cache and what is it for? How to find a page in the Google and Yandex cache. "There is no saved copy in Yandex!": solving the problem of disappeared saved copies






Tool for updating a saved copy in Yandex.Webmaster

Archived post.

As you know, the Yandex robot periodically crawls websites and monitors changes that have occurred on their pages, that is, it carries out re-indexing. Information on the Internet is updated at such a speed that no search engine is able to index all content instantly. The robot may re-index frequently updated documents several times a day, and rarely updated documents less often. At the same time, a saved copy is created for each page in Yandex search.

Sometimes site owners need to quickly update a site page or a saved copy of it. This may be necessary if incorrect or outdated information is published on the page, and it needs to be removed from the search or replaced with more recent information. It also happens that an organization’s contact information has changed, and it is necessary to promptly update this data in a saved copy.

For this purpose, a new tool has appeared in Yandex.Webmaster - “Reindexing”. Using it, you can significantly speed up the process of updating individual pages and their saved copies.

To update, you add the addresses of the outdated pages and submit them to the search robot for priority crawling. The pages will be excluded from the search for several hours, after which the updated versions will appear in the results.
Within a day, you can submit five pages for reindexing: the tool is designed for emergency cases when you need to quickly reindex a limited number of pages and update outdated information.


On October 21, the tool was temporarily suspended for technical reasons.

The word cache comes up quite often in various areas of IT, but today we will deal with the site page cache. The term means that search engines save copies of pages as of a certain date, usually the robot's last visit to the site. You can find and use a copy (the cache) of a page at any time for your own needs.

It is quite convenient that search engines store pages on their servers for a while and give us a chance to take advantage of this. Storing cached pages takes considerable resources and money, but it pays off for the search engines, since we still have to go through their search to use it.

Why do we need a cache (copies) of pages?

There are different situations when working with websites.

As always, you have a lot of work, little time, and not enough attention for everything. There are times when work is being done on the site, say a design change or minor edits to the template or text. And at one point you realize that you made a mistake somewhere, and the text or part of the site design has disappeared. Well, this happens, and everyone has probably dealt with it.

At this moment you don't have backups, and you don't remember what everything looked like originally either. In this case, a copy of the page, which can be found in the cache of both Yandex and Google, can help: see how it looked originally and correct it.

Or the second case: you have changed the text a little in order to improve it and want to see whether the page you edited has been updated in the search engine or not. You can check using the page that is in the cache: find that page and look at the result.

There is also the situation when the site is inaccessible, for one reason or another, and you need to view it. In this case a copy of the page, which can be found in the following ways, can help.

In general, I think it has become clear that using a page cache is necessary and useful.

How to find a page in Google, Yandex cache

First, let's look at how to search in the Google search engine.

Method number 1.

You go to the search engine page and type in the address of the page whose copy you want to find and view. I'll take our site as an example:

We type the name of the page or site into the search bar, press "Enter" and see the results where the page we were looking for is displayed. Look at the snippet: to the right of its URL (address) there is a small down arrow; click on it and you will see the "Saved copy" item. Click on that, and you will be taken to a copy of the page from a certain date.

Method number 2.

The method can be called semi-automatic, since you need to copy the address below and substitute your site's domain for site.ru. As a result, you will get the same copy of the page.

http://webcache.googleusercontent.com/search?q=cache:site.ru
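If you need to check many pages, the address above can be generated programmatically. A minimal Python sketch (the page address is just an example):

```python
# Build the Google cache lookup URL described above for any page address.
from urllib.parse import quote

def google_cache_url(page_url: str) -> str:
    """Return the webcache.googleusercontent.com lookup URL for page_url."""
    # Keep ":" and "/" unescaped so "cache:site.ru/page" stays readable.
    return "http://webcache.googleusercontent.com/search?q=cache:" + quote(page_url, safe=":/")

print(google_cache_url("site.ru"))
# -> http://webcache.googleusercontent.com/search?q=cache:site.ru
```

Opening the generated URL in a browser gives the same saved copy as the manual method.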

Method number 3.

You can view the cache using browser plugins or online services, which is what I use for these purposes.


Here you can see when the robot last visited the resource; accordingly, the copy of the page will be from that date.

Now let's look at how to search for a cache in the Yandex search engine.

Method number 1.

The method is the same as for Google. Go to the search engine page and enter the address of the page whose copy you want to find and view. I'll take our website as an example again and type it in:

We enter the name of the page or site into the search bar, press "Enter" and see the search results where the page we were looking for is displayed. Look at the snippet: to the right of it there is a small downward arrow; click on it and the "Saved copy" item appears. Click on that, and you will be taken to a copy of the page from a certain date.


Method number 2.

We use additional browser plugins. See a little higher up: everything is the same as for Google.

If the page is not in the search results, then there is a high probability that it is not in the cache either. If the page was previously in the index, then it may still be preserved there.

How to clear cache in Yandex, Google

It may be necessary to remove a page from the Yandex or Google cache, or even hide from prying eyes a page that was previously indexed and cached. To do this, you can wait until the search engine drops the page naturally, if you have previously deleted it. You can also prevent the page from being indexed in the robots.txt file or use the robots meta tag:
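As a sketch of what such a page-level ban looks like (these directives go in the page's `<head>`; `noindex` forbids indexing entirely, while `noarchive` only forbids the saved copy):

```html
<!-- Forbid indexing of this page entirely -->
<meta name="robots" content="noindex">

<!-- Or keep the page in the index but forbid storing a saved copy -->
<meta name="robots" content="noarchive">
```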

Just be careful with the tag: do not put it in the general site template, because that will prohibit caching of the entire site. For this, it is best to use dedicated plugins, or programmers who have done such work before.

Now let's see how you can clear the cache (clear or delete a page) using the Google and Yandex search engine tools.

Clear page cache in Google

Google approached this issue the right way and created a tool called "Remove URLs" in Webmaster Tools. To use it, go to the webmaster tools at:

www.google.com/webmasters/


Clear page cache in Google Webmaster

In order to clear the cache or delete the entire page (or delete the page and clear the cache together), click the "Temporarily hide" button, enter the URL of the page that needs to be cleared, and click the "Continue" button.


Now in this window, when you click on the "Request type" list, you can see several ways to delete a page from the Google index and to clear the cache.

  1. If you need to completely delete the page and its cache, use the first method.
  2. If you just need to clear the cache, use the second method. As a rule, this is the one our example needs: the page remains in the index, but the cache is deleted, and the next time the robot visits, it will appear there again.
  3. If you need to hide the page temporarily, use the third method. It is useful when pages have not yet been filled with quality content and it is better to hide them for a while.

As soon as you choose one of the methods, in this case the second, click the "Send request" button.


After clicking, we get a page where you can see that the page has been added for deletion from the cache and is in the "Pending" status. Now all that remains is to wait. Typically, this procedure takes from several minutes to several hours.

If you have entered the page incorrectly and want to cancel the request, you can click the "Cancel" button.


When you return to the Remove URLs tool after some time, you will see the status "Completed". This means that the Google robot has visited the page and cleared its cache.

Clear (delete) page in Yandex

The Yandex search engine has a similar tool among its webmaster tools, but there is one "but": there is no cache clearing as such. You can only completely delete a page from the index, and with it its entire history.

In order to use this tool, you need to go to Yandex webmaster using the link:

webmaster.yandex.ua/delurl.xml

and enter the required URL in the line.


The search engine will exclude this address after the next update ("AP" in SEO slang). As a rule, Yandex needs a couple of updates for this, so you will have to wait.

If you have questions, ask them in the comments, we are always in touch!

Hello! Today's post is about a painful issue for most novice site builders. I have had to answer the same question very often in the comments: how to remove pages from search that were previously indexed but, due to circumstances, were deleted and no longer exist, yet are still in the search engine index. Or the search contains pages that are prohibited from indexing.

You can't really go into detail in the comments, so after the latest such question I decided to give this topic special attention. First, let's figure out how such pages could end up in the search. I will give examples based on my own experience, so if I forget something, please add it.

Why are closed and deleted pages in search?

There may be several reasons, and I will try to highlight some of them in a small list with explanations. Before we begin, I will explain what I mean by "extra" (closed) pages: service pages or other pages prohibited from indexing by robots.txt rules or a meta tag.

Non-existent pages are searched for the following reasons:

  • The most common thing is that the page has been deleted and no longer exists.
  • Manual editing of a web page's address, as a result of which a document that is already in the search becomes unavailable for viewing. Beginners who, through inexperience, neglect how the resource works should pay particular attention to this point.
  • Continuing the thought about structure: by default, after WordPress is installed on the hosting, its URLs do not meet the requirements of internal optimization and consist of alphanumeric identifiers. When you switch to human-readable URLs, a lot of non-working addresses appear, and they will remain in the search engine index for a long time. Therefore, apply the basic rule: if you decide to change the structure, use 301 redirects from the old addresses to the new ones. The ideal option is to complete all site settings BEFORE opening it; a local server can be useful for this.
  • The server is configured incorrectly: a non-existent page should return a 404 error code or a 3xx redirect.
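The last point is easy to check directly: request the address and read the status code. A minimal Python sketch (the example URL at the end is a placeholder):

```python
# Check what HTTP status a URL returns: a deleted page should answer 404,
# not 200 with an "error" page. Note that urlopen follows redirects, so for
# a 3xx chain you will see the final status.
import urllib.error
import urllib.request

def status_of(url: str) -> int:
    # HEAD is enough; we only care about the status line, not the body.
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        # urlopen raises for 4xx/5xx; the code is still what the server sent.
        return e.code

# e.g. status_of("http://site.ru/deleted-page") should be 404 on a correctly
# configured server.
```

If a removed page returns 200, the search engine has no reason to drop it from the index.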

Extra pages appear in the index under the following conditions:

  • The pages, as it seems to you, are closed, but in fact they are open to search robots and can be indexed without restrictions (or robots.txt is written incorrectly). To check the search engines' access rights to pages, use the appropriate webmaster tools.
  • They were indexed before they were closed by the available means.
  • These pages are linked to by other sites or by internal pages within the same domain.

So, we have figured out the reasons. It is worth noting that even after the cause is eliminated, non-existent or extra pages may remain in the search database for a long time; it all depends on how often the robot visits the site.

How to delete a page from the Yandex search engine

To remove a URL from Yandex, just follow the link and paste the address of the page that you want to remove from the search results into the text field of the form.

The main conditions for a successful deletion request:

  • the page must be closed from indexing by robots.txt rules or by a noindex meta tag on the page itself, if the page exists but should not appear in the search results;
  • when the page is requested, the server must return a 404 error, if the page has been deleted and no longer exists.

The next time a robot crawls the site, deletion requests will be completed and the pages will disappear from search results.

How to remove a page from Google search engine

To remove pages from Google, proceed in the same way. Open Webmaster Tools, find the Remove URLs option in the Optimization drop-down list, and follow the link.

We have a special form with which we create a new deletion request:

Click continue and follow the further instructions to select the reason for deletion. In my opinion, the word "reason" does not quite fit here, but that's not the point...

Of the options presented, we have:

  • removing the page from Google search results and from the search engine cache;
  • removing only the page from the cache;
  • deleting a directory with all the addresses included in it.

Deleting an entire directory is a very convenient function when you have to delete several pages, for example from one category. You can monitor the status of your deletion request on the same tools page, with an option to cancel. Successful removal of pages from Google requires the same conditions as for Yandex. The request is usually completed quickly, and the page immediately disappears from the search results.

What does "There is no saved copy in Yandex!" mean, and how does it affect the site as a whole? Firstly, if you sell links from your website, then the absence of pages in the Yandex cache will negatively affect the webmaster's income.

For example, Seopult has a parameter that tracks the presence of a page in the search engine cache.

It is called nic (no index cache), which means that the page does not have a "saved copy".

Currently, Seopult checks only the Yandex index. In the future they plan to add a Google check.

Here's what it looks like on a graph. For a long time the trust was equal to nine, but then there was a sharp drop.


I began to look for the reason for the absence of a saved copy of the site in the search index. I even wrote to TrustLink support.

Good afternoon. Please tell me what could be causing the drop in trust on my blog. Over the last two Yandex updates, the XT parameter has decreased from 9 to 7. At the same time, income in Trustlink has decreased.

Hello! This indicator is not an official Yandex metric, so we do not know the reasons for its decline.

That is, the decrease in the number of links placed by SEOPULT is not related to this. Then why has income decreased?

When checking, some of the pages on which links were purchased were missing from the Yandex cache. The links were removed, which is why the income dropped.

Can you tell me why the pages are missing from the Yandex cache? Are they in the index, but not in the cache? How can I get them into the cache?

This is a question for Yandex technical support. Often the cache update happens a little later than the index update, hence the problem.

Yes, exactly. To achieve maximum link effectiveness, the page must be cached.

Then I asked a question to Yandex technical support.

Good afternoon.

There is currently no saved copy in Yandex. Tell me, please, what is the reason. The blog runs on WordPress.

In addition, my blog had trust xt = 9. Over the last two updates, the trust dropped to 7. I'm trying to improve my blog, and here are two negative points at once. What could this be connected with, and how can the situation be corrected?

Website address: //www.site

Sincerely yours, Ilya.

And he continued to look for the reason.


It turned out that after updating the plugins, the checkbox next to the noarchive value had been enabled. As a result, every page of my blog contained a line prohibiting page caching. This may be why I lost two trust units.

After removing this tag by turning off the checkbox in the Robots Meta plugin, I made sure it was absent from the pages of my blog.

Add noarchive meta tag

Prevents archive.org and Google from putting copies of your pages into their archive/cache.

Be careful when setting up the Robots Meta plugin for WordPress!
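To check for such a tag without opening each page's source by hand, you can scan the HTML for a robots meta tag. A small Python sketch (the sample markup is made up):

```python
# Detect a robots meta tag containing "noarchive" in a page's HTML.
from html.parser import HTMLParser

class RobotsMetaFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.noarchive = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and (a.get("name") or "").lower() == "robots":
            if "noarchive" in (a.get("content") or "").lower():
                self.noarchive = True

def has_noarchive(html: str) -> bool:
    finder = RobotsMetaFinder()
    finder.feed(html)
    return finder.noarchive

print(has_noarchive('<head><meta name="robots" content="noindex,noarchive"></head>'))
# -> True
```

Running this over a fetched page after a plugin update would have caught my problem immediately.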

Having learned about the presence of a tag that prohibits caching, I wrote to the Trustlink support.

Hello. I have already found out the reason for the trust drop and the absence of a copy of the blog in the Yandex cache. Apparently, when the WordPress plugins were updated, the noarchive tag appeared on the pages. Having discovered this, I immediately removed it, and today the trust became 9 again, immediately rising by 2 units. The optimizers removed their links in vain.

Hello! Expect purchases to resume soon.

And then I receive a response from Yandex support service.

Hello!

The fact is that at the time of the last indexing of the pages, their code contained the noarchive meta tag. This is an explicit ban on showing the saved copy in search results. Now the tag has been removed, but the saved copy will not appear until the robot updates the documents in our search database.

In some cases, the robot may consider the changes made on the page to be insignificant, for example, if the text on the page has practically not changed or the changes concern only the html markup. Such documents are not updated in our search database, since the changes made do not affect the search in any way.

Sincerely, Platon Shchukin

Yandex support service

//help.yandex.ru/

The next day I checked my blog again in the //xtool.ru/ service. And lo and behold! Instant increase by 2 units!

If anyone now thinks that we are talking about a backup copy of the site, they are mistaken. A saved copy of the site and a backup copy of the site are far from the same thing. You will not be able to restore a site from a saved copy.

There is a web archive on the Internet where saved copies of sites are stored. If your site is still very young, only a few months old, then most likely there is no saved copy of it in the web archive. If your site has been on the Internet for quite a long time, then a saved copy should be there.

This web archive is located at http://archive.org/web/ and there you can see what your site looked like at a certain period of time. Let me note right away that copies of sites are not saved every day, and sometimes not even every month. And although it is impossible to restore a site from a saved copy, if you are lucky you can recover the original sources.
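The archive also exposes a small JSON "availability" API (archive.org/wayback/available?url=...) that reports the closest saved snapshot. A sketch of parsing its response; the `sample` dict below is made up to match the documented response shape:

```python
# Extract the closest saved snapshot URL from a Wayback "available" API response.
def latest_snapshot(api_response: dict):
    snap = api_response.get("archived_snapshots", {}).get("closest")
    if snap and snap.get("available"):
        return snap.get("url")
    return None  # no saved copy known to the archive

# A hypothetical response in the documented shape:
sample = {
    "url": "site.ru",
    "archived_snapshots": {
        "closest": {
            "available": True,
            "status": "200",
            "timestamp": "20160101000000",
            "url": "http://web.archive.org/web/20160101000000/http://site.ru/",
        }
    },
}
print(latest_snapshot(sample))
# -> http://web.archive.org/web/20160101000000/http://site.ru/
```

An empty `archived_snapshots` object in the real API means exactly the situation described above: the site is too young or was never crawled.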

Sometimes there are situations when some kind of failure occurred on the site, or the site was hacked and some information was lost. In this case, although not always, a saved copy of the site can help. For example, I pulled out some of my articles from a saved copy of the site, which I already considered hopelessly lost.

This is done very simply. Go to http://archive.org/web/, enter in the input field the address of the site whose saved copy you want to view, and click the "Browse History" button. In the image you can see my website's address in the input field.

You get to another page and see for what year you can view the saved copy of the site. There are black marks where the site was saved.

Select the year for which you want to view the saved copy of the site. The days when the site was saved are marked with light blue circles. Clicking on a date in a blue circle opens the saved copy of the site. Other dates are not active.

The saved copy of the site loads quite slowly. There are many sites in the web archive.