I'm curious - if a production web page is removed for whatever reason, I've read that a permanent (301) redirect should be used for SEO continuity.

So, if I'm using ASP for my server-side code, what is the best option:

1) Keep the actual ASP page there, and just add a permanent redirect in the ASP code at the top

2) Remove the actual page, and add the redirect in IIS on the server

3) Something else?

Please Advise.

Thanks!

I'd choose option 2. If the content is deleted and you want the page to be un-indexed, a 404 error page is the best option. If you can redirect to similar content, then consider a 301 redirect.
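If you do go the 301 route, it only takes a few lines at the top of the retired ASP page. Here's a rough sketch in classic ASP (the target URL is just a placeholder for wherever the similar content lives):

    <%
    ' Sketch of a permanent redirect at the top of the retired page.
    ' "/new-page.asp" is a placeholder - point it at the replacement content.
    Response.Status = "301 Moved Permanently"
    Response.AddHeader "Location", "/new-page.asp"
    Response.End
    %>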

pritaeas,

Yes, we want the search engines to un-index the page, so you suggest removing the content?

We already have a custom 404 setup where if the user enters an invalid URL, it says the page cannot be found and redirects them to our main homepage after 2 seconds via a meta refresh tag.

As far as SEO goes, is this OK?

I was under the impression that when a page is removed permanently, a permanent redirect should be used for search engines. With a custom 404, would a search engine even "see" that the page is no longer there?

Yes. A 404 status in the response header will tell the bot that the page has been removed.

We already have a custom 404 setup where if the user enters an invalid URL

Web crawlers are now smart enough to detect these "custom" 404 pages and treat them as a possible soft 404. At least, that is what I have seen in some of the webmaster tools. In any event, if you want the page un-indexed, I would go with pritaeas' suggestion.

Redirects should only be put in place if you do not want the page URL to be un-indexed.

Yes. A 404 status in the response header will tell the bot that the page has been removed.

Forgive me - but I need to clarify.

So if I delete the physical "page.asp" page, I can then rely on the custom 404 in IIS to take care of everything? (So all I need to do is delete the page).

Correct?

I guess that would work - I think I can delete any page at this point, and the custom 404 will let the search engines know the page isn't there anymore...

If the web server sends back a 404 status, that will work. Sometimes developers create a "custom" error page that is really just a normal page the missing URL gets redirected to, rather than a response that actually returns a 404. What I have seen is that web crawlers now interpret these "custom" pages as a "soft 404". To ensure that the crawler sees it as a real 404, make sure a 404 status is sent back in the response header. If the browser gets a 200, that means the page is OK as far as the crawler is concerned, even though the page content may say that the original request was not found.
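For example, if the custom error page itself is an ASP page, something along these lines at the very top would force the hard 404 (the markup below is only a placeholder sketch):

    <%
    ' Sketch: send a real 404 status so crawlers treat this as a hard 404,
    ' not a 200 "soft 404". The HTML below is placeholder content.
    Response.Status = "404 Not Found"
    %>
    <html>
      <body>
        <h1>Page not found</h1>
        <p>Sorry, this page has been removed.</p>
      </body>
    </html>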

Also, if you have a whole directory of pages that you want to block, you can easily add a robots.txt file in the site root and deny access to a particular folder. If that content has already been indexed, the next time the crawler comes around it will be denied access to those files based on the robots.txt rules and will begin to un-index the content.
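As a sketch, a robots.txt in the site root with a rule like this (the folder name is just a placeholder) would block all crawlers from that folder:

    # Placeholder example - replace /retired-pages/ with the folder you want blocked.
    User-agent: *
    Disallow: /retired-pages/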

I hope that makes sense...

Yes Jorge - that makes sense - thank you!

I think at this point it would be best to include a hard 404 in my page header as you suggested, and just leave the page there, because our IIS does redirect to a "soft" 404.

Thanks again!
