How is it possible that a site like about.com appears in so many search results, yet its main page doesn't contain links to all the pages you are being directed to? That is, you may be searching for information on finance and find it on about.com, but when you look through all the links on the site, you won't find any link related to your search topic. Is there a way of adding more information to your website without having to add a long list of topics?

Let's say I had to put up something like:

1. How to start a small business
2. How to make money as an affiliate

The third topic is "How to become a good speaker", but I don't actually want to list it on my page, even though I really need that information to draw people to my site.

The other two links are visible, and a visitor can choose which one to click on first to read. But is it possible to add the third page and keep it hidden, so that it can only be found through a search engine?


The other two links are visible, and a visitor can choose which one to click on first to read. But is it possible to add the third page and keep it hidden, so that it can only be found through a search engine?

Do you have any code relating to what you just mentioned?

If you do, post the code.

The question you are asking is more related to Internet Marketing than to HTML & CSS.

Agree with LastMitch, this should be filed under the SEO (Search Engine Optimization) forum.

Your question is a little confusing, but I think I get the gist. The important thing is to understand how search engines work on a rudimentary level. Basically, every search engine runs one or more robots, also known as spiders or crawlers, which visit websites and follow the links they find there. The crawler records the URL, title, and other information about each page it visits, and stores it all in the search engine's database (a.k.a. the index).

That being the case, a page does not need to be currently linked to from the homepage of a website in order to be found by a crawler and then indexed. Let's say the New York Times published an article 3 months ago on their homepage. That article is probably gone from the homepage by now, but it's still online on their website -- and any crawler that crawled it back then will still be able to find it at the same URL. On top of this, if any other web page on nytimes.com (for example, an article on a related topic) links to this page, it reaffirms for the search engines that the page still exists, so they maintain it in their database.
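To illustrate the crawling idea at a toy level, here is a sketch in Python. The "web" is just an in-memory dictionary of pages (the URLs and link text are made up for the example), and the crawler does a breadth-first walk over the links. Note that `/speaker` is reachable even though the homepage never links to it, because another page does:

```python
from html.parser import HTMLParser

# Toy "web": URL -> HTML content (all URLs here are hypothetical)
PAGES = {
    "/": '<a href="/business">Start a business</a> <a href="/affiliate">Affiliate money</a>',
    "/business": '<a href="/speaker">Become a good speaker</a>',
    "/affiliate": "",
    "/speaker": "",  # never linked from the homepage itself
}

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag seen while parsing."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

def crawl(start):
    """Breadth-first crawl: follow links and index every page reached."""
    index = set()
    queue = [start]
    while queue:
        url = queue.pop(0)
        if url in index or url not in PAGES:
            continue
        index.add(url)
        parser = LinkExtractor()
        parser.feed(PAGES[url])
        queue.extend(parser.links)
    return index

print(sorted(crawl("/")))  # ['/', '/affiliate', '/business', '/speaker']
```

A real crawler obviously fetches pages over HTTP, respects robots.txt, and handles far more, but the core loop — follow links, remember every URL you reach — is the same.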

Does that answer the question at all?

If you want to tell search engines about unlinked content on your site -- content that isn't normally discoverable by crawling your pages -- you may find that a sitemap helps.
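For example, a minimal sitemap.xml following the sitemaps.org protocol might look like the following (the domain and filename are placeholders; substitute your own):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/how-to-become-a-good-speaker.html</loc>
    <lastmod>2013-01-15</lastmod>
  </url>
</urlset>
```

You would upload this to your site's root and then either submit it to the search engines directly or point to it from robots.txt with a `Sitemap:` line, so crawlers can find the page even though nothing on your site links to it.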
