No matter our age or nationality, we all rely on search engines to get our facts right and to find the kind of data that is best found online. The search engine could be Google, AltaVista, or any other, but they all work in the same way. I came across this topic through an online course from Google called "Power Searching with Google", so at the end of this post I will share the link to that course, in case you want to learn the power of searching from Google itself.
Whenever we want to search for some data, most of us head straight to Google. But have you ever noticed that, at that very moment, there are many other people using the search engine too? Search engines receive millions and millions of queries per day, so how exactly do they serve so many people? Our common assumption is that Google gives us the pages viewed by the most people on the internet, and that assumption is broadly right. But you should also consider the number of websites that are started every day. Say 1,000 sites mention user interfaces on mobile devices today; you cannot assume that no one will ever write about that topic again. Yesterday there were 1,000, tomorrow there may be 5,000. So a search engine has to scan all the new websites it discovers, because it can't leave any site unchecked. Clearly, it has to add hundreds and hundreds of websites to its database every day, so that we get the most relevant and useful data.

According to Wikipedia: "A web search engine is designed to search for information on the World Wide Web". So basically you get sites related to the term you searched for. Search engines don't create any website or give you any data themselves; they give you links, or references, to the data you searched for, in the form of websites. And these results come back irrespective of what the blogger or writer on that particular website named his article. There are many instances where the heading of an article does not match the term you searched for at all, yet you still get a reference to that site, and the material in it is exactly what you were looking for. So how does a search engine pick out references for you from the billions of sites out there on the internet?
So how does a search engine actually work?

The three main things a search engine does are:
- They search the Internet -- or select pieces of the Internet -- based on important words.
- They keep an index of the words they find, and where they find them.
- They allow users to look for words or combinations of words found in that index.
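The third step above, looking up words and combinations of words in the index, can be sketched in a few lines of Python. This is just a toy illustration, not Google's actual code; the index contents and page names here are made up, and real engines store far more than a simple word-to-pages map.

```python
# Toy inverted index: each word maps to the set of pages that contain it.
# The words and page names below are invented for illustration.
index = {
    "search": {"pageA.html", "pageB.html"},
    "engine": {"pageA.html", "pageC.html"},
    "spider": {"pageB.html"},
}

def lookup(query):
    """Return pages containing ALL the words of the query (an AND search)."""
    words = query.lower().split()
    if not words:
        return []
    results = set(index.get(words[0], set()))
    for word in words[1:]:
        results &= index.get(word, set())  # keep only pages with every word
    return sorted(results)

print(lookup("search engine"))  # -> ['pageA.html']
```

Searching for "search engine" returns only pageA.html, because it is the only page listed under both words; that intersection of per-word page sets is what makes a pre-built index so much faster than scanning pages at query time.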
Any website that gives the search engine permission to scan it (there are many personal blogs and sites whose owners don't want to share them with the whole world) is searched throughout and included in the database. Every term on the page is noticed and stored in the database along with the location of the site. The next time someone searches for a term mentioned on your page, your site is shown, but according to the priority the search engine assigns it, which depends on the web traffic to your site. So pages are scanned according to the popularity of the page and the site: the page with more visitors is scanned first, and so on.

So what is this scanning program called? It is known as a "spider": a software robot built to scan any web page that is added to or modified on the internet (the page has to give spiders access, otherwise it won't be listed). A spider doesn't just read the page as it is; it scans the site and saves the words used there, and those words are stored in its database along with the location of the site. The process in which the whole document is scanned this way is known as web crawling (so you can use the more technical term "crawling" instead of "scanning"). Every spider has its own approach, but most record at least:
- The words within the page
- Where the words were found
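A spider's job, as described above, can be sketched as a small Python program. This is a hand-made simulation, not a real crawler: instead of fetching over the network, it walks an invented set of pages, follows the links between them, and records for every word both the page it was found on and its position within that page.

```python
# Toy "spider" over a hand-made set of pages (no real network access).
# Each page maps to (its text, the links it contains). All page names
# and contents here are invented for the example.
pages = {
    "home.html":  ("welcome to my search engine demo", ["about.html"]),
    "about.html": ("a spider crawls every page it can reach", []),
}

def crawl(start):
    """Visit pages starting from `start`, building word -> [(url, position)]."""
    index = {}
    to_visit, seen = [start], set()
    while to_visit:
        url = to_visit.pop()
        if url in seen:
            continue  # never scan the same page twice
        seen.add(url)
        text, links = pages[url]
        for pos, word in enumerate(text.split()):
            index.setdefault(word, []).append((url, pos))
        to_visit.extend(links)  # queue the links found on this page
    return index

index = crawl("home.html")
print(index["spider"])  # -> [('about.html', 1)]
```

Note how the spider reaches about.html only because home.html links to it; that is exactly how real crawlers discover new pages, by following links from pages they already know about.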
" Google began as an academic search engine. In the paper that describes how the system was built, Sergey Brin and Lawrence Page give an example of how quickly their spiders can work. They built their initial system to use multiple spiders, usually three at one time. Each spider could keep about 300 connections to Web pages open at a time. At its peak performance, using four spiders, their system could crawl over 100 pages per second, generating around 600 kilobytes of data each second.Keeping everything running quickly meant building a system to feed necessary information to the spiders. The early Google system had a server dedicated to providing URLs to the spiders. Rather than depending on an Internet service provider for the domain name server (DNS) that translates a server's name into an address, Google had its own DNS, in order to keep delays to a minimum."
How does a search engine work? It is explained here.
Click on this link to register for the Power Searching with Google course; you will also get a certificate (that too from Google.com). There are many things there that will surprise you. So guys, if you want, do register and get some great knowledge from the master himself, with a certificate as a complimentary gift.
*Many things in this post are referred from HowStuffWorks, just to give you the right technical knowledge about the topic, and not with the intention of making any revenue through it.