How Search Engines Work

"Search engine" is the name given to computer programs that let us reach the information we are looking for quickly and easily, without getting lost in the vastness of the internet.

To answer the queries sent to it, a search engine must have "seen" the information on the internet in advance. In other words, a search engine can only return pages that it has already "seen" and "remembers"; it cannot return a page it has never visited, nor one it has visited but cannot remember. But how does a computer see and remember pages? This is where links come into play.

Pages on the internet are connected to one another by links, which make it possible to move from one page to another. Moreover, pages normally link to pages that are relevant to them: it is very unlikely, for example, that a Turkish page about heart surgery will link to a Turkish page about cat food. Exploiting this property, search engines start from a known site and begin "surfing" the internet. When they reach a page, they examine it and try to understand its content, much as a user would view it with a browser such as Internet Explorer or Firefox. They then write the content somewhere in their memory (on hard disks), follow the links on the page to reach other pages, and do the same there. In this way they visit, and try to remember, as many pages as possible while traversing the internet.
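As an illustration (not part of the original article), here is a minimal sketch of such a crawling loop in Python, using only the standard library; the seed URL, page limit, and timeout are assumptions chosen for the example.

    # Minimal crawler sketch: fetch a page, "remember" its content,
    # then follow its links to reach further pages (breadth-first).
    from collections import deque
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class LinkExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            # Collect the href target of every <a> tag on the page.
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(seed_url, max_pages=10):
        queue, seen, pages = deque([seed_url]), {seed_url}, {}
        while queue and len(pages) < max_pages:
            url = queue.popleft()
            try:
                html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
            except Exception:
                continue  # skip pages that cannot be fetched
            pages[url] = html  # store the content "somewhere in memory"
            parser = LinkExtractor()
            parser.feed(html)
            for link in parser.links:
                absolute = urljoin(url, link)
                if absolute.startswith("http") and absolute not in seen:
                    seen.add(absolute)
                    queue.append(absolute)
        return pages

A real crawler adds politeness rules (robots.txt, rate limits) and durable storage, but the visit-remember-follow loop is the same.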

Fundamentally, they remember pages by the words on them. In lists called "indexes" they record which words appear on which pages, much like the table of contents at the beginning of a book or the index at its end. More advanced search engines pay attention to many more features and store these in their indexes as well: the frequency (count) of each word on the page, where the words appear on the page, their positions relative to one another, the words used in the page's outgoing links, the page title, the headings on the page, words written in capital letters, the color of the text, the dominant topic across the site, the content of other pages linking to the page, and the content of the external pages the page links to.
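To make the idea concrete, here is a minimal inverted-index sketch (an illustration, not the article's own code). It records only which words appear on which pages and how often; the richer signals listed above are deliberately left out.

    # Minimal inverted index: word -> {page URL -> frequency}.
    # Tokenization is intentionally naive: lowercase letters and digits only.
    import re
    from collections import defaultdict

    def build_index(pages):
        """pages maps URL -> page text, e.g. the output of crawl() above."""
        index = defaultdict(lambda: defaultdict(int))
        for url, text in pages.items():
            for word in re.findall(r"[a-z0-9]+", text.lower()):
                index[word][url] += 1
        return index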

The purpose of indexing a page, rather than storing it as-is, is to make it easy to reach the information on the page when needed. By way of analogy: even when we have the whole book, we still need its table of contents. When we look for a topic in a book, we first scan the chapter and section titles. And when a book that matters to us refers to another book, we treat that book as relevant and try to review it too. Search engines apply the same ideas to pages and sites on the internet.

When a query arrives from a user, the search engine immediately looks in its index and tries to find the pages containing the query words. It then ranks those pages according to various criteria and shows the results to the user.
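A minimal query sketch over the index above might look like this (again an illustration; real engines rank with far richer criteria, as noted):

    # Find pages containing every query word, then rank them by the
    # total frequency of the query words on each page.
    def search(index, query):
        words = query.lower().split()
        results = None
        for word in words:
            postings = set(index.get(word, {}))
            results = postings if results is None else results & postings
        if not results:
            return []
        scored = [(sum(index[w][url] for w in words), url) for url in results]
        return [url for score, url in sorted(scored, reverse=True)]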

To summarize once more, a search engine basically consists of three parts. The first, called the crawler or spider, travels across pages and collects their content. The second module examines the content of the pages collected from the internet and stores it in indexes. The last, the query module, looks up user queries in the indexes built by the second module, ranks the matches, and shows them to the user.

Written by RockedBuzz
