Thursday 18 September 2014

Search engines have come a long way since their introduction to the Internet. Back in the mid-‘90s, web designers and developers were starting to realize that the Internet had the potential to become a go-to hub for all sorts of information. Embedding keywords within a website’s HTML code became standard practice for making a site visible to users. However, not all websites actually offer useful information, so search engines needed to become more than just tools that list the pages with the greatest number of keyword matches.

Today’s search engines are far more complex than their predecessors. Results are produced by algorithms that scan indexed pages for the keywords a user enters, then rank and display the matches. Each search engine employs its own algorithm for surfacing pages that bear the relevant keywords. The same algorithms can also discern which websites use unscrupulous tactics to deceive search engines into ranking them highly even though they offer little useful or relevant information.
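To make the idea concrete, here is a minimal sketch of the kind of keyword-matching and ranking described above. The page names, page text, and the simple count-based scoring are invented for illustration only; real engines layer many more signals (link analysis, spam detection, freshness) on top of raw keyword matching.

```python
import re
from collections import Counter


def tokenize(text):
    """Lowercase the text and split it into simple word tokens."""
    return re.findall(r"[a-z0-9]+", text.lower())


def score(document, query):
    """Count how many times each query keyword appears in the document."""
    counts = Counter(tokenize(document))
    return sum(counts[word] for word in tokenize(query))


def rank(documents, query):
    """Return (page, score) pairs ordered by descending keyword-match score."""
    scored = [(name, score(text, query)) for name, text in documents.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)


# Hypothetical pages standing in for crawled HTML content.
pages = {
    "recipes.example.com": "easy pasta recipes and cooking tips for beginners",
    "spam.example.com": "cheap cheap cheap deals deals deals buy now",
    "cooking-blog.example.com": "weeknight cooking: pasta, sauces, and kitchen tips",
}

print(rank(pages, "pasta cooking tips"))
```

A scorer this naive is exactly what keyword-stuffed pages exploit, which is why modern ranking algorithms weigh relevance and trust signals rather than raw hit counts.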

Discerning relevant data is paramount in today’s world, where information is available at the press of a button. Search engines that can detect whether or not a site has reliable information go a long way toward helping everyone keep their facts straight.
