What is the World Wide Web?

Today, networking has become commonplace. Going online is sometimes easier than getting up from the couch to turn on the TV, because the remote has once again disappeared somewhere :). In fact, many people no longer watch TV at all, because the network has everything you need, except that it does not feed you... yet.

But who invented the thing we use daily, even hourly? Do you know? Until recently, I had no idea. The World Wide Web was invented by Sir Timothy John Berners-Lee. He is the inventor of the World Wide Web and the author of many other major developments in this field.

Timothy John Berners-Lee was born on June 8, 1955 in London, into an unusual family. His parents were the mathematicians Conway Berners-Lee and Mary Lee Woods, who worked on one of the first computers, the Manchester Mark I.

It must be said that the time itself was conducive to technological breakthroughs in IT: a decade earlier, Vannevar Bush (an American scientist) had proposed the concept that came to be known as hypertext. This unique phenomenon offered an alternative to the usual linear structure of narration and exposition, and it had a noticeable impact on many areas of life, from science to art.

And just a few years after the birth of Tim Berners-Lee, Ted Nelson proposed the creation of a "documentary universe" in which all the texts ever written by mankind would be linked together by what we would today call "cross-references". These and many other events naturally created fertile ground for the invention of the World Wide Web and prompted the appropriate reflections.

At the age of 12, his parents sent the boy to the private Emanuel School in Wandsworth, where he showed an interest in the exact sciences. After leaving school, he entered Oxford, where he and his friends were caught hacking and, as punishment, were deprived of the right to use the university computers. This unfortunate circumstance prompted Tim to assemble a computer on his own for the first time, based on an M6800 processor, with an ordinary TV set instead of a monitor and a broken calculator instead of a keyboard.

Berners-Lee graduated from Oxford in 1976 with a degree in physics, after which he began his career at Plessey Telecommunications Ltd, where he worked on distributed transactions. A couple of years later he moved to another company, DG Nash Ltd, where he developed software for printers. It was there that he first created a kind of precursor of a future multitasking operating system.

His next place of work was the European Laboratory for Nuclear Research (CERN), located in Geneva, Switzerland. There, as a software consultant, Berners-Lee wrote the Enquire program, which used the method of random associations. The principles behind it in many ways paved the way for the creation of the World Wide Web.

This was followed by three years as a systems architect and research work at CERN, where he developed a number of distributed data-collection systems. There, in 1989, he first introduced a project based on hypertext, the forerunner of the modern web. That project was later named the World Wide Web.

In a nutshell, its essence was as follows: the publication of hypertext documents interconnected by hyperlinks. This made it much easier to search for information, systematize it and store it. Initially, the project was meant to be implemented on CERN's internal network for local research needs, as a modern alternative to the library and other data repositories. At the same time, the data could be downloaded and accessed from any computer connected to the WWW.

Work on the project continued from 1991 to 1993 in the form of collecting user feedback, coordination and all kinds of improvements to the World Wide Web. In particular, the first versions of URL (as a special case of the URI identifier), HTTP and HTML were already proposed then. The first hypertext web browser, which also served as a WYSIWYG editor, was introduced as well.

In 1991, the very first website was launched at the address http://info.cern.ch. Its contents were introductory and auxiliary information about the World Wide Web: how to install a web server, how to connect to the Internet, how to use a web browser. There was also an online catalog with links to other sites.

Since 1994, Berners-Lee has held the 3Com Founders Chair at the MIT Laboratory for Computer Science (now the Computer Science and Artificial Intelligence Laboratory) at the Massachusetts Institute of Technology, where he serves as a principal investigator.

In 1994, he founded the World Wide Web Consortium (W3C) at the Laboratory, which to this day develops and implements standards for the Internet. In particular, the Consortium works to ensure that the World Wide Web develops in a stable and continuous manner, in line with the latest user requirements and the level of technological progress.

In 1999, Berners-Lee published his famous book "Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web". It describes in detail the process of working on the key project of the author's life, discusses the prospects for the development of the Internet and Internet technologies, and outlines a number of important principles. Among them:

- the importance of Web 2.0 and of direct user participation in creating and editing website content (vivid examples being Wikipedia and social networks);
- the close interconnection of all resources through cross-references, combined with the equal standing of each of them;
- the moral responsibility of scientists who implement particular IT technologies.

Berners-Lee has been a professor at the University of Southampton since 2004, where he works on the Semantic Web project. It is a new version of the World Wide Web in which all data is suitable for processing by special programs. It is a kind of "add-on" which assumes that every resource will carry not only plain text "for people", but also specially encoded content understandable to a computer.

In 2005, his second book, "Spinning the Semantic Web: Bringing the World Wide Web to Its Full Potential", was published.

Tim Berners-Lee is a Knight Commander of the Order of the British Empire, knighted by Queen Elizabeth II, a Distinguished Fellow of the British Computer Society, a foreign member of the US National Academy of Sciences, and holds many other honors. His work has received many awards, including the Order of Merit, a place on Time magazine's list of the "100 Greatest Minds of the Century" (1999), the Quadriga Award in the "Knowledge Network" category (2005), and the M.S. Gorbachev award "The Man Who Changed the World" in the "Perestroika" category (2011), among others.

Unlike many of his successful peers, Berners-Lee has never shown a particular desire to monetize his projects and inventions or to extract super-profits from them. His manner of communication has been described as a "rapid stream of thought", accompanied by occasional digressions and self-irony. In a word, he shows all the signs of a genius living in his own "virtual" world, one that has, at the same time, had a colossal impact on the world of today.

Structure and principles of the World Wide Web

Illustration: a graphic depiction of the World Wide Web of links around Wikipedia

The World Wide Web is made up of millions of web servers located around the world. A web server is a program that runs on a computer connected to a network and uses the HTTP protocol to transfer data. In its simplest form, such a program receives an HTTP request for a specific resource over the network, finds the corresponding file on the local hard drive and sends it over the network to the requesting computer. More complex web servers are capable of generating resources dynamically in response to an HTTP request. To identify resources (often files or parts of them) on the World Wide Web, Uniform Resource Identifiers (URIs) are used. Uniform Resource Locators (URLs) are used to locate resources on the web. URLs combine URI identification technology with the Domain Name System (DNS): a domain name (or a numeric address) is the part of the URL that designates the computer (more precisely, one of its network interfaces) that runs the code of the desired web server.
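
To make the request/response exchange described above concrete, here is a minimal sketch in Python that sends a plain HTTP GET request over a raw socket and reads back the header and document; the host name and path are just illustrative values, not taken from the article.

```python
# A minimal sketch of the HTTP exchange described above: the client asks a
# web server for a resource identified by a URL path, and the server answers
# with service information (headers) followed by the document itself.
# The host and path below are illustrative examples only.
import socket

HOST = "example.com"   # domain name part of the URL, resolved via DNS
PATH = "/index.html"   # path part of the URL identifying the resource

with socket.create_connection((HOST, 80)) as sock:
    request = (
        f"GET {PATH} HTTP/1.1\r\n"
        f"Host: {HOST}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
    sock.sendall(request.encode("ascii"))

    response = b""
    while chunk := sock.recv(4096):
        response += chunk

# The response consists of headers (status line, Content-Type, ...) and a body.
headers, _, body = response.partition(b"\r\n\r\n")
print(headers.decode("iso-8859-1"))
print(f"Received {len(body)} bytes of document body")
```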

To view information received from a web server, a special program, a web browser, is used on the client computer. The main function of a web browser is to display hypertext. The World Wide Web is inextricably linked with the concepts of hypertext and hyperlinks; most of the information on the web is hypertext. To facilitate the creation, storage and display of hypertext on the World Wide Web, HTML (HyperText Markup Language) is traditionally used. The work of marking up hypertext is called layout, and the person who does it is called a webmaster. After HTML markup, the resulting hypertext is placed in a file; such an HTML file is the basic resource of the World Wide Web. Once an HTML file is made available to a web server, it is called a "web page". A collection of web pages makes up a website. Hyperlinks are added to the hypertext of web pages. Hyperlinks help World Wide Web users navigate easily between resources (files), regardless of whether the resources are located on the local computer or on a remote server. Web hyperlinks are based on URL technology.
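
As an illustration of how hyperlinks tie hypertext pages together, the following sketch uses Python's standard html.parser to pull every link (the href attribute of the a tags) out of a small hand-written HTML fragment; the page content is invented for the example.

```python
# Sketch: hypertext is ordinary text marked up with HTML tags, and hyperlinks
# (<a href="...">) are what let a browser jump from one resource to another.
# The HTML fragment below is an invented example, not a real page.
from html.parser import HTMLParser

PAGE = """
<html>
  <head><title>Example web page</title></head>
  <body>
    <p>See the <a href="http://info.cern.ch/">first website</a>
       or the <a href="https://www.w3.org/">W3C</a>.</p>
  </body>
</html>
"""

class LinkCollector(HTMLParser):
    """Collects the href attribute of every <a> tag it encounters."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

collector = LinkCollector()
collector.feed(PAGE)
print(collector.links)   # ['http://info.cern.ch/', 'https://www.w3.org/']
```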

World Wide Web Technologies

To improve the visual presentation of the web, CSS technology has come into wide use; it allows uniform design styles to be set for many web pages at once. Another innovation worth noting is the Uniform Resource Name (URN) system for designating resources.

A popular concept for the development of the World Wide Web is the creation of the Semantic Web. The Semantic Web is an add-on to the existing World Wide Web designed to make information posted on the network more understandable to computers. It is a concept of a network in which every resource written in human language would be provided with a description that a computer can understand. The Semantic Web opens up access to clearly structured information for any application, regardless of platform and programming language. Programs would be able to find the necessary resources themselves, process information, classify data, identify logical connections, draw conclusions and even make decisions based on those conclusions. If widely adopted and implemented wisely, the Semantic Web has the potential to spark a revolution on the Internet. To create computer-readable descriptions of resources, the Semantic Web uses the RDF (Resource Description Framework) format, which is based on XML syntax and uses URIs to identify resources. Newer developments in this area are RDFS (RDF Schema) and SPARQL (SPARQL Protocol and RDF Query Language, pronounced "sparkle"), a query language for fast access to RDF data.
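
To give a feel for the idea of machine-readable descriptions, here is a deliberately simplified sketch: resources are described as (subject, predicate, object) triples, in the spirit of RDF, and a tiny pattern-matching function plays the role a SPARQL query would play. The URIs, property names and facts are invented for the example.

```python
# Simplified illustration of the Semantic Web idea: each resource is described
# by (subject, predicate, object) triples, in the spirit of RDF, and programs
# can query those descriptions instead of parsing human-oriented text.
# The URIs and facts below are invented for the example.
triples = [
    ("http://example.org/TimBernersLee", "invented",   "http://example.org/WorldWideWeb"),
    ("http://example.org/WorldWideWeb",  "proposedIn", "1989"),
    ("http://example.org/WorldWideWeb",  "usesFormat", "HTML"),
]

def query(pattern, data):
    """Return all triples matching a pattern; None acts as a wildcard,
    loosely analogous to a variable in a SPARQL query."""
    s, p, o = pattern
    return [t for t in data
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "What do we know about the World Wide Web resource?"
for triple in query(("http://example.org/WorldWideWeb", None, None), triples):
    print(triple)
```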

History of the World Wide Web

Tim Berners-Lee and, to a lesser extent, Robert Cailliau are considered the inventors of the World Wide Web. Tim Berners-Lee is the originator of the HTTP, URI/URL and HTML technologies. In 1980 he worked as a software consultant at the European Organization for Nuclear Research (French: Conseil Européen pour la Recherche Nucléaire, CERN). It was there, in Geneva, Switzerland, that he wrote the Enquire program for his own needs (the name can be loosely translated as "Interrogator"); it used random associations to store data and laid the conceptual foundation for the World Wide Web.

The world's first website was put online by Berners-Lee on August 6, 1991 on the first web server, available at http://info.cern.ch/. The resource defined the concept of the World Wide Web and contained instructions for setting up a web server, using a browser, and so on. This site was also the world's first Internet directory, because Tim Berners-Lee later posted and maintained a list of links to other sites there.

The first photograph on the World Wide Web was of the parody filk band Les Horribles Cernettes. Tim Berners-Lee asked the group's leader for scanned photos of them after the CERN Hardronic Festival.

And yet the theoretical foundations of the web were laid much earlier than Berners-Lee's work. Back in 1945, Vannevar Bush developed the concept of the Memex, an auxiliary mechanical means of "expanding human memory". The Memex is a device in which a person would store all his books and records (and, ideally, all of his knowledge that can be formally described) and which would supply the needed information with sufficient speed and flexibility. It is an extension and supplement to human memory. Bush also predicted the comprehensive indexing of text and multimedia resources with the ability to quickly find the necessary information. The next significant step towards the World Wide Web was the creation of hypertext (a term coined by Ted Nelson in 1965).

Two main directions can currently be seen in the development of the World Wide Web:

  • The Semantic Web involves improving the coherence and relevance of information on the World Wide Web through the introduction of new metadata formats.
  • The Social Web relies on the work of organizing the information available on the Web being carried out by Web users themselves. Within this second direction, developments that originated in the Semantic Web are actively used as tools (RSS and other web feed formats, OPML, XHTML microformats). Partially semanticized sections of the Wikipedia category tree help users navigate the information space consciously; however, the very loose requirements for subcategories give little reason to hope that such sections will expand. In this regard, attempts to compile knowledge atlases may be of interest.

There is also the popular concept of Web 2.0, which summarizes several directions in the development of the World Wide Web.

Methods for actively displaying information on the World Wide Web

Information on the web can be displayed either passively (that is, the user can only read it) or actively, in which case the user can add information and edit it. Methods for actively displaying information on the World Wide Web include guest books, forums, blogs and content management systems, among others.

It should be noted that this division is rather arbitrary. For example, a blog or a guest book can be considered a special case of a forum, which in turn is a special case of a content management system. Usually the difference shows up in the purpose, approach and positioning of a particular product.

Some information on websites can also be accessed through speech. India has already begun testing a system that makes the text content of pages accessible even to people who cannot read or write.

The World Wide Web is sometimes ironically called the Wild Wild Web, in reference to the title of the film Wild Wild West.


Links

  • Official website of the World Wide Web Consortium (W3C) (in English)
  • Tim Berners-Lee, Mark Fischetti. Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web. New York: HarperCollins Publishers. 256 p. ISBN 0-06-251587-X, ISBN 978-0-06-251587-2.

The World Wide Web (abbreviated WWW, or simply "the Web") is a unity of information resources interconnected by means of telecommunications and based on a hypertext representation of data scattered around the world.

The year of birth of the World Wide Web is considered to be 1989. It was in that year that Tim Berners-Lee proposed a shared hypertext project, which later became known as the World Wide Web.

The creator of the "web", Tim Berners-Lee, working in the elementary particle physics laboratory of the European Center for Nuclear Research (CERN) in Geneva, Switzerland, together with his partner Robert Cailliau, worked on the problem of applying hypertext ideas to build an information environment that would simplify the exchange of information between physicists.

The result of this work was a document that examined concepts that are fundamental to the “web” in its modern form, and proposed URIs, the HTTP protocol, and the HTML language. Without these technologies it is no longer possible to imagine the modern Internet.

Berners-Lee created the world's first web server and the world's first hypertext web browser. On the world's first website, he described what the World Wide Web was and how to set up a web server, how to use a browser, etc. This site was also the world's first Internet catalogue.

Since 1994, the most important work on the development of the World Wide Web has been taken over by the World Wide Web Consortium (W3C), which was founded and is still headed by Tim Berners-Lee. The Consortium develops and implements technology standards for the Internet and the World Wide Web. The W3C's mission is: "To unleash the full potential of the World Wide Web by creating protocols and principles that guarantee the long-term development of the Web." The W3C develops "Recommendations" to achieve compatibility between the software products and equipment of various companies, which makes the World Wide Web more advanced, universal and convenient.

Search engines: composition, functions, operating principles.

A search engine is a software and hardware complex designed to search the Internet and respond to a user query, specified as a text phrase (search query), by producing a list of links to sources of information ranked by relevance (that is, by how well they match the query). The largest international search engines are Google, Yahoo and MSN. On the Russian Internet they are Yandex, Rambler and Aport.

Let us describe the main characteristics of search engines:

    Completeness

Completeness is one of the main characteristics of a search engine; it is the ratio of the number of documents found in response to a query to the total number of documents on the Internet that satisfy that query. For example, if there are 100 pages on the Internet containing the phrase "how to choose a car" and only 60 of them are found for the corresponding query, the completeness of the search is 0.6. Obviously, the more complete the search, the less likely it is that the user will fail to find the document he needs, provided it exists on the Internet at all.
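
The completeness figure from the example above can be computed directly; this tiny Python sketch just reproduces the numbers used in the text.

```python
# Completeness (recall) from the example above: 60 of 100 relevant pages found.
relevant_on_the_internet = 100   # pages that actually satisfy the query
relevant_found = 60              # how many of them the engine returned
completeness = relevant_found / relevant_on_the_internet
print(completeness)              # 0.6
```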

    Accuracy

Accuracy is another main characteristic of a search engine; it is determined by how closely the found documents match the user's query. For example, if 100 documents are found for the query "how to choose a car", 50 of them contain the phrase "how to choose a car" and the rest merely contain those words in some other combination ("how to choose the right radio and install it in a car"), then the search accuracy is 50/100 (= 0.5). The more accurate the search, the faster the user will find the documents he needs, the less "garbage" of various kinds will be found among them, and the less often the documents found will fail to match the query.
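
The accuracy figure can be checked the same way; again the numbers are simply those of the example above.

```python
# Accuracy (precision) from the example above: 50 of the 100 returned
# documents really contain the exact phrase the user asked for.
documents_returned = 100
documents_matching = 50
accuracy = documents_matching / documents_returned
print(accuracy)                  # 0.5
```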

    Relevance

Relevance (freshness) is an equally important component of search; it is characterized by the time that passes between the moment documents are published on the Internet and the moment they are entered into the search engine's index database. For example, the day after interesting news appears, a large number of users turn to search engines with the corresponding queries. Although objectively less than a day has passed since the news was published, the main documents have already been indexed and are available for search, thanks to the so-called "fast database" of large search engines, which is updated several times a day.

    Search speed

Search speed is closely related to its load resistance. For example, according to Rambler Internet Holding LLC, today, during business hours, the Rambler search engine receives about 60 requests per second. Such workload requires reducing the processing time of an individual request. Here the interests of the user and the search engine coincide: the visitor wants to get results as quickly as possible, and the search engine must process the request as quickly as possible, so as not to slow down the calculation of subsequent queries.

    Visibility

Visual presentation of results is an important component of convenient search. For most queries, a search engine finds hundreds or even thousands of documents. Because of unclear queries or inaccurate search, even the first pages of search results do not always contain only the necessary information, which means the user often has to do his own searching within the list found. Various elements of the search engine's results page help the user navigate the search results. Detailed explanations of the results page, for example for Yandex, can be found at http://help.yandex.ru/search/?id=481937.

A Brief History of the Development of Search Engines

In the initial period of Internet development, the number of its users was small, and the amount of available information was relatively small. For the most part, only research staff had access to the Internet. At this time, the task of searching for information on the Internet was not as urgent as it is now.

One of the first ways to organize access to network information resources was the creation of open directories of sites, in which links to resources were grouped by topic. The first such project was the Yahoo.com site, which opened in the spring of 1994. After the number of sites in the Yahoo directory had grown significantly, the ability to search the directory for the necessary information was added. In the full sense, it was not yet a search engine, since the search was limited to the resources listed in the catalog rather than to all Internet resources.

Link directories were widely used in the past but have almost completely lost their popularity, since even modern directories, huge as they are, contain information about only a negligible part of the Internet. The largest directory on the web, DMOZ (also called the Open Directory Project), contains information about 5 million resources, while the Google search engine's database consists of more than 8 billion documents.

The first full-fledged search engine was the WebCrawler project, launched in 1994.

In 1995, the search engines Lycos and AltaVista appeared. The latter was the leader in the field of information search on the Internet for many years.

In 1997, Sergey Brin and Larry Page created the Google search engine as part of a research project at Stanford University. Google is currently the most popular search engine in the world!

In September 1997, the Yandex search engine, which is the most popular on the Russian-language Internet, was officially announced.

Currently, there are three main international search engines - Google, Yahoo and MSN, which have their own databases and search algorithms. Most other search engines (of which there are a large number) use in one form or another the results of the three listed. For example, AOL search (search.aol.com) uses the Google database, while AltaVista, Lycos and AllTheWeb use the Yahoo database.

Composition and principles of operation of the search system

In Russia, the main search engine is Yandex, followed by Rambler.ru, Google.ru, Aport.ru, Mail.ru. Moreover, at the moment, Mail.ru uses the Yandex search engine and database.

Almost every major search engine has its own structure, different from the others. However, it is possible to identify the main components common to all search engines. Differences in structure appear only in the way the interaction of these components is implemented.

Indexing module

The indexing module consists of three auxiliary programs (robots):

Spider – a program designed to download web pages. The spider downloads a page and retrieves all internal links from it; the HTML code of each page is downloaded. Spiders use the HTTP protocol to download pages and work as follows: the robot sends the request "GET /path/document" and a few other HTTP commands to the server, and in response receives a text stream containing service information and the document itself. For each downloaded page, the spider typically saves the following (see the sketch after this list):

    the page URL

    the date the page was downloaded

    the HTTP response header

    the page body (HTML code)
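
A minimal sketch of what the spider does, using Python's standard urllib: download one page and record the four items listed above. The URL is only an example, and a real spider would of course add error handling, politeness delays and persistent storage.

```python
# Sketch of the spider's job: download one page over HTTP and keep the
# four pieces of information listed above. The URL is only an example.
from datetime import datetime, timezone
from urllib.request import urlopen

url = "http://example.com/"
with urlopen(url) as response:
    record = {
        "url": url,                                   # page URL
        "downloaded_at": datetime.now(timezone.utc),  # date the page was downloaded
        "headers": dict(response.getheaders()),       # HTTP response header
        "body": response.read().decode("utf-8", errors="replace"),  # page body (HTML)
    }

print(record["headers"].get("Content-Type"))
print(len(record["body"]), "characters of HTML")
```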

Crawler (a "traveling" spider) – a program that automatically follows all the links found on a page. It selects all the links present on the page and determines where the spider should go next, either on the basis of those links or from a predefined list of addresses. By following the links it finds, the crawler searches for new documents not yet known to the search engine.

Indexer (indexing robot) – a program that analyzes the web pages downloaded by the spiders. The indexer parses a page into its component parts and analyzes them using its own lexical and morphological algorithms. Various page elements are analyzed, such as text, headings, links, structural and style features, special service HTML tags, and so on.

Thus, the indexing module makes it possible to crawl a given set of resources by following links, download the pages encountered, extract links to new pages from the received documents, and perform a complete analysis of these documents.
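
The crawler and indexer roles described above can be sketched in a few lines: extract links from a downloaded page to decide where to go next, and split the text into words to build an inverted index (word to pages). Everything here operates on tiny in-memory example pages rather than real downloads, so it only illustrates the data flow, not a production design.

```python
# Sketch of the crawler and indexer roles: the crawler extracts links from a
# downloaded page to decide where the spider should go next; the indexer
# splits page text into words and builds an inverted index (word -> pages).
# The "pages" below are tiny in-memory examples instead of real downloads.
import re
from collections import defaultdict, deque

pages = {
    "http://example.org/a": '<p>World Wide Web history <a href="http://example.org/b">next</a></p>',
    "http://example.org/b": "<p>Web servers and browsers</p>",
}

def extract_links(html):
    """Crawler part: pull href values out of the page's HTML."""
    return re.findall(r'href="([^"]+)"', html)

def extract_words(html):
    """Indexer part: strip tags and split the remaining text into words."""
    text = re.sub(r"<[^>]+>", " ", html)
    return re.findall(r"[a-z]+", text.lower())

index = defaultdict(set)                 # word -> set of page URLs
queue = deque(["http://example.org/a"])  # frontier of pages still to visit
seen = set()

while queue:
    url = queue.popleft()
    if url in seen or url not in pages:
        continue
    seen.add(url)
    html = pages[url]
    for word in extract_words(html):
        index[word].add(url)
    queue.extend(extract_links(html))    # follow links to yet-unknown pages

print(sorted(index["web"]))  # pages containing the word "web"
```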

Database

A database, or search engine index, is a data storage system, an information array in which specially converted parameters of all documents downloaded and processed by the indexing module are stored.

Search server

The search server is the most important element of the entire system, since the quality and speed of the search directly depend on the algorithms that underlie its functioning.

The search server works as follows (a simple illustration of the ranking and snippet steps is sketched after the list):

    The query received from the user is subjected to morphological analysis. Then the information environment of every document contained in the database is generated (this is what will later be displayed as a snippet, that is, the query-related text information on the search results page).

    The received data are passed as input parameters to a special ranking module. The data are processed for all documents, as a result of which each document receives its own rating characterizing how well the query entered by the user matches the various components of that document stored in the search engine's index.

    Depending on the user’s choice, this rating can be adjusted by additional conditions (for example, the so-called “advanced search”).

    Next, a snippet is generated: for each document found, the title, a short abstract that best matches the query, and a link to the document itself are extracted from the document table, with the query words found highlighted.

    The resulting search results are transmitted to the user in the form of a SERP (Search Engine Result Page) – a search results page.
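
Here is a toy sketch of steps 2 and 4 above: each document is scored by how often the query words occur in it (a crude stand-in for a real ranking module), and a short snippet is cut out around the first match. The documents are invented examples.

```python
# Toy sketch of the ranking and snippet steps described above: score each
# document by how often the query words occur in it (a crude stand-in for a
# real ranking module) and cut a short snippet around the first match.
# The documents are invented examples.
documents = {
    "doc1": "How to choose a car: advice on choosing a reliable used car.",
    "doc2": "How to choose the right radio and install it in a car.",
    "doc3": "History of the World Wide Web and web browsers.",
}

def rank(query, docs):
    """Return document ids ordered by a simple word-frequency score."""
    words = query.lower().split()
    scores = {}
    for doc_id, text in docs.items():
        lowered = text.lower()
        scores[doc_id] = sum(lowered.count(w) for w in words)
    # highest score first, drop documents that match nothing
    return [d for d, s in sorted(scores.items(), key=lambda kv: -kv[1]) if s > 0]

def snippet(query, text, width=40):
    """Cut a short piece of text around the first occurrence of a query word."""
    first_word = query.lower().split()[0]
    pos = max(text.lower().find(first_word), 0)
    start = max(pos - width // 2, 0)
    return "..." + text[start:start + width] + "..."

query = "choose car"
for doc_id in rank(query, documents):         # the SERP, best match first
    print(doc_id, snippet(query, documents[doc_id]))
```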

As you can see, all these components are closely related to each other and work in interaction, forming a clear, rather complex mechanism for the operation of the search system, which requires huge amounts of resources.

No search engine covers all Internet resources.

Each search engine collects information about Internet resources using its own unique methods and forms its own periodically updated database. Access to this database is granted to the user.

Search engines implement two ways to search for a resource:

    Search by topic directories – information is presented as a hierarchical structure. At the top level are general categories ("Internet", "Business", "Art", "Education", etc.); at the next level the categories are divided into sections, and so on. The lowest level consists of links to specific web pages or other information resources.

    Keyword search (index search or detailed search) – the user sends the search engine a query consisting of keywords, and the system returns a list of resources found for the query.

Most search engines combine both search methods.

Search engines can be local, global, regional and specialized.

In the Russian part of the Internet (Runet), the most popular general purpose search engines are Rambler (www.rambler.ru), Yandex (www.yandex.ru), Aport (www.aport.ru), Google (www.google.ru).

Most search engines are implemented in the form of portals.

A portal (from the English "portal": main entrance, gate) is a website that integrates various Internet services: search tools, mail, news, dictionaries, etc.

Portals can be specialized (such as www.museum.ru) or general-purpose (for example, www.km.ru).

Search by keywords

The set of keywords used to search is also called the search criterion or search topic.

A query can consist of a single word or of a combination of words joined by operators, that is, symbols by which the system determines what action it needs to perform. For example, the query "Moscow St. Petersburg" contains the AND operator (this is how a space is interpreted), which indicates that the system should look for documents containing both words, Moscow and St. Petersburg.
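
Below is a sketch of how a space-as-AND query could be evaluated against a small inverted index: only documents containing every query word are returned. The index contents are invented, and the multi-word city name is treated as a single token purely to keep the example short.

```python
# Sketch of how a space between query words acts as an AND operator:
# only documents containing every query word are returned.
# The tiny inverted index below is an invented example, and the city name
# "St-Petersburg" is kept as one token for simplicity.
inverted_index = {
    "moscow":        {"doc1", "doc2", "doc4"},
    "st-petersburg": {"doc2", "doc3"},
}

def search_and(query):
    """Treat whitespace as AND: intersect the posting lists of all words."""
    words = query.lower().split()
    postings = [inverted_index.get(w, set()) for w in words]
    return set.intersection(*postings) if postings else set()

print(search_and("Moscow St-Petersburg"))   # {'doc2'}: contains both words
```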

In order for a search to be relevant (from the English "relevant": pertinent, to the point), a few general rules should be kept in mind:

    Regardless of the form in which a word is used in a query, the search takes into account all of its word forms according to the rules of the Russian language. For example, the query "ticket" will also find documents containing "tickets" and other forms of the word.

    Capital letters should be used only in proper names, to avoid wading through unnecessary results. For the lowercase query "kuznetsov", for example, documents will be found that mention both blacksmiths (in Russian, "kuznets") and people named Kuznetsov.

    It is advisable to narrow your search using a few keywords.

    If the required address is not among the first twenty addresses found, you should change the request.

Each search engine uses its own query language. To get acquainted with it, use the search engine's built-in help.

Large sites may have built-in information retrieval systems within their web pages.

Queries in such search systems are, as a rule, built according to the same rules as in global search engines; however, getting acquainted with the local help will not be superfluous either.

Advanced Search

Search engines can provide a mechanism for the user to compose a complex query. Following the Advanced Search link makes it possible to edit the search parameters, specify additional parameters and select the most convenient form for displaying the results. The table below describes the parameters that can be set during an advanced search in the Yandex and Rambler systems.

Parameter description | Name in Yandex | Name in Rambler
Where to look for keywords (document title, body text, etc.) | Dictionary filter | Search by text...
Which words must or must not be present in the document, and how exact the match should be | Dictionary filter | Search for query words... / Exclude documents containing the following words...
How far apart the keywords may be located | Dictionary filter | Distance between query words...
Restriction on the document date | | Document date...
Limit the search to one or more sites | Site/Top | Search documents only on the following sites...
Restriction on the document language | | Document language...
Search for documents containing an image with a specific name or caption | Image |
Search for pages containing particular objects | Special objects |
Form in which the search results are presented | Output format | Displaying search results

Some search engines (for example, Yandex) allow queries in natural language. You simply write what you need to find (for example: order train tickets from Moscow to St. Petersburg), and the system analyzes the query and produces a result. If you are not satisfied with it, switch to the query language.

For any modern inhabitant of the planet, a computer without Internet access is a fairly useless thing. The World Wide Web is a fast and convenient way to interact with the outside world, but it was not always so: back in the mid-20th century, the phrase meant nothing at all.

Let's remember the past

So when was the Internet created, by whom and why? The founders of the idea, oddly enough, were American specialists. It all started in October 1957, when the Soviet Union launched an artificial Earth satellite, which prompted the Americans to take decisive action.

The US Department of Defense, sensing that the Soviet Union had taken a clear lead, decided to create a reliable and efficient system for exchanging information. Such a system was supposed to serve the country in the event of a sudden war. This difficult responsibility was placed on America's leading universities.

Thanks to good funding, the Stanford Research Institute and the universities of California (Los Angeles and Santa Barbara) and Utah were able to bring the idea to life by 1969. The four institutions were united into a common network called the Advanced Research Projects Agency Network (ARPANET).

Date of “Birth” of the World Wide Web

Within the first months, it was impossible not to appreciate the effectiveness of the electronic innovation. The system began to develop actively, winning approval from many scientists and researchers. At the end of October 1969, the first successful communication session between two of the universities took place.

October 29, 1969 is considered the date the Internet appeared. UCLA employee Charley Kline established a remote connection with the Stanford Research Institute, which was confirmed over the telephone by its employee Bill Duvall. Not everything went smoothly, but a connection was nevertheless established.

Development process

As they say, a good thing does not sit on the shelf for long, and the network was no exception. Two years after the remote connection was established, our beloved email was invented. This happened on October 2, 1971 thanks to the work of Ray Tomlinson, a leading engineer at the scientific corporation BBN Technologies.

The researcher's idea was to create a separator between the user's login and the domain: the @ symbol. We still use it every day, and Russian speakers casually call it by the simple human word for "dog". Ray helped bring the network to the masses, connecting hundreds of thousands of interested people.

But even then, the notion of a World Wide Web did not yet exist. There was only a shared space for exchanging data over long distances, which included sending emails, various kinds of mailing lists, newsgroups, and private message boards.

Author of the true World Wide Web

From 1971 to 1989, tremendous work was done to expand the capabilities of the network. Data transfer protocols, on which Jonathan Postel worked hard, developed actively; a domain name system was created; and a protocol allowing real-time communication was successfully implemented.

And only in 1989 did an engineer who had earlier worked on communication software and online systems architecture at Image Computer Systems Ltd propose the "World Wide Web" doctrine to the management of CERN, where he was then working. The name of the author of the plan was Timothy John Berners-Lee.

Berners-Lee graduated with honors from Oxford University with a degree in physics. He came up with the name "World Wide Web" himself, drawing on his earlier work. We are all used to calling it the "triple double-u" (www).

By the end of 1989, not only email was in demand in the USA and Europe, but also real-time communication and various news feeds, and commercial activity was developing. Tim Berners-Lee did not stop there and continued to modernize the newfangled system.

New face

The talented physicist-programmer developed a web server and the first web browser in history. Through his efforts the following were created: a page editor, the traditional way of writing a site address, the hypertext markup language (HTML), and data transfer protocols. In 1990, the Belgian Robert Cailliau joined him.

Robert was employed at the European Center for Nuclear Research (CERN), where he headed a group dealing with computing systems in the data processing division. Cailliau's efforts were aimed at securing core funding for Tim Berners-Lee's project.

Besides the financial and organizational issues, Robert Cailliau took an active part in the development and promotion of the web. However, he never formally secured co-authorship rights, and as a result he has been all but forgotten: in history, it is mostly the name of Tim Berners-Lee that is heard.

Conclusion

One wonders whether all the people mentioned imagined that by 2016 the world would literally plunge into the vastness of the Internet, with satellite links, video calls and much more, and that each country would have its own term for its segment of the global network, reflecting its linguistic identity (such as RUNET) and its national domains.

By the way, the first domain of the Russian Federation, .ru, was registered in the spring of 1994. Now every reader knows when, how and by whom the Internet was invented and implemented. Today it is an advanced achievement of science and technology and an organic part of modern society.

Initially, the Internet was a computer network for transmitting information, developed on the initiative of the US Department of Defense. The impetus was the first artificial Earth satellite, launched by the Soviet Union in 1957: the US military decided that in that case they needed an ultra-reliable communication system. ARPANET did not remain a secret for long and soon began to be actively used by various branches of science.

The first successful remote communication session was conducted in 1969, from Los Angeles to Stanford. In 1971, an instantly popular program for sending email over the network was developed. The first foreign organizations to connect to the network were in the UK and Norway, and with the laying of a transatlantic telephone cable to those countries, ARPANET became an international network.

ARPANET was perhaps the most advanced communication system of its time, but it was not the only one. Only by 1983, when the American network had been filled with its first newsgroups and bulletin boards and had switched to the TCP/IP protocol, which made it possible to interconnect with other computer networks, did ARPANET become the Internet. Literally a year later, this title began to pass gradually to NSFNet, an inter-university network that had greater capacity and connected 10 thousand computers within a year. The first Internet chat appeared in 1988, and in 1989 Tim Berners-Lee proposed the concept of the World Wide Web.

The World Wide Web

In 1990, ARPANET finally lost out to NSFNet. It is worth noting that both were developed by the same scientific organizations, only the first was commissioned by the US defense services while the second was built on their own initiative. Nevertheless, this competitive pairing led to the scientific developments and discoveries that made the World Wide Web a reality; it became publicly available in 1991. Berners-Lee, who had proposed the concept, spent the next two years developing the HTTP hypertext transfer protocol, the HTML language, and URL identifiers, which ordinary users know better as the Internet addresses of sites and pages.

The World Wide Web is a system that provides access to files on server computers connected to the Internet. This is partly why the concepts of the Web and the Internet are often used interchangeably today. In fact, the Internet is a communication technology, a kind of information space, and the World Wide Web fills it. This web consists of many millions of web servers: computers and systems of computers responsible for the operation of websites and pages. To access web resources (download or view them) from an ordinary computer, a browser program is used. Web and WWW are synonyms for the World Wide Web, whose users number in the billions.
