The Cyber protocol redefines search engines with Web 3.0

Disclaimer: The text below is a sponsored article that was not written by this publication's reporters.

Cyber is building an alternative to Web 2.0 search engines such as Google and Yandex: a search engine built for Web 3.0. Existing search engines, of which Google remains the most popular, have fundamental problems in their architecture. A search engine built on Web 3.0 is a new, even revolutionary, concept. To understand the importance of what the Cyber protocol does, however, it is first necessary to understand how Web 2.0 search engines work and how Cyber aims to resolve their issues.

How Google Works

Google is the leading search engine on the Internet: around 80% of search queries are made through it, making it the dominant service in the space. However, a number of issues arise when using it. For one, indexing, and how it is performed, remains a mystery to users. There are countless theories about how search results are selected when a user enters a query.

This is a problem because users cannot be sure what results they will get for a given query. For example, Google’s algorithm will show two different sets of results to two different users making the same query. It does this by using data collected from each user over time to present results tailored to their browsing history, adjusting those results accordingly.

Problems with Web 2.0 search engines

A fundamental problem with Google’s service concerns the indexing of links. The link-indexing mechanism matters because it is how content gets ranked: it determines the order in which results are displayed when a user types a query.

It has been postulated that Google indexes based on the amount of content relevant to the query, but there is no definitive proof of this. In practice, a site with less query-related information will sometimes appear before a site with more. If so, Google is not ranking content purely on its relevance to the query. How does it rank it, then?

Google takes into account a user’s location, previous queries, local laws, and more. It tailors the search results to each user, which means results shown to one user may be omitted for another. This opaque process can be inefficient for the user entering the query, because they may never see the information they need.

The Web 2.0 search architecture is built on protocols such as TCP/IP, DNS, URL, and HTTP/S. All of these protocols use location addressing: URL links. These links are presented to a user when they enter a query, and by clicking on them the user is taken to a third-party website where the content they are looking for is supposedly located. This mechanism can cause problems.

One problem with location addressing is the ease of content tampering. Any content on the Internet can be edited, deleted, or blocked at any time. The use of hyperlinks also lets malicious actors silently replace content with something dangerous or harmful, and content can be blocked by local authorities pursuing a political objective.
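The contrast with content addressing can be sketched in a few lines of Python. This is an illustration only: real content-addressed systems such as IPFS use multihash-encoded CIDs, but a plain SHA-256 digest shows the same property, namely that an identifier derived from the content itself makes tampering detectable, whereas a URL says nothing about what is behind it.

```python
import hashlib

def content_id(data: bytes) -> str:
    """Identify content by what it is (a hash of its bytes),
    not by where it lives (a URL). Stand-in for an IPFS CID."""
    return hashlib.sha256(data).hexdigest()

original = b"An article the author published."
tampered = b"An article someone silently edited."

# The same bytes always map to the same identifier...
assert content_id(original) == content_id(original)

# ...while any edit, however small, yields a different one,
# so replaced or altered content is immediately detectable.
assert content_id(original) != content_id(tampered)
```

A URL, by contrast, can keep resolving while the content behind it changes; a content hash cannot.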

How will Cyber solve these problems?

Cyber has built an experimental Web 3.0 search engine prototype that it is currently testing. The prototype is essentially a browser within a browser that lets users surf the web: they can search and browse content using its built-in IPFS node, index content, and interact with decentralized applications.

A Web 3.0 search engine differs greatly from Web 2.0 in that it removes the opaque indexing mechanism used by Google and others. Crawlers are not required to detect changes to a site’s content, because changes propagate to search results directly.

Individual users wield power over search results through a peer-to-peer network of participants. It works much like torrenting: storage is resilient, content cannot be censored, and content remains accessible without relying on any single trusted connection. In a Web 3.0 search engine, the risk of censorship or loss of privacy is therefore sharply reduced.

Web 3.0 search engines use a public database open to everyone, whereas Google and Yandex rely on centralized databases with limited public access.

How does Cyber categorize content?

The ranking of content in a Web 3.0 search engine differs greatly from existing search engines. Cyber is based on a content oracle: a collaborative, dynamic, and distributed knowledge graph that is the direct result of the work of all participants in the decentralized network.

Web 3.0 search engines rank relevant content using cyberlinks rather than hyperlinks. To add content to the knowledge graph, a user submits a transaction containing a cyberlink. Much like torrenting, a user becomes a distribution point after finding and downloading content. Cyberlinks resemble the payload fields of Ethereum transactions, except that cyberlink data is structured.
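The structure described above can be sketched as follows. This is a minimal, assumed model, not Cyber's actual wire format: a cyberlink is represented here as a directed edge between two content hashes (e.g. a query and a document a user deems relevant to it), with a plain SHA-256 digest standing in for an IPFS CID.

```python
from dataclasses import dataclass
import hashlib

def cid(data: bytes) -> str:
    # Stand-in for an IPFS CID: a plain SHA-256 hex digest.
    return hashlib.sha256(data).hexdigest()

@dataclass(frozen=True)
class Cyberlink:
    """A directed edge in the knowledge graph between two
    content hashes: structured data, unlike a free-form
    Ethereum transaction payload."""
    frm: str  # hash of the "from" content (e.g. a search phrase)
    to: str   # hash of the "to" content (e.g. a document)

# A user links a query to content they found relevant:
query = cid(b"web3 search")
doc = cid(b"An article about decentralized search.")
link = Cyberlink(frm=query, to=doc)
```

Because both endpoints are content hashes, the link remains valid no matter which peer ends up serving the underlying bytes.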

Cyberlink ranking is driven by tokenomics. Users find content through hashes stored by other users; because content is addressed by its hash, modifying the content produces a new hash. This makes it possible to find content without knowing the location of any server, and permalinks can be exchanged without ever breaking.

Cyber has also developed a ranking algorithm called cyberRank. It works similarly to PageRank but differs in the protection it offers users: cyberRank guards the knowledge graph against spam, cyber attacks, and selfish user behavior through an economic cost mechanism.
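To make the PageRank comparison concrete, here is a toy rank iteration over a set of directed cyberlinks. This is a generic PageRank-style sketch, not cyberRank itself: the real algorithm also weighs the tokens staked behind each link, which is omitted here.

```python
def rank(links, damping=0.85, iters=50):
    """Toy PageRank-style iteration over directed (frm, to)
    cyberlinks. Returns a score per node, summing to 1."""
    nodes = {n for link in links for n in link}
    out = {n: [t for f, t in links if f == n] for n in nodes}
    r = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        # Base score from random teleportation.
        nxt = {n: (1 - damping) / len(nodes) for n in nodes}
        for n in nodes:
            targets = out[n]
            if targets:
                # Distribute this node's rank over its outlinks.
                share = damping * r[n] / len(targets)
                for t in targets:
                    nxt[t] += share
            else:
                # Dangling node: spread its rank evenly.
                for t in nodes:
                    nxt[t] += damping * r[n] / len(nodes)
        r = nxt
    return r

# "a" is linked from both "q" and "b", so it ranks highest.
links = [("q", "a"), ("q", "b"), ("b", "a")]
scores = rank(links)
```

In Cyber's design, the cost of submitting each link (paid in tokens, as the next section describes) is what makes flooding this graph with spam economically unattractive.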

Tokenomics for content ranking

The entire Cyber network will be governed by its tokenomics. Users will need tokens in order to rank content in the knowledge graph; tokens grant indexing and ranking capabilities, providing access to the knowledge graph’s resources.

The tokens will allow users to index content with V (volts) and rank it with A (amps). To obtain these, the H (hydrogen) token must be held by the user for some time. H is obtained by liquid staking of BOOT (Bostrom) and CYB (Cyber) tokens, similar to staking rewards on Polkadot, Solana, or Cosmos.
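The relationship the article describes, holding H over time yields V for indexing and A for ranking, can be illustrated with a deliberately simplified model. The rates and the linear formula below are invented for illustration; the actual emission schedule is not specified in this article.

```python
def resource_tokens(h_balance: float, days_held: int,
                    volt_rate: float = 0.001,
                    amp_rate: float = 0.001):
    """Hypothetical illustration: holding H (hydrogen, from
    liquid-staking BOOT/CYB) accrues V (volts, indexing) and
    A (amps, ranking) over time. Rates here are invented."""
    volts = h_balance * days_held * volt_rate
    amps = h_balance * days_held * amp_rate
    return volts, amps

# Holding 1000 H for 30 days under these assumed rates:
v, a = resource_tokens(h_balance=1000.0, days_held=30)
```

The key property, whatever the real schedule, is that indexing and ranking capacity is earned by committing stake over time rather than bought per query.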

70% of the tokens will be distributed to Ethereum users at genesis. Users’ on-chain activity will be analyzed to determine whether they are eligible for the airdrop, putting the majority of tokens in the hands of users who have already proven they create value on blockchains.

What to expect in a decentralized search engine

Search results in a Web 3.0 search engine will not look exactly like those of existing engines. First, results will include the desired content itself, readable or viewable without clicking through to a third-party site.

Second, payment buttons for online stores can be integrated directly into search snippets, along with buttons for interacting with applications on any blockchain included in the results.

With the launch of the Bostrom Canary Network, users can now participate in the Superintelligence bootstrapping process with the help of Cyb.

Rosemary S. Bishop