faq/doc - Fix

git-svn-id: https://svn.berlios.de/svnroot/repos/yacy/trunk@215 6c8d7289-2bf4-0310-a012-ef5d649a1542
pull/1/head
orbiter 20 years ago
parent 9af1bf4b38
commit a73e6de005

@@ -1,11 +1,11 @@
<html>
<head>
<title>YACY: FAQ</title>
<title>YaCy: FAQ</title>
<meta http-equiv="content-type" content="text/html;charset=iso-8859-1">
<!-- <meta name="Content-Language" content="German, Deutsch, de, at, ch"> -->
<meta name="Content-Language" content="English, Englisch">
<meta name="keywords" content="YACY HTTP Proxy search engine spider indexer java network open free download Mac Windwos Software development">
<meta name="description" content="YACY Software HTTP Proxy Freeware Home Page">
<meta name="keywords" content="YaCy HTTP Proxy search engine spider indexer java network open free download Mac Windows Software development">
<meta name="description" content="YaCy Software HTTP Proxy Freeware Home Page">
<meta name="copyright" content="Michael Christen">
<script src="navigation.js" type="text/javascript"></script>
<link rel="stylesheet" media="all" href="style.css">
@@ -19,7 +19,7 @@ globalheader();
<h2>FAQ</h2>
<p>YACY is not only a distributed search engine, but also a caching HTTP proxy.
<p>YaCy is not only a distributed search engine, but also a caching HTTP proxy.
Both application parts benefit from each other.</p>
@@ -28,20 +28,20 @@ Both application parts benefit from each other.</p>
We wanted to avoid a situation where you start a search service only for the moment when you submit a search query.
This would give the Search Engine too little online time.
So we looked for a reason why you would want to run the Search Engine during all the time that you are online.
By giving you the surplus value of a caching proxy, the reason was found.
The already built-in blacklist for the proxy is another surplus value.
By giving you the additional value of a caching proxy, the reason was found.
The built-in blacklist (URL filter, useful e.g. to block ads) for the proxy is another increase in value.
</p>
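The idea of a proxy-side URL blacklist can be sketched with shell-style patterns. The pattern syntax and the `is_blocked` helper below are illustrative assumptions for this sketch, not YaCy's actual blacklist format:

```python
import fnmatch

# Illustrative blacklist patterns (hypothetical; YaCy's real format differs).
BLACKLIST = ["*.doubleclick.net/*", "*/ads/*"]

def is_blocked(url: str) -> bool:
    # Strip the scheme, then match the host+path part of the URL
    # against shell-style wildcard patterns.
    bare = url.split("://", 1)[-1]
    return any(fnmatch.fnmatch(bare, pattern) for pattern in BLACKLIST)
```

A proxy would consult such a check before fetching, so filtered requests never leave the machine.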
<h3>Why is this Proxy also a Search Engine?</h3>
<p>YACY has a built-in <i>caching</i> proxy, which means that YACY has a lot of indexig information
<p>YaCy has a built-in <i>caching</i> proxy, which means that YaCy has a lot of indexing information
'for free' without crawling. This may not be a typical function of a proxy, but it is a very useful one:
you see a lot of information when you browse the internet, and maybe you would like to search exactly
only what you have seen. Beside this interesting feature, you can use YACY to index an intranet
simply by using the proxy; you don't need to additionally set up another search/indexing process
and maybe also of databases. YACY gives you an 'instant' database and an 'instant' search service.</p>
only what you have seen. Besides this interesting feature, you can use YaCy to index an intranet
simply by using the proxy; you don't need to set up a separate search/indexing process or database.
YaCy gives you an 'instant' database and an 'instant' search service.</p>
<h3>Can I Crawl The Web With YACY?</h3>
<p>Yes! You can start your own crawl and trigger also distributed crawling, which means that your peer asks other peers to perform specific crawl tasks. You can specify many parameters that focus your crawl to a limited set of web pages.</p>
<h3>Can I Crawl The Web With YaCy?</h3>
<p>Yes! You can start your own crawl and you may also trigger distributed crawling, which means that your peer asks other peers to perform specific crawl tasks. You can specify many parameters that focus your crawl on a limited set of web pages.</p>
<h3>What do you mean with 'Global Search Engine'?</h3>
<p>The integrated indexing and search service can not only be used locally, but also <i>globally</i>.
@@ -49,7 +49,7 @@ Every proxy distributes some contact information to all other proxies that can be reached
and proxies exchange <i>but do not copy</i> their indexes to each other.
This is done in such a way that every <i>peer</i> knows how to address the correct other
<i>peer</i> to retrieve a specific search index.
Therefore the community of all proxies spawn a <i>distributed hash table</i> (DHT)
Therefore the community of all proxies spawns a <i>distributed hash table</i> (DHT)
which is used to share the <i>reverse word index</i> (RWI) with all operators and users of the proxies.
The applied logic of distribution and retrieval of RWIs on the DHT combines all participating proxies into
a <i>Distributed Search Engine</i>.
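The peer-addressing idea can be sketched roughly as follows. The hash function (SHA-1) and the numeric-distance metric are simplifying assumptions for illustration, not YaCy's actual hashing scheme:

```python
import hashlib

def word_hash(word: str) -> str:
    # One-way hash of a search word (SHA-1 as a stand-in for
    # YaCy's own, shorter word-hash format).
    return hashlib.sha1(word.lower().encode("utf-8")).hexdigest()

def responsible_peer(word: str, peer_hashes: list[str]) -> str:
    # Every peer can compute the same answer locally, without
    # peer-hopping: the peer whose hash is numerically closest to
    # the word hash is responsible for that part of the index.
    target = int(word_hash(word), 16)
    return min(peer_hashes, key=lambda p: abs(int(p, 16) - target))
```

Because the mapping is deterministic, a searching peer can address the index-owning peer directly instead of flooding the network.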
@@ -66,12 +66,12 @@ We still distinguish three different <i>classes</i> of peers:
<li><i>senior</i> peers can be accessed by other peers and</li>
<li><i>principal</i> peers are like senior but can also upload network bootstrap information to ftp/http sites; this is necessary for the network bootstrapping.</li>
</ul>
Junior peers can contribute to the network by submitting index files to senior/principal peers without beeing asked. (This function is currently very limited)
Junior peers can contribute to the network by submitting index files to senior/principal peers without being asked. (This function is currently very limited)
</p>
<h3>Search Engines need a lot of terabytes of space, don't they? How much space do I need on my machine?</h3>
<p>The global index is <i>shared</i>, but not <i>copied</i> to the peers.
If you run YACY, you need an average of the same space for the index as you need for the cache.
If you run YaCy, you need an average of the same space for the index as you need for the cache.
In fact, the global space for the index may reach terabytes, but not all of that is on your machine!</p>
<h3>Search Engines must do crawling, don't they? Do you?</h3>
@@ -80,84 +80,57 @@ If you <i>want</i> to crawl, you can do so and start your own crawl job with a c
<h3>Does this proxy with search engine create much traffic?</h3>
<p>No, it may create <i>less</i>. Because it does not need to do crawling, you don't have additional traffic.
In contrast, the proxy does <i>caching</i> which means that double-load of known pages is avoided and this possibly
speeds up your internet connection. Index sharing makes some traffic, but is only performed during idle-time of the proxy and of your internet usage.</p>
In contrast, the proxy does <i>caching</i> which means that repeated loading of known pages is avoided and this possibly
speeds up your internet connection. Index sharing creates some traffic, but is only performed during idle time of the proxy and of your internet usage.</p>
<h3>Full-text indexing threads on my machine? This will slow down my internet browsing too much.</h3>
<p>No, it does not, because indexing is only performed when the proxy is idle. This shifts the computing time to the moment when you read pages and you don't need computing time. Indexing is stopped automatically the next time you retrieve web pages through the proxy.</p>
<p>No, it won't, because indexing is only performed when the proxy is idle. This shifts the computing time to the moment when you read pages and you don't need computing time. Indexing is stopped automatically the next time you retrieve web pages through the proxy.</p>
<h3>Do I need a fast machine? Search Engines need big server farms, don't they?</h3>
<p>You don't need a fast machine to run YACY. You also don't need a lot of space. You can configure the amount of Megabytes that you want to spend for the cache and the index. Any time-critical task is delayed automatically and takes place when you are idle surfing. Whenever internet pages pass the proxy, any indexing (or if wanted: prefetch-crawling) is interrupted and delayed. The root server runs on a simple 500 MHz/20 GB Linux system. You don't need more.</p>
<p>You don't need a fast machine to run YaCy. You also don't need a lot of space. You can configure the number of megabytes that you want to spend on the cache and the index. Any time-critical task is delayed automatically and takes place when your surfing is idle. Whenever internet pages pass the proxy, any indexing (or, if wanted, prefetch-crawling) is interrupted and delayed. The root server runs on a simple 500 MHz/20 GB Linux system. You don't need more.</p>
<h3>Does the caching procedure slow down or delay my internet usage?</h3>
<p>No. Any file that passes the proxy is <i>streamed</i> through the filter and caching process. At a certain point the information stream is duplicated; one copy is streamed to your browser, the other one to the cache. The files that pass the proxy are not delayed because they are <i>not</i> first stored and then passed to you, but streamed at the same time to you as it is streamed to the cache. Therefore your browser can do layout-while-loading as it would do without the proxy.</p>
<p>No. Any file that passes the proxy is <i>streamed</i> through the filter and caching process. At a certain point the information stream is duplicated; one copy is streamed to your browser, the other one to the cache. The files that pass the proxy are not delayed because they are <i>not</i> first stored and then passed to you, but streamed at the same time as they are streamed to the cache. Therefore your browser can do layout while loading as it would do without the proxy.</p>
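The tee-style duplication described above can be sketched with in-memory streams (a minimal model; the real proxy works on HTTP sockets and filters):

```python
import io

def stream_through(source, client, cache, chunk_size: int = 8192):
    # Duplicate the stream at read time: every chunk goes to the
    # browser immediately and to the cache in the same pass, so
    # caching never delays the client.
    while True:
        chunk = source.read(chunk_size)
        if not chunk:
            break
        client.write(chunk)
        cache.write(chunk)
```

Both sinks end up with identical bytes, and the client receives each chunk as soon as it arrives rather than after the whole file is cached.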
<h3>How can you ensure actuality of the search results?</h3>
<h3>How can you ensure that search results are up-to-date?</h3>
<p>Nobody can. How can a 'normal' search engine ensure this? By doing 'brute force crawling'?
We have a better solution for acuality: browsing results of all people who run YACY.
Many people prefer to look at news pages every day, and by passing through the proxy the latest news also arrive in the distributed search engine.
This may take place possibly faster than it happens with a normal/crawling search engine.
And the search results reflect the 'general demand' of information, because it is the average of all contributors.</p>
We have a better solution for staying up-to-date: the browsing results of all people who run YaCy.
Many people look at news pages every day, and by passing through the proxy the latest news also arrives in the distributed search engine. This may well happen faster than with a conventional crawling search engine.</p>
<h3>I don't want to wait for search results much time. How much time takes a search?</h3>
<p>Our architecture does not do peer-hopping, we also don't have a TTL (time to live). We expect that search results are <i>instantly</i> responded to the requester.
This can be done by asking the index-owning peer <i>directly</i> which is in fact possible by using DHT's (distributed hash tables).
Because we need some redundancy to catch up missing peers, we ask several peers simultanously. To collect their respond, we wait a little time of at most 10 seconds.
The user may configure a search time different than 10 seconds, but this is our target of <i>maximum</i> search time.</p>
<h3>I don't want to wait for search results very long. How long does a search take?</h3>
<p>Our architecture does not do peer-hopping, and we don't have a TTL (time to live) either. We expect that search results are returned <i>instantly</i> to the requester. This can be done by asking the index-owning peer <i>directly</i>, which is possible by using DHTs (distributed hash tables). Because we need some redundancy to compensate for missing peers, we ask several peers simultaneously. To collect their responses, we wait at most 10 seconds. The user may configure a search time different from 10 seconds, but this is our target for the <i>maximum</i> search time.</p>
<h3>I am scared about the fact that the browsing results are distributed. What about privacy?</h3>
<p>Don't be scared. We have an architecture that hides your private browsing profile from others. For example: no-one of the words that are indexed from
the pages you have seen is stored in clear text on your computer. Instead, a hash is used which can not be computed back into the original word. Because
Index files travel along peers you cannot state if a specific link was visited by you or another peer-user, so this frees you from beeing responsible
about the index files on your machine.</p>
<p>Don't be scared. We have an architecture that hides your private browsing profile from others. For example: none of the words that are indexed from the pages you have seen is stored in clear text on your computer. Instead, a hash is used which cannot be computed back into the original word. Because index files travel among peers, you cannot tell whether a specific link was visited by you or by another peer's user, so this frees you from being responsible for the index files on your machine.</p>
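A minimal model of such a hashed index might look like this; SHA-256 stands in for YaCy's own word-hash scheme, and the in-memory dict for its on-disk index files:

```python
import hashlib

def _h(word: str) -> str:
    # One-way word hash; the plain word is never stored.
    return hashlib.sha256(word.lower().encode("utf-8")).hexdigest()

def index_page(text: str, url: str, index: dict) -> None:
    # The index maps word hashes to URLs, so inspecting the index
    # files reveals no readable search words.
    for word in text.lower().split():
        index.setdefault(_h(word), set()).add(url)

def search(word: str, index: dict) -> set:
    # Searching hashes the query word and looks the hash up.
    return index.get(_h(word), set())
```

Lookups still work normally, because the searcher hashes the query word the same way, but the stored keys alone do not reveal what was browsed.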
<h3>Do I need to set up and run a separate database?</h3>
<p>No. YACY contains it's own database engine, which does not need any extra set-up or configuration.</p>
<p>No. YaCy contains its own database engine, which does not need any extra set-up or configuration.</p>
<h3>What kind of database do you use? Is it fast enough?</h3>
<p>The database stores either tables or property-lists in filed AVL-Trees. These are height-regulated binary trees.
Such a search tree ensures a logarithmic order of computation time. For example a search within an AVL tree with one million entries needs
an average of 20 comparisments, and at most 24 in the worst case. This database is therefore extremely fast. It lacks an API like
SQL or the LDAP protocol, but it does not need one because it provides a highly specialized database structure.
The missing interface pays off with a very small organization overhead, which improves the speed further in comparisment with other databases
with SQL or LDAP api's. This database is fast enough
for millions of indexed web pages, maybe also for billions. The speed is sufficient for billions of pages, but not the file organization
structure at the moment, because the tree-files would become too big. We will provide a solution at the time we need such big tables.</p>
<p>The database stores either tables or property lists in filed AVL trees. These are files with the data structure of height-regulated binary trees. Such a search tree ensures a logarithmic order of computation time. For example, a search within an AVL tree with one million entries needs an average of 20 comparisons, and at most 24 in the worst case. This database is therefore extremely fast. It lacks an API like SQL or the LDAP protocol, but it does not need one because it provides a highly specialized database structure. The missing interface pays off with a very small organization overhead, which improves the speed further in comparison to other databases with SQL or LDAP APIs. This database is fast enough for millions of indexed web pages, maybe also for billions.</p>
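The comparison count follows from the logarithmic height of a balanced binary tree; the worst case for AVL trees is only a small constant factor above this:

```python
import math

n = 1_000_000
# A perfectly balanced binary tree with n entries has height
# ceil(log2(n + 1)), which bounds the comparisons for one lookup.
avg_comparisons = math.ceil(math.log2(n + 1))
print(avg_comparisons)  # 20 for one million entries
```

Doubling the number of entries adds only about one comparison per lookup, which is why the database scales to very large indexes.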
<h3>Why do you use your own database? Why not use mySQL or openLDAP?</h3>
<p>The database structure we need is very special. One demand is that the entries can be retrieved in logarithmic time <i>and</i> can be
enumerated in any order. Enumeration in a specific order is needed to create conjunctions of tables very fast. This is needed when someone
searches for several words. We implement the search word conjunction by pairwise and simultanous enumeration/comparisment of index trees/sequences.
This forces us to use binary trees as data structure. Another demand is that we need the ability to have many index tables, maybe <i>millions
of tables</i>. The size of the tables may be not big in average, but we need many of them. This is in contrast of the organization of
relational databases, where the focus is on management of very large tables, but not of many of them. A third demand is the ease of
installation and maintenance: the user shall not be forced to install a RBMS first, care about tablespaces and such. The integrated
database is completely service-free.</p>
<p>The database structure we need is very special. One demand is that the entries can be retrieved in logarithmic time <i>and</i> can be enumerated in any order. Enumeration in a specific order is needed to create conjunctions of tables very fast. This is needed when someone searches for several words. We implement the search word conjunction by pairwise and simultaneous enumeration/comparison of index trees/sequences. This forces us to use binary trees as the data structure. Another demand is that we need the ability to have many index tables, maybe <i>millions of tables</i>. The tables may not be big on average, but we need many of them. This is in contrast to the organization of relational databases, where the focus is on the management of very large tables, but not of many of them. A third demand is ease of installation and maintenance: the user shall not be forced to install an RDBMS first, care about tablespaces and such. The integrated database is completely service-free.</p>
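The pairwise, simultaneous enumeration of two ordered index sequences can be sketched as a classic merge-style intersection (illustrative, not YaCy's actual code):

```python
def intersect_sorted(a: list, b: list) -> list:
    # Walk both sorted sequences with one cursor each, always
    # advancing the cursor that points at the smaller element;
    # equal elements belong to the conjunction.
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i]); i += 1; j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return out
```

Each input element is visited at most once, so a two-word conjunction costs a single linear pass over both ordered indexes.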
<h3>What does Senior Mode mean? What is Junior Mode?</h3>
<p><i>Junior</i> peers are peers that cannot be reached by other peers, while <i>Senior</i> peers can be contacted.
If your peer has global access, it runs in Senior Mode. If it is hidden from others, it is in Junior Mode.
If your peer is in Senior Mode, it is an access point for index sharing and distribution. It can be contacted for search requests and it collects index files
from other peers. If your peer is in Junior Mode, it collects index files from your browsing and distributes them only to other Senior peers, but does not collect index files.
If your peer is in Senior Mode, it is an access point for index sharing and distribution. It can be contacted for search requests and it collects index files from other peers. If your peer is in Junior Mode, it collects index files from your browsing and distributes them only to other Senior peers, but does not collect index files.
</p>
<h3>Why should I run my proxy in Senior Mode?</h3>
<p>Some p2p-based file sharing software assign non-contributing peers very low priority. We think that that this is not always fair since sometimes the operator
does not always has the choice of opening the firewall or configuring the router accordingly. Our idea of 'information wares' and their exchange can also be
applied to junior peers: they must contribute to the global index by submitting their index <i>actively</i>, while senior peers contribute <i>passively</i>.
<p>Some p2p-based file sharing software assigns non-contributing peers very low priority. We think that this is not always fair, since sometimes the operator does not have the choice of opening the firewall or configuring the router accordingly. Our idea of 'information wares' and their exchange can also be applied to junior peers: they must contribute to the global index by submitting their index <i>actively</i>, while senior peers contribute <i>passively</i>.
Therefore we don't need to give junior peers low priority: they contribute equally, so they may participate equally.
But enough senior peers are needed to make this architecture functional.
Since any peer contributes almost equally, either actively or passively, you should
decide to run in Senior Mode if you can.
But enough senior peers are needed to make this architecture functional. Since any peer contributes almost equally, either actively or passively, you should decide to run in Senior Mode if you can.
</p>
<h3>My proxy says it runs in 'Junior Mode'. How can I run it in Senior Mode?</h3>
<p>Open your firewall for port 8080 (or the port you configured) or program your router to act as a <i>virtual server</i>.</p>
<h3>How can I help?</h3>
<p>First of all: run YACY in senior mode. This helps to enrich the global index and to make YACY more attractive.
<p>First of all: run YaCy in senior mode. This helps to enrich the global index and to make YaCy more attractive.
If you want to add your own code, you are welcome; but please contact the author first and discuss your idea to see how it may fit into the overall architecture.
You can help a lot by simply giving us feed-back or telling us about new ideas. You can also help by telling other people about this software.
You can help a lot by simply giving us feedback or telling us about new ideas. You can also help by telling other people about this software.
And if you find an error or you see an exception, we welcome your defect report. Any feedback is welcome.</p>
<!-- ----- HERE ENDS CONTENT PART ----- -->

@@ -20,7 +20,7 @@ globalheader();
<h2>Technology</h2>
<p>YACY consists mainly of four parts: the <b>p2p index exchange</b> protocol, based on http; a <b>spider/indexer</b>; a <b>caching http proxy</b> which is not only a simple <i>surplus value</i> but also an <i>informtaion provider</i> for the indexing engine and the built-in <b>database engine</b> which makes installation and maintenance of yacy very easy.</p>
<p>YACY consists mainly of four parts: the <b>p2p index exchange</b> protocol, based on http; a <b>spider/indexer</b>; a <b>caching http proxy</b>, which is not only a simple <i>increase in value</i> but also an <i>information provider</i> for the indexing engine; and the built-in <b>database engine</b>, which makes installation and maintenance of YaCy very easy.</p>
<center><img src="grafics/architecture.gif"></center>
<p>All parts of this architecture are included in the YACY distribution. The YACY search engine can be accessed through the built-in http server.</p>

@@ -10,7 +10,7 @@
<h2>The YACY Lab</h2>
<p>
This is the place where we try new functions and future surplus-values of the AnomicHTTPProxy and the YACY search engine.
This is the place where we try new functions of the YaCy search engine.
All these things are to be considered probably unstable and/or experimental.
You may try them out, but please expect bugs.</p>
