+The proxy 'scrapes' the content that it passes and creates an index that can be shared among all YACY Proxy daemons.
+You can use the indexing feature for intranet indexing:
+you instantly have a search service at hand to index all intranet-served web pages.
+You don't need to set up a separate search service. And the underlying PLASMA
+indexing is not a naive quick hack but a properly engineered and extremely fast algorithm;
+it is capable of indexing a nearly unlimited number of pages without slowing down the search process.
+
+
p2p-Based Global Search Engine
+The proxy contains an index-sharing p2p-based algorithm which creates a global distributed search engine.
+This spawns a world-wide global search index.
+The current release is a minimum implementation of this concept and is intended to prove its functionality.
+
+
Caching HTTP and transparent HTTPS Proxy
+With optional pre-fetching. HTTP 1.1 with GET/HEAD/POST/CONNECT is supported, which is sufficient for nearly all public web pages.
+HTTP headers are transparently forwarded. HTTPS connections through target port 443 are transparently forwarded; non-443 connections are suppressed to enhance security. Both the HTTP and the HTTPS proxy share the same proxy port, which is port 8080 by default.
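Because HTTP and HTTPS share one proxy port, a client needs only a single proxy setting. A minimal sketch in Java, assuming the default localhost:8080 address:

```java
import java.net.InetSocketAddress;
import java.net.Proxy;

public class ProxyConfig {
    // Build a java.net.Proxy pointing at the shared proxy port.
    // "localhost" and 8080 are only the default YaCy settings; adjust
    // them if you changed the proxy configuration.
    public static Proxy yacyProxy(String host, int port) {
        return new Proxy(Proxy.Type.HTTP, new InetSocketAddress(host, port));
    }
}
```

The same `Proxy` object can be passed to `URL.openConnection(Proxy)` for both HTTP and HTTPS requests.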
+
+
+The proxy can block unwanted access by setting IP filters and HTTP passwords.
+You can also enhance security by inspecting the source code, which is included in full.
+Check the code and re-build your own proxy.
+
+
Web/HTTP server
+The built-in HTTP server is the interface to the local and global search service;
+the server may not only be used to administrate the proxy, but also to serve as an intranet/internet web server.
+
+
Ideal Internet Cafe Proxy Solution
+Every Internet Cafe needs a caching proxy, rather than only NAT routing of the cafe's client traffic, to maximize bandwidth.
+The YACY Proxy provides this caching naturally. Future versions may also include
+billing support functions.
+
+
Terminal-Based
+the proxy does not need a window-based environment and can run on a screen-less router; you may therefore run the proxy on your already existing servers, whatever they are: YACY Proxy is written in Java and will also run on your platform.
+
+
Open-Source
+This is a simple necessity for an application that implements a server.
+Don't use any other server software that does not come with the source code.
+Volunteers to extend the proxy are welcome!
+If you think you have a great idea how to extend/enhance/fix the proxy, please let me know.
+
+
Easy Installation
+You just need to decompress the release container with your favourite decompressor (zip, rar, sit, tar etc. will do)
+and double-click the application wrapper for your OS. No restart is necessary.
+
+
+A first alpha version is available. Please consider that the application may not yet perform well, nor always behave correctly. The basic functionality is available, but may contain bugs. Please also see the release's 'wishlist.txt' for the list of all not-yet-implemented features and known bugs.
+
Download Steps:
+
+
1st Step: Agree With License
+
If you download the software, you must agree to the application's GPL-based license.
The release comes in different flavours: a general one with application wrappers for Unix/Linux, Macintosh OS X and Windows, and a specialized Windows version with a Windows installer. Please choose either one.
+
Please go to the installation page.
+If you upgrade from a previous version of YaCy, please migrate your data
+(simply move the DATA directory to your new application directory).
+
+
Final Step: Your Contribution is Appreciated
+
Open-Source/Freeware needs your contribution!
+Even if you are a non-programmer or first-time user of this software, you can help to
+
+
improve and extend functionality,
+
security,
+
ease-of-use and
+
usability
+
+of this distribution by
+
+
giving feed-back,
+
reporting bugs,
+
suggesting new functions or
+
even getting your hands on the source code or documentation.
+
+When you find a bug, please help to improve the application further by sending me a bug report.
+The report should describe the complete set of actions necessary to reproduce the error.
+Please contact me here. Thank you!
+
+
+
+
+
diff --git a/doc/FAQ.html b/doc/FAQ.html
new file mode 100644
index 000000000..2cbd62b07
--- /dev/null
+++ b/doc/FAQ.html
@@ -0,0 +1,168 @@
+
+
+YACY: FAQ
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
FAQ
+
+
YACY is not only a distributed search engine, but also a caching HTTP proxy.
+Both application parts benefit from each other.
+
+
+
Why is this Search Engine also a Proxy?
+
+We wanted to avoid a situation where you start a search service only for the moment when you submit a search query.
+This would give the Search Engine too little online time.
+So we looked for a reason why you would want to run the Search Engine during all the time that you are online.
+The surplus value of a caching proxy provided that reason.
+The proxy's built-in blacklist is another surplus value.
+
+
+
Why is this Proxy also a Search Engine?
+
YACY has a built-in caching proxy, which means that YACY gets a lot of indexing information
+'for free' without crawling. This may not be a very usual function for a proxy, but it is a very useful one:
+you see a lot of information when you browse the internet, and maybe you would like to search exactly
+what you have seen. Besides this interesting feature, you can use YACY to index an intranet
+simply by using the proxy; you don't need to additionally set up another search/indexing process
+or database. YACY gives you an 'instant' database and an 'instant' search service.
+
+
Can I Crawl The Web With YACY?
+
Yes! You can start your own crawl and also trigger distributed crawling, which means that your peer asks other peers to perform specific crawl tasks. You can specify many parameters that focus your crawl on a limited set of web pages.
+
+
What do you mean with 'Global Search Engine'?
+
The integrated indexing and search service can be used not only locally, but also globally.
+Every proxy distributes some contact information to all other proxies that can be reached in the internet,
+and proxies exchange, but do not copy, their indexes with each other.
+This is done in such a way that every peer knows how to address the correct other
+peer to retrieve a specific search index.
+Therefore the community of all proxies spans a distributed hash table (DHT),
+which is used to share the reverse word index (RWI) among all operators and users of the proxies.
+The applied logic of distribution and retrieval of RWIs over the DHT combines all participating proxies into
+a Distributed Search Engine.
+To point out that this is in contrast to local indexing and searching,
+we call it a Global Search Engine.
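The DHT lookup described above can be sketched roughly as follows; the real YaCy word hash and distance metric differ, so the hash ring below is purely illustrative:

```java
import java.util.Map;
import java.util.NavigableMap;

public class DhtSketch {
    // Map a search word to a position on a hash ring and pick the peer
    // whose position is closest at or after it, wrapping around at the
    // end. Purely illustrative: the real YaCy word hash and distance
    // metric are different.
    public static String peerFor(String word, NavigableMap<Integer, String> ring) {
        int pos = Math.floorMod(word.hashCode(), 1 << 16);
        Map.Entry<Integer, String> e = ring.ceilingEntry(pos);
        return (e != null) ? e.getValue() : ring.firstEntry().getValue();
    }
}
```

The point is that every peer can compute, without any central server, which other peer is responsible for a given word.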
+
+
+
Is there a central server? Does the search engine network need one?
+
No. The network architecture does not need a central server, and there is none.
+In fact there is a root server which is the 'first' peer, but any other peer has the same rights and tasks to perform.
+We still distinguish three different classes of peers:
+
+
junior peers are peers that cannot be reached from the internet because of routing problems or firewall settings;
+
senior peers can be accessed by other peers and
+
principal peers are like senior peers but can also upload network bootstrap information to ftp/http sites; this is necessary for network bootstrapping.
+
+Junior peers can contribute to the network by submitting index files to senior/principal peers without being asked. (This function is currently very limited.)
+
+
+
Search Engines need a lot of terabytes of space, don't they? How much space do I need on my machine?
+
The global index is shared, but not copied to the peers.
+If you run YACY, the index needs on average about the same space as the cache.
+In fact, the global space for the index may reach terabytes, but not all of that on your machine!
+
+
Search Engines must do crawling, don't they? Do you?
+
No, not necessarily. YACY can crawl, but we collect information simply by using the information that passes through the proxy.
+If you want to crawl, you can start your own crawl job with a certain search depth.
+
+
Does this proxy with search engine create much traffic?
+
No, it may even create less. Because it does not need to do crawling, you have no additional traffic.
+In contrast, the proxy does caching, which means that double-loading of known pages is avoided, and this possibly
+speeds up your internet connection. Index sharing creates some traffic, but it is only performed while the proxy and your internet connection are idle.
+
+
Full-text indexing threads on my machine? This will slow down my internet browsing too much.
+
No, it does not, because indexing is only performed when the proxy is idle. This shifts the computing time to the moments when you read pages and don't need computing time. Indexing is paused automatically whenever you retrieve web pages through the proxy.
+
+
Do I need a fast machine? Search Engines need big server farms, don't they?
+
You don't need a fast machine to run YACY. You also don't need a lot of space. You can configure the number of megabytes that you want to spend on the cache and the index. Any time-critical task is delayed automatically and takes place while your surfing is idle. Whenever internet pages pass the proxy, any indexing (or, if wanted, prefetch crawling) is interrupted and delayed. The root server runs on a simple 500 MHz/20 GB Linux system. You don't need more.
+
+
Does the caching procedure slow down or delay my internet usage?
+
No. Any file that passes the proxy is streamed through the filter and caching process. At a certain point the information stream is duplicated; one copy is streamed to your browser, the other to the cache. Files that pass the proxy are not delayed, because they are not first stored and then passed on to you, but streamed to you at the same time as they are streamed to the cache. Therefore your browser can do layout-while-loading just as it would without the proxy.
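The duplication step described above behaves like a 'tee' on the output stream; a minimal sketch:

```java
import java.io.IOException;
import java.io.OutputStream;

public class TeeOutputStream extends OutputStream {
    private final OutputStream browser;
    private final OutputStream cache;

    public TeeOutputStream(OutputStream browser, OutputStream cache) {
        this.browser = browser;
        this.cache = cache;
    }

    @Override
    public void write(int b) throws IOException {
        browser.write(b); // forward the byte to the client immediately ...
        cache.write(b);   // ... and copy the very same byte into the cache
    }
}
```

Because every byte is written to both sinks as it arrives, the client never waits for the cache.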
+
+
How can you ensure the freshness of the search results?
+
Nobody can. How can a 'normal' search engine ensure this? By doing 'brute force crawling'?
+We have a better solution for freshness: the browsing results of all the people who run YACY.
+Many people prefer to look at news pages every day, and by passing through the proxy the latest news also arrives in the distributed search engine.
+This may take place even faster than it happens with a normal, crawling search engine.
+And the search results reflect the 'general demand' for information, because they are the average of all contributors.
+
+
I don't want to wait long for search results. How long does a search take?
+
Our architecture does not do peer-hopping, and we don't have a TTL (time to live) either. We expect that search results are returned to the requester almost instantly.
+This can be done by asking the index-owning peer directly, which is possible by using DHTs (distributed hash tables).
+Because we need some redundancy to compensate for missing peers, we ask several peers simultaneously. To collect their responses, we wait a short time of at most 10 seconds.
+The user may configure a search time other than 10 seconds, but this is our target maximum search time.
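The ask-several-peers-with-a-deadline idea can be sketched with standard Java concurrency utilities. The peers are stubbed as `Callable`s here, which is an assumption for illustration, not the real YaCy protocol code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class RemoteSearch {
    // Ask several peers at once and keep whatever answered within the
    // deadline; peers that miss it are cancelled and dropped.
    public static List<String> ask(List<Callable<String>> peers, long seconds)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(Math.max(1, peers.size()));
        try {
            List<String> hits = new ArrayList<>();
            for (Future<String> f : pool.invokeAll(peers, seconds, TimeUnit.SECONDS)) {
                if (!f.isCancelled()) {
                    try {
                        hits.add(f.get());
                    } catch (ExecutionException ignored) {
                        // a failed peer contributes nothing
                    }
                }
            }
            return hits;
        } finally {
            pool.shutdownNow();
        }
    }
}
```

`invokeAll` with a timeout gives exactly the desired behaviour: slow peers are cut off at the deadline, and the search answers with whatever arrived in time.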
+
+
I am scared about the fact that the browsing results are distributed. What about privacy?
+
Don't be scared. We have an architecture that hides your private browsing profile from others. For example: none of the words that are indexed from
+the pages you have seen is stored in clear text on your computer. Instead, a hash is used, which cannot be computed back into the original word. Because
+index files travel between peers, no-one can tell whether a specific link was visited by you or by another peer user, which frees you from being held responsible
+for the index files on your machine.
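The one-way hashing of indexed words can be illustrated as follows; MD5 and Base64 stand in here for whatever digest and encoding the real index uses:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

public class WordHash {
    // Store only a digest of each indexed word, never the word itself.
    // The lower-casing is also an assumption of this sketch.
    public static String hash(String word) throws NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("MD5")
                .digest(word.toLowerCase().getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(digest);
    }
}
```

All digests have the same fixed length, so not even the length of the original word leaks from the stored index key.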
+
+
Do I need to set up and run a separate database?
+
No. YACY contains its own database engine, which does not need any extra set-up or configuration.
+
+
What kind of database do you use? Is it fast enough?
+
The database stores either tables or property lists in filed AVL trees. These are height-regulated binary trees.
+Such a search tree ensures a logarithmic order of computation time. For example, a search within an AVL tree with one million entries needs
+an average of 20 comparisons, and at most 24 in the worst case. This database is therefore extremely fast. It lacks an API like
+SQL or the LDAP protocol, but it does not need one, because it provides a highly specialized database structure.
+The missing interface pays off with a very small organizational overhead, which improves the speed further in comparison with other databases
+with SQL or LDAP APIs. This database is fast enough
+for millions of indexed web pages, maybe also for billions. The speed is sufficient for billions of pages, but the file organization
+structure currently is not, because the tree files would become too big. We will provide a solution by the time we need such big tables.
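The quoted lookup cost can be checked quickly: the average number of comparisons in a balanced binary search tree is about log2(n):

```java
public class AvlCost {
    // Average comparisons for a lookup in a balanced binary search tree
    // of n keys is about log2(n); for one million entries that is ~20,
    // matching the figure quoted above.
    public static double avgComparisons(long n) {
        return Math.log(n) / Math.log(2);
    }
}
```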
+
+
Why do you use your own database? Why not use mySQL or openLDAP?
+
The database structure we need is very special. One demand is that entries can be retrieved in logarithmic time and can be
+enumerated in any order. Enumeration in a specific order is needed to create conjunctions of tables very fast. This is needed when someone
+searches for several words. We implement the search-word conjunction by pairwise and simultaneous enumeration/comparison of index trees/sequences.
+This forces us to use binary trees as the data structure. Another demand is that we need the ability to have many index tables, maybe millions
+of tables. The tables may not be big on average, but we need many of them. This is in contrast to the organization of
+relational databases, where the focus is on the management of very large tables, but not of many of them. A third demand is ease of
+installation and maintenance: the user shall not be forced to install an RDBMS first, care about tablespaces and such. The integrated
+database is completely service-free.
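The pairwise simultaneous enumeration can be sketched as a merge-style intersection of two sorted index sequences, with int keys instead of word-index entries for brevity:

```java
import java.util.Arrays;

public class Conjunction {
    // Pairwise simultaneous enumeration of two sorted index sequences:
    // advance whichever cursor points at the smaller key, keep matches.
    public static int[] intersect(int[] a, int[] b) {
        int[] out = new int[Math.min(a.length, b.length)];
        int i = 0, j = 0, k = 0;
        while (i < a.length && j < b.length) {
            if (a[i] < b[j]) {
                i++;
            } else if (a[i] > b[j]) {
                j++;
            } else {
                out[k++] = a[i];
                i++;
                j++;
            }
        }
        return Arrays.copyOf(out, k);
    }
}
```

Each input sequence is walked exactly once, so a two-word query costs time linear in the shorter index, which is why the ordered enumeration matters.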
+
+
What does Senior Mode mean? What is Junior Mode?
+
Junior peers are peers that cannot be reached by other peers, while Senior peers can be contacted.
+If your peer has global access, it runs in Senior Mode. If it is hidden from others, it is in Junior Mode.
+If your peer is in Senior Mode, it is an access point for index sharing and distribution. It can be contacted for search requests and it collects index files
+from other peers. If your peer is in Junior Mode, it collects index files from your browsing and distributes them to Senior peers, but it does not receive index files from other peers.
+
+
+
Why should I run my proxy in Senior Mode?
+
Some p2p-based file-sharing software assigns non-contributing peers very low priority. We think that this is not always fair, since the operator
+does not always have the choice of opening the firewall or configuring the router accordingly. Our idea of 'information wares' and their exchange can also be
+applied to junior peers: they must contribute to the global index by submitting their index actively, while senior peers contribute passively.
+Therefore we don't need to give junior peers low priority: they contribute equally, so they may participate equally.
+But enough senior peers are needed to make this architecture functional.
+Since any peer contributes almost equally, either actively or passively, you should
+decide to run in Senior Mode if you can.
+
+
+
My proxy says it runs in 'Junior Mode'. How can I run it in Senior Mode?
+
Open your firewall for port 8080 (or the port you configured) or program your router to act as a virtual server.
+
+
How can I help?
+
First of all: run YACY in Senior Mode. This helps to enrich the global index and makes YACY more attractive.
+If you want to add your own code, you are welcome; but please contact the author first and discuss your idea to see how it may fit into the overall architecture.
+You can help a lot by simply giving us feedback or telling us about new ideas. You can also help by telling other people about this software.
+And if you find an error or see an exception, we welcome your defect report. Any feedback is welcome.
+Dipl. Inf. Michael Christen
+Finkenhofstrasse 9
+60322 Frankfurt am Main
+Germany
+E-Mail:
+
+
+Despite careful control of the content, I accept no liability
+for the content of external links.
+The operators of the linked pages are solely responsible
+for their content.
+Please note that the demo peers listed here may only be used in compliance with the YaCy application license. If you want to use the demo peers for web search, this is only permitted for researching legal content. Responsibility for the content of a web page found through a YaCy search lies not with the operator of the search peer, but with the operator of the respective web page; you may use a YaCy installation that is not your own only if you accept that the peer operator takes no responsibility for the linked web pages.
+
Since we provide YACY as a generic release for all operating systems and a special 'flavour' for Windows users, we distinguish two different installation processes. Windows users may want to switch to the Windows installation instructions; however, the following description is more general and applies to all operating systems:
+
+
General Instructions:
+
+
Please follow these steps:
+
+
1st Step: de-compress the release
+
After downloading
+the latest release, simply decompress the archive with your favourite tool
+(which can be WinRar or WinZip on Windows, or Stuffit Expander on Mac OS X; Linux
+users type 'gunzip <release>.tar.gz' and 'tar -xf <release>.tar') and move the result to any place you want.
+
If you upgrade from a previous version of YACY, please migrate your settings and data.
+This is very easy: simply move (not copy) your DATA directory from the application root directory of the old YACY installation to the new application root directory. If done so, you don't need to do the other remaining configuration steps below again.
+
+
2nd Step: Configure Network Settings
+
Change the proxy settings either in your network configuration or directly in your browser. Check the 'Use HTTP Proxy' flag and configure the IP and port according to the location of the proxy. If you do a single-user installation without changing the configuration in #2, the IP/Host should be set to '127.0.0.1' or 'localhost', and the Port should be set to '8080'.
+
+
3rd Step: Start YACY
+
We supply some wrapper shell scripts to start the Java processes:
+
+
on a MS-Windows system, double-click the file 'startYACY.bat'
+
on a Mac OS X system, double-click the file 'startYACY.command'
+
on a Linux system, start the file 'startYACY.sh'
+
+
+
+
4th Step: Administrate the proxy
+
After you have started YACY, a terminal window will come up.
+That's the application; no windows, no user interface.
+You can now access YACY's administration interface by browsing to
+http://localhost:8080
+See the 'Settings' menu: you should set an administration password and check the access rules.
+The default settings are fine, so please change them only if you know what they mean.
+
+
5th Step: Use YACY and its search service
+
Browse the internet using your web browser. You should notice that your actions take effect as cache-fill/cache-hit log entries in the httpProxy's terminal window. Whenever you visit a page through the proxy, the page is indexed and can be searched using the search page at
+http://localhost:8080.
+Please be aware that if your settings allow access to the HTTP server, anybody else can search your index as well. If you don't want this, you must set the 'IP-Number filter' of the 'Server Access Settings' in the 'Settings' menu to a string that matches your local network scheme, like
+'localhost,127.0.0.1,192.168*,10*', which should be fine in most cases.
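A filter string of this form could be evaluated roughly like this; the exact wildcard semantics in YaCy may differ, so treat the prefix-'*' rule below as an assumption:

```java
public class IpFilter {
    // Evaluate a comma-separated filter such as
    // "localhost,127.0.0.1,192.168*,10*". A trailing '*' is treated as
    // a prefix wildcard (an assumption of this sketch).
    public static boolean allowed(String filter, String addr) {
        for (String pattern : filter.split(",")) {
            pattern = pattern.trim();
            if (pattern.endsWith("*")) {
                if (addr.startsWith(pattern.substring(0, pattern.length() - 1))) {
                    return true;
                }
            } else if (addr.equals(pattern)) {
                return true;
            }
        }
        return false;
    }
}
```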
+
+
+
+
+
Instructions for Windows and Internet Explorer
+
+
+
1st Step: Run Installer
+
The Windows release comes with its own installer in a single file. Just double-click the installer file.
+
If you upgrade from a previous version of YACY, please migrate your settings and data.
+This is very easy: simply move (not copy) your DATA directory from the application root directory of the old proxy installation
+to the new application root directory. If done so, you don't need to do the other remaining configuration steps below again.
+
+
2nd Step: Configure Browser
+
In your Internet Explorer, open 'Extras' -> 'Internet Options':
+
+
Select 'Connections':
+
+
Click on 'Settings' of the 'LAN-Settings', even if you are using a dial-up connection:
+
+
Check the 'Proxyserver' check-box:
+
+
Enter the location of the YACY server. If YACY runs on the same machine as the browser, set 'localhost'. If you have not changed the initial configuration, the port is '8080'. Check the 'No Proxy for local addresses' button. Then hit 'Extended':
+
+
Un-check the 'Use the same server for all protocols' button. Then remove the proxy settings from 'FTP', 'Gopher' and 'Socks'. In the 'Exceptions' field, enter 'localhost;192.168;10':
+
+
Close all windows by clicking on 'Ok'
+
+
3rd Step: Start YACY
+
The installer creates a link to the application on the desktop. Just double-click the 'YACY Console' icon.
+
+
4th Step: Administrate YACY
+
After you have started YACY, a terminal window will come up.
+That's the application; no windows, no user interface.
+You can now access YACY's administration interface by browsing to
+http://localhost:8080
+See the 'Settings' menu: you should set an administration password and check the access rules.
+The default settings are fine, so please change them only if you know what they mean.
+
+
5th Step: Use YACY and its search service
+
Browse the internet using your web browser. You should notice that your actions take effect as cache-fill/cache-hit log entries in the httpProxy's terminal window. Whenever you visit a page through the proxy, the page is indexed and can be searched using the search page at
+http://localhost:8080.
+Please be aware that if your settings allow access to the HTTP server, anybody else can search your index as well. If you don't want this, you must set the 'IP-Number filter' of the 'Server Access Settings' in the 'Settings' menu to a string that matches your local network scheme, like
+'localhost,127.0.0.1,192.168*,10*', which should be fine in most cases.
+
The copyright for YaCy belongs to Michael Peter Christen; Frankfurt, Germany; .
+
This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
It is allowed to freely copy and distribute this software
+as long as it is mentioned in connection with a link to the Anomic home page "http://www.anomic.de", or the YaCy home page "http://www.yacy.net".
+You are allowed to use this software for any legal purpose, private or commercial.
+
You agree that Anomic/YaCy Software and the author(s) are not responsible for costs, loss of data or any harm that
+may be caused by usage of this software or this documentation. You use this software at your own
+risk. The installation and usage (starting/running) of this software may allow other people or applications
+to access your computer and any attached devices, and is highly dependent on the configuration of the software,
+which must be done by the user of the software; Anomic/YaCy Software and the author(s) are also not responsible
+for proper configuration and usage of the software, even if provoked by documentation provided together with
+the software.
+
+This software is provided as-is, including possible bugs, errors, misbehaviour,
+failures, crashes, and destructive effects on your software, data or system.
+Users and administrators of this software use and operate it at their own risk.
+
+Attention: YaCy is a content provider for a peer-to-peer based index sharing
+and distribution network. Its purpose is to provide a world-wide global search index.
+This application generates and distributes an index of files that pass the proxy and of files that are generated by web crawling, including crawling jobs that can be requested by other YaCy installations.
+If you run the application, you must agree that this software automatically distributes certain information about your system and network configuration, and that it also distributes index files created from the internet content that passed your system.
+You must also agree that your system receives index transmissions from other peers and that your system is used by other peers to load internet content, index it and pass it on to other peers.
+The author(s) of YaCy cannot guarantee that misuse of the index-passing procedure, causing harm to your system or privacy or causing illegal content or harmful behaviour towards other internet servers, can be avoided; the author(s) take no responsibility for such cases; you must agree that you take complete responsibility for such cases.
+You must agree that you take complete responsibility for the information that is stored on your system, even if this information was passed to you from other YaCy peers without notification.
+You must also agree that you are responsible for protecting any users of your search peer against the use of your peer to index or search for illegal content; your local law applies.
+You are not allowed to use this software to create or search for any information that is banned or illegal in your country.
+If you want to search for illegal information, you are not allowed to use this software in any way, neither as a private installation nor by using any other available YaCy search peer as a source for such information.
+
+
+
+If you would like to make changes to the software, you may do so. But if you re-distribute
+the software you must maintain the original copyright notice, including this complete license statement.
Cited as a world-wide unique proof of concept for a distributed p2p search engine
+ in the German issue of MIT's Magazine of Innovation, Technology Review, in an article by Wolfgang Sander-Beuermann; issue 02/2005, page 29
+
Article in the German computer magazine c't 2/2005, page 40: "Suchmaschine sucht Tauschpartner" ("search engine seeks exchange partners")
This is essentially the release change-log. We have a release roadmap and releases published here will (hopefully) match the milestones from the roadmap's vision.
+
+
Release list in reverse order:
+
+
+
+
+
v0.36_build20050326
+
+
Enhanced thread control and added performance menu: this can be used to steer scheduling tasks and for profiling.
+
Enhanced search result ranking.
+
+
+
v0.35_build20050306
+
+
+
new Features
+
+
new user-profile management and remote access of profiles through the network-page
+
new cookie-monitor. Will be used to manage cookie-filter
+
new template engine and re-design of many administration pages as preparation for upcoming localization
+
now permanent storage of passive peers
+
+enabled switch-off of the proxy-cache
+
new proxy-indexing monitor and moved proxy-indexing configuration to that new page
+
more functions to DHT-management:
+
+
+remote indexing targets now selected by DHT rule
+
remote search now selects hierarchically with DHT-rule
+
+
enhanced access control to YaCy administration
+
+
+passwords are now encoded as MD5 hashes before being stored to httpProxy.conf
+
+brute-force password-hack prevention by additional delays
+
added new 'steering' servlets for automated processes that need authorization
+
+
re-design
+
+
re-designed main menu: new sub-menu for proxy functions
+
re-design of Network Monitor page
+
re-design of seed database management and implementation of seed-action interface
+
+
+
fixed bugs:
+
+
fixed a bug with cache-control
+
fixed a bug with peer-list uploading
+
fixed a bug that provoked indexing of YaCy's own web pages
+
fixed a bug that prevented loading of some web pages: (JavaScript bug) doublequote/singlequote mixture removed
+
better binary-check on files before indexing
+
fixed misbehavior of Network-Page: re-design of enumeration method and auto-heal function in kelondroTree
+
+
+
+
+
v0.34_build20050208
+
+
Remote transmission of index (RWI) information to other peers with correct DHT position
+
+
implemented two new yacy-protocol - commands: yacy/transferRWI and yacy/transferURL for RWI partition transfer
+
selection of DHT positions and selection of correct RWI partitions for transmission
+
performing full flush of index if peer is running in junior mode: now these juniors can contribute to the global index.
+
default full receive of index transmission in senior peers; these peers will currently not transfer indexes. This is a test configuration and senior2senior RWI transmission will be enabled in future releases.
+
Configuration flags (grant/do not grant) in 'Index Control' menu.
+
+
Enhanced remote search
+
+
+selection of fewer result values: less traffic, faster response.
+
pre-sorting of results in remote peers before transmission: better results
+
+
more properties in seeds
+
+
Flags for "accept remote crawls" and "accept remote indexes"
+
Flags for "grant index distribution" and "grant index receive"
+
Control values for received/send RWI/URL
+
All flag values are shown on Network page
+
+
Bug-fixes:
+
+
no re-set of remote crawl delay after re-connect
+
proxy fail (shows white pages) fixed: better timeout value
Support for stop-words; default stop-words are included; stop-words are excluded from indexing and from search query results
+
Skin support
+
New start/stop-script for unix/linux daemon init process
+
File-Share entries can now have description entries
+
Enhanced File-Sharing Menu
+
+
Every entry can have a comment attached
+
Comments or picture preview visible in file list
+
File name and comment field can be indexed and globally searched
+
Files found with search interface are dynamically linked to the actual IP of the peer hosting the file
+
+
+
+
v0.32_build20041221
+
+
New Crawling-Profiles for Crawl-Threads
+
+
every crawl start now defines its own crawl job; new crawls do not interfere with previously started and still running jobs; all started jobs may run concurrently
+
new crawl properties: accept url's containing '?'; flag for storage of pages in proxy cache; flags for local and remote indexing
+
+
+
New Design, new documentation, new mascot 'Kaskelix' (appears on search page), new home page location http://www.yacy.net/yacy
+
Promotion-String on search page
+
New shutdown-trigger (no more file polling, new stop scripts)
+
Principal-peer gaining after file generation
+
New 'Log'-menu: view the application log on the web interface
+
Bug-fixes
+
+
Termination process should succeed now.
+
Cross-Site-Scripting bug removed
+
Removed a deadlock that occurred during concurrent crawl job starts
+
+
+
+
+
+
v0.31_build20041209
+
+
Integrated url filter for crawl jobs (Index Creation - page) and search requests (Search Page).
+
Removed a bug that caused sudden termination when an invalid URL was crawled.
+
Massively enhanced indexing speed by implementation of an additional word index cache.
+
Added button to delete/empty the crawl url stack.
+
Many minor changes.
+
+
+
v0.30_build20041125
+
+
Implemented Remote Crawling
+
+
Every Senior and Principal Peer may now start Remote Crawls: The initiating peer starts with the crawl and may assign URL's to qualified other peers. Those peers load the assigned resource, index them and return the index statistics back to the initiator. Executing peer may only be a Senior or Principal peer.
+
Extended URL management: URLs are now organized in three different sets: Noticed URLs (not loaded but possibly queued for crawling), Error URLs (not loaded but possibly re-loadable, to avoid index loss in case of temporary target-server downtime or network problems) and Loaded URLs. The Loaded URLs are again divided into six categories:
+
+
remote index: retrieved by other peers
+
partly remote/local index: result of search queries
+
partly remote/local index: result of index transfer (to be implemented soon)
+
local index: result of proxy fetch/prefetch
+
local index: result of local crawling
+
local index: result of remote crawling requests
+
+
New monitoring pages: the Local Index Monitor for Loaded-URL results (see above), cases 1-5, and the Global Index Monitor for case 6. Because the results of global crawls are not personal to the peer owner, the monitor page is not password-protected.
+
Options to allow or disallow remote crawling; either as initiating or executing peer.
+
Idle/Due-Time management for each peer: to organize remote-crawl load balancing, a delay time is used to schedule remote crawls. The seed management was extended to store and maintain these delay times.
+
Removed the DNS bottleneck (the Java DNS lookup blocks when accessed simultaneously)
+
integrated DNS-prefetch
+
+
+
Implemented Shut-Down Procedure
+
+
Integrated notifier procedure in all threads.
+
The application now creates a file 'yacyProxy.control' after start-up.
+
To stop the yacyProxy, remove the control file.
+
Integrated a 'Shutdown' button on the 'Status' page which also triggers shut-down
+
After shut-down is initiated, the application first processes all scheduled crawling and indexing tasks, which may take some minutes in the worst case.
+
+
+
Removed bugs
+
+
URL normalization
+
many minor bugs
+
+
+
+
+
+
v0.29_build20041022
+
+
New option to start explicit crawling jobs: a start URL and a crawling depth
+(distinct from the prefetch depth) can be set.
+
Integrated monitoring interface for prefetch/crawling activities.
+The user can now observe the crawling and indexing activity in detail.
+There is also a report page that lists all newly indexed pages with the option
+to delete these indexes again. The interface also reports the initiator
+of the crawling/indexing tasks, which can currently be either the prefetch mechanism
+or explicit crawling requests. In future releases the initiator may also refer to
+remote crawling requests.
+
+
New caching procedure for database requests on file-system level.
+
+
Extended blacklist URL matching: parts of a domain may now be matched with the wildcard '*' (the URL's path may be matched with regular expressions).
+
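The two-part match described above can be sketched as follows; the class and method names are illustrative, not yacy's actual API. The host part supports the '*' wildcard (translated into a regular expression), while the path part is treated as a regular expression directly.

```java
import java.util.regex.Pattern;

// Illustrative sketch of wildcard host matching plus regex path matching.
public class BlacklistMatcher {

    // Convert a host pattern like "*.ads.example.com" into a regex:
    // escape literal dots first, then expand '*' to '.*'.
    static boolean hostMatches(String hostPattern, String host) {
        String regex = hostPattern.replace(".", "\\.").replace("*", ".*");
        return Pattern.matches(regex, host);
    }

    // The path part of a blacklist entry is a plain regular expression.
    static boolean pathMatches(String pathPattern, String path) {
        return Pattern.matches(pathPattern, path);
    }

    static boolean isBlacklisted(String hostPattern, String pathPattern,
                                 String host, String path) {
        return hostMatches(hostPattern, host) && pathMatches(pathPattern, path);
    }
}
```

With an entry like `*.ads.example.com/.*`, any sub-domain of `ads.example.com` is blocked regardless of path.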
The application will be re-named. Many parts now refer to the new application name 'yacy', but not all.
+
+
+
v0.28_build20041001
+
+
Search results are now searched again for characteristic word patterns.
+The patterns are statistically evaluated and are used to generate
+"search associations",
+shown as hints for further combined search.
+
+
Parallelized peer propagation process. This results in very rapid bootstrapping.
+
+
Integrated new 'score' library for rapid element sorting - used for search
+patterns and rapid bootstrapping. May help in future releases to speed up indexing.
+
+
Minor bug-fixes.
+
+
+
v0.27_build20040924
+
+
Bug fix in remote search result preparation.
+
Speed enhancements on search client when doing remote search.
+
Small changes in file sharing interface.
+
+
+
v0.26_build20040916
+
+
Introduced new 'virtual' TLD (top-level domain) '.yacy' that the proxy resolves into the peer's IP address and port number:
+
+
Every yacy-peer can now be contacted using the peer's name as domain name:
+ Proxy users can reach any other proxy-hosted pages using the URL 'http://<peer-name>.yacy'.
+
Implemented sub-level domains for yacy TLDs: they are mapped to subdirectories of the peer's individual web root HTDOCS (see below).
+
+
Support for individual web pages:
+
+
Every proxy host can serve its own individual web page. We implemented two paths for each server: one default path pointing to <application-root>/htroot for administrative pages and an alternative path for individual use at <application-root>/DATA/HTDOCS.
+
The individual web pages may be accessed either using the new '.yacy' TLDs through another proxy, or optionally by using the peer's IP:port address. The recommended default address of a proxy is 'http://www.<peer-name>.yacy', which is mapped to <application-root>/DATA/HTDOCS/www/.
+
Integrated an upload/download interface for individual web pages: additional accounts for uploaders and downloaders ensure appropriate authorization. The file-sharing web space can be browsed with a directory servlet. A default sub-domain is assigned to 'http://share.<peer-name>.yacy', which is mapped to <application-root>/DATA/HTDOCS/share/.
+
Web clients not using the proxy may contact the new individual default subdomains using the URLs http://<peer-IP>:<peer-port>/www/ and http://<peer-IP>:<peer-port>/share/.
+
+
Several Bug-fixes:
+
+
Date bug appearing when accessing the proxy httpd through the proxy.
+
Additional time-out catch-up in httpc when a file is submitted without a length tag. Also extended the general retrieval time-out.
+
The terminal line restriction of 1000 bytes was too tight (cookies may be up to 4 KB long).
+
Introduced global general time measurement for peer synchronization and balanced hello - round-robin.
+
+
Enhanced proxy-proxy mode: 'no-proxy' settings as a list of patterns to exceptionally disallow usage of remote proxies.
+
Implemented multiple default paths for URLs pointing to directories.
+
Re-design of front-end menu structure.
+
Integrated Interface for principal configuration in Settings page
+
Re-named the release: integrated YACY name to emphasize that the proxy is mutating into the YACY Search Engine Node
+
+
+
v0.25_build20040822
+
+
New Index Administration menu item: RWIs (Reverse Word Indexes) may now be inspected.
+Each reference in a word index can be displayed in detail, and optionally deleted.
+
Minor bug fixes in bootstrapping. Major bug fixes in index storage (better normal form of URLs).
+
Better display of cache content in the Cache Administration.
+
+
+
v0.24_build20040816
+
+
New 'Cache' Menu item: The proxy cache can now be inspected. It shows a directory list with http response headers and content to each file in the proxy cache.
+
Faster bootstrapping: the connection policy was changed: as long as the proxy status is 'virgin', the most recently seen connection is used for bootstrapping; later, the least recent is used for peer distribution.
+
Better Formatting in Network Menu.
+
+
+
v0.23_build20040808
+
+
Blacklists now provide management of several lists and more import options.
+
code cleanup + many minor bugs
+
+
Messages now work (corrected POST implementation, which also cleared the way for index distribution); improved message sending, displaying etc.
+
double links / unchecked '#', headlines wrong
+
httpd-speedup (no more temporary files, template prefetch without double-load)
+
much better bootstrapping and more intelligent yacy-peer updating
+
auto-migration of new settings from httpProxy.init
+
much better logging; extensive log configuration options for all parts of the application now in httpProxy.init
+
better search requesting (more results)
+
yacy protocol may now also use other proxies in proxy-proxy-mode
+
+
more documentation
+
+
permanent demo-page at yacy.net/home.html with wiki
+
new FAQ at http://www.yacy.net/yacy/FAQ.html
+
first step to move YACY to new home http://sourceforge.net/projects/yacy/
+
+
+
+
v0.22_build20040711
+
+
More security bug fixes (dementia accountia, '..' usage in server path, server blacklist too tight for local clients)
+
Another advance in better peer distribution and recognition (distinguishes between 'real' disconnected peers and 'hearsay' disconnected peers. Keeps track of online time. No preferences of principal peers in link distribution)
+
An option to switch the peer to online mode without using the proxy. This makes life much easier for newbies.
+
A new message function. Within the Network page, one can hit the 'm' and then send a message to the other peer. The owner of that
+peer can read the message in his/her private message inbox. This function is only at an alpha stage; it works only in rare cases and
+we don't know why yet. Only for testing.
+
Cleaned up the mess of different database and configuration files. All run-time data is now accumulated in the new folder 'DATA'. If you previously generated an index and want to migrate, you simply need to put your old PLASMADB folder into the new DATA folder.
+
Clean up of the source mess and partition of them into separate packages
+
Some design enhancements of the online interface
+
+
+
v0.21_build20040627
+
After an announcement on freshmeat.net we got many hits in the newly built p2p network. We learned from the p2p propagation behavior and
+implemented a lot of new routines to stabilize the YACY network.
+
+
Better peer analysis, statistics, propagation/distribution (more properties and bug fixes).
+
No more JavaScript in online Interface. New template logic for httpd and new online interface look-and-feel, using the new features.
+
New FAQ in documentation.
+
Protection against hacker and virus attacks: new self-configuring client-IP blocking in serverCore.java
+
More information and warnings about security settings to the operator to protect the own peer
+
Network statistics and monitor shows status of remote peers and the distributed index
+
+
+
v0.20_build20040614
+
The first step into the p2p world: introduction of the YACY (yet another cyberspace) p2p network propagation and information-wares distribution system. In this release YACY enables a rudimentary index exchange, so that you can use YACY to bootstrap a world-wide distributed search engine.
+
+
Added status page on the web interface and automatic opening of a web browser on the status page. Can be switched off on the status page.
+
Implemented the still-missing element removal and AVL balancing for element inserts in the kelondro database. This ensures logarithmic effort on database access, which influences the proxy and the search service. Only AVL balancing after removal must still be implemented, but its absence is not critical.
+
Added blacklist enhancements and web interface for blacklist editing from Alexander Schier.
+
More and better documentation.
+
Many minor bug fixes, e.g. non-cacheability of the web interface, exception catch-up on startup when the proxy is used before coloured lists are loaded.
+
First p2p elements implemented: every peer on startup looks for other peers and announces its own startup. The function does not yet actively implement an index exchange, but can respond to remote index queries.
+
+
+
v0.16_build20040503
+
This release is a major step to make the proxy enterprise-ready: we introduced several security mechanisms and access
+restrictions for the proxy and the server. Every security setting can be configured through a web page. Thanks to the new
+HTTPS proxy, the proxy can now be considered 'complete'.
+
+
implemented an HTTPS proxy, sharing the same proxy port with HTTP;
+ this does not help for more/better indexing since the SSL data is simply passed through.
+ But we can now claim to be a 'full' HTTP and HTTPS proxy, usable in enterprise environments and internet cafes.
+
two security layers for web server and proxy access: implemented client-IP filtering, which adds a virtual firewall to
+ the application. Every client that does not match the client-IP filter is blocked. The second layer is a proxy password
+ protection. All attributes can be configured through a new web page at http://localhost:8080 (standard configuration).
+
to protect the configuration pages of the web server, we introduced a password protection for special pages on the web server.
+ Every page whose name ends with '_p.html' is protected; the corresponding account can also be set through the local web server.
+ Users are encouraged to set this administration account first.
+
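The '_p.html' convention above amounts to a simple suffix check before a page is served; this is an illustrative sketch with hypothetical names, not the actual server code.

```java
// Illustrative sketch: pages whose name ends in '_p.html' require the
// administration account before the servlet dispatcher may serve them.
public class PageProtection {

    static boolean isProtected(String path) {
        return path.endsWith("_p.html");
    }

    // Public pages are always served; protected pages only when the
    // request already carried valid admin credentials.
    static boolean mayServe(String path, boolean authorized) {
        return !isProtected(path) || authorized;
    }
}
```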
+
+
v0.15_build20040318
+
+
Extensive code re-engineering
+
+
Inserted and further generalized the proxy's genericServer into the AnomicFTPD project. After further enhancements within that project,
+ it was re-inserted in the HTTPProxy. The Switchboard interface now belongs to the genericServer, which is now called the serverCore.
+
Removed the old html parser and replaced it with the new htmlFilter library, which now parses the html files while reading from
+ the remote server. Real-time parsing while streaming html pages is extremely fast and does not slow down file passing through
+ the proxy. The new htmlFilter provides a filter interface, which is now used to filter out content that is defined by keywords.
+ Currently the bluelist 'httpProxy.blue' is used to define these words.
+
Re-engineered the crawler interface and implemented a crawler. Since the crawler does not work in all cases, it is still
+ disabled in this release. You can switch it on by setting the prefetchDepth in the configuration file httpProxy.init
+
+
Implemented a 304 response. This speeds up all responses in the case of a cache hit combined with a conditional request.
+ Since this combination is fairly common, it noticeably speeds up the proxy.
+
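The 304 short-cut can be sketched as follows (illustrative, not the actual yacy code): on a cache hit with a conditional request, the client's If-Modified-Since date is compared with the cached object's Last-Modified date, and the body is skipped when the client's copy is still fresh.

```java
// Illustrative sketch of conditional-request handling on a cache hit.
public class ConditionalResponse {

    // Dates as millisecond timestamps; ifModifiedSince < 0 means the
    // client sent no conditional header.
    static int statusFor(long cachedLastModified, long ifModifiedSince) {
        if (ifModifiedSince >= 0 && cachedLastModified <= ifModifiedSince) {
            return 304; // client copy is still fresh: headers only, no body
        }
        return 200;     // serve the full cached body
    }
}
```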
New documentation design
+
New Search Page design
+
+
+
+
v0.14-build20040213
+
+
More Structure to the whole system to lay the basis for the Crawler
+
+
The new structure distinguishes between the httpd with its servlets (the file servlet and the proxy servlet);
+ the crawler, which also holds responsibility for the http cache that is used by the http proxy; and the indexing
+ engine 'PLASMA', which is in turn accessed by the http file server. But even with the crawler concept on board, we still don't have prefetch yet.
+
Moved plasmaTextProvider to httpCrawlerScraper, httpdProxyCache to httpCrawlerCache and httpdSwitchboard to httpCrawlerSwitchboard
+
New configuration value proxyCacheSize: limits the memory amount of the cache; if the cache exceeds this value the oldest entries are deleted
+
+
Bug fixes:
+
+
Found and eliminated nasty bug that prevented using yahoo mail. (they send several cookies at once)
+
No more indexing of URLs with 'cgi' in the name or ending with '.js', '.ico', or '.css' (checking the content-type for 'text' is not enough; some servers do not transfer the right value)
+
Fixed search for words containing numbers and German umlauts
+
adapted acrypt.java to not use javax.crypto, which was not supported by Debian Blackdown Java 1.3.1. Furthermore, removed the -server flag from httpProxy.sh, which also made Blackdown crash. (You probably want to insert that flag again in your installation.)
+
+
The proxy can now be configured to access another proxy
+
+
+
v0.13-build20040210
+
+
Bug fixes:
+
+
removed forced unzipping for special cases: either if the file to be transported is 'naturally' in gzip format (.gz, .tgz and .zip) or if zipping would not make sense because it would not yield any compression, as for images. The 'Accept-Encoding' header, created by the browser and sent to the server, now omits gzip attributes in these cases. This should lead to less overhead (no gzip en/de-coding) and thus to more speed.
+
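The header rewrite described above can be sketched like this; the class name and extension list are illustrative assumptions, not the actual yacy code.

```java
// Illustrative sketch: strip 'gzip' from the forwarded Accept-Encoding
// header for file types that are already compressed.
public class EncodingPolicy {

    // File types that would not benefit from gzip transfer encoding.
    static final String[] SKIP = {".gz", ".tgz", ".zip", ".gif", ".jpg", ".jpeg", ".png"};

    static boolean skipGzip(String path) {
        String p = path.toLowerCase();
        for (String ext : SKIP) if (p.endsWith(ext)) return true;
        return false;
    }

    // Rewrite the browser's Accept-Encoding header before forwarding it.
    static String forwardedAcceptEncoding(String path, String acceptEncoding) {
        if (!skipGzip(path)) return acceptEncoding;
        StringBuilder out = new StringBuilder();
        for (String token : acceptEncoding.split(",")) {
            String t = token.trim();
            if (t.equals("gzip") || t.equals("x-gzip")) continue; // drop gzip tokens
            if (out.length() > 0) out.append(", ");
            out.append(t);
        }
        return out.toString();
    }
}
```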
transport of the httpc failure response body now works (especially 404; seemed to be unnecessary, but is not)
+
search result bug (mixed-up appearance) removed
+
+
Performance and structure enhancements:
+
+
Extended database capabilities to hold content of dynamic size; new files kelondroRA.java, kelondroAbstractRA.java, kelondroFileRA.java, kelondroDyn.java
+
Used new database features to store the response header information for all files in the cache into one database file. This saves 50% of the number of files in the cache (no more need for the .header - files)
+
Implemented a scheduling that moves the time of cache creation into a proxy-idle time. This reduces the file operations on a single-user system by 50% during web page retrieval.
+
integrated blacklist 'httpProxy.black' idea and data from Alexander Schier: forced 404 response for blacklisted hosts. This can be used to 'switch off' specific domains, especially AGIS servers. Can also be used for child protection/parental control. Does not filter content!
+
removed a cache-write bug when the same file and directory name is used (possible in a URL, but not in the cache file system).
+
detailed 404 debugging response in case of failure
+
new config value maxSessions to limit the number of concurrent connections to the proxy
+
Removed the Host property bug in httpc for HTTP/1.1 servers: now better access to more servers
+
+
enhanced indexing and searching:
+
+
implemented rudimentary ranking and ordering of search results either by quality or by date
+
implemented bluelist 'httpProxy.blue': filtering of all blue-listed words in search expression, result-url and result-description
+
bugfix for combined search, fixed date attached to search results
+
+
first contact with Gnugle project and knowledge exchange
+
+
+
v0.11-build20040124
+
+
non-empty field servlet bug in index.java
+
greatly enhanced indexing
+
+
better structure: new classes plasmaIndexEntry, plasmaSearch, plasmaIndex, plasmaIndexCache, plasmaURL
+
index entry caching and transparent flushing implemented
+
+
catch-up of sleeping connections, enhanced idle check in genericServer.java
+
+
+
v0.1-build20040119
+
+
first time published on www.anomic.de!
+
client user agent forwarding according to 'yellow'-list
+
plasma database
+
+
new database sub-path DATABASE
+
new file kelondroRecords.java + kelondroTree.java
+
plasmaStore now saves and retrieves URLs transparently in the kelondro database
+
no more XSUMP path, was not necessary for condenser; url attributes will be stored in new DB
+
indexing implemented; still slow since it needs caching (later)
+
rudimentary index access through new web page index.html and servlet index.java
YACY is written entirely in Java (Version Java2 / 1.2 and up). Any system that supports Java2 can run YACY. That means it runs on almost any commercial and free platform and operating system around. This includes of course Mac OS X, Windows (NT, W2K, XP) and Linux systems. For Java support on your platform, please see the installation documentation.
+
+
+
Windows
+The proxy runs seamlessly on any Windows system and comes with an easy-to-use installer application. Just install and use the proxy like any other Windows application. Please download the Windows release flavour of YACY instead of the generic one.
+
+
+
Mac OS X
+The general distribution includes a Mac OS X wrapper shell, which is double-clickable. The application can be monitored and administered through a web server that you can open with your Safari browser.
+
+
+
Linux/Unix
+The proxy environment is terminal-based, not window-based. You can start the proxy in a console and monitor its actions through a log file. A wrapper shell script for easy startup is included. You can administer the proxy remotely through the built-in http server with any browser.
+
+YACY consists mainly of four parts: the p2p index exchange protocol, based on http; a spider/indexer; a caching http proxy, which is not only a simple surplus value but also an information provider for the indexing engine; and the built-in database engine, which makes installation and maintenance of yacy very easy.
+
+
All parts of this architecture are included in the YACY distribution. The YACY search engine can be accessed through the built-in http server.
+
+
+
Algorithms
+
+
+For our software architecture we emphasize that the appropriate data structure and algorithm is always used
+to ensure maximum performance. The right combination of structure and algorithm results in an ideal
+order of computability, which is the key to performant application design. We reject the myth that
+the Java language is not appropriate for time-critical software; in contrast to that myth we
+believe that Java, with its clean and safe-to-use dynamic data structures, is most notably qualified
+to implement highly complex algorithms.
+
+
+
+
Transparent HTTP and HTTPS Proxy and Caching:
+The proxy implementation provides fast content-passing: every file that the proxy reads from the targeted server is streamed directly to the accessing client, while the stream is copied into a RAM cache for later processing. This ensures that the proxy mode is extremely fast and does not interrupt browsing. Whenever the proxy idles, it processes its RAM cache to perform indexing and stores the file in the local cache. Every HTTP header that was passed along with the file is stored in a database and re-used later when a cache hit appears. The proxy function has maximum priority over other tasks, such as cache management or indexing functions.
+
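The pass-through-with-copy idea above can be sketched in a few lines; the names are illustrative, not yacy's actual classes. Bytes from the remote server are forwarded to the client immediately, while a copy accumulates in RAM for later indexing and cache storage.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Illustrative sketch: relay a server response to the client while
// teeing a copy into a RAM buffer for idle-time indexing.
public class TeeRelay {

    static byte[] relay(InputStream fromServer, OutputStream toClient) throws IOException {
        ByteArrayOutputStream ramCache = new ByteArrayOutputStream();
        byte[] buf = new byte[2048];
        int n;
        while ((n = fromServer.read(buf)) != -1) {
            toClient.write(buf, 0, n);   // client is never delayed by indexing
            ramCache.write(buf, 0, n);   // copy kept for idle-time processing
        }
        toClient.flush();
        return ramCache.toByteArray();   // handed to the indexer when the proxy idles
    }
}
```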
+
+
Fast Database Implementation:
+We implemented a file-based AVL tree upon a random-access file. Tree nodes can be dynamically allocated and de-allocated and an unused-node list is maintained. For the PLASMA search algorithm, ordered access to search results is necessary; therefore we needed an indexing mechanism which stores the index in an ordered way. The database supports such access, and the resulting database tables are stored as a single file. The database does not need any set-up or maintenance tasks that must be done by an administrator: it is completely self-organizing. The AVL property ensures maximum performance in terms of algorithmic order. Any database may grow to a huge number of records: with one billion records a database request needs a theoretical maximum of only 44 comparisons.
+
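The figure of 44 comparisons follows from the standard AVL height bound; as a quick check (our arithmetic, not part of the original text):

```latex
h < 1.4405 \,\log_2(n+2) - 0.3277
\quad\Rightarrow\quad
h < 1.4405 \cdot \log_2\!\bigl(10^9 + 2\bigr) - 0.3277
  \approx 1.4405 \cdot 29.9 - 0.33 \approx 42.7
```

So at most about 43 node comparisons are needed in the worst case for one billion records; the 44 stated above is this bound rounded up with a small margin.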
+
+
Sophisticated Page Indexing:
+The page indexing is done by creating a 'reverse word index': every page is parsed, the words are extracted, and for every word a database table is maintained. The database tables are held in a file-based hash table, so accessing a word index is extremely fast, resulting in an extremely fast search. Conjunctions of search words are easily found, because the search results for each word are ordered and can be pairwise enumerated. In terms of computability: the order of the search access effort to the word index for a single word is O(log <number of words in database>). It is constant-fast, since the data structure provides a 'pre-calculated' result. This means the result speed is independent of the number of indexed pages! It only slows down for page ranking, and is multiplied by the number of words that are searched simultaneously. That means the search effort for n words is O(n * log w). You can't do better (consider that n is always small, since you rarely search for more than 10 words).
+
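The pairwise enumeration mentioned above can be sketched as a two-pointer intersection of sorted result lists; the class name and the sample URL hashes are illustrative, not taken from yacy.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: because each word's result list is stored in
// sorted order, two lists can be intersected in a single linear pass.
public class ConjunctionSearch {

    static List<String> intersectSorted(List<String> a, List<String> b) {
        List<String> result = new ArrayList<String>();
        int i = 0, j = 0;
        while (i < a.size() && j < b.size()) {
            int cmp = a.get(i).compareTo(b.get(j));
            if (cmp == 0) { result.add(a.get(i)); i++; j++; } // hit in both lists
            else if (cmp < 0) i++;                             // advance smaller side
            else j++;
        }
        return result;
    }
}
```

Because both lists are already sorted, the conjunction costs only one pass over each list, which is why multi-word searches stay fast.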
+
+
Massive-Parallel Distributed Search Engine:
+This technology is the driving force behind the YACY implementation. A DHT (Distributed Hash Table)-like technique will be used to publish the word cache. The idea is that word indexes travel along the peers before a search request arrives at a specific word index. A search for a specific word is performed by computing the responsible peer and contacting it directly. No peer-hopping or the like, since search requests are time-critical (the user usually does not want to wait long). Redundancy must be implemented as well, to handle the frequent occasions of disappearing peers. Privacy is ensured, since no peer can know which word index is stored, updated or passed: word indexes are stored under a word hash, not the word itself. Search mis-use is regulated by the p2p law of give-and-take: every peer must contribute to the crawl/proxy-and-index process before it is allowed to search.
+
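The two DHT ideas above, storing indexes under a one-way word hash and computing the responsible peer directly, can be sketched as follows. This is an illustrative sketch under assumed conventions (MD5-based hash, nearest-predecessor peer selection), not YACY's actual hash function or wire format.

```java
import java.security.MessageDigest;
import java.util.TreeSet;

// Illustrative DHT sketch: a word maps to a hash, and the search client
// computes directly which peer is responsible for that region of the
// hash space -- no peer-hopping.
public class WordDht {

    // One-way word hash: the index is stored under this value, never the
    // clear-text word, so stored words cannot be translated back.
    static String wordHash(String word) {
        try {
            byte[] d = MessageDigest.getInstance("MD5").digest(word.getBytes("UTF-8"));
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < 6; i++) sb.append(String.format("%02x", d[i]));
            return sb.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // The responsible peer is the one whose own hash is the nearest
    // predecessor of the word hash on the circular hash space.
    static String responsiblePeer(TreeSet<String> peerHashes, String wordHash) {
        String p = peerHashes.floor(wordHash); // nearest predecessor
        return p != null ? p : peerHashes.last(); // wrap around the ring
    }
}
```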
+
+
+
+
Privacy
+Sharing the index with other users may raise concerns about your privacy. We have made great efforts to keep and secure your privacy:
+
+
+
Private Index and Index Movement
+Your local word index does not only contain information that you created by surfing the internet, but also entries from other peers.
+Word index files travel along the proxy peers to form a distributed hash table. Therefore nobody can argue that information that
+is provided by your peer was also retrieved by your peer and therefore reflects your personal use of the internet. In fact it is very unlikely that
+information that can be found on your peer was created by you, since the search process targets only peers where a hit is likely because
+of the movement of the index to form the distributed hash table. During the test phase, all word indexes on your peer are accessible.
+The future production release will constrain searches to index entries on your peer that have been created by other peers, which will
+ensure complete browsing privacy.
+
+
Word Index Storage and Content Responsibility
+The words that are stored in your local word index are stored using a word hash. That means the word itself is never stored, only the word hash.
+You cannot find any indexed word as clear text, nor can you translate the word hashes back into the original words. This means that
+you do not actually know which words are stored in your system. The positive effect is that you cannot be held responsible for the words that
+are stored in your peer. But if you want to deny storage of specific words, you can put them into the 'bluelist' (in the file httpProxy.bluelist).
+No word that is on the bluelist can be stored, searched or even viewed through the proxy.
+
+
Peer Communication Encryption
+Information that is passed from one peer to another is encoded. That means that no information like search words,
+indexed URLs or URL descriptions is transported in clear text. Network sniffers cannot see the content that is exchanged.
+We also implemented an encryption method where a temporary key, created by the requesting peer, is used to encrypt the response
+(not yet active in the test release, but non-ascii/base64 encoding is in place).
+
+
Access Restrictions
+The proxy contains a two-stage access control: an IP filter check and an account/password gateway that can be configured for proxy access.
+The default setting denies access to your proxy from the internet but allows usage from the intranet. The proxy and its security settings
+can be configured using the built-in web server for service pages; access to these service pages can itself be restricted again by using
+an IP filter and an account/password combination.
+
+
YACY's architecture with the PLASMA search engine and
+the P2P-based distributed index was developed and implemented by Michael Peter Christen.
+
+
However, this project is just at the beginning and needs contributions from other developers, since there are many ideas how this project can move on to a broad range of users.
+
+
+There are also some long-term targets. If the index-sharing someday works fine, maybe browser producers like Opera or Konqueror would like to use the p2p search engine to index the browser's cache and therefore provide each user with an open-source, free search engine.
+
+
At this time, some contributions already have been made. These are:
+
+
+Alexander Schier did much alpha-testing, gave valuable feedback on my ideas and suggested his own. He suggested and implemented large parts of the popular blacklist feature. He supplied the 'Log'-menu function, the skin feature, many minor changes, bug fixes and the Windows installer version of yacy. Alex also provides and maintains the german documentation for yacy.
+
Natali Christen contributed the YACY logo and the design of the Kaskelix mascot.
+
Wolfgang Sander-Beuermann, executive board member of the German search-engine association SuMa-eV
+and manager of the meta-search-engine metaGer provided web-space for the german documentation and computing resources for a demo peer. He also pushed the project by arranging promotional events.
+
+Timo Leise suggested and implemented an extension to the blacklist feature: part-of-domain matching.
+
Matthias Kempka provided a Linux init start/stop script
+
+
+
+Further volunteers are very welcome.
+Please contact me if you have something that you are willing to do for this project. In any case: before you start working on something, please ask me in advance whether I would like to integrate it later. Thank you!