Netwatcher: A Design of an Internet Filter Based on an Adaptive Version of Naïve Bayes

Jean Almonord
Daniel Yoon

11 November 2003
Massachusetts Institute of Technology
Fall Semester 2003
MIT 6.805/6.806/STS085: Ethics and Law on the Electronic Frontier

Advisors: Hal Abelson, Danny Weitzner

Abstract

We design an internet content filter named Netwatcher based on an adaptive version of Naïve Bayes. Upon interviewing librarians, we found that overblocking was a major problem with current filters. Netwatcher is designed to reduce overblocking, to place more control in the hands of librarians, and to be more adaptive in terms of retraining the learning algorithm. The filter is divided into three parts: the learning algorithm, a list of predefined blocked sites, and an online community of clients that can update each other.

Thesis

This paper shows that a filter design based on an adaptive version of Naïve Bayes, a learning algorithm, is an effective solution for libraries wishing to receive E-rate and LSTA funding. The filter will serve as a text classifier differentiating between material that is either patently offensive, harmful to minors, or both, and material that is not. Librarians will benefit from using this design because it reduces the overblocking of sites that are not patently offensive, grants librarians more control over the process of blocking sites, and takes the input of librarians into account for improving the filter.

Table of Contents

1. Introduction
   1.1 Overview
   1.2 History and Legal Implications
   1.3 Current Solutions
2. Design Solution
   2.1 Design Overview
   2.2 Design Decisions
   2.3 Simulations
3. System Evaluation
   3.1 Law
   3.2 Architecture
   3.3 Market
   3.4 Norms
4. Feasibility
   4.1 Alternative Designs
   4.2 Future Recommendations
5. Conclusion
6. References
7. Appendix

1.0 Introduction

1.1 Overview

With the advent of the internet, ordinary people have gained the ability to access a wealth of material very easily. That material forms the content of the internet and is as diverse as human thought [1]. It ranges from educational material to purely pornographic material. Parents seeking to protect their children from objectionable material have generally opted for internet filters. Internet filters use algorithms to detect certain material online and prevent computer users from accessing it. No filter is as good as human judgment; as a result, there is still a lot of work to be done in improving current filters.

This paper proposes the design of a viable internet filter that specifically meets the needs of librarians seeking E-rate and LSTA funding. In designing an effective filter tailored to the needs of librarians, there are several requirements that the filter must meet:

- The filter must never remove control from librarians. Instead, it must place more control in the hands of librarians.
- The filter must reduce overblocking.
- The filter must be adaptive through its responsiveness to librarians.

The design seeks to meet all three criteria. The design does not seek to create a filter that prevents access to all sorts of user communication. It does not behave like a firewall; it does not seek to filter incoming or outgoing telnet and ftp connections.

1.2 Legal Background

In response to mounting concerns that children would be exposed to objectionable material, Congress passed a series of laws that placed restrictions on internet use in order to protect minors, including the Communications Decency Act (CDA) [2], the Child Online Protection Act (COPA) [3], and the Children's Internet Protection Act (CIPA) [4]. However, while the relevant portions of both the CDA and COPA were struck down by the U.S. courts in Reno v. ACLU [5] and in ACLU v.
Ashcroft [6], CIPA was upheld as constitutional by the Supreme Court in United States v. American Library Association [7]. Applying rational basis review, Chief Justice Rehnquist and three other justices pointed out that the government had a legitimate interest in protecting minors from inappropriate material, that public libraries had broad discretion to decide what material to provide to their patrons, and that adult patrons could ask a librarian to unblock erroneously blocked sites or to disable the filter completely.

The provisions of CIPA state that public libraries or schools wishing to receive federal assistance must install and use technologies for filtering or blocking material on the internet on all computers with internet access. The two types of federal assistance affected by CIPA are the E-rate program [8], which allows libraries to buy internet access at a discount, and the Library Services and Technology Act (LSTA) [9], which assists libraries in acquiring computer systems and telecommunication technologies. The three types of material CIPA seeks to protect minors from are visual depictions that are obscene, that contain child pornography, or that are harmful to minors.

Obscenity, as defined by the three-prong test established in Miller v. California [10] and codified in 18 U.S.C. 1460 (2003) [11], is:

- Whether the average person, applying contemporary community standards, would find that the material, taken as a whole, appeals to the prurient interest;
- Whether the work depicts or describes, in a patently offensive way, sexual conduct specifically defined by the applicable state or federal law to be obscene; and
- Whether the work, taken as a whole, lacks serious literary, artistic, political, or scientific value.

The definition of child pornography is stated in 18 U.S.C. 2256 (2003) [12] as any visual depiction of a minor under 18 years old engaging in sexually explicit conduct, which includes actual or simulated sexual intercourse, bestiality, masturbation, sadistic or masochistic abuse, or "lascivious exhibition of the genitals or pubic area." In Ashcroft v. Free Speech Coalition [13], the U.S. Supreme Court ruled that any activity not actually involving a minor cannot be child pornography. The definition of harmful to minors is stated in CIPA as:

- Any picture, image, graphic image file, or other visual depiction that, taken as a whole and with respect to minors, appeals to a prurient interest in nudity, sex, or excretion;
- Any picture, image, graphic image file, or other visual depiction that depicts, describes, or represents, in a patently offensive way with respect to what is suitable for minors, an actual or simulated sexual act or sexual contact, actual or simulated normal or perverted sexual acts, or a lewd exhibition of the genitals; and
- Any picture, image, graphic image file, or other visual depiction that, taken as a whole, lacks serious literary, artistic, political, or scientific value as to minors.

1.3 Current Solutions

Even though CIPA states that only material that is obscene, that contains child pornography, or that is harmful to minors should be blocked, companies that build internet filters often block other material that does not fit these criteria.
This "overblocking" occurs because filtering software manufacturers often inject their subjective biases into determining which sites are objectionable. Whereas one filtering software manufacturer might not block material by the Ku Klux Klan because it sees it as primary source material in the context of a research report about the history of the American South, another manufacturer may see the same material as hate speech. Similarly, many currently available filters block materials pertaining to internet chat rooms, criminal skills, drugs, alcohol, tobacco, electronic commerce, free pages, gambling, hacking, hate speech, violence, weapons, web-based email, and more. These categories are not included under the legal definition in CIPA [14].

Another factor causing overblocking is that filtering software manufacturers use simple and relatively ineffective algorithms to determine which sites should be blocked. Most commercially available filters 'remember' the Uniform Resource Locators (URLs) of sites deemed inappropriate, but they often block all the pages of a site even when some of those pages do not contain any objectionable material. PICS (Platform for Internet Content Selection), a voluntary self-rating system for evaluating websites, has eased the job of filtering internet content; however, because it remains voluntary, its use is limited to a fraction of the websites on the internet [15]. Table One contains the details of the filtering techniques employed by eleven of the most commonly available filters.

Table 1: Current Filtering Techniques. This table details the techniques employed by eleven of the most commonly used filters to determine whether a site should be blocked or allowed: maintaining a list of webpage URLs, maintaining a list of keywords, analyzing the context in which keywords appear, using PICS ratings, and human review. Each of these techniques has its own strengths and weaknesses [16]. (The per-filter entries of the table are not reproduced here.)

Although it is difficult to accurately discern the number of sites that are overblocked, whether due to subjective decisions or technological failures, the Electronic Frontier Foundation estimates that for every web page blocked as advertised, filters block one or more web pages incorrectly [17]. In terms of numbers, this is close to 10,000 sites at minimum when the filters are placed in their least restrictive settings [18]. In another independent expert report, Ben Edelman of Harvard University noted that 6,777 sites were improperly blocked by four different filters: SurfControl Cyber Patrol 6, N2H2 Internet Filtering 2.0, Secure Computing SmartFilter 3.0, and Websense Enterprise 4.3 [19].

More important than the number of sites that are blocked are the types of sites that are blocked. The Electronic Frontier Foundation reported that N2H2's Bess, one of the most popular filters available, blocked a page describing community partners of the Mary Street School [20]. Similarly, the Boston Globe reported that Net Nanny, which is used by the Kingston Public Library, blocked access to a Massachusetts Board of Library Commissioners website page that explains CIPA [21]. In the fall of 2003, we spoke to various local libraries around Boston, Cambridge, Everett, Medford, Malden, Somerville, and Revere to confirm these results.

To address the problem of overblocking, filter software manufacturers have incorporated a variety of features.
While many commercially available filters do not allow users to review the list of URLs that are blocked, they generally allow users to review the list of keywords or the company's criteria for filtering a webpage. Some also allow users to edit the categories which are

blocked, permanently edit or completely override the filtering software's list of material to be blocked, or create their own block list, either from scratch or by starting with the manufacturer's list. Table Two describes the customization features available in commonly used filters.

Table 2: Customization Tools. This table details the customization tools available in eleven of the most popular filters, including N2H2 Bess, ChiBrow, ChildSafe, Cyber Patrol, CYBERsitter, ADL, The Internet Filter, iPrism, NetSafe, and Net Nanny. For each filter it records whether the filter software manufacturer alone decides what material will be filtered; whether users can review the list of keywords, the list of filtered web page addresses, or the company's criteria for filtering a web page; whether users can choose from pre-set categories what material they want to filter; and whether users can permanently edit the company's list of material to be filtered, completely override it, or develop their own list, either using the manufacturer's list as a starter or starting from scratch. These tools allow users to individually tailor the filter to suit their needs; however, many of the features take a considerable amount of time and are not easy to use [22]. (The per-filter Y/N entries of the table are not reproduced here.)

Instead of engaging in these relatively time-consuming and not-so-easy customization tactics, many libraries have sought to avoid filters completely. At Lynn Public Library, for instance, librarians have adopted an informal tap-on-the-shoulder policy: if someone is looking at inappropriate material, a librarian will usually approach that person [23]. In Nahant, library staff are the only ones able to conduct searches on the one computer that is connected to the internet, and patrons are not allowed to access the web [24]. Clearly, however, these 'alternative' measures are not in compliance with CIPA, and come at the cost of losing E-rate and LSTA funding, which many libraries could desperately use [25].

2.0 Design Solution

2.1 Design Overview

Internet content filtering is very similar to email spam filtering in many respects. Currently, most email spam filters employ machine learning algorithms to learn the attributes of email that can be categorized as spam. After learning about the attributes, the algorithms will

classify future email as spam or not based upon the learned attributes. Internet content filtering should be a very similar process. In essence, internet filters should aim to classify text, in this case the content of sites, as filterable or not. Filterable here refers to material that is harmful to minors, obscene, or contains child pornography [26]. As a result, a similar exploitation of machine learning algorithms would greatly enhance the capability of internet content filters.

Although the law, CIPA, is mainly concerned with filtering visual depictions and images, most pornographic sites usually contain erotic stories and certain "linguistic descriptions" of images [27]. In fact, in the paper Marketing Pornography on the Information Superhighway, the classification scheme that was used to classify images relied heavily on the verbal descriptions. The study concluded that the verbal descriptions of sexually explicit images are carefully worded to entice consumers.

Netwatcher seeks to build on the concept of utilizing machine learning algorithms. The design consists of three basic parts: (1) a Naïve Bayes learning algorithm specifically trained to classify text as filterable or not, (2) a list of predefined blocked sites that has priority over the learning algorithm, and (3) an online community of clients that can intercommunicate and whose blocking and unblocking actions are recorded as data to further train the learning algorithm. The paper delves into each part in the next three sections.

As designed, the filter will serve as an intermediary between web browsers and the internet. It will effectively listen for both incoming and outgoing messages. It will screen out both the messages that originate from blocked sites and user requests for information that is available on blocked sites. It will run the algorithm on incoming messages before forwarding them to the web browsers.
Figure One contains a diagram of where the filter sits.

Figure 1: Location of Filter. The filter sits between the web browser and the (port 80) world wide web and effectively screens messages. (Diagram not reproduced here.)

Each library will have a list of predefined blocked sites. This list is determined by each librarian. If, upon reviewing a site, a librarian deems the site filterable, she can choose to add the site to the list of blocked sites. If, upon further review, the librarian feels mistaken in her judgment, she can remove the site from the list of blocked sites. The list of predefined blocked

sites is given priority over the learning algorithm. Before classifying a site, the filter will explicitly check whether the site is already in the list of predefined blocked sites. If it is, it will be blocked. If it is not, the algorithm will classify it and filter it based upon the classification.

The online community will consist primarily of filtering clients and data storage servers. Each individual library will represent a client. The clients will communicate with each other. A client may choose to be updated whenever any library has chosen to block or unblock a site, to be updated only when, say, fifty percent of all clients have chosen to block a site, or never to be updated. The client must, however, update the servers with all of the information about what it is blocking or unblocking.

Figure 2: An overview of the Online Community. In the diagram, libraries are represented as clients. Clients A, B, and D always communicate with data storage server A, where statistics and future training data are kept, about what they are blocking and not blocking, and each client runs the Naïve Bayes learning algorithm over its own list of blocked sites. Library A also communicates directly with library B. (Diagram not reproduced here.)
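As a minimal illustration, the block-list-first rule can be sketched as follows. This is our own sketch, not code from the design: the function names and the `classify` interface (a callable returning 0 or 1) are assumptions.

```python
FILTERABLE = 1  # label the learning algorithm assigns to filterable sites

def screen(site, block_list, classify):
    """Screen a requested site: the librarian-maintained block list has
    priority, and only if the site is absent from it do we fall back to
    the learned classifier (here a stand-in callable returning 0 or 1)."""
    if site in block_list:
        return "blocked"        # the predefined list wins outright
    if classify(site) == FILTERABLE:
        return "blocked"        # blocked by the learning algorithm
    return "allowed"

# Librarians can add or remove sites at any time:
block_list = {"badsite.example"}
block_list.add("another.example")
block_list.discard("badsite.example")
```

Because the list is consulted before the classifier runs, a librarian's explicit blocking or unblocking decisions can never be overridden by the learning algorithm.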

2.2 Design Descriptions

2.2.1 Naïve Bayes as the Algorithm of Choice

Naïve Bayes is the algorithm of choice because of its simplicity and its efficiency in working with large amounts of data. Many email spam filters use Naïve Bayes, and it has proven to be a successful tool in learning the attributes of spam. As we mentioned earlier, internet content filtering is similar to email spam filtering. The analogy holds even in our case, where the government is mainly concerned with filtering out visual depictions of pornography, because most pornographic websites contain many written depictions alongside erotic stories. In fact, in the paper Marketing Pornography on the Information Superhighway, the classification scheme used to classify images relied heavily on the verbal descriptions [27].

Naïve Bayes works by making predictions based upon the available information. The predictions are called inferences; the inference of a category is an inverse transition from evidence to hypothesis [28]. Given a set of documents, in our case web pages, one aims at classifying these documents. If one has a predefined set of documents that have already been classified, one can reduce the classification problem to matching similarities between documents. The documents contain words, and these words serve as the evidence in our feature space. The hypothesis is the proper classification of the document. In simple terms, Bayes' rule states that (1) one should update the probability of a hypothesis based on evidence, and (2) one should choose the hypothesis with the maximum probability after the evidence has been incorporated [6.034 lecture notes]. See Appendix A for a more thorough description of Naïve Bayes.

Using a Naïve Bayes approach, the classified documents can be screened for all the words contained in them and the probability that these words occur in each category of documents.
For example, suppose we classify each document with a one or a zero, with one denoting good and zero denoting bad. Each word can likewise take the value one, indicating its presence in a given document, or zero, indicating its absence. Supposing that there are only three words across all the documents, the table below represents a possible set of classified documents.

Document    Word 1    Word 2    Word 3    Classification
First       0         1         1         0
Second      1         1         0         0
Third       1         0         0         0
Fourth      1         0         1         1
Fifth       0         0         1         1

Table 3: Classification Table. This table contains classifications for five documents drawn from a vocabulary of only three words. Each document is classified as a zero (bad) or a one (good). A zero for a word indicates that it is not present in the document, whereas a one indicates that it is present.
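The table's classification scheme can be reproduced in a few lines of code. The sketch below is our own illustration (variable names are ours, not the authors'): it computes the per-category word-presence probabilities and classifies a document by the product rule discussed in the text. The training rows are one data set consistent with the probabilities quoted in the text (P11 = 1/2, P10 = 2/3, and so on).

```python
# Each row is ([word 1, word 2, word 3], classification).
docs = [
    ([0, 1, 1], 0),
    ([1, 1, 0], 0),
    ([1, 0, 0], 0),
    ([1, 0, 1], 1),
    ([0, 0, 1], 1),
]

def train(docs):
    """Return P[c][j]: the fraction of category-c documents containing word j."""
    by_cat = {}
    for words, c in docs:
        by_cat.setdefault(c, []).append(words)
    return {c: [sum(row[j] for row in rows) / len(rows)
                for j in range(len(rows[0]))]
            for c, rows in by_cat.items()}

def likelihood(P, c, words):
    """Multiply P (word present) or 1 - P (word absent) over all words."""
    result = 1.0
    for j, present in enumerate(words):
        result *= P[c][j] if present else 1.0 - P[c][j]
    return result

def classify(P, words):
    """Choose the category with the higher likelihood."""
    return max(P, key=lambda c: likelihood(P, c, words))

P = train(docs)
# P[1][0] is 1/2 (P11) and P[0][0] is 2/3 (P10), matching the text.
```

For the first document (word one absent, words two and three present), the category-one likelihood is A11 · P21 · P31 = 1/2 · 0 · 1 = 0, so the sketch classifies it as a zero.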

According to the table, two documents are classified as ones and three are classified as zeroes. Among the documents classified as ones, the first word has a probability of one half of being present and one half of being absent. More generally, let Pji be the probability that word j is present in the documents classified as i, and let Aji be the probability of word j being absent in the documents classified as i; Aji is equivalent to 1 - Pji. In our example, P11 is 1/2 and A11 is also 1/2; P10 is 2/3 and A10 is 1/3.

Suppose one wanted to classify a document that contains word one but not word two or word three. According to Naïve Bayes, to determine the likelihood of the document belonging to category i, one multiplies, for each word, Pji if the word is present or Aji if it is absent. For this document, we would multiply P11 by A21 and A31 to determine the likelihood of the document falling into category one, and P10 by A20 and A30 to determine the likelihood of it falling into category zero. The algorithm classifies the document by choosing the category with the higher likelihood. As another example, in classifying the first document of the table, the likelihood of the document being a one is the product of A11, P21, and P31: one half times zero times one, which is zero. The likelihood of the document being a zero is the product of A10, P20, and P30: one third times two thirds times one third, which is two twenty-sevenths. The first document is therefore classified as a zero.

On a different note, in order for Naïve Bayes to work properly we need to implement a simplified version of a page rank algorithm.
By a page rank algorithm we mean a simple link-traversal scheme, not Google's PageRank: one cannot tell whether or not a website is harmful to minors based upon the content of only its first page. As a result, we have chosen to also classify the pages linked from a page, to a depth of two. The obvious scenario is one in which there is an entrance page to a pornographic site; the entrance page alone cannot be the sole indicator of whether or not the site should be filterable.

2.2.2 Applying Naïve Bayes

We implemented the algorithm in a similar fashion to the example above. The goal was to classify websites as either filterable or not filterable, where the category of filterable meets the criteria of being obscene, harmful to minors, pornographic, or all three. Websites that should be filtered were classified as ones and websites that should not be filtered were classified as zeroes. The words contained in the websites were used as the evidence.

In determining the set of words that should be included in the word space, all two-letter words were eliminated. Most two-letter words do not add much meaning to the written descriptions and erotic stories posted by pornographic websites. As a result, the words in the feature space do not correspond one-to-one to the words available in an ordinary dictionary. Furthermore, many pornographic websites contain words that do not form part of ordinary usage, such as "jizz", "skank", and "smut."

Given that many inappropriate websites contain entry pages that require users to click "enter" before proceeding to the site, it was decided that one page of a website cannot properly classify the website itself. As a result, classifying a page for the training set also entailed classifying its links to a depth of two. However, for reasons of efficiency, we restricted the classification of documents at filtering time to classifying their links to a depth of one.
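The feature-extraction rule just described, dropping everything of two letters or fewer, is easy to state in code. The tokenization details below (lowercasing, alphabetic tokens only) are our assumptions, not the authors' actual implementation:

```python
import re

def extract_features(page_text):
    """Lowercase the page text, split it into alphabetic tokens, and keep
    only words longer than two letters, per the feature-space rule above."""
    tokens = re.findall(r"[a-z]+", page_text.lower())
    return {t for t in tokens if len(t) > 2}
```

For instance, extract_features("To be or not to be") keeps only "not", discarding the two-letter words that carry little meaning.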

The version of Naïve Bayes used was also an improved version. It implements the Laplace correction, which addresses the fact that the likelihood of a document falling into a category collapses to zero whenever any single word probability is zero. With the Laplace correction, a word never observed in a category receives a small nonzero probability (one half in the degenerate case of no observations) instead of zero. Intuitively, this correction makes sense because the absence of a word from the training data should not, by itself, rule out a category. This changes the calculations of the classification table above because it guarantees that no likelihood collapses to zero. Furthermore, since the feature space is extremely large, the likelihood calculations reduce to multiplying very small probabilities. In order to simplify the calculations, the logarithmic function is applied to the product of the inferences; the result is a summation of the logs of the individual inferences.

2.2.3 Implementing Naïve Bayes

In order to fully implement Naïve Bayes, one needs an exhaustive set of training data. To obtain the training data, we crawled through over a thousand websites. These websites were ranked as filterable or not filterable. In ranking the websites, we sought to explore as many links as possible in order not to rank solely one page of each website. After ranking the websites, we compiled a list of URLs followed by their ranking of a zero or a one.

In order to develop the feature space, we compiled a list of the words used in the training set along with their frequency of usage in the two different categories of websites. The feature set excluded two-letter words. Note that there need not exist a one-to-one mapping between the features in the feature space and the words in a normal English dictionary. Whereas a normal English dictionary's word space will contain words such as 'to', 'be', and 'as', these words are inconsequential to our design. Popular research has indicated that most two-letter words are inconsequential in conveying the meaning of text.
Furthermore, our feature space will contain combinations of letters such as 'skank', 'jizz', and 'smut' because they commonly appear in sites that are harmful to minors [29].

2.3 The Predefined Block List

Every filter comes with an initially empty predefined block list. The list contains the sites that a librarian has deemed filterable; librarians can remove or add sites to the list. The filter works by checking whether any user is requesting to view a site on the blocked list. This is done by listening to the outgoing requests going through port 80. If a user is attempting to view a site on the blocked list, the filter responds with a message stating that the site has been blocked.

2.4 The Online Community

2.4.1 The Clients

The online community consists of filtering clients and data storage servers. The libraries are the clients, and they can communicate with each other. The data storage servers are pre-established centers for data collection. All clients must update a number of the servers when they have chosen to block or unblock a site. Clients can do any of the following:

(1) choose to be updated or not about other clients' actions;
(2) choose to be updated whenever a library has chosen to block or unblock a site;
(3) choose to be updated when a certain percentage of clients have blocked a site.
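The three client options can be captured by a small policy function. The policy encoding below is our own sketch of the idea, not a specification from the design:

```python
def wants_update(policy, fraction_blocking):
    """Decide whether a client should be notified that a site's status
    changed. `policy` is "always" (notify on every block/unblock),
    "never" (no notifications), or ("threshold", p): notify once a
    fraction p of all clients have blocked the site."""
    if policy == "always":
        return True
    if policy == "never":
        return False
    _kind, p = policy            # assumed shape: ("threshold", p)
    return fraction_blocking >= p
```

A client configured with ("threshold", 0.5), for example, hears about a site only once half of all libraries have blocked it. Regardless of its notification policy, every client still reports its own blocking and unblocking actions to the servers.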

2.4.2 The Data Storage Servers

The data storage servers serve as training data aggregation centers. They compile statistics on each client's behavior. They systematically present each client with more data with which to train and improve the algorithm. They also systematically update the clients with a list of the addresses of all the servers. The storage servers continually listen for incoming requests from clients and check whether a condition has been met that warrants updating a client.

In order to achieve reliability, the data storage servers will replicate the data on the clients' blocking and unblocking actions among themselves. At midnight every day, each server will update the other servers with data taken from a certain interval of time from a certain set of clients. The other servers will check whether they already hold such data; if they do, they will not copy it, and if they do not, they will copy it.

2.4.3 Client-Server Interaction

Clients interact with servers in two ways: (1) by making a request to the servers to be updated or (2) by receiving update messages from the servers. Request messages are of the type "please notify me when 50% of libraries have blocked a site." Update messages are of the types "here is the latest training data, train the algorithm" or "here is a list of the addresses of all the servers."

The client-to-server ratio will remain at 150 clients per server, and each client will be assigned three servers to which it can make requests. In the event that one of those servers is down, the client will be able to make a request to another. In order to achieve reliability, each server will periodically update the clients with a complete list of the addresses of all the servers, so that even if all three of a client's assigned servers are down, the client will be able to contact another server.

2.5 Simulations

Figure 3: Chart displaying percentage of correctness.
The chart indicates the percentage of each category (non-filterable, filterable, and total) that was properly classified under the depth-one and depth-two analyses. (The bar chart itself, with percentages from 0 to 100, is not reproduced here.)

We classified 50 websites after the first round of training the algorithm, for both a depth-two and a depth-one analysis. Half of those websites should have been filtered and half should not have been. Figure Three contains the actual percentages of sites that were properly classified. For the most part, the filter never overblocked sites, but it did underblock a number of sites. After the algorithm was retrained with added data, fifty more websites, it did not improve significantly. The only key difference to note was that the algorithm performed better with a depth-of-two analysis. We can attribute such underblocking to the cleverness of most pornographic sites'

webmasters. Most of the pornographic sites that we analyzed contained very little text. Furthermore, the ones that contained text embedded it in images.
