
PICS and intellectual freedom FAQ






---------- Forwarded message ----------
Date: Sun, 3 Aug 1997 10:04:04 -0700 (PDT)
From: Declan McCullagh <[email protected]>
To: [email protected]
Subject: PICS and intellectual freedom FAQ

[If you care about the debate over self-labeling your web pages -- not to
mention email and Usenet posts -- read this FAQ on PICS. Thanks to Paul
for putting this together. --Declan]


---------- Forwarded message ----------
Date: Sun, 27 Jul 1997 07:36:52 -0400
From: Paul Resnick <[email protected]>
To: [email protected]
Cc: [email protected]
Subject: PICS and Intellectual Freedom FAQ

7/27/97

Declan,

Given the renewed debate about the intellectual freedom implications of
rating and filtering generally, and PICS in particular, your readers may
find my FAQ on the subject enlightening. It introduces a number of useful
distinctions that seem to be missing from some recent discussion (rating
vs. filtering; self-rating vs. third-party rating; local vs. central
setting of filtering rules). There are some legitimate intellectual freedom
concerns with both rating and filtering, but it's important to get beyond
sweeping generalizations.

The URL is http://www.si.umich.edu/~presnick/pics/intfree/FAQ.htm

Paul Resnick
Associate Professor
University of Michigan
School of Information
chair, PICS Interest Group, World Wide Web Consortium

P.S. Please send comments about the FAQ to [email protected]. Since I'll be
out of email contact for the next week, however, you should not expect an
immediate response.

---------------

Date: Mon, 28 Jul 97 15:06:10 -0400
From: Larry Lessig <[email protected]>
To: [email protected], [email protected]
Subject: Re: PICS and Intellectual Freedom FAQ

Hello Declan:

The FAQ is excellent - honest and clear, and it would help move the 
debate along well. As a strong opponent of some aspects of PICS, I hope 
you get a chance to run this.

Let me know if you get to Cambridge, 

-----------------

[I took this from http://www.si.umich.edu/~presnick/pics/intfree/FAQ.htm
-- check out that URL for links and graphics. --Declan]


                 PICS, Censorship, & Intellectual Freedom FAQ
                                       
   Paul Resnick (comments to [email protected])
   
   Draft version 1.12 last revised June 26, 1997
   
                                   Abstract
                                       
   The published articles describing PICS (Communications of the ACM,
   Scientific American) have focused on individual controls over the
   materials that are received on a computer. While those articles also
   mention the possibility of more centralized controls (e.g., by
   employers or governments), they describe only briefly the technical
   details and the intellectual freedom implications of such centralized
   controls. The civil liberties community has raised some alarms about
   those intellectual freedom implications. The goals for this Frequently
   Asked Questions (FAQ) document are to:
     * Clarify some technical questions about individual and centralized
       content controls based on PICS.
     * Argue that the net impact of PICS will be to shift government
       policies away from centralized controls and toward individual
       controls, although this impact may be visible only at the margins.
     * Describe how the World Wide Web Consortium (W3C) presents PICS in
       the public policy arena.
       
                                  Background
                                       
   In 1995, policies were proposed in several countries, including the
   USA, to restrict the distribution of certain kinds of material over
   the Internet. In many but not all cases, protection of children was
   the stated goal for such policies (see, for example, CIEC: Citizens
   Internet Empowerment Coalition).
   
   The focus on restricting inappropriate materials at their source is
   not well suited to the international nature of the Internet, where an
   information source may be in a different legal jurisdiction than the
   recipient. Moreover, materials may be legal and appropriate for some
   recipients but not others, so that any decision about whether to block
   at the source will be incorrect for some audiences.
   
   PICS, the Platform for Internet Content Selection, is a set of
   technical specifications that facilitate recipient-centered controls
   on Internet content, rather than sender-centered controls. The
   following diagram illustrates recipient-centered controls:
   
   [Diagram omitted: filtering software positioned between the user and
   available Internet content; see the original URL for graphics.]
   
   Filtering software sits between a child (or any Internet user) and the
   available content. It allows access to some materials, and blocks
   access to other materials. Some filtering software directly analyzes
   content, typically looking for particular keywords. This FAQ, however,
   does not deal with that kind of software; it deals, instead, with
   filtering software that decides what to allow and what to block based
   on two information sources:
     * The first source is a set of descriptive labels that are
       associated with the materials. Those labels may be provided by
       information publishers who describe their own work, or may be
       provided by independent reviewers. A single document may have
       several labels associated with it.
     * The second information source the filter uses is a set of
       filtering rules, which say what kinds of labels to pay attention
       to, and what particular values in the labels indicate acceptable
       or unacceptable materials.
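
   The interplay of these two information sources can be sketched in
   code. The following Python fragment is a toy model only, not the PICS
   label syntax itself; the URLs, category names, and numeric thresholds
   are invented for illustration:

```python
# A toy model (not the PICS wire format) of filtering software that
# combines the two information sources: descriptive labels and locally
# chosen filtering rules. All names and values here are invented.

# Source 1: descriptive labels associated with materials.
labels = {
    "http://example.org/story.html": {"language": 2, "violence": 1},
    "http://example.org/game.html":  {"language": 0, "violence": 4},
}

# Source 2: locally chosen filtering rules -- which label categories to
# pay attention to, and what values are acceptable.
rules = {"language": 3, "violence": 2}   # block anything rated higher

def allowed(url):
    label = labels.get(url)
    if label is None:
        return False   # this cautious filter blocks unlabeled material
    return all(label.get(cat, 0) <= limit for cat, limit in rules.items())

print(allowed("http://example.org/story.html"))  # True
print(allowed("http://example.org/game.html"))   # False (violence 4 > 2)
```

   Note that the decision to block unlabeled material is itself a
   filtering rule, a point that recurs later in this FAQ.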
       
   PICS was not the first technology based on the idea of
   recipient-centered controls. For example, SurfWatch was already on the
   market in the summer of 1995 when PICS development began. It is based
   on a particularly simple set of labels: a list of URLs to avoid. As
   another example, some firewalls that corporations had introduced for
   security purposes blocked access to certain IP addresses. PICS
   provides a set of technical specifications so that pieces of the
   picture could be provided by different entities, yet still work
   together.
   
   The first and most important distinction that PICS introduced is a
   separation between labeling and filtering. A label describes the
   content of something. A filter makes the content inaccessible to some
   audience. While both labeling and filtering may introduce social
   concerns, the concerns are somewhat different. More generally, there
   are six roles that could all be filled by different entities:
    1. Set labeling vocabulary and criteria for assigning labels
    2. Assign labels
    3. Distribute labels
    4. Write filtering software
    5. Set filtering criteria
    6. Install/run filtering software
       
   PICS itself actually fills none of the six roles listed above! PICS is
   a set of technical specifications that makes it possible for these
   roles to be played by independent entities.
   
   For example, RSACi and SafeSurf have each defined labeling vocabulary
   and criteria for rating. They each wrote down a vocabulary in a
   machine-readable format that PICS specifies. RSACi has four categories
   in its vocabulary: language, nudity, sex, and violence; SafeSurf has
   more categories. Because they write down their vocabularies in the
   PICS format, label distribution software (e.g., from IBM and Net
   Shepherd) and filtering software (e.g., from Microsoft, IBM, and
   others) can process labels based on those vocabularies. Even though
   RSACi and SafeSurf have each specified a labeling vocabulary and
   criteria for assigning labels, neither of them actually assigns
   labels: they leave it up to the authors of documents to apply the
   criteria to their own documents, or to "self-label," as PICS documents
   call it. Other services, such as CyberPatrol and Net Shepherd, take on both
   of the first two roles, choosing the labeling vocabulary and employing
   people to actually assign labels.
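
   Because labels share a common machine-readable syntax, software from
   one vendor can extract ratings produced under another organization's
   vocabulary. The sketch below parses an abbreviated, illustrative
   PICS-1.1-style label (the real syntax has more options than shown
   here, and the URL and rating values are examples only):

```python
import re

# An abbreviated, illustrative PICS-1.1-style label: an RSACi-style
# self-label with single-letter category names for l(anguage), n(udity),
# s(ex), and v(iolence). The rating values are invented.
label = ('(PICS-1.1 "http://www.rsac.org/ratingsv01.html" '
         'labels for "http://example.org/" ratings (l 1 n 0 s 0 v 2))')

# Pull out the rating service URL and the category/value pairs.
service = re.search(r'PICS-1\.1 "([^"]+)"', label).group(1)
ratings = dict((cat, int(val))
               for cat, val in re.findall(r'([a-z]+) (\d+)', label))

print(service)  # http://www.rsac.org/ratingsv01.html
print(ratings)  # {'l': 1, 'n': 0, 's': 0, 'v': 2}
```

   Any filtering program that understands the format can now apply its
   own rules to these values, regardless of who wrote the label.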
   
                             Questions and Answers
                                       
                               What PICS Enables
                                       
  Can PICS be used for more than just content filtering?
  
   Yes. While the motivation for PICS was concern over children accessing
   inappropriate materials, it is a general "meta-data" system, meaning
   that labels can provide any kind of descriptive information about
   Internet materials. For example, a labeling vocabulary could indicate
   the literary quality of an item rather than its appropriateness for
   children. Most immediately, PICS labels could help in finding
   particularly desirable materials (see, for example, NetShepherd's
   label-informed Alta Vista search), and this is the main motivation for
   the ongoing work on a next generation label format that can include
   arbitrary text strings. More broadly, the W3C is working to extend the
   Web's meta-data capabilities and is applying them specifically
   in the following projects:
   
   Digital Signature Project
          coupling the ability to make assertions with a cryptographic
          signature block that ensures integrity and authenticity.
          
   Intellectual Property Rights Management
          using a meta-data system to label Web resources with respect to
          their authors, owners, and rights management information.
          
   Privacy (P3)
          using a meta-data system to allow sites to make assertions
          about their privacy practices, and for users to express their
          preferences for the type of interaction they want to have with
          those sites.
          
   Regardless of content control, meta-data systems such as PICS are
   going to be an important part of the Web, because they enable more
   sophisticated commerce (build and manage trust relationships),
   communication, indexing, and searching services.
   
     "The promise of digital commerce is that it will allow you to use
     the Internet to purchase the services of the best organic gardening
     advisors or mad cow disease specialists, whether they live in Santa
     Clara or Timbuktu. To do this, you need to do more than verify that
     the person at the other end of the wire is who he says he is. You
     need to assess competence, reliability, judgment. In other words,
     you need a system of branding, but applied much more widely for
     highly specialized and hard-to-evaluate services and products. You
     need value-added services that will not only lead you to the right
     product or service but also rate its quality or otherwise vouch for
     it."
     
      Francis Fukuyama
     
     (Forbes ASAP 12/96 p 69)
     
  Does PICS enable censorship?
  
   This seemingly straightforward question, upon closer inspection, turns
   out to be many different questions when asked by different people.
   Many people are concerned about governments assuming one or more of
   the roles described in the answer to the previous question. Others are
   concerned about employers setting filtering rules, abuse of power by
   independent labelers, or a chilling effect on speech even if speech is
   not banned outright. People also employ different definitions of
   censorship. The most expansive definition is, "any action by one
   person that makes otherwise available information unavailable to
   another person." Under this expansive definition, even a parent
   setting filtering rules for a child would count as censorship. PICS
   documents have adopted the more restrictive definition of censorship
   as actions that limit what an individual can distribute, and use the
   term "access controls" for restrictions on what individuals can
   receive. But the distinction blurs if a central authority restricts
   access for a set of people. Finally, people have different definitions
   of "enable." Some would say that PICS enables any application that
   uses PICS-compatible components, while we reserve the term "enables"
   for applications that can easily be implemented with PICS-compatible
   components but could not be easily implemented otherwise.
   
   Given the variety of implicit questions, it doesn't make sense to
   provide a blanket answer to the question of whether PICS enables
   censorship. This FAQ answers many of the specific questions that
   people often mean when they ask the more general question. For
   example, we ask questions about whether PICS makes it easier or harder
   for governments to impose labeling and filtering requirements. If you
   believe there's another specific question that should be addressed,
   please send it to [email protected], for possible inclusion in a later
   version.
   
  Could governments encourage or impose receiver-based controls? Does PICS make
  it easier or harder for governments to do so?
  
   Yes. A government could try to assume any or all of the six roles
   described above, although some controls might be harder than others to
   enforce. As described below, governments could assume some of these
   roles even without PICS, while other roles would be harder to assume
   if PICS had not been introduced. It's important to note that W3C does
   not endorse any particular government policy. The purpose of this FAQ
   is to explain the range of potential policies and to explore some of
   the impacts of those policies on both the climate of intellectual
   freedom and the technical infrastructure of the World Wide Web.
   Potential government policies:
    1. Set labeling vocabulary and criteria. A government could impose a
       labeling vocabulary and require all publishers (in the
       government's jurisdiction) to label their own materials according
       to that vocabulary. Alternatively, a government might try to
       achieve the same effect by encouraging an industry self-policing
       organization to choose a vocabulary and require subscribers to
       label their own materials. Civil liberties advocates in Australia
       are especially concerned about this (see The Net Labeling
       Delusion). PICS makes it somewhat easier for a government to
       impose a self-labeling requirement: without PICS, a government
       would have to specify a technical format for the labels, in
       addition to specifying the vocabulary and criteria, and there
       might not be any filtering software available that could easily
       process such labels.
    2. Assign labels. A government could assign labels to materials that
       are illegal or harmful. This option is most likely to be combined
       with government requirements that such materials be filtered (see
       #5 below) but it need not be; a government could merely provide
       such labels as an advisory service to consumers, who would be free
       to set their own rules, or ignore the labels entirely. If a
       government merely wants to label, and not impose any filtering
       criteria, then PICS again provides some assistance because it
       enables a separation of labeling from filtering. On the other
       hand, a government that wishes to require filtering of items it
       labels as illegal gets little benefit from PICS as compared to
       prior technologies, as discussed below in the question about
       national firewalls.
    3. Distribute labels. A government could operate or finance operation
       of a Web server to distribute labels (a PICS label bureau); the
       labels themselves might be provided by authors or independent
       third parties. Taken on its own, this would actually contribute to
       freedom of expression, since it would make it easier for
       independent organizations to express their opinions (in the form
       of labels) and make those opinions heard. Consumers would be free
       to ignore any labels they disagreed with. Again, since PICS
       separates labeling from filtering, it enables a government to
       assist in label distribution without necessarily imposing filters.
       If combined with mandatory filtering, however, a
       government-operated or financed label bureau could contribute to
       restrictions on intellectual freedom.
    4. Write filtering software. It's unlikely that a government would
       write filtering software rather than buying it; the supplier of
       filtering software probably has little impact on intellectual
       freedom.
    5. Set filtering criteria. A government could try to impose filtering
       criteria in several ways, including government-operated proxy
       servers (a national intranet), mandatory filtering by service
       providers or public institutions (e.g., schools and libraries), or
       liability for possession of materials that have been labeled a
       particular way. In some ways, by enabling independent entities to
       take on all the other roles, PICS highlights this as the primary
       political battleground. Each national and local jurisdiction will
       rely on its political and legal process to answer difficult policy
       questions: Should there be any government-imposed controls on what
       can be received in private or public spaces? If so, what should
       those controls be? Most kinds of mandatory filters could be
       implemented without PICS. One potential policy, however, mandatory
       filtering based on labels provided by non-government sources,
       would have been difficult to impose without PICS.
    6. Install/run filters. A government could require that filtering
        software be made available to consumers, without mandating any
        filtering rules. For example, a government could require that all
        Internet Service Providers make filtering software available to
        their customers, or that all PC browsers or operating systems
       include such software. Absent PICS, governments could have imposed
       such requirements anyway, since proprietary products such as
       SurfWatch and NetNanny are available.
       
  Since PICS makes it easier to implement various kinds of controls, should we
  expect there to be more such controls overall?
  
   Yes; all other things being equal, when the price of something drops,
   more of it will be consumed.
   
  Does PICS encourage individual controls rather than government controls?
  
   Yes; for example, a national proxy-server/firewall combination that
   blocks access to a government-provided list of prohibited sites does
   not depend on interoperability of labels and filters provided by
   different organizations. While such a setup could use PICS-compatible
   technology, a proprietary technology provided by a single vendor would
   be just as effective. Other controls, based on individual or local
   choices, benefit more from mixing and matching filtering software and
   labels that come from different sources, which PICS enables. Thus,
   there should be some substitution of individual or local controls for
   centralized controls, although it is not obvious how strong this
   substitution effect will be. In both Europe and Australia initial
   calls for centralized controls gave way to government reports calling
   for greater reliance on individual recipient controls; the end results
   of these political processes, however, are yet to be determined.
   
                                   Labeling
                                       
  Does it matter whether labels are applied to IP addresses or to URLs?
  
   An IP address identifies the location of a computer on the Internet. A
   URL identifies the location of a document. To simplify a little, a URL
   has the form http://<domain-name>/<filename>. A web browser first
   resolves (translates) the domain-name into an IP address. It then
   contacts the computer at that address and asks it to send the
   particular filename. Thus, a label that applies to an IP address is a
   very broad label: it applies to every document that can be retrieved
   from that machine. Labeling of URLs permits more flexibility:
   different documents or directories of documents can be given different
   labels.
   
   This difference of granularity will, naturally, have an impact on
   filtering. Filters based on IP addresses will be cruder: if some but
   not all of the documents available at a particular IP address are
   undesirable, the filter will have to either block all or none of those
   documents. PICS, by contrast, permits labeling of individual URLs, and
   hence permits finer grain filters as well.
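
   The difference in granularity can be made concrete with a small
   contrast. In this sketch the addresses, host names, and URLs are
   invented, and the dictionary stands in for a DNS lookup:

```python
from urllib.parse import urlparse

# Contrast between the two granularities: blocking by IP address blocks
# every document on a machine, while URL-level labels can single out
# individual documents. All addresses and URLs here are illustrative.

blocked_ips  = {"192.0.2.7"}                      # address-level block list
blocked_urls = {"http://example.org/bad.html"}    # URL-level block list

host_to_ip = {"example.org": "192.0.2.7"}         # stand-in for DNS lookup

def blocked_by_ip(url):
    host = urlparse(url).hostname
    return host_to_ip.get(host) in blocked_ips

def blocked_by_url(url):
    return url in blocked_urls

# The IP-based filter blocks both documents on the machine; the
# URL-based filter blocks only the one that was labeled undesirable.
for url in ("http://example.org/bad.html", "http://example.org/ok.html"):
    print(url, blocked_by_ip(url), blocked_by_url(url))
```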
   
Self-labeling

  Does PICS make author self-labeling more effective?
  
   Yes. Without a common format for labels, authors could not label
   themselves in a way that filtering programs could make use of. PICS
   provides that format.
   
  Does PICS make a government requirement of self-labeling more practical to
  implement?
  
   It enables such a requirement to have more impact. A government
   requirement of self-labeling would have little impact if the labels
   were not usable by filtering programs. PICS provides the common format
   so that filtering software from one source can use labels provided by
   other sources (authors in this case).
   
  Does self-labeling depend on universal agreement on a labeling vocabulary and
  criteria for assigning labels to materials?
  
   Although universal agreement is not necessary, there does need to be
   some harmonization of vocabulary and labeling criteria, so that labels
   provided by different authors can be meaningfully compared.
   
  Does PICS make it easier for governments to cooperate in imposing
  self-labeling requirements?
  
   Yes. PICS provides a language-independent format for expressing
   labels. If governments agreed on a common set of criteria for
   assigning labels, the criteria could be expressed in multiple
   languages, yet still be used to generate labels that can be compared
   to each other.
   
  Is it effective for (some) authors to label their own materials as
  inappropriate for minors? What about labeling appropriate materials?
  
   Both kinds of labeling could be effective, but only if a high
   percentage of the materials of a particular type are labeled. If the
   inappropriate materials are labeled, then a filter can block access to
   the labeled items. If the appropriate materials are labeled, then a
   filter can block access to all the unlabeled items.
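
   The two strategies translate directly into two filter rules. In this
   sketch the URLs and label values are invented; note how each strategy
   treats unlabeled material differently:

```python
# Two filtering strategies over the same (illustrative) set of labels:
# block what is labeled inappropriate, or block everything except what
# is labeled appropriate.

labels = {
    "http://example.org/a.html": "inappropriate",
    "http://example.org/b.html": "appropriate",
    # http://example.org/c.html carries no label at all
}

def allow_if_not_flagged(url):
    # Effective only if a high percentage of inappropriate items are
    # labeled; unlabeled items pass through.
    return labels.get(url) != "inappropriate"

def allow_only_approved(url):
    # Blocks all unlabeled items, so appropriate-but-unlabeled material
    # is lost along with the rest.
    return labels.get(url) == "appropriate"

print(allow_if_not_flagged("http://example.org/c.html"))  # True
print(allow_only_approved("http://example.org/c.html"))   # False
```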
   
Third-party labeling

  Can an organization I dislike label my web site without my approval?
  
   Yes. Anyone can create a PICS label that describes any URL, and then
   distribute that label to anyone who wants to use that label. This is
   analogous to someone publishing a review of your web site in a
   newspaper or magazine.
   
  Isn't there a danger of abuse if a third-party labeler gets too powerful?
  
   If a lot of people use a particular organization's labels for
   filtering, that organization will indeed wield a lot of power. Such an
   organization could, for example, arbitrarily assign negative labels to
   materials from its commercial or political competitors. The most
   effective way to combat this danger is to carefully monitor the
   practices of labeling services, and to ensure diversity in the
   marketplace for such services, so that consumers can stop using
   services that abuse their power.
   
Other Social Concerns About Labeling

  Why did PICS use the term "label", with all of its negative associations?
  
   PICS documents use the term "label" broadly to refer to any
   machine-readable information that describes other information. Even
   information that merely classifies materials by topic or author
   (traditional card catalog information) would qualify as labels if
   expressed in a machine-readable format. The PICS developers recognized
   that the term "label" has a narrower meaning, with negative
   connotations, for librarians and some other audiences, but it was the
   most generic term the PICS creators could find without reverting to
   technical jargon like "metadata."
   
   In media with centralized distribution channels, such as movies,
   labeling and filtering are not easily separated. For example, unrated
   movies are simply not shown in many theaters in the USA. In addition
   to its technical contribution, PICS makes an intellectual contribution
   by more clearly separating the ideas of labeling and filtering. Many
   of the negative connotations associated with "labeling" really should
   be associated with centralized filtering instead. There are, however,
   some subtle questions about the impact of labeling itself, as
   articulated in the next two questions.
   
  Does the availability of labels impoverish political discussions about which
  materials should be filtered?
  
   Matt Blaze (personal communication) describes this concern with an
   analogy to discussions at a local school board meeting about books to be
   read in a high school English class. Ideally, the discussion about a
   particular book should focus on the contents of the book, and not on
   the contents of a review of the book, or, worse yet, a label that says
   the book contains undesirable words.
   
   There will always be a tradeoff, however, between speed of
   decision-making and the ability to take into account subtleties and
   context. When a large number of decisions need to be made in a short
   time, some will have to be made based on less than full information.
   The challenge for society, then, will be to choose carefully which
   decisions merit full discussion, in which case labels should be
   irrelevant, and which decisions can be left to the imperfect summary
   information that a label can provide. The following excerpt from
   Filtering the Internet summarizes this concern and the need for
   eternal vigilance:
   
     "Another concern is that even without central censorship, any
     widely adopted vocabulary will encourage people to make lazy
     decisions that do not reflect their values. Today many parents who
     may not agree with the criteria used to assign movie ratings still
     forbid their children to see movies rated PG-13 or R; it is too
     hard for them to weigh the merits of each movie by themselves.
     
     Labeling organizations must choose vocabularies carefully to match
     the criteria that most people care about, but even so, no single
     vocabulary can serve everyone's needs. Labels concerned only with
     rating the level of sexual content at a site will be of no use to
     someone concerned about hate speech. And no labeling system is a
     full substitute for a thorough and thoughtful evaluation: movie
     reviews in a newspaper can be far more enlightening than any set of
     predefined codes."
     
  Will the expense of labeling "flatten" speech by leaving non-commercial
  speech unlabeled, and hence invisible?
  
   This is indeed a serious concern, explored in detail by Jonathan
   Weinberg in his law review article, Rating the Net. The following
   excerpt from Filtering the Internet acknowledges that materials of
   limited appeal may not reach even the audiences they would appeal to,
   but argues that labeling is merely a symptom rather than a cause of
   this underlying problem:
   
     "Perhaps most troubling is the suggestion that any labeling system,
     no matter how well conceived and executed, will tend to stifle
     noncommercial communication. Labeling requires human time and
     energy; many sites of limited interest will probably go unlabeled.
     Because of safety concerns, some people will block access to
     materials that are unlabeled or whose labels are untrusted. For
     such people, the Internet will function more like broadcasting,
     providing access only to sites with sufficient mass-market appeal
     to merit the cost of labeling.
     
     While lamentable, this problem is an inherent one that is not
     caused by labeling. In any medium, people tend to avoid the unknown
     when there are risks involved, and it is far easier to get
     information about material that is of wide interest than about
     items that appeal to a small audience."
     
                                   Filtering
                                       
  Does PICS make national firewalls easier to implement?
  
   No, but an effective national firewall would make it possible for a
   government to impose PICS-based filtering rules on its citizens. A
   firewall partitions a network into two components and imposes rules
   about what information may flow between the two components. The goal of a
   national firewall is to put all the computers in the country into one
   component, and all computers outside the country into the other
   component. This is difficult to do, especially if people deliberately
   try to establish connections (e.g., telephone lines) between computers
   inside the country and those outside the country. Given a successful
   partition, however, PICS could be used to implement the filtering
   rules for a firewall. In particular, the government could identify
   prohibited sites outside the country that people inside the country
   could not access; such filtering could be implemented based on
   PICS-formatted labels or, without relying on PICS-compatible
   technology, with a simple list of prohibited URLs.
   
  Does PICS enable ISP compliance with government requirements that they
  prohibit access to specific URLs?
  
   ISP compliance with government prohibition lists is already practical,
   even without PICS. It would also be possible to comply using
   PICS-based technologies. PICS does make it easier for ISPs to comply
   with a government requirement to block access to sites labeled by
   non-governmental entities (including those that are self-labeled by
   the authors of the sites).
   
  Are proxy-server based implementations of PICS filters compatible with the
  principle of individual controls?
  
   Yes. PICS enables mixing and matching of the six roles. In
   particular, a service provider could install and run filtering
   software on a proxy server, but allow individuals to choose what
   filtering rules will be executed for each account. AOL already offers
   a primitive version of this idea, not based on PICS; parents can turn
   the preset filtering rules on or off for each member of the family.
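
   The proxy arrangement can be sketched as one filtering engine with
   per-account rules. The account names, categories, and limits below
   are invented for illustration:

```python
# A sketch of proxy-based filtering with individually chosen rules:
# one filtering engine runs on the proxy server, but each account
# selects its own rules. All names and values here are illustrative.

account_rules = {
    "parent": {},                              # no filtering at all
    "child":  {"violence": 1, "language": 1},  # strict limits
}

def proxy_allows(account, label):
    rules = account_rules.get(account, {})
    return all(label.get(cat, 0) <= limit for cat, limit in rules.items())

label = {"violence": 3, "language": 0}
print(proxy_allows("parent", label))  # True
print(proxy_allows("child", label))   # False
```

   The central machine enforces the rules, but the choice of rules
   remains with the individual, which is the distinction this question
   turns on.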
   
  Are client based implementations of PICS filters usable only for individual
  controls?
  
   No. Governments could require the use of filters on clients. The city
   of Boston, for example, requires public schools to install a
   client-based filtering product on all computers with Internet access,
   and requires public libraries to install a client-based filtering
   product on all computers designated for children.
   
  Does my country have a right to filter what I see?
  
   W3C leaves this question to the political and legal processes of each
   country. Some people argue that unrestricted access to information is
   a fundamental human rights question that transcends national
   sovereignty. W3C has not adopted that position.
   
  Does my employer have a right to filter what I see?
  
   W3C leaves this question to the political and legal processes of each
   country.
   
                       W3C's Roles and Responsibilities
                                       
  How does W3C view its role in policy debates about intellectual freedom?
  
   W3C's mission is to "realize the full potential of the Web." The
   following two points are taken from a talk by Jim Miller at the WWW6
   conference:
     * We wish to provide tools which encourage all cultures to feel free
       to use the Web while maintaining an inter-operable network
       architecture that encourages diversity without cultural
       fragmentation or domination
     * We provide feedback to policy makers regarding what is technically
       possible, how effective the technology may be in satisfying policy
       requirements, and the possible unintended consequences of proposed
       policies
       
    Thus, for example, when discussing CDA-type legislation with
    government officials in the U.S. or abroad, it is appropriate for W3C
   to point out that sender-based restrictions are not likely to be
   effective at keeping all materials of a particular kind away from
   children, and that there could be unintended consequences in terms of
   chilling free speech or keeping the Web from reaching its full
   potential as a medium for communication and cultural exchange. W3C
   does not, however, debate with government officials about their
   perceived policy requirements. For example, Germany has a policy
   requirement of restricting access to hate speech while the U.S. does
   not: W3C does not try to convince either country that the other
   country's choice of policy requirements is better.
   
  Why does the CACM article suggest that governments might use blocking
  technology?
  
    Some people (see The Net Labeling Delusion) have criticized the
    following paragraph from the CACM article on PICS:
    Not everyone needs to block reception of the same materials. Parents
        may not wish to expose their children to sexual or violent
        images. Businesses may want to prevent their employees from
        visiting recreational sites during hours of peak network usage.
        Governments may want to restrict reception of materials that are
        legal in other countries but not in their own. The off button (or
        disconnecting from the entire Net) is too crude: there should be
        some way to block only the inappropriate material.
        Appropriateness, however, is neither an objective nor a universal
        measure. It depends on at least three factors:
        
        1. The supervisor: parenting styles differ, as do philosophies of
           management and government.
        2. The recipient: what's appropriate for one fifteen-year-old may
           not be for an eight-year-old, or even for all
           fifteen-year-olds.
        3. The context: a game or chat room that is appropriate to access
           at home may be inappropriate at work or school.
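    A toy lookup table (not part of PICS itself; all names are invented
    for illustration) shows how a filtering decision can turn on all
    three factors at once, so that the same content is treated
    differently for different supervisors, recipients, and contexts:

```python
# Hypothetical rules keyed on (supervisor, recipient, context),
# each mapping to a set of blocked content categories.
RULES = {
    ("parent", "eight-year-old",   "home"):   {"chat", "violence"},
    ("parent", "fifteen-year-old", "home"):   {"violence"},
    ("school", "fifteen-year-old", "school"): {"chat", "games"},
}

def blocked(supervisor, recipient, context, category):
    """True if this supervisor blocks this category for this
    recipient in this context; unknown combinations block nothing."""
    return category in RULES.get((supervisor, recipient, context), set())

# The same chat room is fine at home but blocked at school:
print(blocked("parent", "fifteen-year-old", "home", "chat"))    # False
print(blocked("school", "fifteen-year-old", "school", "chat"))  # True
```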
       
   The main point of this section is to underscore the fact that people
   disagree about what materials are appropriate in what contexts. This
    point is illustrated at several levels of granularity: individual
    children, organizations, and governments. The criticism focuses on the
   mention of possible government blocking, which did not appear in an
   earlier draft of the paper. We believe the example about differences
   in laws between countries is useful in explaining why there is a need
   for flexible, receiver-based controls rather than the kind of
   sender-based controls (e.g., the CDA) that most policy discussions
   were focusing on at the time.
   
   The objection to the use of this example rests on an argument that
   governments should never designate any content as illegal. That
   argument is not widely accepted (in the U.S., for example, "obscenity"
   laws have been deemed constitutional, even though the CDA's
   "indecency" provisions were not). A more widely held position is that
   governments should not restrict political materials as a means of
   controlling their citizens. W3C leaves discussions about which
   materials should be illegal in a particular country to the political
    realm rather than the technological realm. W3C does, however, point
    out to policy makers that it's not necessary to make
    materials illegal if they are offensive to some people but not others:
   end-user controls are a more flexible method of handling such
   materials.
   
  Could W3C have controlled the uses of PICS by licensing the technology?
  
    Licensing such a technology was not considered a feasible option
    during the time of the CDA. Not only would it have undercut the
    "neutrality" and appeal of the technology; it would also have put W3C
    in the position of determining who should and should not use it, a
    role W3C is not competent to play.
   
  Is the W3C promoting the development of PICS into proxy server products?
  
   Yes. W3C is pleased that IBM has introduced a proxy server that can
   filter based on PICS labels, and encourages the development of other
   PICS-compatible servers. As discussed above, filter processing can be
   centralized at a proxy server while still permitting individuals to
   choose the filtering rules.
   
  What can I do now to promote uses of PICS that promote, rather than harm,
  intellectual freedom?
  
   In addition to acting in the political arena, it would probably be
   helpful to implement positive uses of labels, such as searching
    applications. It is surprisingly difficult for people unfamiliar with
    computers to imagine new applications. By building prototypes and
   demonstrating them, it may be possible to focus policy-makers'
   energies on those uses of technology that accord with your political
   values.
   
  What else can I read about labeling, filtering, and intellectual freedom?
  
    Governments
    
     * Australian Broadcast Authority report on its investigation into
       on-line services
     * European Parliament Green Paper: the Protection of Minors and
       Human Dignity in Audiovisual and Information Services
     * European Union Communication on illegal and harmful content on the
       Internet
     * Report of European Commission Working party on illegal and harmful
       content on the internet
     * Working Party Report
     * European Commission Forum for Exchange of Information on Internet
       Best Practices
     * Singapore Internet Regulations
       
    Media
    
     * Good Clean PICS: The most effective censorship technology the Net
       has ever seen may already be installed on your desktop (Simson
       Garfinkel in HotWired: February 1997)
     * Labels and Disclosure - Release 1.0 by Esther Dyson
     * MSNBC four-part series (October 1996)
          + Part 1: Censorship debate focuses on filters
          + Part 2: PICS adds new dimension to Web
          + Part 3: Internet watchdogs split over PICS
          + Part 4: Filters aren't a black-and-white issue
       
    Other Organizations
    
     * EFF: Public Interest Principles for Online Filtration, Ratings and
       Labeling Systems
     * CIEC: Citizens Internet Empowerment Coalition
     * ACLU CyberLiberties Campaign
     * ALA white paper (need link)
       
    Individuals
    
     * Rating the Net. Jonathan Weinberg. Hastings Communications and
       Entertainment Law Journal, Vol. 19, No. 2, pp. 453-482. (A
       balanced but critical academic's look at rating systems and their
       legal and social impact.)
     * The Campaign for Internet Freedom (anti-labeling/filtering web
       site in UK)
     * The Net Labeling Delusion (anti-labeling/filtering web site in
       Australia)
     * Fight-censorship mailing lists (Declan McCullagh's moderated and
       unmoderated lists; occasional discussion of PICS and related
       technologies).