What about an Open Web Health Report?

We often talk about the “Open Web” or “the web as a platform” and it certainly resonates with some, but for others, not so much. It’s a murky concept for sure. Prior to my time at Mozilla, I must admit that I didn’t spend a lot of cycles thinking about the web as a platform, what’s important about it, the key attributes, much less its health. Like most of us, I just used it and assumed it would always be there. My sense is that people think about the open web about as much as they did the “environment” before the environmental movement first gained broad traction in the early ’70s.

Given that much of Mozilla’s mission is about nurturing and creating a healthy web environment, it seems we should have some way to understand and track its health. Just like a doctor wants to understand your symptoms before treatment, or a business tracks its inventory, maybe we need the same thing for the open web. Perhaps there’s a need for some kind of report that tracks key metrics that would give us qualitative and quantitative insight into the health of this so-called open web.

There are plenty of reports that monitor traffic, like Keynote or Akamai’s State of the Internet report, which highlights attack traffic, connection speeds, Internet penetration, etc. These are all good, but there’s more to the health of the open web than traffic, speed, and adoption.

A clear understanding of the current state and trends should inform our strategy and let us know where, when, and if we have been successful. It would also tell us when we weren’t. Knowing the problem is certainly the first step to solutions. Ten years ago when one browser had roughly 90% market share it was easy to see the problem. Today – not so much.

So how would you do it? First there would have to be some common understanding of the attributes of the open web we want to monitor. This itself is no easy task, but the 80/20 rule seems applicable here. Tantek did some great work a few years ago when he articulated three principal abilities that were essential to the “open web,” namely:

  1. publish content and applications on the web in open standards
  2. code and implement the web standards that content/apps depend on
  3. access and use content / code / web-apps / implementations

In “Long Live the Web: A Call for Continued Open Standards and Neutrality” Tim Berners-Lee articulated universality as the key principle of the web. He also noted that “some of its most successful inhabitants have begun to chip away at its principles.” The FCC’s Open Internet Order articulated four key concepts that encapsulate the idea of net neutrality – one core principle. Google’s Sergey Brin described some of the same principles and threats in a 2012 Guardian interview. In some of our public policy work we attempted to identify “open web DNA” so we could better address policy threats. These all assume the existence of some common set of principles that underpin the open web.

The world is even more complicated today, and I would posit that there is a wide range of additional metrics that collectively indicate the health of the open web and the vitality of the principles we care about. Many of these are not the traditional technical components, but commercial and external market factors that could serve as indicators for the abilities described above. For example, they might include factors like:

  • Diversity of service providers and ecosystems
  • Concentration of service providers, publishers, and applications
  • Adoption of open standards, APIs and languages
  • Security
  • User choice and control
  • Public awareness and activism
  • Content restrictions
  • Transparency
  • Interoperability
  • HTML5 developers
  • Relevant economic/growth indicators
  • Usage patterns and trends
  • Maybe even a disruption index

If this kind of report already exists, let’s use it more. If it doesn’t, should we try to create it?

Patent Matters – Don’t Hate the Player, Hate the Game

The recent acquisition of the Netscape/AOL patent portfolio reminded me that an update on Mozilla’s patent strategy is long overdue. This post is about what we’ve done and what we could/should do in the future.

As you may have seen, there’s been a lot of patent litigation activity lately. The Yahoo suit against Facebook is one of the most surprising – at least to me. And the US Supreme Court just recently weighed in to re-affirm a long-held axiom of patent jurisprudence that laws of nature are not patentable subject matter, so the judiciary is getting more active as well.

What’s driving the increase of patent activity? There are numerous drivers in my view including increased competition in the mobile space, the desire for competitive advantage particularly if a company is struggling in the market, and demands for incremental license revenues. Invariably, patent portfolios become more attractive tools for revenue and market competition when a business is not doing well or threatened.

The traditional strategy has been for each company to develop the largest possible patent portfolio to act as a deterrent against potential plaintiffs. This is known as a defensive approach. Others make no such claim at all, and still others do a bit of both depending on the circumstances. For early stage companies and start-ups, patent rights may also be important. If the business fails in the market, IP rights may turn out to be the most valuable asset for investors.

I personally struggle with the effectiveness of “build a big patent pool” as a one-size-fits-all approach. It may not work if you’re way behind in the game or even conflicted about software patents. Also, if done organically, it simply takes too long. In other settings it may, however, make perfect sense, especially with enough resources and sufficient inventive material relevant to your competitors. I got to do this for a few years in my first in-house counsel job working for Mitchell Baker long ago, where I was tasked with creating the initial Netscape patent portfolio.

So far Mozilla has not adopted the traditional strategy. A while back we made an exception to file four patent applications on some novel digital audio and video compression codecs co-invented with a contributor at the time. We assigned those applications to xiph.org, a non-profit focused on open video and audio codecs. The assignment included a defensive patent provision which prevents the patent from being used offensively. One of those applications has been published for examination as part of the standard USPTO patent application process. We believe that these applications may help in standards settings so we could achieve a better open standard for audio codecs. For better or worse, in the standards bodies participants use their IP to influence the standards and without some leverage, you’re left only with moral and technical arguments. We’ll see if our theory plays out in the future.

We haven’t filed other applications yet, but I don’t think the past should necessarily dictate the future. I can imagine many places where inventive developments are occurring that have strategic value to the industry, and where we want those protocols, techniques, and designs to stay open and royalty-free to the extent they are essential parts of a robust web platform. Of course, filing patent applications is only one possible technique, but at those strategic intersections, I think we should entertain it as one tool in our overall strategy.

In addition to patent filing strategies, there are other things we could do, including:

  • Adopting techniques to constrain offensive use, like the Inventors Patent Assignment with defensive use terms proposed by Twitter today. (+1 for Ben and Amac at Twitter for this)
  • Building out a robust defensive publication program. IBM wrote the book on this; maybe it’s time to make source code publications work the same way.
  • Developing an ongoing working prior art system available for defendants. We worked on a version of this a few years back, but the urgent beat out the important and no progress has been made since then.
  • Pooling patents with other like-minded groups into safe pro-web entities with defensive protections. The pools need to be relevant to competitive threats for this to have value in my view.
  • Creating other disincentives to the offensive use of patents (similar to the MPL defensive patent provision) but relevant to larger parts of the web.

Sometime mid-year, I’d like to have a broader discussion to brainstorm further and prioritize efforts. Nonetheless, I’m pretty confident that given the changing landscape and markets, we’ll need to play in this domain more significantly one way or the other.

Comments supporting DMCA jailbreaking exemption

Every three years the US Copyright Office examines whether it will renew certain exemptions to the DMCA. In 2009 we submitted arguments supporting the EFF’s petition for the exemption of jailbreaking from the DMCA. The Copyright Office granted the exemption in 2010, which now expires at the end of 2012.

Although it seems a bit silly to have to do this every three years, we’re going to again file a brief supporting the exemption for jailbreaking, also known as “rooting.” EFF has more information here on the arguments and the process.

Based on feedback from developers around the Mozilla project, the brief will contend that rooting is important because it’s necessary to achieve competitive application performance on Android mobile platforms, to effectively debug applications, and for regression testing. In addition, it’s even more critical now as mobile devices surpass desktops and Internet access increasingly comes from mobile platforms.

We plan to file our comments on Friday afternoon. If you have ideas or thoughts that could be incorporated in the brief, please let us know. Alternatively, you can file your own comments, or if petitions are more your flavor, go here.

Homeland Security Request to Take Down MafiaaFire Add-on

From time to time, we receive government requests for information, usually market information and occasionally subpoenas. Recently the US Department of Homeland Security contacted Mozilla and requested that we remove the MafiaaFire add-on. The ICE Homeland Security Investigations unit alleged that the add-on circumvented a seizure order DHS had obtained against a number of domain names. MafiaaFire, like several other similar add-ons already available through AMO, redirects the user from one domain name to another, much like a mail forwarding service. In this case, MafiaaFire redirects traffic from seized domains to other domains. Here the seized domain names allegedly were used to stream content protected by copyrights of professional sports franchises and other media concerns.
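Mechanically, a redirector add-on of this sort is simple: at its core it is just a lookup table from one domain to another. A minimal sketch of the idea (all names and domains here are hypothetical; this is not MafiaaFire’s actual code) might look like:

```javascript
// Minimal sketch of a domain-redirect lookup, the kind of table a
// mail-forwarding-style add-on might consult. Hypothetical names/domains.
const redirectMap = {
  "seized-example.com": "mirror-example.net",
};

// Return the replacement hostname, or null when no redirect applies.
function resolveRedirect(hostname) {
  return redirectMap[hostname.toLowerCase()] || null;
}
```

In a real add-on, a function like this would be consulted on each page request, and the browser would be pointed at the mapped host whenever a match is found.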

Our approach is to comply with valid court orders, warrants, and legal mandates, but in this case there was no such court order.  Thus, to evaluate Homeland Security’s request, we asked them several questions similar to those below to understand the legal justification:

  • Have any courts determined that the MafiaaFire add-on is unlawful or illegal in any way? If so, on what basis? (Please provide any relevant rulings.)
  • Is Mozilla legally obligated to disable the add-on, or is this request based on other reasons? If other reasons, can you please specify?
  • Can you please provide a copy of the relevant seizure order upon which your request to Mozilla to take down the MafiaaFire add-on is based?

To date we’ve received no response from Homeland Security nor any court order.

One of the fundamental issues here is under what conditions intermediaries should accede to government requests that have a censorship effect and which may threaten the open Internet. Others have commented on these practices already. In this case, the underlying justification arises from content holders’ legitimate desire to combat piracy. The problem stems from the use of these government powers in service of private content holders when it can have unintended and harmful consequences. Long term, the challenge is to find better mechanisms that provide both real due process and transparency without infringing upon developer and user freedoms traditionally associated with the Internet. More to come.

New European Commission Privacy Recommendations

The EC released its new privacy recommendations on Thursday to update the 15-year-old EU privacy regime. The report contains the Commission’s findings from their analysis over the past year and announces an intention to investigate a number of areas in more depth with the goal of proposing legislation in 2011. The impetus as described by the Commission is that today’s challenges “require the EU to develop a comprehensive and coherent approach guaranteeing that the fundamental right to data protection for individuals is fully respected within the EU and beyond.”

I suspect that for some the principles may be perceived as new administrative overhead and obstacles to an “optimum user experience.” My quick take (personal opinion) is that the findings and areas of study represent a move in the right direction. Of course, the devil is in the details, which will evolve over the coming year, so we’ll see. As the EC develops its new framework, finding reasonable and practical ways to implement the proposals will be essential to their success.

This is even more interesting given that the US Federal Trade Commission has indicated it’s coming out with recommendations soon. These would likely result in legislation next year as well. It would be great (if not just common sense) to see as much harmonization between the two frameworks as possible. We can still dream.

I welcome any thoughts or observations about the proposal. Some highlights from the report are shown below, but the full report is worth the read.

  • The Commission will consider how to ensure a coherent application of data protection rules, taking into account the impact of new technologies on individuals’ rights and freedoms and the objective of ensuring the free circulation of personal data within the internal market.
  • The Commission will examine ways of clarifying and strengthening the rules on consent.
  • The Commission will consider:
    • introducing a general principle of transparent processing of personal data in the legal framework;
    • introducing specific obligations for data controllers on the type of information to be provided and on the modalities for providing it, including in relation to children;
    • drawing up one or more EU standard forms (‘privacy information notices’) to be used by data controllers.
  • The Commission will therefore examine ways of:
    • strengthening the principle of data minimisation;
    • improving the modalities for the actual exercise of the rights of access, rectification, erasure or blocking of data (e.g., by introducing deadlines for responding to individuals’ requests, by allowing the exercise of rights by electronic means or by providing that right of access should be ensured free of charge as a principle);
    • clarifying the so-called ‘right to be forgotten’, i.e. the right of individuals to have their data no longer processed and deleted when they are no longer needed for legitimate purposes. This is the case, for example, when processing is based on the person’s consent and when he or she withdraws consent or when the storage period has expired;
    • complementing the rights of data subjects by ensuring ’data portability’, i.e., providing the explicit right for an individual to withdraw his/her own data (e.g., his/her photos or a list of friends) from an application or service so that the withdrawn data can be transferred into another application or service, as far as technically feasible, without hindrance from the data controllers.
  • The Commission will examine the following elements to enhance data controllers’ responsibility:
    • making the appointment of an independent Data Protection Officer mandatory and harmonising the rules related to their tasks and competences, while reflecting on the appropriate threshold to avoid undue administrative burdens, particularly on small and micro-enterprises;
    • including in the legal framework an obligation for data controllers to carry out a data protection impact assessment in specific cases, for instance, when sensitive data are being processed, or when the type of processing otherwise involves specific risks, in particular when using specific technologies, mechanisms or procedures, including profiling or video surveillance;
    • further promoting the use of PETs and the possibilities for the concrete implementation of the concept of ‘Privacy by Design’.

Net Neutrality – Comments to the FCC

The FCC recently asked for additional comments in its ongoing proceeding regarding Open Internet Principles. In particular, the FCC sought specific input on whether the openness principles should apply to both wireline and wireless networks.

We submitted comments in response to the FCC’s inquiry supporting application of the Open Internet principles to wireless networks. Relevant portions of the submission are shown below:

There is, and should be, only one Internet. Historically, the Internet has not distinguished between various forms of content or how users access such content. This non-discrimination has allowed consumers and software developers to choose between locations, platforms, and devices, all without complex negotiations with transport networks. This freedom has been a key reason why the Internet is so creative, competitive, and consumer-friendly. Internet users now benefit from this flexibility as they access the Internet across a wide range of devices and access points including 3/4G, WiFi, and wired networks. The wave of new Internet enabled mobile devices, such as the iPhone, iPad, and a broad range of smartphones, including Blackberry, Palm, and Android based devices, will continue to drive exponential increases in mobile Internet access. The central fact is that wireless Internet access is as important as wired Internet access.

The increasing importance of mobile networks is not the only reason policy should be network agnostic. Users should not have a significantly different experience as they move back and forth between connection types, and they should not have to be aware that one regulatory regime (applicable to wired and WiFi access) protects their ability to access content of their choosing, while another regime (for mobile wireless) does not. At the end of the day, users are not deciding to access a “wired platform” and then a “wireless platform” – they are simply deciding to access the Internet, and their access to content should not depend on how they happen to connect at any given moment. Given the undisputed importance and growth of wireless Internet access, the value created by keeping all Internet access open and neutral, and user expectations of a single Internet, it is imperative that the Commission protect the entire Internet, not just the wireline portion. The best way to do this is to extend the open Internet principles to wireless providers and protect the Internet, not the network.

We trust the FCC will consider these comments, and the many others like them, in reaching its final decision.  You can submit your own comments here.

Related Links:

Search FCC for other comments

Open Internet Coalition Comments

CDT Comments

Updating the MPL

On Monday we announced a public process to update the Mozilla Public License. The goal of the update is to incorporate learnings gathered over the years so we can simplify, modernize, and make the license easier to use. Mitchell Baker’s post this morning provides some good historical context, and you can find more information about the process, rationale, and how to get involved on the MPL update web site. I’m pretty excited about the prospects, although it will be a big chunk of work, and as with any open process there will be some respectful disagreement from time to time, but that’s ok.

More than a decade ago, I had the chance to work with Mitchell on the MPL. At the time, I had never worked on an open source license – nor had most attorneys back then. It seemed like another cool project to work on, but I certainly didn’t fully comprehend the possibility at the time. It was also my first exposure to creating legal artifacts in an open and transparent way. It was a bit of a shock, and I’m still in awe at how open source products are created.

In my experience practicing law, transactions come and go, and not often do you work on the same “deal” again. Especially in the Internet sector, it’s rare that you get two shots at anything. So this means either that the license is enduring, relevant, and worth working on again, or perhaps more simply that I’m getting old. I opt for the former.
