
Taking the Pulse of Patents

Measuring the quality of inventions would help rid the intellectual property system of its institutional sclerosis, says an IBM attorney




Like millions of Americans, I suffer from a common, and thankfully mild, heart rhythm problem. Fortunately, it is now possible to diagnose and treat this problem with a high degree of precision and effectiveness. It is easy to imagine how frightening this condition must have been before we had modern medical facilities for monitoring heart rate and for addressing anomalies. How can you effectively treat a condition without the means to understand the nature of the problem or the impact of your treatment? You clearly cannot and it would be folly, if not downright dangerous, to undertake the remediation of a condition that you could not adequately measure.

While the foregoing may seem self-evident, its misunderstanding underlies the woes afflicting the U.S. patent system. It is well known that our system is not functioning optimally and that poor patent quality is at the heart of its malady. Inventors, patent practitioners, the United States Patent and Trademark Office (USPTO), and the courts each shoulder some responsibility for this chronic condition.

A principal cause of the patent-quality malaise is an overburdened U.S. patent system. Approximately 1.2 million U.S. patent applications were pending at the end of 2008, compared with approximately 275,000 at the end of 1997; the number of pending applications more than quadrupled over those 11 years. In addition to the sheer volume of applications clogging the patent office, the diversity and complexity of inventions has increased as science has expanded into new areas such as nanotechnology and genetic engineering. Many of these innovations are cross-disciplinary, requiring patent examiners to have expertise in multiple technical areas. Thus the patent office struggles both to address the growing backlog of applications and to hire adequately trained examiners.

Compounding the challenges of backlog and staffing, and notwithstanding recent improvements, the standards for drafting and examining patents are not as clear or consistent as they need to be. The Supreme Court’s recent decision in KSR Int'l Co. v. Teleflex Inc. promises to make it easier for examiners to find inventions unpatentable as obvious in view of a combination of prior publications. The non-obviousness requirement for patentability addresses the reality that often no single prior published document describes how to build the invention; rather, the invention can be achieved by drawing ideas from several documents. The standard is that an invention combining ideas from prior published material (“prior art” in patent-speak) must not be obvious to a person having ordinary skill in that field. In the KSR case, for example, the Court found that an invention combining an adjustable gas pedal with an electronic sensor that controls the throttle was obvious to those skilled in automotive design, in view of prior patents describing the function of both devices.

The KSR decision will help examiners ensure that patents issue only for inventions that are nonobvious, but the law does not give examiners and applicants sufficient guidance or procedural tools to ensure that the patent documents themselves adequately teach the public how to implement the patented invention. Patents that fail to do so do not properly establish the scope of protection. The result is that too many inadequately prepared or examined patents are being granted, discouraging would-be innovators from investing in innovation for fear of infringement. The National Academy of Sciences recognized the symptoms of declining U.S. patent quality in a 2004 report on the patent system, noting that there are reasons to think the USPTO is issuing more and more substandard patents.

Although "home remedies" to address the patent system’s ailments abound from all interested quarters, we appear to have lost sight of the simple notion that you cannot hope to fix a problem until you can quantify the problem you want to fix. If we are serious about curing this disease then one of the first steps needs to be the creation of a metric to diagnose the ailment and monitor the impact of our course of treatment.

Why is the need for such a measuring stick so hard to recognize when it comes to fixing the patent system? Perhaps it is because the symptoms of poor patent quality are not as readily apparent as they are in other pursuits. It's evident that a poor-quality motor will cause your car to malfunction, a poor piece of financial advice will lead to a lighter wallet and low-quality health care can lead to immediate and devastating consequences. Yet the improper issuance of a dubious patent seems more remote. There are few who would dispute that a problem exists, but it is one that appears to lack immediate, personal consequences.

It is foolish to assume we are not affected by poor-quality patents. Left untreated, the economic prognosis for poor patent quality includes needless litigation, increased costs for producers and consumers, and barriers to new innovations. According to one report, poor patent quality costs the U.S. more than $21 billion per year, or more than 7 percent of U.S. R&D expenditures. And the financial caretakers now trying to cure the economy of the critical illness brought on by the sub-prime mortgage fiasco would undoubtedly warn us that in a global economy we are intimately connected to economic decisions everywhere: even decisions that at first appear remote, like improvidently granted patents or mortgages, eventually take their toll on us all in very real ways.

I do not mean to suggest that the issue of patent quality has been ignored. In fact, the issue receives much attention, and many worthwhile projects and academic studies have focused on the need to improve patent quality. One such endeavor is the Peer to Patent project, led by Professor Beth Noveck of New York Law School in cooperation with the USPTO and sponsored by many companies, including my employer. The project enables members of the public with relevant technical expertise to assist patent examiners in reviewing patent applications to ensure that the best prior art is considered as part of the examination. Moreover, there are a great number of companies that offer products and services to assist in the valuation and better management of patent portfolios. For instance, Ocean Tomo, a company that manages patent auctions, utilizes sophisticated proprietary valuation algorithms to provide bidders with an indication of the likely value of a patent. These types of tools and algorithms, however, generally focus on the economic value of a patent as opposed to its intrinsic legal quality.

The legal quality of a patent—that is, how well it complies with the statutory requirements for patentability—does not necessarily translate to its economic value. It is possible to craft a patent of exceedingly high legal quality for an invention with little practical use. For example, a well-prepared and thoroughly examined patent on an improved 8-track tape player would have high legal quality but low economic value. Unfortunately, owing to the uncertainty of patent litigation and the current inability to reliably assess the legal quality of patents, there have also been patents of dubious legal quality that have nonetheless provided significant economic returns to their owners.

Into this breach comes the Patent Quality Index (PQI) project, sponsored by my employer, IBM, and led by Professor Ron Mann of Columbia Law School and Professor Toshiya Watanabe of the University of Tokyo. The project seeks to identify characteristics of specific elements of a patent, and of its related documents, that correlate with high or low quality, as measured by whether the patents have been upheld or invalidated in litigation. In so doing, the project aims to create a usable metric of objective legal quality, both for individual patents and, in the aggregate, for the patent system as a whole.

Consider, for example, a patent with a claim (the part of a patent document defining what is protected) that includes a term with more than one meaning. The term “index,” say, means one thing in the context of a database and another in the context of Web page design. If the patent fails to define that term in the body of the document, the claim is ambiguous and the patent’s legal quality is lower than if the term had been precisely defined. As another example, a patent record indicating that an examiner considered a broad set of prior publications suggests a more thorough examination, and hence higher quality, than one indicating that few if any such documents were considered. The PQI project seeks to link factors like these, drawn from the record of the patent’s approval, with data on whether the patents were upheld in the courts, and then to create an index, or a set of indices, reflecting the legal quality of a patent based on the presence or absence of these attributes.
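To make the idea concrete, here is a minimal sketch, in Python, of how observable attributes like these might be rolled up into a single score. The attribute names (undefined_claim_terms, prior_art_citations, office_actions), the weights, and the scoring formula are my own illustrative assumptions, not the PQI project's actual factors or methodology; in the real project the factors and their weights would have to be derived empirically from litigation outcomes.

```python
# Illustrative sketch only: a toy "patent quality index" computed from
# hypothetical per-patent attributes. The attribute names, weights, and
# formula are assumptions for illustration, not the PQI project's actual
# factors or methodology.

from dataclasses import dataclass
from math import exp


@dataclass
class PatentRecord:
    undefined_claim_terms: int  # claim terms used without a definition in the specification
    prior_art_citations: int    # prior publications recorded as considered during examination
    office_actions: int         # rounds of examiner review before grant


# Hypothetical weights; in a real index these would be fitted against
# litigation outcomes (patents later held valid versus invalid in court).
WEIGHTS = {
    "undefined_claim_terms": -2.0,  # ambiguity lowers legal quality
    "prior_art_citations": 0.3,     # a broader search suggests a more thorough examination
    "office_actions": 0.5,          # more back-and-forth suggests more scrutiny before grant
}


def quality_index(p: PatentRecord) -> float:
    """Return a score between 0 and 1; higher means higher presumed legal quality."""
    raw = (WEIGHTS["undefined_claim_terms"] * p.undefined_claim_terms
           + WEIGHTS["prior_art_citations"] * p.prior_art_citations
           + WEIGHTS["office_actions"] * p.office_actions)
    # Squash the raw score into (0, 1) so indices are comparable across patents.
    return 1.0 / (1.0 + exp(-raw))


well_examined = PatentRecord(undefined_claim_terms=0, prior_art_citations=12, office_actions=3)
poorly_examined = PatentRecord(undefined_claim_terms=4, prior_art_citations=1, office_actions=0)
print(f"well examined:   {quality_index(well_examined):.2f}")    # ~0.99
print(f"poorly examined: {quality_index(poorly_examined):.2f}")  # ~0.00
```

The point of the sketch is only the shape of the calculation: measurable features of the patent record go in, and a comparable quality score comes out.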

These indices will provide a metric for diagnosing our patent-quality problem. An index that focuses solely on the attributes of an initially filed patent application (one that has yet to be reviewed by an examiner) would allow the patent office to sort incoming applications on the basis of the quality of the submission. That would permit a more effective allocation of limited examiner resources, giving examiners more time to substantively review higher-quality submissions on the assumption that lower-quality applications could be more easily rejected and returned to applicants for revision. Another index that focuses solely on the characteristics of the examination itself (whether an application has been reviewed properly) could provide a meaningful tool for training new examiners on best practices and a mechanism to capture, review, and address quality problems as they occur. Finally, an index that takes into account the overall quality of a granted patent would reduce uncertainty in the marketplace for innovations by making it easier to assess the validity of patented inventions, thus facilitating the licensing and sale of patents and discouraging speculative patent litigation.
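As a sketch of the triage idea, the same hypothetical score could be used to rank newly filed applications before an examiner picks them up. This snippet reuses the PatentRecord and quality_index definitions from the sketch above; the sample applications and the revision threshold are invented for illustration and do not reflect actual USPTO practice.

```python
# Illustrative triage sketch, reusing the hypothetical PatentRecord and
# quality_index() from the previous example. The threshold and the sample
# applications are invented; this is not an actual USPTO workflow.
# For a newly filed application, prior_art_citations here stands for the
# references the applicant cites at filing, and office_actions is still zero.

incoming = {
    "App-001": PatentRecord(undefined_claim_terms=0, prior_art_citations=9, office_actions=0),
    "App-002": PatentRecord(undefined_claim_terms=5, prior_art_citations=0, office_actions=0),
    "App-003": PatentRecord(undefined_claim_terms=1, prior_art_citations=4, office_actions=0),
}

REVISION_THRESHOLD = 0.5  # assumed cutoff for "return to applicant", not a real policy

# Rank incoming applications so examiners spend substantive review time on the
# best-prepared submissions; low scorers are flagged for return to the applicant.
ranked = sorted(incoming.items(), key=lambda item: quality_index(item[1]), reverse=True)
for app_id, record in ranked:
    score = quality_index(record)
    action = "substantive review" if score >= REVISION_THRESHOLD else "return for revision"
    print(f"{app_id}: index {score:.2f} -> {action}")
```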

The PQI project is an ambitious undertaking that no one company or organization has the wherewithal to get right on its own. Unlike the well-known use of statistical controls on a manufacturing line to improve product quality, this assembly line—one that produces patents—is not owned by a single manufacturer, but by many private inventors and by the USPTO. It does not produce a single product, but rather its products are thousands of unique patented inventions in hundreds of different technical areas each year. So the effort to rehabilitate the patent system requires broad participation from all communities with an interest in ensuring that our system properly promotes innovation. And even then, it will take time to evolve the metric into one that is substantially complete. In the meantime, however, it is important to build a patent-quality metric as quickly and as best we can—instead of leaving the patient undiagnosed. The health of our innovation-dependent economy and the vitality of our country's competitiveness hang in the balance.

As we have been reminded recently, particularly in matters affecting our economy, an ounce of prevention is worth a pound of cure.
