A: Because computers are not capable of truly understanding patents. Period.
Joff Wild over at the IAM Blog explained M-CAM’s prediction of $290 M as the “absolute ceiling price” of the AOL patent sale quite succinctly (and politely) by noting that the computer analytics firm “seems to have got it horribly wrong.” Meanwhile, the prediction by MDB Capital’s Christopher Marlett of a $1 B patent sale turned out to be nearly dead-on. But you’ll have to forgive my ultra-polite British friend, Joff, because M-CAM is an American company, and this blog primarily caters to an American audience (although anyone across the pond, or anywhere else, is always welcome). Joff handled his comments with the decorum one would expect from a mild-mannered Brit, but this call and its fallout really call for something more. So, ahem, without further ado:
M-CAM, YOU BLEW IT! Got that? You just effing blew it, and that’s that. Ok? You’ve been annoying the media and countless patent attorneys with your worthless, flawed reports for months now. You’ve tried and tried to get folks in our business to shell out serious money and/or make calls based on your so-called analytics platform, and aside from a few mentions in the press, it probably really hasn’t panned out. Well now you’ve gone and made a bold, decisive prediction based on your own software and it blew up in your face! Congratulations! For your sake, I hope the damage is limited to a very public gaffe. It would be positively awful if, say, Google followed your advice about the “absolute ceiling price” of $290 M and tucked that extra billion dollars back into their couch cushions.
Alright, now that that’s out of the way, a few readers are probably still asking, “Who the hell is M-CAM?”
Although their website suggests expertise and products in other areas, M-CAM’s marketing efforts over the past year and a half or so seem to have primarily focused on selling their patent analytics software, or the fruits thereof. A little research suggests the platform may be ten years old or more, and it was previously sold as a replacement for manual prior art searching. Of course, the “John Henry” of prior art searching claims that he (or she) “smashed it to pieces” in a head-to-head test using only a “necktop computer.”
M-CAM’s apparent marketing strategy since about 2010 has been to release reports “analyzing” patent portfolios in the news. Since at least early 2011, the company has generated massive lists of supposed prior art (or “precedent innovation,” as they sometimes call it) for patents in litigation and offered to sell the list for the “cost effective” price of $1,000,000. Seriously. Ask just about any patent attorney who was involved in a multiple-defendant patent lawsuit in 2011, and you won’t get very far before someone confirms this modus operandi.
So how does M-CAM’s process work? Well, that is still something of a mystery. Regarding the AOL analysis, IP lawyer Janal Kalis explained in a blog post that M-CAM considered “the details of patents within the AOL portfolio and made its determination based upon the quality of patents within the portfolio” to reach the conclusion that “71% of AOL’s US patents have ‘potential commercial impairment.'” However, in a later discussion she too acknowledged that the company lacks transparency. Essentially, only some high-level processes are clear: a parsing/breakdown of PAIR data, citation mapping (which is fairly common these days), and semantic mapping. Beyond that, and M-CAM’s predictably inevitable conclusion that the subject of the report is “junk,” M-CAM offers very little public information.
Of course, it would be absurd to expect M-CAM to publicly detail the operation of its software algorithms, but even a little transparency would help. Take, for example, the “grading” system, which categorizes patents, in descending order of value, as “Most Commercial,” “Commercial,” “Pool,” and “Transfer.” M-CAM provides only generic descriptions of these categories: Commercial and Most Commercial patents are those for which “players in the innovation ecosystem under review have indicated considerable interest,” while Transfer patents have “serious prosecution impairments.” More detail on how well a patent must perform in various statistical categories in order to rank as “Commercial” would go a long way toward building credibility.
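To be clear about what that kind of disclosure might look like: the sketch below is purely hypothetical. M-CAM has never published a rubric, so the metrics and thresholds here are invented solely to illustrate the sort of concrete criteria that would let outsiders check the grading.

```python
# Hypothetical illustration only -- M-CAM has never published such a rubric.
# Every metric and threshold below is invented for the sake of example.

def grade_patent(forward_citations, similar_active_claims, office_actions):
    """Toy grading function mimicking a four-tier ranking system."""
    if forward_citations >= 20 and similar_active_claims >= 5:
        return "Most Commercial"
    if forward_citations >= 10:
        return "Commercial"
    if office_actions <= 5:
        return "Pool"
    return "Transfer"  # i.e., "serious prosecution impairments"

print(grade_patent(forward_citations=25, similar_active_claims=6, office_actions=3))
# -> Most Commercial
```

A published rubric of even this crude form would let readers test a report's conclusions against the underlying data; M-CAM offers nothing of the kind.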
On the other hand, M-CAM has left a few tell-tale clues on how its analytics process works, and an explanation of that process exposes its flaws. First, M-CAM’s semantic mapping process manifests in the side-by-side claim comparisons of subject patents to selected prior art patents (see below). The blue text indicates similarities in claim language. From these many examples, M-CAM demonstrates the ability to identify patents based on claim language similarity. From there, it’s a simple matter of filtering by priority dates to identify potential prior art.
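The mechanics just described can be sketched in a few lines. This is a minimal illustration assuming the simplest possible similarity measure (shared claim words); M-CAM’s actual semantic mapping is unpublished, and the claims and dates below are made up.

```python
# Minimal sketch: flag candidate patents with similar claim language AND an
# earlier priority date. The similarity measure here (word overlap) is a
# stand-in for whatever semantic mapping M-CAM actually uses.
from datetime import date

def claim_overlap(claim_a, claim_b):
    """Fraction of claim_a's words that also appear in claim_b."""
    words_a = set(claim_a.lower().split())
    words_b = set(claim_b.lower().split())
    return len(words_a & words_b) / len(words_a)

def potential_prior_art(subject, candidates, threshold=0.5):
    """Keep candidates with similar claims and earlier priority dates."""
    return [c for c in candidates
            if claim_overlap(subject["claim"], c["claim"]) >= threshold
            and c["priority"] < subject["priority"]]

subject = {"claim": "a method of displaying advertising content to a user",
           "priority": date(2005, 1, 1)}
candidates = [
    {"id": "A", "claim": "a method of displaying advertising content on a billboard",
     "priority": date(1999, 6, 1)},   # similar and earlier -> flagged
    {"id": "B", "claim": "a method of displaying advertising content to a user",
     "priority": date(2010, 3, 1)},   # similar but later -> excluded
]
print([c["id"] for c in potential_prior_art(subject, candidates)])
# -> ['A']
```

Notice what the sketch cannot do: it flags overlap, but says nothing about whether the differences between the claims are the very differences that make the later invention patentable.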
Of course, while M-CAM’s computerized process might be able to point out similarities, a human analyst can easily spot the differences that make subsequent inventions patentable, despite the existence of previous ones. Unsurprisingly, one will not find this level of analysis in M-CAM’s reports.
M-CAM also uses this capability to generate massive lists of “precedent innovation” that supposedly serve to invalidate a particular patent. However, even bloggers eager to bust asserted patents downplayed the significance of M-CAM’s prior art identification capabilities. But how does it work? Again, M-CAM chooses not to explain, but it has left clues in the way it markets the service. In a solicitation sent to numerous patent attorneys (and obtained by Gametime IP), M-CAM attempted to sell a report listing more than 1,000 alleged prior art references, pumping up the report’s relevance by claiming that “Only those prior art references that the patent office has confirmed to be relevant against similar claims are included in our report.”
Knowing already that M-CAM can identify patents based on claim similarity, the pieces of this process are obvious. M-CAM gathers a data-set of patents with “similar” claims (with an unspecified tolerance as to similarity), and then generates a list of references cited by the data-set patents as prior art. By filtering the list to remove patents already cited in the subject patent and patents that do not qualify as prior art due to priority dates, M-CAM’s list of “un-cited prior art” is complete. Of course, by loosening the tolerance on claim similarity, it would not take long to crank out a list of a thousand or more patents. While there may very well be sound arguments stemming from one or more of the patents on M-CAM’s list, finding them is no small task. Meanwhile, M-CAM asked for $1,000,000 for the list alone!
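The inferred pipeline is mechanical enough to sketch with toy data. Real inputs would come from bulk USPTO citation records; the patent numbers and dates below are invented.

```python
# Sketch of the inferred "un-cited prior art" pipeline, with made-up data:
# pool the references cited by patents whose claims scored "similar," then
# subtract what the subject patent already cites and anything too recent.
from datetime import date

# references cited by patents whose claims scored "similar" to the subject
similar_patents_citations = {
    "US7,000,001": ["US5,000,001", "US5,000,002"],
    "US7,000,002": ["US5,000,002", "US6,500,000"],
}
subject_citations = {"US5,000,001"}          # already on the subject's face
priority_dates = {
    "US5,000,001": date(1995, 1, 1),
    "US5,000,002": date(1996, 4, 2),
    "US6,500,000": date(2007, 9, 9),         # too late to qualify
}
subject_priority = date(2001, 1, 1)

# union of everything the similar patents cite
pool = set().union(*similar_patents_citations.values())
# drop references already cited and references that post-date the subject
uncited_prior_art = {p for p in pool
                     if p not in subject_citations
                     and priority_dates[p] < subject_priority}
print(sorted(uncited_prior_art))
# -> ['US5,000,002']
```

Nothing in this pipeline reads a single claim against a single reference; it only shuffles citation lists, which is exactly why the output balloons into the thousands as the similarity tolerance loosens.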
M-CAM’s supposed patent prosecution analytics tools seem to be of equally dubious value. The AOL report claims that 79% of the portfolio shows potential commercial impairment, and that 17% of the portfolio is classified as exhibiting “serious prosecution impairments.” M-CAM then identifies one specific patent, US Patent 8,001,190, as an example of patents that, according to the company, “appear to offer little or no effective licensing opportunities due to their impairments”. According to M-CAM, the patent claims were amended 12 times during prosecution and faced 11 total rejections before the Examiner allowed the patent to issue. Of course, this is the only prosecution evidence M-CAM chooses to provide, even at a summary level, and it could as easily have been found through a USPTO Public PAIR search as through some sophisticated algorithm.
What’s more, the 12 claim amendments can hardly be taken as evidence of a worthless patent. They could be a sign of a patent owner fighting tooth-and-nail to obtain the broadest coverage allowable, rather than cutting prosecution short at, say, 3 amendments and taking a narrower claim just to stop the bleeding. Further, comparing the original claim 1 to the allowed claim 1, which M-CAM helpfully provided in the report, it appears that the claim may have ultimately become broader during those 12 amendments. The originally filed claim takes up more than 2 pages of M-CAM’s report, checking in at a staggering 689 words. In contrast, the final issued claim takes up less than a page, running a mere 380 words. While word count is far from definitive, the evidence taken at face value suggests a scenario where the patent owner refused to accept an overly narrow claim. Meanwhile, M-CAM aptly demonstrated a complete inability to analyze the evidence included in its own report!
Another example of M-CAM’s staggering incompetence was demonstrated by an anonymous patent attorney in response to the company’s more recent report on the 10 Facebook patents asserted against Yahoo. In this report, M-CAM lampoons an Examiner for allowing a fairly lengthy claim based on the inclusion of only a handful of additional words. (Specifically, the amended claim specified that image content items were “associated with the selected user profile“, but was otherwise identical to the original claim.)
Of course, the report was enough to convince Techdirt to make a call about the quality of the Facebook patents. Proudly stamping its foot on the M-CAM report, and then promptly inserting that foot into its mouth, Techdirt asked “Any patent lawyers want to defend this kind of ridiculousness?” Of course, one such patent attorney did, calmly explaining that the Examiner already believed the subject matter of the claim, as a whole, was patentable, not merely the addition of a few words. A few clicks through the file history, and the patent attorney fairly quickly noticed that the claim amendments related to “administrative stuff – mostly form, not substance related to prior art.” Once again, if M-CAM cared at all about accurate analysis, it would have asked an actual human being to read the file history and determine the appropriateness of their tasteless jab at a hard-working, underpaid patent examiner.
What value M-CAM’s software holds is unclear, but their lack of commitment to actually understanding patents, instead of just relying on computers to crunch numbers, is crystal clear. I don’t know if MDB Capital used analysts to study all claims of the 800 AOL patents, but what I do know is that their estimate could not have been driven solely by computer models and data crunching. Instead, MDB flexed some actual gray-matter muscle and considered both the broader context surrounding patent value in general and the specific context of the likely bidders.
How do I know that? Because, unlike M-CAM, MDB Capital is a serious company operating in a serious business and run by serious people. MDB invests in IP on a daily basis, so they absolutely require accurate, topical analytics that facilitate actual decision-making, with no time to waste on an overpriced, garbage-data dump.
But I do have to pick on Joff one last time, as those silly Brits are often polite, but still cheeky… Joff notes:
I make that MDB Capital 1 – 0 M-Cam!
MDB has gotten a lot more than 1 right… M-CAM, on the other hand…
UPDATE: In speaking with sources about MDB’s prediction, it seems that they too may be getting more credit than deserved for their accurate valuation. MDB, like other analysts in the IP space, enjoys close relationships with the major players in IP acquisitions. With intelligence gathered from the potential AOL bidders, it probably wasn’t rocket science for MDB to figure out where the winning bid was likely to end up.