Using data, rather than collecting it, was on several attendees' minds at the Biotechnology Industry Organization conference in Philadelphia, but ideas about the best way forward may be headed for a clash.
Scientists today can generate data in a single afternoon that previously would have required years of work and been considered worthy of a doctoral thesis. But no one seems to be using this data well, largely because neither intellectual property law, information systems, nor scientific training allows researchers to wrest valuable secrets from these masses of data.
“We can generate huge amounts of data, far more than we can interpret,” said Duncan McHale, director of clinical pharmacogenomics at Pfizer Global R&D, who spoke at a panel on industry-academia relationships.
But McHale said companies hold onto the data as proprietary rather than share it in a way that might make trials faster and spare companies years of repeating one another's mistakes. The drug industry is reinventing a lot of wheels, and some expensive ones at that.
The industry needs to find a way to make data “precompetitive,” he said. Drug executives must be trained to find ways to share data so that any common gain that comes from data mining is not seen as a private loss.
McHale said drug executives worry that they could end up paying patent-license fees on the very data they generate, if data they put into a common repository is used by other companies to develop predictive tests or novel tools.
The fear grows out of patenting activity in the early days of genetic sequencing, when researchers would claim intellectual property on short stretches of genetic sequence.
The technology of the time required sequencing genes in pieces, and researchers would file patents on those pieces without a clear idea of what the gene did or how knowledge of its sequence could be used to make drugs. Even today, researchers are not always certain when they are infringing on another's patent.