Adaptive Learning Speeds New Drug-Screening Software

By M.L. Baker  |  Posted 2004-04-15
Updated: Software created by researchers at Rensselaer Polytechnic Institute uses a pattern-recognition process called kernel learning to assess molecules' properties more quickly.

Researchers at Rensselaer Polytechnic Institute this month added a program that uses adaptive learning to the roster of software available for assessing molecules' properties. While pharmaceutical companies already have software that searches databases to screen for drugs for a given therapy, the new software works much faster, using neural networks and adaptive-learning methods to model compounds and predict their behavior. Drug-discovery companies all employ computational tools to aid in finding leads for drug development, but scientists at Rensselaer in Troy, N.Y., say the move into predictive modeling marks a shift away from laboratory assays toward mathematical, computer-run models.
Laboratories with the most advanced high-throughput techniques can test a few hundred thousand molecules a day; existing computer programs can process just under a million.
But the Rensselaer software can crunch more than 10 million molecules a day, according to High Performance Computing. The software looks for similarities between molecules in a given database and those with known therapeutic potential. The chief advantage is the amount and type of chemical information the method makes available; for a method that produces this much chemical information, it is remarkably fast.

The software comes from a National Science Foundation-funded project called Drug Discovery and Semi-Supervised Learning (DDASSL, pronounced "dazzle"). Curt Breneman, a chemistry professor; Kristin Bennett, an associate professor of mathematics; senior research associate N. Sukumar; and Mark Embrechts, an associate professor in decision sciences and engineering systems, worked together to develop the software.

Computer testing is less expensive and faster than testing actual molecules, and allows workers to pare down the number of tests that need to be performed. "That approach helps to focus more attention on molecules with the highest probability of success, and also allows dead-ends to be identified before many resources are expended on them," Breneman said. "The ultimate pay-off of this methodology may be that it can help to speed up the development of new drugs."
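The workflow described here, condensing each molecule into a fixed-length vector of numbers and then using a kernel method to score unseen molecules against a small labeled set, can be sketched in a few lines of Python. The sketch below is illustrative only: the descriptor values are random stand-ins, and it uses kernel ridge regression with a Gaussian kernel as a simple proxy for the team's actual machinery.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # Gaussian (RBF) kernel matrix between two sets of descriptor vectors.
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def fit_kernel_ridge(X, y, lam=1e-3):
    # Kernel ridge regression: solve (K + lam*I) alpha = y.
    K = rbf_kernel(X, X)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def activity_scores(X_train, alpha, X_new):
    # Predicted activity for new molecules: kernel similarity to the
    # training molecules, weighted by the learned coefficients.
    return rbf_kernel(X_new, X_train) @ alpha

rng = np.random.default_rng(0)
# Toy descriptor vectors (5 invented features per molecule):
# 20 inactives (label -1) clustered near 0, 20 actives (+1) near 2.
X = np.vstack([rng.normal(0.0, 0.5, (20, 5)),
               rng.normal(2.0, 0.5, (20, 5))])
y = np.array([-1.0] * 20 + [1.0] * 20)
alpha = fit_kernel_ridge(X, y)

# "Screen" a database of 200 unseen molecules and keep the ten
# highest-scoring candidates for follow-up.
database = rng.normal(1.0, 1.2, (200, 5))
scores = activity_scores(X, alpha, database)
promising = np.argsort(scores)[::-1][:10]
print(promising)
```

In practice the descriptor vectors would encode molecular shape and surface electrostatics rather than random numbers, and the ranked candidate list would feed into laboratory assays.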
Though several software programs already exist to assess compounds in silico, they can be slow, not particularly predictive, or both. The Rensselaer software uses two shortcuts to search large molecular databases rapidly. First, the software renders a description of both a molecule's shape and the electrical properties on its surface as a set of numbers, which a computer can process rapidly. Then, the software searches for common chemical properties associated with molecules for a particular therapy. It does not use the method of so-called docking software, which looks at the interaction of a molecule with a particular protein. Instead, it uses a pattern-recognition process called kernel learning: the software is presented with a small set of molecules with the right features, which are analyzed as described above, and then churns through a molecular database looking for promising compounds.

"Conventional techniques are not truly predictive and don't work," Bennett said. "So, we borrowed pattern-recognition techniques already used in the pharmaceutical industry and added algorithms based on support vector machines. That gives us a technique to predict which molecules are promising." Projects are under way to further evaluate how predictive the new software is.

Pattern-recognition techniques are rapidly becoming more sophisticated and more capable of using data from laboratory experiments. In unrelated work, researchers at the Harbor-UCLA Medical Center used computational methods and proteomics to find a structure that is common to otherwise diverse and distinct antimicrobial peptides. In a recent review in Science magazine, Yale University chemistry professor William Jorgensen stressed that no single computer program will be sufficient to find drug candidates and that some of the slower processes yield absolutely crucial information. "There is not going to be a voilà moment at the computer terminal," he wrote.
"Instead, there is systematic use of wide-ranging computational tools to facilitate and enhance the drug-discovery process."

Editor's Note: This story was updated to include additional information and comments from a discussion with Curt Breneman.
Monya Baker is co-editor of CIOInsight.com's Health Care Center. She has written for the journal Nature Biotechnology, the Acumen Journal of Sciences and the American Medical Writers Association, among other publications, and has worked as a consultant with biotechnology companies. A former high school science teacher, Baker holds a bachelor's degree in biology from Carleton College and a master's in education from Harvard.