News Release

Scripps Florida scientists develop a more effective molecular modeling process

Peer-Reviewed Publication

Scripps Research Institute

Image: Kendall Nettles, Ph.D., is an associate professor at The Scripps Research Institute, Florida campus. (Credit: Photo courtesy of The Scripps Research Institute.)

JUPITER, FL, September 26, 2013 – It's difficult and time-consuming to produce accurate computer models of molecules, primarily because traditional modeling methods are limited in their ability to handle alternative molecular shapes and, consequently, are subject to multiple errors.

Moreover, the traditional approach uses mathematical formulas or algorithms that are run sequentially, refining the structural details of the model with each separate algorithm—a method that has been revolutionized by personal computing, but still requires labor-intensive human intervention for error correction.

A new method developed by scientists on the Florida campus of The Scripps Research Institute (TSRI) takes another tack entirely, combining existing formulas in a kind of algorithmic stew to gain a better picture of molecular structural diversity that is then used to eliminate errors and improve the final model.

The method was described in a paper published online ahead of print on September 26, 2013 by the journal Structure.

The new process, called Extensive Combinatorial Refinement (ExCoR), could help improve the development of drug candidates that depend to a great degree on detailed structural analysis to determine how they work against specific disease targets.

"Our combinatorial method creates computerized molecular models in a more automated way," said Kendall Nettles, a TSRI associate professor who led the study. "This is an important component of drug discovery—to do them in a more automated fashion will significantly help the process."

Improvement and Some Surprises

In the study, the scientists subjected more than 50 molecular structures to 256 distinct combinations of algorithms and refinement factors that eventually totaled more than 12,000 independent refinement runs.

Nettles and his colleagues measured the improvement in the models by what is known as the R-factor, which gauges the agreement between the experimentally observed diffraction data and the data calculated from the refined model; in other words, how closely the refined structure can predict the actual measurements.
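The release does not spell out the metric, but the conventional crystallographic R-factor compares the structure-factor amplitudes observed in the diffraction experiment, F_obs, with those calculated from the model, F_calc. This is the standard textbook definition, not a formula quoted from the Structure paper:

R = \frac{\sum_{hkl} \big| \, |F_{\mathrm{obs}}(hkl)| - |F_{\mathrm{calc}}(hkl)| \, \big|}{\sum_{hkl} |F_{\mathrm{obs}}(hkl)|}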

"Lowering that R-factor is the goal—that's the selection process for finding the best algorithms," Nettles said.

While the study found that no single algorithm consistently produced the best model, the scientists did find some surprises.

"Some algorithms, if you combine them, tend to work better at producing a refined model," said Research Associate Jerome C. Nwachukwu, the first author of the study. "What we didn't expect was two algorithms that worked separately but didn't work in combination."

It is this strange interplay that makes it impossible to predict which combinations of algorithms will work best for a given structure.

"The refinement effects of the various algorithms depend on the structure itself," Nwachukwu said.

###

In addition to Nettles and Nwachukwu, authors of the study, "Improved Crystallographic Structures using Extensive Combinatorial Refinement," which will appear in the November 5, 2013 print issue of Structure, included Mark R. Southern of TSRI; James R. Kiefer of Genentech, Inc.; Pavel V. Afonine of Lawrence Berkeley National Laboratory; Paul D. Adams of the University of California, Berkeley; and Thomas C. Terwilliger of Los Alamos National Laboratory.

The study was supported by the National Institutes of Health (Grant numbers CA132022, DK077085, 5U01GM102148 and GM063210) and by the US Department of Energy (Contract No. DE-AC02-05CH11231).
