Growth: The 1980s

If the 1960s were the Dark Ages and the 1970s were the Middle Ages, the 1980s were the Renaissance, the Baroque Period, and the Enlightenment all rolled into one. The decade of the 1980s was when the various approaches of quantum chemistry, molecular mechanics, molecular simulations, QSAR, and molecular graphics coalesced into modern computational chemistry.

In the world of scientific publishing, a seminal event occurred in 1980: Professor Allinger launched his Journal of Computational Chemistry. This helped stamp a name on the field. Before the journal began publishing, the field was variously called theoretical chemistry, calculational chemistry, modeling, etc. Interestingly, Allinger first took his proposal to the business managers for publications of the American Chemical Society (ACS). Unfortunately, they rejected the concept. Allinger turned to publisher John Wiley & Sons, which went on to become the premier producer of journals and books in the field. Nearly 25 years passed before the ACS moved to rectify its mistake, and in 2005 it remolded its Journal of Chemical Information and Computer Sciences (JCICS) in an attempt to meet the needs of today's computational chemists. JCICS had become the most popular venue for computational chemists to publish work on combinatorial library designs (see Fig. 1.2 and Section 1.6 on the 1990s).

Several exciting technical advances fostered the improved environment for computer use at pharmaceutical companies in the 1980s. The first was the development of the VAX 11/780 computer by Digital Equipment Corporation (DEC) in 1979. The machine was departmental size; that is, the price, dimensions, and easy care of the machine allowed each department or group to have its own superminicomputer. This was a start toward noncentralized control over computing resources. At Lilly, the small-molecule X-ray crystallographers were the first to gain approval for the purchase of a VAX, around 1980. Fortunately, the computational chemists and a few other scientists were allowed to use it, too. The machine was a delight to use and far better than any of the batch job-oriented IBM mainframes of the past. The VAX could be run interactively. Users communicated with the VAX through interactive graphical terminals. The first terminals were monochrome. The first VAX at Lilly was fine for one or two users but would get bogged down, and response times would slow to a crawl, if more than five users were logged on simultaneously. Lilly soon started building an ever more powerful cluster of VAXes (also called VAXen, in deference to the plural of "ox"). Several other hardware companies that manufactured superminicomputers in the same class as the VAX sprang up. But DEC proved to be a good, relatively long-lasting vendor to deal with, and many pharmaceutical companies acquired VAXes for research. (However, DEC and those other hardware companies no longer exist.)

Journal of Chemical Information and Computer Sciences (now the Journal of Chemical Information and Modeling)

Journal of Medicinal Chemistry

Journal of Combinatorial Chemistry

Current Opinion in Chemical Biology

Journal of Molecular Graphics and Modelling

Molecular Diversity

Bioorganic and Medicinal Chemistry Letters

Combinatorial Chemistry and High Throughput Screening

Journal of Computer-Aided Molecular Design

Drug Discovery Today

Figure 1.2 Journals that have published the most papers on combinatorial library design. Total number of papers published on this subject according to the Chemical Abstracts Service's CAPLUS and MEDLINE databases for all years through 2004 plus three-quarters of 2005.

The pharmaceutical companies certainly noticed the development of the IBM personal computer (PC), but its DOS (disk operating system) made learning to use it difficult. Some scientists bought these machines. The Apple Macintosh appeared on the scene in 1984. With its cute little, lightweight, all-in-one box, including a monochrome screen, the Mac brought interactive computing to a new standard of user friendliness. Soon after becoming aware of these machines, nearly every medicinal chemist wanted one at work. The machines were great at word processing, graphing, and managing small (laboratory) databases. The early floppy disks formatted for the Macs held only 400 KB, but by 1988 double-sided, high-density disks had a capacity of 1400 KB, which seemed plenty in those days. In contrast to today's huge applications requiring a compact disc for storage, a typical program of the 1980s could be stuffed onto one or maybe two floppy disks.

On the software front, three advances changed the minds of the medicinal chemists from being diehard skeptics to almost enthusiastic users. One advance was the development of electronic mail. As the Macs and terminals to the VAX spread to all the chemists in drug discovery and development, the desirability of being connected became obvious. The chemists could communicate with each other and with management and could tap into databases and other computer resources. As electronic traffic increased, research buildings had to be periodically retrofitted with each new generation of cabling to the computers. A side effect of the spread of computer terminals to the desktop of every scientist was that management could cut back on secretarial help for scientists, who thereafter had to do their own keyboarding to write reports and papers.

The second important software advance was ChemDraw, which was released first for the Mac in 1986 [59-62]. This program gave chemists the ability to quickly create two-dimensional chemical diagrams. Every medicinal chemist could appreciate the aesthetics of a neat ChemDraw diagram. The diagrams could be cut and pasted into reports, articles, and patents. The old plastic ring templates for hand drawing chemical diagrams were suddenly unnecessary.

The third software advance also had an aesthetic element. This was the technology of computer graphics or, when 3D structures were displayed on computer screens, molecular graphics. Whereas a medicinal chemist might have trouble understanding the significance of the highest occupied molecular orbital or the octanol-water partition coefficient of a structure, he or she could readily appreciate the stick, ball-and-stick, tube, and space-filling representations of 3D molecular structures [63-65]. The graphics could be shown in color and, on more sophisticated terminals, in stereo. These images were so stunning that one director of drug discovery at Lilly decreed that terms like "theoretical chemistry," "molecular modeling," and "computational chemistry" were out. The whole field was henceforth to be called "molecular graphics" as far as he was concerned. A picture was something he could understand!

Naturally, with the flood of new computer technology came the need to train the research scientists in its use. Whereas ChemDraw running on a Mac was so easy that medicinal chemists could learn to use it after an hour or less of training, the VAX was a little more formidable. One of the authors (DBB) was involved in preparing and teaching VAX classes offered to the medicinal chemists and process chemists at Lilly.

Computer programs that the computational chemists had been running on the arcane IBM mainframes were ported to the VAXes. This step made the programs more accessible because all the chemists were given VAX accounts. So, once the other programs (e.g., e-mail and ChemDraw) had enticed the medicinal chemist to sit down in front of the computer screen, he or she was more likely to experiment with molecular modeling calculations. (As discussed elsewhere [66], the terms computational chemistry and molecular modeling were used more or less interchangeably at pharmaceutical companies.) Besides the classes and workshops, one-on-one training was offered to help the medicinal chemists run the computational chemistry programs. This was generally fruitful but occasionally led to amusing results, such as when one medicinal chemist burst out of his lab to happily announce his discovery that he could obtain a correct-looking 3D structure from MM2 optimization even if he did not bother to attach hydrogens to the carbons. However, he had not bothered to check the bond lengths and bond angles of his molecule.

On a broader front, large and small pharmaceutical companies became aware of the potential for computer-aided drug design. Although pharmaceutical companies were understandably reticent to discuss what compounds they were pursuing, they were quite free in disclosing their computational chemistry infrastructure. For instance, Merck, which had grown its modeling group to be one of the largest in the world, published its system in 1980 [67]. Lilly's infrastructure was described at a national meeting of the American Chemical Society in 1982 [68].

A few years later, a survey was conducted of 48 pharmaceutical and chemical companies that were using computer-aided molecular design methods and were operating in the United States [69]. Between 1975 and 1985, the number of computational chemists employed at these companies increased from less than 30 to about 150, more than doubling every five years. Thus more companies were jumping on the bandwagon, and companies that were already in this area were expanding their efforts. Hiring of computational chemists accelerated through the decade [70]. Aware of the polarization that could exist between theoretical and medicinal chemists, some companies tried to circumvent this problem by hiring organic chemistry Ph.D.s who had spent a year or two doing postdoctoral research in molecular modeling. This trend was so pervasive that by 1985 only about a fifth of the computational chemists working at pharmaceutical companies came from a quantum mechanical background. Students, too, became aware of the fact that if their Ph.D. experience was in quantum chemistry, it would enhance their job prospects if they spent a year or two in some other area such as molecular dynamics simulations of proteins.

The computational chemistry techniques used most frequently were molecular graphics and molecular mechanics. Ab initio programs were in use at 21 of the 48 companies. Over 80% of the companies were using commercially produced software. Two-thirds of the companies were using software sold by Molecular Design Ltd. (MDL). A quarter were using SYBYL from Tripos Associates, and 15% were using the molecular modeling program CHEMGRAF by Chemical Design Ltd.

The following companies had five or more scientists working full-time as computational chemists in 1985: Abbott, DuPont, Lederle (American Cyanamid), Merck, Rohm and Haas, Searle, SmithKline Beecham, and Upjohn. Some of these companies had as many as 12 people working on computer-aided molecular design applications and software development. For the 48 companies, the mean ratio of the number of synthetic chemists to computational chemists was 29:1. This ratio reflects not only what percentage of a company's research effort was computer based, but also the number of synthetic chemists that each computational chemist might serve. Hence, a small ratio indicates more emphasis on computing or a small staff of synthetic chemists. Pharmaceutical companies with low ratios (less than 15:1) included Abbott, Alcon, Allergan, Norwich Eaton (Procter & Gamble), and Searle. The most common organizational arrangement (at 40% of the 48 companies) was for the computational chemists to be integrated in the same department or division as the synthetic chemists. The other companies tried placing their computational chemists in a physical/analytical group, in a computer science group, or in their own unit.

About three-quarters of the 48 companies were using a VAX 11/780, 785, or 730 as their primary computing platform. The IBM 3033, 3083, 4341, etc. were being used for molecular modeling at about a third of the companies. (The percentages add up to more than 100% because larger companies had several types of machines.) The most commonly used graphics terminal was the Evans and Sutherland PS300 (E&S PS300) (40%), followed by the Tektronix, Envision, and Retrographics VT640 at about one-third of the companies each and the IMLAC (25%). The most-used brands of plotter in 1985 were the Hewlett-Packard and Versatec.

As cited above, the most widely used graphics terminal in 1985 was the E&S PS300. This machine was popular because of its very high resolution, color, speed, and stereo capabilities. (It is stunning to think that a company so dominant during one decade could totally disappear from the market a decade later. Such are the foibles of computer technology.) At Lilly, the E&S PS300 was set up in a large lighted room with black curtains enshrouding the cubicle that housed the machine. Lilly scientists were free to use the software running on the machine. The terminal also served as a showcase of Lilly's research prowess, displayed to visiting Lilly sales representatives and dignitaries. No doubt a similar situation occurred at other companies.

The 1980s saw an important change in the way software was handled. In the 1970s, most of the programs used by computational chemists were exchanged essentially freely through QCPE, exchanged person to person, or developed in-house. But in the 1980s, many of the most popular programs— and some less popular ones—were commercialized. The number of software vendors mushroomed. For example, Pople's programs for ab initio calculations were withdrawn from QCPE; marketing rights were turned over to a company he helped found, Gaussian Inc. (Pittsburgh, Pennsylvania). This company also took responsibility for continued development of the software. In the molecular modeling arena, Tripos Associates (St. Louis, Missouri) was dominant by the mid-1980s. Their program SYBYL originally came from academic laboratories at Washington University (St. Louis) [71].

In the arena of chemical structure management, MDL (then in Hayward, California) was dominant. This company, which was founded in 1978 by Prof. Todd Wipke and others, marketed a program called MACCS for management of databases of compounds synthesized at or acquired by pharmaceutical companies. The software allowed substructure searching and later similarity searching [72, 73]. The software was vastly better than the manual systems that pharmaceutical companies had been using for recording compounds on file cards that were stored in filing cabinets. Except for some companies such as Upjohn that had their own home-grown software for management of their corporate compounds, many companies bought MACCS and became dependent on it. As happens in a free market where there is little competition, MACCS was very expensive. Few if any academic groups could afford it. A serious competing software product for compound management did not reach the market until 1987, when Daylight Chemical Information Systems was founded. By then, pharmaceutical companies were so wedded to MACCS that there was great inertia against switching their databases to another platform, even if it was cheaper and better suited for some tasks. In 1982, MDL started selling REACCS, a database management system for chemical reactions. Medicinal chemists liked both MACCS and REACCS. The former could be used to check whether a compound had previously been synthesized at a company and how much material was left in inventory. The latter program could be used to retrieve information about synthetic transformations and reaction conditions that had been published in the literature.

Some other momentous advances occurred on the software front. One was the writing of MOPAC, a semiempirical molecular orbital program, by Dr. James J. P. Stewart, a postdoctoral associate in Prof. Michael Dewar's group at the University of Texas at Austin [74-76]. MOPAC was the first widely used program capable of automatically optimizing the geometry of molecules. This was a huge improvement over prior programs that could only perform calculations on fixed geometries. Formerly, a user would have to vary a bond length or a bond angle in increments, doing a separate calculation for each value, then fit a parabola to the data points and try to guess where the minimum was. Hence MOPAC made the determination of 3D structures much simpler and more efficient. The program could handle molecules large enough to be of pharmaceutical interest. In the days of the VAX, a geometry optimization could run for two or three weeks. An interruption of a run due to a machine shutdown meant rerunning the calculation from the start. For the most part, however, the VAXes were fairly stable.
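
The manual scan-and-fit procedure that MOPAC made obsolete can be sketched in a few lines. The toy energy function and bond-length values below are purely illustrative stand-ins for the separate fixed-geometry calculations a user once had to run by hand; they are not output of any real quantum chemistry code.

```python
def parabola_minimum(xs, ys):
    """Fit y = a*x^2 + b*x + c through three (x, y) points and return the
    x position of the parabola's extremum, -b / (2a)."""
    (x0, x1, x2), (y0, y1, y2) = xs, ys
    denom = (x0 - x1) * (x0 - x2) * (x1 - x2)
    a = (x2 * (y1 - y0) + x1 * (y0 - y2) + x0 * (y2 - y1)) / denom
    b = (x2 ** 2 * (y0 - y1) + x1 ** 2 * (y2 - y0) + x0 ** 2 * (y1 - y2)) / denom
    return -b / (2 * a)

# Toy "energy surface": a harmonic well centered at 1.54 Angstroms, standing
# in for three separate fixed-geometry energy calculations.
def toy_energy(r):
    return 50.0 * (r - 1.54) ** 2

bond_lengths = [1.45, 1.55, 1.65]                 # trial bond lengths (Angstroms)
energies = [toy_energy(r) for r in bond_lengths]
r_min = parabola_minimum(bond_lengths, energies)  # estimated equilibrium length
```

Repeating this by hand for every bond length and angle of interest is exactly the tedium that automatic gradient-based geometry optimization eliminated.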

MOPAC was initially applicable to any molecule parameterized for Dewar's MINDO/3 or MNDO molecular orbital methods (i.e., common elements of the first and second rows of the periodic table). The optimized geometries were not in perfect agreement with experimental numbers but were better than what could have been obtained by prior molecular orbital programs for large molecules (those beyond the scope of ab initio calculations). Stewart made his program available through QCPE in 1984, and it quickly became (and long remained) the most requested program from QCPE's library of several hundred [77]. Unlike commercialized software, programs from QCPE were attractive because they were distributed as source code and cost very little.

In the arena of molecular mechanics, Prof. Allinger's continued, meticulous refinement of an experimentally based force field for organic compounds was welcomed by chemists interested in molecular modeling at pharmaceutical companies. The MM2 force field [78, 79] gave better results than MMI. To fund his research, Allinger sold distribution rights for the program initially to Molecular Design Ltd. (At the time, MDL also marketed some other molecular modeling programs.)

A program of special interest to the pharmaceutical industry was CLOGP. This program was developed by Prof. Al Leo (Pomona College) in the 1980s [80-82]. It was initially marketed through Daylight Chemical Information Systems (then of New Orleans and California). CLOGP could predict the lipophilicity of organic molecules. The algorithm was based on summing the contribution from each fragment (set of atoms) within a structure. The fragment contributions were parameterized to reproduce experimental octanol-water partition coefficients, log Po/w. There was some discussion among scientists about whether octanol was the best organic solvent to mimic biological tissues, but this solvent proved to be the most used. To varying degrees, lipophilicity is related to many molecular properties, including molecular volume and molecular surface area, and to processes such as transport through membranes and binding to receptor surfaces, and hence to many different bioactivities. The calculated log Po/w values were widely used as a descriptor in QSAR studies in both industry and academia.
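
The fragment-additive idea behind CLOGP can be sketched in a few lines. The fragment values below are hypothetical placeholders, not Leo's actual CLOGP parameters, and the real program also applied correction factors for interactions between neighboring fragments; this is only a skeleton of the summation step.

```python
# Hypothetical fragment contributions to log P(octanol/water).
# Illustrative placeholders only, NOT the parameter set used by CLOGP.
FRAGMENT_CONTRIBUTIONS = {
    "CH3": 0.89,
    "CH2": 0.66,
    "OH": -1.12,
    "C6H5": 1.90,
}

def estimate_logp(fragments):
    """Estimate log P by summing the contribution of each fragment."""
    return sum(FRAGMENT_CONTRIBUTIONS[f] for f in fragments)

# A crude decomposition of 1-propanol into CH3 + CH2 + CH2 + OH:
logp_propanol = estimate_logp(["CH3", "CH2", "CH2", "OH"])
```

Because the contributions are fitted to experimental partition coefficients, the scheme's accuracy rests entirely on the quality and coverage of the fragment parameterization.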

Yet another program was Dr. Kurt Enslein's TOPKAT [83, 84]. It was sold through his company, Health Designs (Rochester, New York). The software was based on statistics and was trained to predict the toxicity of a molecule from its structural fragments. Hence compounds with fragments such as nitro or nitroso would score poorly, basically confirming what an experienced medicinal chemist already knew. The toxicological end points included carcinogenicity, mutagenicity, teratogenicity, skin and eye irritation, and so forth. Today, pharmaceutical companies routinely try to predict toxicity, metabolism, bioavailability, and other factors that determine whether a highly potent ligand has what it takes to become a medicine. But back in the 1980s, the science was just beginning to be tackled. The main market for the program was probably government laboratories and regulators. Pharmaceutical laboratories were aware of the existence of the program but were leery of using it much. Companies trying to develop drugs were afraid that if the program, which was of unknown reliability for any specific compound, erroneously predicted danger for a structure, it could kill a project even though a multitude of laboratory experiments might give the compound a clean bill of health. There was also the worry about litigious lawyers. A compound could pass all the difficult hurdles of becoming a pharmaceutical, yet some undesirable, unexpected side effect might show up in some small percentage of patients taking it. If lawyers and lay juries, who frequently have trouble understanding science, the relative merits of different experiments, and the benefit-risk ratio associated with any pharmaceutical product, learned that a computer program had once put up a red flag for the compound, the pharmaceutical company could be alleged to be at fault.

We briefly mention one other commercially produced program. That program was SAS, a comprehensive data management and statistics program. The program was used mainly for handling clinical data that was analyzed by the statisticians at each company. Computational chemists also used SAS and other programs when statistical analyses were needed. SAS also had unique capabilities for graphical presentation of multidimensional numerical data [85] (this was in the days before Spotfire).

With the widespread commercialization of molecular modeling software in the 1980s came both a boon and a bane to the computational chemist and pharmaceutical companies. The boon was that the software vendors sent marketing people to individual companies as well as to scientific meetings. The marketeers would extol the virtues of the programs they were pushing. Great advances in drug discovery were promised if only the vendor's software systems were put in the hands of the scientists. Impressive demonstrations of molecular graphics, overlaying molecules, and so forth convinced company managers and medicinal chemists that here was the key to increasing research productivity. As a result of this marketing, most pharmaceutical companies purchased the software packages. The bane was that computer-aided drug design (CADD) was oversold, thereby setting up unrealistic expectations of what could be achieved by the software. Also, unrealistic expectations were set for what bench chemists could accomplish with the software. Unless the experimentalists devoted a good deal of time to learning the methods and limitations, the software was best left in the hands of computational chemistry experts.

Also in the 1980s, structure-based drug design (SBDD) underwent a similar cycle. Early proponents oversold what could be achieved through SBDD, thereby causing pharmaceutical companies to reconsider their investments when they discovered that SBDD too was no panacea for filling the drug discovery cornucopia with choice molecules for development. Nevertheless, SBDD was an important advance.

All through the 1970s, computational chemists were often rhetorically quizzed by critics about what, if any, pharmaceutical product had ever been designed by computer. Industrial computational chemists had a solid number of scientific accomplishments but were basically on the defensive when challenged with this question. Only a few computer-designed structures had ever been synthesized. Only a very tiny percentage of molecules—from any source—ever makes it as far as being a clinical candidate. The stringent criteria set for pharmaceutical products to be used in humans winnowed out almost all molecules. The odds were not good for any computational chemist achieving the ultimate success, a drug derived with the aid of the computer. In fact, many medicinal chemists would toil diligently their whole career and never have one of their compounds selected as a candidate for clinical development.

Another factor was that only a few drug targets had had their 3D structures solved prior to the advances in protein crystallography of the 1980s. One such early protein was dihydrofolate reductase (DHFR), the structures of which became known in the late 1970s [86, 87]. This protein became a favorite target of molecular modeling/drug design efforts in industry and elsewhere in the 1980s. Many resources were expended trying to find better inhibitors than the marketed pharmaceuticals methotrexate (an antineoplastic) and trimethoprim (an antibacterial). Innumerable papers and lectures sprang from those efforts. Scientists do not like to report negative results, but finally a frank admission came in 1988, when a reviewer concluded that none of the computer-based efforts at his company, or those disclosed by others in the literature, had yielded better drugs [88].

Although this first major, widespread effort at SBDD was a disappointment, the situation looked better on the QSAR front. In Japan, Koga employed classic (Hansch-type) QSAR while discovering the antibacterial agent norfloxacin around 1982 [89-91]. Norfloxacin was the first of the third-generation analogs of nalidixic acid to reach the marketplace. This early success may not have received the notice it deserved, perhaps because the field of computer-aided drug design continued to focus heavily on computer graphics, molecular dynamics, X-ray crystallography, and nuclear magnetic resonance spectroscopy [92]. Another factor may have been that medicinal chemists and microbiologists at other pharmaceutical companies capitalized on the discovery of norfloxacin to elaborate even better quinolone antibacterials that eventually dominated the market.

As computers and software improved, SBDD became a more popular approach to drug discovery. One company, Agouron in San Diego, California, set a new paradigm for discovery based on iterations of crystallography and medicinal chemistry. As new compounds were made, some could be cocrystallized with the target protein. The 3D structure of the complex was solved by rapid computer techniques. Observations of how the compounds fit into the receptor suggested ways to improve affinity, leading to another round of synthesis and crystallography. Although considered by its practitioners and most others as an experimental science, protein crystallography (now popularly called structural biology, see also Chapter 12) often employed a step whereby the diffraction data were refined in conjunction with constrained molecular dynamics (MD) simulations. Dr. Axel Brunger's program X-PLOR [93] met this important need. The force field in the program had its origin in CHARMM, developed by Prof. Martin Karplus's group at Harvard [94]. Pharmaceutical companies that set up protein crystallography groups acquired X-PLOR to run on their computers.

The SBDD approach affected computational chemists positively. The increased number of 3D structures of therapeutically relevant targets opened new opportunities for molecular modeling of the receptor sites. Computational chemists assisted the medicinal chemists in interpreting the fruits of crystallography for design of new ligands.

Molecular dynamics simulations can consume prodigious amounts of computer time. Not only are proteins very large structures, but also the MD results are regarded as better the longer the simulations are run, because more of conformational space is assumed to be sampled by the jiggling molecules. Even more demand for computer power appeared necessary when free energy perturbation (FEP) theory appeared on the scene. Some of the brightest luminaries in academic computational chemistry proclaimed that here was a powerful new method for designing drugs [95, 96]. Pharmaceutical companies were influenced by these claims [97]. On the other hand, computational chemists closer to the frontline of working with medicinal chemists generally recognized that whereas FEP was a powerful method for accurately calculating the binding energy between ligands and macromolecular targets, it was too slow for extensive use in actual drug discovery. The molecular modifications that could be simulated with FEP treatment, such as changing one substituent to another, were relatively minor. Because the FEP simulations had to be run so long to obtain good results, it was often possible for a medicinal chemist to synthesize the new modification in less time than it took to do the calculations. Also, for cases in which a synthesis would take longer than the calculations, not many industrial medicinal chemists would rate the modification worth the effort. Researchers in industry are under a great deal of pressure to tackle problems quickly and not spend too much time on them.
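
At its core, FEP rests on the Zwanzig relation, which obtains a free energy difference by averaging Boltzmann factors of the energy difference between two states over a simulation of the reference state. A minimal sketch follows; the kT value and any sample data are illustrative, and the expensive part in practice, generating well-converged energy-difference samples from long MD runs, is precisely what is omitted here.

```python
import math

def zwanzig_free_energy(delta_u_samples, kT=0.596):
    """Estimate the free energy difference dG = -kT * ln< exp(-dU/kT) >_0
    from samples of the energy difference dU between the perturbed and
    reference states (units of kcal/mol; kT ~0.596 kcal/mol near 300 K)."""
    boltzmann_factors = [math.exp(-du / kT) for du in delta_u_samples]
    return -kT * math.log(sum(boltzmann_factors) / len(boltzmann_factors))
```

The exponential average is dominated by rare configurations where the energy difference is small or negative, which is why the simulations had to be run so long to converge, and why only small perturbations (one substituent to another) were practical.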

The insatiable need for more computing resources in the 1980s sensitized the pharmaceutical companies to the technological advances leading to the manufacture of supercomputers [98]. Some pharmaceutical companies opted for specialized machines such as array processors. By the mid-1980s, for example, several pharmaceutical companies had acquired the Floating Point Systems (FPS) 164. Other pharmaceutical companies sought to meet their needs by buying time and/or partnerships with one of the state or national supercomputing centers formed in the United States, Europe, and Japan. For instance, in 1988 Lilly partnered with the National Center for Supercomputing Applications (NCSA) in Urbana-Champaign, Illinois. Meanwhile, supercomputer manufacturers such as Cray Research and ETA Systems, both in Minnesota, courted scientists and managers at the pharmaceutical companies.

A phrase occasionally heard in this period was that computations were the "third way" of science. The other two traditional ways to advance science were experiment and theory. The concept behind the new phrase was that computing could be used to develop and test theories and to stimulate ideas for new experiments.
