Bleeding Edge Biotech

Bioinformatics and Big Iron

The Next Frontier in IaaS: Non-commodity and Vertical Scaling

Something I’ve been thinking about came up several times at Bio-IT World last week. It was mentioned during the cloud computing workshop and even in Deepak Singh’s keynote: the notion of service-oriented architectures that offer boutique or non-commodity infrastructure as a service. Clearly, the reason Amazon Web Services was able to shake things up in this space was economies of scale. This is a core tenet of cloud computing. Companies like Google and Amazon can leverage their massive operational scale and purchasing power to democratize access to compute and storage.

The buzz around the cloud and high-performance computing is almost always in reference to scale-out, or horizontal scaling, architectures. While I am known to have drunk the Kool-Aid, I also take issue with the idea that this is the only way to scale applications. Now that the cloud is becoming less hip and anyone with knowledge of a scripting language is able to crunch terabytes of data on thousands of CPUs, I’m left wondering where the next challenge is.

Big clusters and big storage are essentially solved problems, and that’s why they’ve reached a level of abstraction that allows me to manage it all from my laptop and a web browser. A lot of the best practices that arose in the cloud era are finding their way back into the HPC space. The cloud has accelerated efforts in automation, data-intensive compute frameworks, and asynchronous programming, and it has blurred the line between the developer and the sysadmin. That’s what makes the cloud awesome.

For scientific computing in particular, it has finally provided agile and experimental IT to match the experimental essence of science. A bioinformatician or computational biologist too often handles responsibilities in the wet lab as well as the machine room. Research should not have to wait 4 to 6 months to acquire new hardware or spend 5 days configuring a relational database. This stifles scientific progress and takes the researcher away from doing what she does best.

What do we do when the problem is special and the resource requirements aren’t consumer-grade compute and storage? Are we back to the drawing board? How can we take all the awesomeness of the cloud and make it work in those scenarios? I’m talking about IaaS offerings that include ASICs, GPGPUs, InfiniBand, NUMAlink, and the stuff that gives you real performance and not just throughput. It doesn’t have to stop with IT infrastructure. What if cloud sequencing had an API? What if anyone could have easy remote access to a mass spec or a synchrotron?

This is technology we have in hand today. You can partner with big universities to use their high end lab instruments. You can even apply for time on special supercomputers like the Anton from DESRES. I would like to see these types of resources open up in the same way that Amazon enabled anyone with a credit card to spin up a big IT infrastructure. This is critical for small companies to compete and innovate. It would also create new business models and platforms built on top of these services. The cloud gives us some of this today, but what will it look like in the future?


Antibody Docking on the Amazon Cloud

Today an article I wrote for Bio-IT World was published describing antibody docking experiments that are running on Amazon EC2. Since my final edits didn’t make the deadline, I wanted to post the entire article here with some inline links.


It was 18 months ago in this column that Mike Cariaso proclaimed, “Buying CPUs by the hour is back,” in reference to our work with Amazon’s Elastic Compute Cloud (EC2). Back then, we were perhaps a bit far ahead of the hype vs. performance curve of cloud computing. A handful of forward-thinking companies were finding ways to scale out web services. Few research groups were putting EC2 instances to work for real number crunching in the life sciences. In the last two years, utility computing has begun to make an impact on real-world problems (and budgets) in many industries. For researchers starved for computing power, the flexibility of the pay-as-you-go access model is compelling. Next to Amazon EC2, the grant process used by national supercomputing centers looks arcane and downright stifling. Innovative and ‘bursty’ research requires dynamic access to a large pool of CPU and storage. Computational drug design is a great place to begin to clear the air about the reality of this emerging technology.

Accelerating the creation of novel therapeutics is priority one for the research side of the pharmaceutical industry. Much time is spent optimizing the later phases of clinical trials in many pipelines. However, IT and infrastructure decisions made much earlier in the process can have a profound impact on the momentum and direction of the entire endeavor. For protein engineers at Pfizer’s Bioinnovation and Biotherapeutics Center, the challenging task of antibody docking presents computational roadblocks. All-atom refinement is the major high-performance computing challenge in this area.

Respectable models of a protein’s three-dimensional structure can usually be generated on a single workstation in a matter of hours. After building multiple models, a refinement step typically produces the most accurate models. Atomic detail is necessary to validate whether newly modeled antibodies will bind their target epitopes and to get a clear picture of the protein-protein interactions and binding interfaces of these immunogenic molecules.

One of the most successful frameworks for studying protein structures at this scale is Rosetta++, developed by David Baker at the University of Washington. Baker describes Rosetta as “a unified kinematic and energetic framework… (that) allows a wide-range of molecular modeling problems … to be readily investigated.” Refinement of antibody docking involves small local perturbations around the binding site followed by evaluation with Rosetta’s energy function. It’s an iterative process that requires a massive amount of computing based on a small amount of input data. The mix of computational complexity with a pleasantly parallel nature makes the task suitable for both high-end supercomputers and Internet-scale grids.

BBC Two

When Giles Day and the informatics team at Pfizer BBC designed their antibody-modeling pipeline using Rosetta, they soon realized they had a serious momentum killer. Each antibody model took 2–3 months using the 200-node cluster. With dozens of new antibodies to model, the project was at a standstill until they could get enough compute capacity to do the appropriate sampling. Furthermore, the pipeline was invoked with unpredictable frequency since it was dependent upon discovery in other departments. What they needed was a scale-out architecture to support “surge capacity” in docking calculations. This surge could happen frequently or not at all, making capacity planning extremely difficult.

Traditionally, options were limited to expanding in-house resources by adding more nodes to the cluster or reducing the sampling. The only true option was to throw more CPUs at the problem: doubled capacity could potentially halve a two-month calculation, but would necessitate acquisition, deployment, and operational costs. After evaluating those costs, they contracted BioTeam to provide them with a cloud-based solution. The result was a scalable architecture custom fit to their workloads and built entirely on Amazon Web Services (AWS). As was clearly evidenced at this year’s Bio-IT World Expo, the cloud is mainstream today. Moreover, the AWS team is years ahead of the competition. AWS is unveiling new features and API improvements almost every month. The AWS stack is fast becoming a first choice at BioTeam for cost-effective virtual infrastructure and high-performance computing on demand.

The architecture employed for docking at Pfizer makes use of nearly the entire suite of services offered by Amazon. A huge array of Rosetta workers can be spun up on EC2 by a single protein engineer and managed through a web browser. As Chris Dagdigian pointed out in his recent keynote at Bio-IT World: while the cloud is quite hyped, this isn’t rocket science. The Simple Storage Service (S3) stores inputs and outputs, SimpleDB tracks job metadata, and the Simple Queue Service (SQS) glues it all together with message passing. What Amazon did right in 2007 was elastic compute and storage. What they do better in 2009 is to provide developers everywhere with a complete stack for building highly efficient and scalable systems without a single visit to a machine room. The workloads at Pfizer that previously took months are now done overnight, and the research staff can focus on results without pushing their projects to the back shelf.
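For readers curious what this looks like in code, here is a minimal sketch of the queue-driven worker pattern described above. It uses today’s boto3 rather than the 2009-era tooling; the queue URL, bucket name, and rosetta_docking command line are placeholders, and the SimpleDB bookkeeping is omitted.

```python
import json
import subprocess

import boto3

# Placeholders, not the actual Pfizer/BioTeam resources
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/rosetta-jobs"
BUCKET = "antibody-docking-data"

sqs = boto3.client("sqs")
s3 = boto3.client("s3")

while True:
    # Long-poll the queue for one docking job
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        job = json.loads(msg["Body"])  # e.g. {"input_key": "ab42/start.pdb", "nstruct": 50}

        # Pull inputs from S3, run the docking refinement, push results back
        s3.download_file(BUCKET, job["input_key"], "input.pdb")
        subprocess.run(
            ["rosetta_docking", "-s", "input.pdb", "-nstruct", str(job["nstruct"])],
            check=True,
        )  # placeholder command line
        s3.upload_file("results.tar.gz", BUCKET, job["input_key"] + ".results.tar.gz")

        # Only delete the message once the results are safely stored
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```

Each EC2 worker runs a loop like this independently, which is what lets a single protein engineer scale the array up or down without touching a scheduler.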

Plenary Keynote

The Bio-IT World Conference & Expo ‘09 took place last week in Boston. Highlights included my role model and colleague Chris Dagdigian giving the plenary keynote. Slides can be found at blog.bioteam.net. The keynote was extremely well received by the audience and his talking points resonated throughout the conference.


James Hamilton from Amazon wrote up a nice summary as well.

There was a great panel on Cloud computing featuring Chris Dag along with representatives from CycleComputing, Google, Johnson & Johnson, and Eli Lilly. I didn’t take notes but topics ranged from how to manage software licensing to cloud portability to security and data motion concerns. It was a very focused discussion that definitely shed light upon some of the opportunities and challenges that lie ahead.

Another highlight was meeting and talking to vendors like Isilon, CycleComputing, Aspera, GenoLogics, OpenEye, Omixon, Ocarina, and NextBio.

I had an excellent time and met a lot of new faces – some of whom I’ve worked with (virtually) for almost a year. In the end, the biggest highlight for me was realizing that I am privileged to work with some truly awesome people at one of the coolest companies on this planet.

BioTeam Group Photo

Thirty Years of (Bio)Molecular Simulation: How Far Have We Come?

This was originally intended to be a micro-blogged talk, probably on FriendFeed. But when I walked into the old Chevron building on the Pitt campus to listen to Professor Wilfred van Gunsteren, the wireless was spotty, so I saved my notes for a triumphant return to normal blogging. The talk is part of a lecture series presented by the CMMS at the University of Pittsburgh. Since this was probably the intended purpose when I started Bleeding Edge Biotech, here is my notepad of the distinguished lecturer’s slides and talking points.

Computation based on molecular models is playing an increasingly important role in biology, biological chemistry, and biophysics. Since only a very limited number of properties of biomolecular systems is actually accessible to measurement by experimental means, computer simulation can complement experiment by providing not only averages, but also distributions and time series of any definable – observable or non-observable – quantity, for example conformational distributions or interactions between parts of molecular systems. Present day biomolecular modelling is limited in its application by four main problems: 1) the force-field problem, 2) the search (sampling) problem, 3) the ensemble (sampling) problem, and 4) the experimental problem. These four problems will be discussed and illustrated by practical examples. Progress over the past thirty years will be briefly reviewed. Perspectives will be outlined for pushing forward the limitations of molecular modelling.

Why Thirty Years?

…first simulations were performed in 1976…

Molecular modeling choices to make:

Simulations can:

  • explain experiment
  • provoke experiment
  • replace experiment
  • aid in establishing intellectual property

The four problems

  • Force field problem
  • The search (sampling) problem
  • The ensemble sampling problem
  • The experimental problem

The Force Field problem

  • small free energy differences
  • account for entropic effects
  • variety of atoms and molecules (keep it simple; transferable parameters)

…using only the PDB for force field development just doesn’t work out.

The most dominant fold is not difficult; the equilibria between folds are more important. We should be able to get melting temperatures from simulations. Solvent viscosity drives the kinetics of folding. To do: polarizable force fields.

The searching (sampling) problem

  • convergence: alleviated vs. aggravated

Methods to compute free energy (the standard forms of the two main estimators are written out after this list):

  • counting configurations
  • thermodynamic integration (many simulations)
  • perturbation formula (one simulation)
  • One-step perturbation (few simulations)

  • use “soft-core” atoms at each site where the inhibitors will interact.
    The original Viagra and Levitra could have benefited from this method (IP, patents).
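For reference, the thermodynamic integration and perturbation formulas mentioned in the list above are usually written as follows; these are the standard textbook forms, not transcribed from the slides.

```latex
% Thermodynamic integration: many simulations along the coupling parameter \lambda
\Delta F_{A \to B} = \int_0^1
  \left\langle \frac{\partial H(\lambda)}{\partial \lambda} \right\rangle_{\lambda} d\lambda

% Free energy perturbation (Zwanzig): one simulation of state A, reweighted to B
\Delta F_{A \to B} = -k_B T \,
  \ln \left\langle e^{-\left[U_B(\mathbf{r}) - U_A(\mathbf{r})\right]/k_B T} \right\rangle_{A}
```

Roughly speaking, one-step perturbation applies the second formula from a single reference simulation (often built with soft-core sites, as in the bullet above) to estimate relative free energies for a whole family of related compounds.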

The ensemble (sampling) problem

  • Entropy
  • Averaging
  • Non-linear averaging

Coiled-coil stability has a strong entropic component.  For monomers the solute-solvent interaction decreases.  For trimers the solute-solute interaction decreases.  Entropy increases with temperature.  In trimers atomic fluctuations do not increase with temperature but solute entropy increases with temperature.

The experimental problem

  • Averaging
  • Insufficient data
  • Insufficient accuracy

“Averages are dangerous”

Conclusions:

  • Experimental data cannot determine the average structure
  • Experimental data cannot determine the biomolecular structure

Artifacts of XPLOR NMR refinement disagree with simulations guided by NOE restraints: two ensembles with no ensemble overlap, given the same experimental data.

“Experimental data is not sufficient”

Don’t rely on structural data (it’s derived; strive for primary data).

History

  • 1957: first molecule
  • 1964: atomic liquid (argon)
  • 1971: molecular liquid (water)

Future

  • 2001–2029: biomolecules in water
  • 2034: E. coli
  • 2056: mammalian cell (10^-9 sec)
  • 2080: biomolecules in water (as fast as nature; 10^6)
  • 2172: human body (10^27 atoms, 1 sec)

So what if you could simulate every atom in your body for 1 second?

— There are much better things simulation can answer; ask better questions.

Polarizable Force Field

  • improves transferability between different environments
  • working on these force fields
  • solvation drives protein processes

Coarse-graining

  • Need to switch FG/CG back and forth
  • Run simulations in parallel
  • Easy to clamp 5 atoms to 1, but not easy to map 1 back to 5 (a toy illustration of this mapping follows below)
  • FG/CG replica-exchange simulation enhances sampling
  • Much faster to cross barriers in CG mode if you can switch
  • Both force fields must be thermodynamically calibrated

We need simulations to explain experiment, so we can see the numbers. For molecular modelers, there’s still enough work to do at least until 2172!
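As a toy illustration of the clamping direction (not GROMOS code), here is what mapping fine-grained atoms onto coarse-grained beads looks like; the grouping of five consecutive atoms per bead is an arbitrary assumption.

```python
import numpy as np

def fine_to_coarse(coords, masses, atoms_per_bead=5):
    """Clamp groups of consecutive fine-grained atoms into coarse-grained beads
    by taking each group's center of mass.

    coords: (n_atoms, 3) array of positions, masses: (n_atoms,) array.
    Returns an (n_beads, 3) array of bead positions.
    """
    beads = []
    for start in range(0, len(coords), atoms_per_bead):
        grp = slice(start, start + atoms_per_bead)
        m = masses[grp]
        beads.append((coords[grp] * m[:, None]).sum(axis=0) / m.sum())
    return np.array(beads)

# FG -> CG is a simple projection like the one above. CG -> FG (back-mapping)
# has to reinvent the internal degrees of freedom that were thrown away, which
# is why mapping 1 bead back to 5 atoms is the hard direction.
```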

Questions from the audience

Q: What’s the state of NMR structure determination? A: It depends; narrow bundles should have more motion. Stable proteins are easy. The averaging problem is present even in crystallography. Can’t get R-values. Many, many structures are not that good (the XPLOR force field is simple, no solvent). Found that 20% of side-chain J-values cannot be right. Simulation is getting to the point where it can correct experiment.

Q: Could you comment on the CG model ‘clamping atoms’ and potential problems related to entropy? A: Take 5 atoms, make a ball, and you lose entropy. You should compensate for that at the energy level; you must balance it.

Q: Is the path integral still useful? A: No, we’d like to remove it in the next version of GROMOS.

Professor van Gunsteren is a big believer in using all the data you can get your hands on.

CryoEM of Nanomachines

There was a time in structural biology when solving protein structures using NMR was received with considerable skepticism. In addition to the normal experimental uncertainty, the technique generated structures with additional uncertainty due to the vibrational motions of proteins in solution. That’s part of the reason standard NMR entries in the PDB contain ~20 structures while X-ray structures have just one. However, modern NMR methods have advanced to the point that few skeptics are left. The two techniques together were essential to the rapid increase of structural information that’s available today.

According to Dr. Wah Chiu, electron cryomicroscopy today looks a lot like NMR 20 years ago. His first slide showed a definition of Cryo-EM, which looked a lot like a definition of NMR. In bold he emphasized that Cryo-EM is solving structures without crystals. I’ve often heard protein crystallization called a ‘black art’ or ‘trying to hold a stack of bowling balls together with tape’. I’m not a practitioner, so I’ll assume it’s hard for at least the interesting cases. Getting good crystals is not required, but sample preparation rules still apply to Cryo-EM. Wah stresses how diligent and often labor-intensive work at the bench yields much better results further along in the pipeline. Once they have a sample, though, Wah has a playground full of high-end instruments.

[Images: JEOL 3200FSC electron cryomicroscope and the NCMI 1,000-core Linux cluster, via http://ncmi.bcm.tmc.edu]

Cryo-EM techniques have been very successful in determining structures of nanomachines (or macromolecular assemblies, if you are frustrated with nano-fications) and look set to keep improving over the next 20 years. Large assemblies like capsids, phages, pores, and channels are all possible with Cryo-EM. The resolution is still quite far away from the < 2 Angstroms typical of good X-ray structures. If I remember correctly, Wah said they are currently achieving around 4.7 Angstrom resolution, and better depending on the system. But resolution isn’t just a number; it’s all about what you can actually see. And what they can actually see now is things like secondary structure and even side chains. Complete atomic detail is not very far off.

Software and computational techniques influenced by image processing and protein structure prediction efforts are providing atomic details even sooner than expected. Wah’s group has developed SSEHunter to detect secondary structures from the Cryo-EM data, and programs like MODELLER are used to characterize each component [paper].

My expectations are quite high. How long before we can see the entire cell and all of its components in atomic detail? 5? 10? 20 years?

Impressions From ISMB 3Dsig

This past weekend I attended my first ISMB conference in Toronto, ON. I didn’t have time to attend the main conference, but I did enjoy the 3Dsig satellite meeting in the days preceding the main event. During the talks, I used Twitter to jot down some brief notes. Here’s the rundown of my favorite 3Dsig keynotes:

“Towards Elucidating Allosteric Mechanisms of Function via Structure-based Analysis of Protein Dynamics”

I am quite familiar with Ivet Bahar and her work since her lab is just across campus. Dr. Bahar is formally trained in polymer physics and brings a fresh approach to protein structure and dynamics. Borrowing from the polymer sciences, elastic network modeling is an efficient coarse-grained approach to calculating mechanical motions in proteins, similar in spirit to normal mode analysis. Both Gaussian network models (GNM) and anisotropic network models (ANM) are beautiful abstractions of macromolecular motion. The low-frequency (slowest) modes from the elastic network can be interpreted as the “functional” motions of the macromolecule. Global motions might also be interpreted as allosteric effects. Other uses for ANM modes are steering molecular dynamics simulations and small-molecule docking.

One major benefit of ANM is computational efficiency, which allows large dynamical systems such as ribosomes and GroEL to be studied, something that is still infeasible for classical molecular dynamics. Even though it’s an approximation, ANM captures the mechanisms of motion that are important to protein function. I highly recommend submitting your favorite PDBid to the ANM server and seeing how it works.
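To make the idea concrete, here is a toy Gaussian network model in a few lines of NumPy. This is a sketch of the general technique, not the Bahar lab’s code; the 7.3 Angstrom cutoff is just a commonly used value, and load_ca_coordinates is a hypothetical helper.

```python
import numpy as np

def gnm_modes(ca_coords, cutoff=7.3):
    """Toy Gaussian network model: build the Kirchhoff (connectivity) matrix
    from C-alpha coordinates and return its eigenvalues and eigenvectors."""
    diff = ca_coords[:, None, :] - ca_coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))

    # Off-diagonal: -1 for residue pairs within the cutoff, 0 otherwise
    kirchhoff = -(dist < cutoff).astype(float)
    np.fill_diagonal(kirchhoff, 0.0)
    # Diagonal: each residue's number of contacts
    np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))

    # Eigenvalues come back in ascending order; index 0 is the trivial zero
    # mode, and the next few (slowest) modes are the "functional" motions
    return np.linalg.eigh(kirchhoff)

# Example usage (load_ca_coordinates is a hypothetical loader):
# coords = load_ca_coordinates("1abc.pdb")
# evals, evecs = gnm_modes(coords)
# mobility = evecs[:, 1] ** 2 / evals[1]   # per-residue fluctuations, slowest mode
```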


“On the Nature of Protein Fold Space: Extracting Functional Information from Apparently Remote Structural Neighbors”

Dr. Barry Honig talks about the nature of protein fold space. During the talk, he makes the statement “There is no such thing as a fold,” which was effectively provocative. His reasoning behind the statement was exemplified by several binding motifs that exist in proteins across 30 or 40 folds. He had several other examples where functional similarities were observed in proteins even though the structures were divergent. A fold class, he says, is a discretization that should come with a caveat: don’t let fold classes get in the way of your question. If your question requires analysis of all metal-binding sites, don’t start throwing away information because it’s ‘not the same fold’.

“I am not a PDBid I am a Biological Macromolecule”

The Prisoner (YouTube) via Wikipedia:

“Where am I?” “In the Village.” “What do you want?” “Information.” “Whose side are you on?” “That would be telling… We want information. Information! INFORMATION!” “You won’t get it.” “By hook or by crook, we will.” “Who are you?” “The new Number Two.” “Who is Number One?” “You are Number Six.” “I am not a number! I am a free man!”

It’s no secret that Phil E. Bourne is big on Open Access. He’s involved with the RCSB PDB, PLoS, and more recently SciVee, in addition to his core research. This was a dinner session which sparked some interesting discussions late into Friday evening. He started off by referencing The Prisoner, a British sci-fi television show whose main character is imprisoned and referred to only by a number. Phil parallels this with PDB structures, describing how entries in the PDB are essentially featureless and unannotated with respect to function. Partially to blame are structural genomics efforts, which rapidly solve structures without functional motivation. The real functional information, he contends, lies in the literature. The typical workflow for a biologist interested in a structure is to go to the PDB, find a structure, look up the primary citation, download the publication, examine the figures, download structures, find more references, and so on. To break this painful workflow he suggests better metadata support in the journal articles themselves, figures encoded as representations of the actual PDB coordinates, and lots of other mashable features in publications.

Then he talked about catching his graduate students watching YouTube and how that led to the development of SciVee. Video is an attractive medium for describing structure-function relationships. Speaking of attractiveness, one concerned member of the audience voiced an opinion that the more attractive scientists are going to get more attention on SciVee and that this would degrade science as a whole. A lively discussion about the differences between a good speaker/pubcaster and a good scientist ensued.

“Conformational Flexibility and Sequence Diversity in Computational Protein Design”

Dr. Tanja Kortemme reports on progress in protein design. More specifically, redesigning protein interfaces and interactions. The design protocol was as follows:

Interacting complex → Flexible backbone → Rotamer library → Monte Carlo steps
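As a rough illustration of the last step of this pipeline, here is what a single Metropolis move over sequence and rotamer choices might look like. The score function and rotamer library are stand-ins, not Rosetta’s actual interfaces.

```python
import math
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def metropolis_design_step(sequence, rotamers, rotamer_library, score_fn, kT=1.0):
    """One Monte Carlo step: pick a position, propose a new amino acid and
    rotamer for it, and accept or reject by the Metropolis criterion.

    rotamer_library[aa] -> list of candidate rotamers (stand-in)
    score_fn(sequence, rotamers) -> float, lower is better (stand-in)
    """
    pos = random.randrange(len(sequence))
    new_aa = random.choice(AMINO_ACIDS)                # sequence design move
    new_rot = random.choice(rotamer_library[new_aa])   # side-chain conformation move

    trial_seq = sequence[:pos] + new_aa + sequence[pos + 1:]
    trial_rot = rotamers[:pos] + [new_rot] + rotamers[pos + 1:]

    delta = score_fn(trial_seq, trial_rot) - score_fn(sequence, rotamers)
    # Always accept improvements; accept uphill moves with Boltzmann probability
    if delta <= 0 or random.random() < math.exp(-delta / kT):
        return trial_seq, trial_rot
    return sequence, rotamers
```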

The computational methods were accompanied by impressive experimental efforts, including X-ray crystallography and even cell morphology studies. The flexible backbone model was improved by the implementation of backrub motions in Rosetta, which were recently observed in high-resolution crystal structures, and this greatly improves side-chain prediction accuracy.

“Hits, Leads, and Artifacts from Virtual and High-Throughput Screening”

I am not too familiar with high-throughput screening techniques; however, Brian Shoichet gave an excellent talk about parallel screening efforts in vitro and in silico. His most compelling points were the false positive rates of HTS (90-100%!) and the bias in small-molecule screening libraries. The high false positive rates are due to large aggregates (200 nm) sequestering enzyme and appearing like inhibitors. The screening library bias is a major contributor to the success of HTS and comes from “200 years of medicinal chemistry”.


Stay up-to-date with the rest of the conference at the ISMB room on FriendFeed! Also, Public Rambling compiled a list of science bloggers at ISMB.

Hybrid Programming for Shared-Memory and Clustered SMP Systems

There’s an upcoming workshop at the PSC, September 8–11, 2008.

This workshop will present programming models and techniques for writing efficient parallel code on contemporary and future supercomputers with extensive shared memory, or hierarchical architectures with smaller shared-memory components. Two important examples of systems to which these techniques apply are the SGI Altix and networked clusters of multicore processors. Expert instructors from PSC and SGI will review MPI, OpenMP, and hardware architecture prior to launching into detailed treatments of programming for hybrid parallelism, performance analysis, and optimization. This is a “bring your own code” workshop. Participants are encouraged to bring an application to focus on during the hands-on sessions to maximize the workshop’s effectiveness. Examples will be provided for participants who cannot bring a research code. Experienced PSC computational scientists will provide support regarding the topics covered, including hybrid algorithms and implementation strategies and performance engineering.
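The workshop material itself targets C/Fortran with MPI and OpenMP; as a rough Python analogue of the hybrid model (distributed memory across nodes, shared memory within a node), here is a small mpi4py + multiprocessing sketch. The work items and kernel are placeholders.

```python
# Run with e.g.: mpiexec -n 4 python hybrid_sketch.py
from mpi4py import MPI
from multiprocessing import Pool

def expensive_kernel(x):
    # Placeholder for the per-item work an OpenMP loop would handle on one node
    return x * x

if __name__ == "__main__":
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    work = list(range(1000))          # placeholder work items
    shard = work[rank::size]          # round-robin split across MPI ranks (nodes)

    with Pool() as pool:              # shared-memory parallelism within the node
        local = pool.map(expensive_kernel, shard)

    results = comm.gather(local, root=0)
    if rank == 0:
        flat = [y for part in results for y in part]
        print(f"computed {len(flat)} results across {size} MPI ranks")
```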

More details

Solve Puzzles for Science - FoldIt: An Online Protein Folding Game

David Baker is one of my favorite scientists. His group performs the best at CASP. He started the Rosetta protein folding and design software and Rosetta@home, a distributed computing network to run it. And now he’s behind one of the coolest projects I’ve ever seen. Fold.it is an amazing community-based game where players compete by folding proteins in a graphical, point-and-click manner. Fold.it has a beautiful UI and molecular graphics not unlike the ones you’ve come to expect from VMD, PyMOL, and UCSF Chimera. Most importantly, this highly addictive puzzle game has real scientific value. Each time you solve a folding puzzle, the software sends your results back to FoldIt. With that data they hope to gain insight into the powerful human capacity to recognize patterns and apply that to new protein structure prediction methods. Players can create and join groups to compete against other players for high scores.

After playing FoldIt for about an hour, I can say the game is actually very fun and addictive! Any game with actions like “Shake Sidechains” and “Wiggle Backbone” is guaranteed to make any biochemist or biophysicist smile. While it may compete with GTA4, this game is a huge step in educating students in protein structure. It’s truly brilliant. Thanks to Andrew Perry for pointing this out.

FoldIt – Crowdsourcing to solve the protein folding problem