AWS Cloud Hosts Massive Human Genetics Catalogue

Amazon Web Services (AWS) and the US National Institutes of Health (NIH) announced that the complete 1000 Genomes Project is now available on AWS as a publicly available data set.

AWS and NIH broke the news at the White House Big Data Summit, adding that this makes it the largest collection of human genetics data available to researchers worldwide, free of charge. The 1000 Genomes Project is an international research effort coordinated by a consortium of 75 companies and organisations to establish the most detailed catalogue of human genetic variation, AWS officials said.

Big Data project

The project has grown to 200 terabytes of genomic data, including DNA sequenced from more than 1,700 individuals, which researchers can now access on AWS for use in disease research. The 1000 Genomes Project aims to include the genomes of more than 2,600 individuals from 26 populations around the world, and the NIH will continue to add the remaining genome samples to the public data set this year.

The 1000 Genomes Project started out with pilot phases in 2008 that included just a couple of terabytes of data, AWS told eWEEK. In 2010, NIH made a small portion of that data available on AWS as a public data set, and positive feedback from scientists led it to make the 1000 Genomes Project as it stands today – at more than 200TB of data – fully accessible on AWS. The amount of data produced by the 1000 Genomes Project is unprecedented in biomedical research, NIH officials said. NIH, part of the US Department of Health and Human Services, serves as one of the data coordinators for the 1000 Genomes Project.

“Previously, researchers wanting access to public data sets such as the 1000 Genomes Project had to download them from government data centres to their own systems, or have the data physically shipped to them on discs,” said Lisa Brooks, programme director for the Genetic Variation Programme, National Human Genome Research Institute, a part of NIH, in a statement. “This process took a long time, and that’s assuming a lab had the bandwidth to download the data and sufficient storage and compute infrastructure to hold and analyse the data once they had it. We are happy that the 1000 Genomes Project data are on AWS to give researchers anywhere in the world a simple way to access the data so they can put the data to work in their research.”

“Putting the data in the AWS cloud provides a tremendous opportunity for researchers around the world who want to study large-scale human genetic variation but lack the computer capability to do so,” said Richard Durbin, co-director of the 1000 Genomes Project and joint head of human genetics at the Wellcome Trust Sanger Institute, Hinxton, England.

AWS said that for researchers to download the complete 1000 Genomes Project to their own servers would take weeks to months, and that is assuming they had the bandwidth to download the data and enough hardware and storage to hold it. To do meaningful analysis on the data, researchers often needed access to very large, high-performance compute resources, which cost hundreds of thousands and sometimes millions of dollars, AWS officials said.

As one of the data coordinators for the 1000 Genomes Project, the NIH wanted to remove this friction and make the data as widely accessible as possible, so researchers can immediately start analysing and crunching the data even if they do not have the large budgets traditionally required for this level of data analytics, AWS said.

Centralised data sets

Public Data Sets on AWS provide a centralised repository of public data stored in Amazon's Simple Storage Service (S3) and Elastic Block Store (EBS). The data can then be accessed directly from AWS services such as Elastic Compute Cloud (EC2) and Elastic MapReduce (EMR), eliminating the need for organisations to move the data in-house and then procure enough technology infrastructure to analyse the information effectively, AWS said.
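As a rough illustration of how a researcher might browse such a public data set directly in S3, the following Python sketch uses the boto3 library with anonymous (unsigned) credentials. The bucket name "1000genomes" is the one commonly documented for this data set and is assumed here for illustration; it is not stated in the article.

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Anonymous access is sufficient for a publicly readable data set.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# List a handful of objects from the (assumed) public bucket.
resp = s3.list_objects_v2(Bucket="1000genomes", MaxKeys=10)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```

Any keys listed this way can be read straight from S3 by an EC2 instance in the same region, which is the "data sitting next to the compute" advantage the article describes.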

For its part, AWS says its highly scalable compute resources are being used to power Big Data and high-performance computing applications such as those found in science and research. NASA’s Jet Propulsion Laboratory, the Langone Medical Centre at New York University, Unilever, Numerate, Sage Bionetworks and Ion Flux are among the organisations employing AWS for scientific discovery and research. AWS is storing the public data sets at no charge to the community; researchers pay only for the additional AWS resources they need for further processing or analysis of the data.

“It took more than 10 years, and billions of dollars to sequence and publish the very first human genome. Recent advances in genome sequencing technology have enabled researchers to tackle projects like the 1000 Genomes by collecting far more data, faster,” said Deepak Singh, principal product manager for Amazon Web Services, in a statement.

“This has created a growing need for powerful and instantly available technology infrastructure to analyse that data,” he said. “We’re excited to help scientists gain access to this important data set by making it available to anyone with access to the Internet. This means researchers and labs of all sizes and budgets have access to the complete 1000 Genomes Project data and can immediately start analysing and crunching the data without the investment it would normally require in hardware, facilities and personnel. Researchers can focus on advancing science, not provisioning the resources required for their research.”

AWS said the 1000 Genomes Project is a prime example of Big Data, where data sets become so massive that few researchers have the compute power in their own data centres to analyse and process them. A key point here is that the 1000 Genomes data will be sitting right next to the compute power researchers need to derive value from it. In a matter of minutes, scientists can spin up as much compute power as they need to crunch the massive data sets, and they pay only for the additional AWS resources needed for further processing or analysis of the data, AWS said.
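To make the "spin up compute in minutes" point concrete, here is a minimal, hypothetical boto3 sketch that launches a single EC2 instance in the region where the data set is assumed to be hosted. The region, AMI ID and instance type are placeholders chosen for illustration, not values given in the article.

```python
import boto3

# Region is an assumption for illustration only.
ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-xxxxxxxx",     # placeholder: substitute a current Linux AMI
    InstanceType="c5.4xlarge",  # placeholder: size to suit the analysis workload
    MinCount=1,
    MaxCount=1,
)
print("Launched analysis instance:", instances[0].id)
```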

For more information about Public Data Sets on AWS go to: http://aws.amazon.com/publicdatasets/

Darryl K. Taft

Darryl K. Taft covers IBM, big data and a number of other topics for TechWeekEurope and eWeek
