Amazon Web Services (AWS) and the US National Institutes of Health (NIH) announced that the complete 1000 Genomes Project is now available on AWS as a publicly available data set.
AWS and NIH broke the news at the White House Big Data Summit, adding that this makes the largest collection of human genetic data available to researchers worldwide, free of charge. The 1000 Genomes Project is an international research effort, coordinated by a consortium of 75 companies and organisations, to establish the most detailed catalogue of human genetic variation, AWS officials said.
The project has grown to 200 terabytes of genomic data including DNA sequenced from more than 1,700 individuals that researchers can now access on AWS for use in disease research. The 1000 Genomes Project aims to include the genomes of more than 2,600 individuals from 26 populations around the world, and the NIH will continue to add the remaining genome samples to the public data set this year.
“Previously, researchers wanting access to public data sets such as the 1000 Genomes Project had to download them from government data centres to their own systems, or have the data physically shipped to them on discs,” said Lisa Brooks, programme director for the Genetic Variation Programme, National Human Genome Research Institute, a part of NIH, in a statement. “This process took a long time, and that’s assuming a lab had the bandwidth to download the data and sufficient storage and compute infrastructure to hold and analyse the data once they had it. We are happy that the 1000 Genomes Project data are on AWS to give researchers anywhere in the world a simple way to access the data so they can put the data to work in their research.”
“Putting the data in the AWS cloud provides a tremendous opportunity for researchers around the world who want to study large-scale human genetic variation but lack the computer capability to do so,” said Richard Durbin, co-director of the 1000 Genomes Project and joint head of human genetics at the Wellcome Trust Sanger Institute, Hinxton, England.
AWS said that for researchers to download the complete 1000 Genomes Project data set to their own servers would take weeks to months, and that is assuming they had the bandwidth to download the data and enough hardware and storage to hold it. To do meaningful analysis on the data, researchers often needed access to very large, high-performance compute resources, which cost hundreds of thousands and sometimes millions of dollars, AWS officials said.
The NIH was selected as one of the data coordinators for the 1000 Genomes Project, and it wanted to remove this friction and make the data as widely accessible as possible, so researchers can immediately start analysing and crunching the data, even if they do not have the large budgets traditionally required for this level of data analytics, AWS said.
Public Data Sets on AWS provide a centralised repository of public data stored in its Simple Storage Service (S3) and Elastic Block Store (EBS). The data can then be directly accessed from AWS services such as Elastic Compute Cloud (EC2) and Elastic MapReduce (EMR), eliminating the need for organisations to move the data in house and then procure enough technology infrastructure to analyse the information effectively, AWS said.
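In practice, a public object in S3 can be referenced in two forms: an `s3://` URI, consumed by services such as EC2 and EMR, or a plain HTTPS URL for anonymous download. A minimal sketch of the two addressing forms, assuming the bucket name `1000genomes` and a hypothetical object key for illustration:

```python
# Sketch: addressing an object in an AWS public data set.
# The bucket name "1000genomes" and the example key are assumptions
# for illustration; the actual layout is documented by the project.

def s3_uri(bucket: str, key: str) -> str:
    """URI form consumed by EC2/EMR tooling, e.g. `aws s3 cp`."""
    return f"s3://{bucket}/{key}"

def https_url(bucket: str, key: str) -> str:
    """Plain HTTPS URL for anonymous download of a public object."""
    return f"https://{bucket}.s3.amazonaws.com/{key}"

key = "release/20110521/README"  # hypothetical example key
print(s3_uri("1000genomes", key))    # s3://1000genomes/release/20110521/README
print(https_url("1000genomes", key))
```

Because the bucket is public, no AWS credentials are needed for read access; an EC2 or EMR job in the same region reads the data directly, which is what eliminates the download step described above.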
For its part, AWS’s highly scalable compute resources are being used to power Big Data and high performance computing applications such as those found in science and research. NASA’s Jet Propulsion Laboratory, Langone Medical Centre at New York University, Unilever, Numerate, Sage Bionetworks and Ion Flux are among the organisations employing AWS for scientific discovery and research. AWS is storing the public data sets at no charge to the community. Researchers pay only for the additional AWS resources they need for further processing or analysis of the data.
“It took more than 10 years and billions of dollars to sequence and publish the very first human genome. Recent advances in genome sequencing technology have enabled researchers to tackle projects like the 1000 Genomes by collecting far more data, faster,” said Deepak Singh, principal product manager for Amazon Web Services, in a statement.
“This has created a growing need for powerful and instantly available technology infrastructure to analyse that data,” he said. “We’re excited to help scientists gain access to this important data set by making it available to anyone with access to the Internet. This means researchers and labs of all sizes and budgets have access to the complete 1000 Genomes Project data and can immediately start analysing and crunching the data without the investment it would normally require in hardware, facilities and personnel. Researchers can focus on advancing science, not provisioning the resources required for their research.”
AWS said the 1000 Genomes Project is a prime example of Big Data, where data sets become so massive that few researchers have access to the compute power in their own data centres to analyse and process the data. Yet a key point here is that the 1000 Genomes data will be sitting right next to the compute power researchers need to derive value from it. In a matter of minutes, scientists can spin up as much compute power as they need to crunch the massive data sets. Researchers pay only for the additional AWS resources needed for further processing or analysis of the data, AWS said.
For more information about Public Data Sets on AWS go to: http://aws.amazon.com/publicdatasets/