Aging-centric genetic health database in California: 100k people, ~65yrs, 700k SNPs, telomeres too

Kaiser Permanente, together with UCSF, is planning genetic analyses of an unprecedented 100,000 older Californians, Technology Review writes in Massive Gene Database Planned in California:

The effort will make use of existing saliva samples taken from California patients, whose average age is 65. Their DNA will be analyzed for 700,000 genetic variations called single-nucleotide polymorphisms, or SNPs, using array analysis technology from Affymetrix. Through the National Institutes of Health (NIH), the resulting information will be available to other researchers, along with a trove of patient data including patients’ Kaiser Permanente electronic health records, information about the air and water quality in their neighborhoods, and surveys about their lifestyles.
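Just to put that scale in perspective (my own back-of-the-envelope arithmetic, not a figure from the article): 100,000 people times 700,000 SNPs is 70 billion genotype calls, which is actually a fairly compact dataset if each biallelic call is packed into 2 bits. A quick Python sketch:

```python
# Back-of-the-envelope sketch (not from the article): each biallelic SNP call
# can be stored in 2 bits (homozygous ref, heterozygous, homozygous alt, missing).

n_people = 100_000
n_snps = 700_000
bits_per_genotype = 2

total_bits = n_people * n_snps * bits_per_genotype
total_gib = total_bits / 8 / 1024**3

print(f"{n_people * n_snps:,} genotype calls")      # 70,000,000,000 calls
print(f"~{total_gib:.1f} GiB at 2 bits per call")   # ~16.3 GiB
```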

The target age group shows that the focus is on “secondary aging”:

Given the high average age of the group, the platform will also be a boon to studying diseases of aging. “One might want to ask,” Schaefer says, “what are the genetic influences on changes in blood pressure as people age, and how are those changes in blood pressure related to diseases of aging, like stroke and Alzheimer’s and other cardiovascular diseases?”
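For readers wondering what such an analysis looks like in practice: the standard move is to regress the phenotype of interest (here, change in blood pressure) on genotype dosage at each SNP, adjusting for covariates like age, and then scan the per-SNP p-values across all 700,000 variants. Here is a toy sketch on simulated data; the variable names and regression setup are my own illustration, not the project's actual pipeline:

```python
# Illustrative per-SNP linear model of the kind Schaefer's question implies:
# change in blood pressure regressed on genotype dosage, adjusting for age.
# Simulated data only; not the study's actual analysis.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1_000  # toy cohort, not the real 100,000

# Genotype dosage (0/1/2 copies of the minor allele), baseline age,
# and a simulated 10-year change in systolic blood pressure.
dosage = rng.integers(0, 3, size=n)
age = rng.normal(65, 8, size=n)
delta_sbp = 0.5 * dosage + 0.2 * (age - 65) + rng.normal(0, 10, size=n)

# Ordinary least squares: delta_sbp ~ dosage + age
X = sm.add_constant(np.column_stack([dosage, age]))
fit = sm.OLS(delta_sbp, X).fit()
print(fit.params)    # intercept, per-allele effect, age effect
print(fit.pvalues)   # the dosage p-value is what a GWAS scans over 700,000 SNPs
```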

UCSF will perform separate procedures on the samples to determine the length of telomeres–sections of DNA at the ends of chromosomes that protect against damage. The length of telomeres is associated with cell division and aging. One of the coinvestigators on the project is Elizabeth Blackburn, a biologist at UCSF who shared the 2009 Nobel Prize in Medicine for her work on telomeres.
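The article doesn't say which assay UCSF will use, but a widely used way to estimate relative telomere length in large cohorts is a qPCR "T/S ratio": the telomere-repeat signal divided by a single-copy gene signal, normalized against a reference sample. A minimal sketch with made-up Ct values:

```python
# One common approach to relative telomere length: the qPCR T/S ratio,
# computed from cycle-threshold (Ct) values relative to a reference sample.
# The assay choice and the Ct values below are assumptions for illustration.

def relative_telomere_length(ct_telomere, ct_single_copy,
                             ref_ct_telomere, ref_ct_single_copy):
    """Return the T/S ratio of a sample relative to a reference sample,
    assuming perfect doubling per PCR cycle (2^-ddCt)."""
    delta_ct_sample = ct_telomere - ct_single_copy
    delta_ct_ref = ref_ct_telomere - ref_ct_single_copy
    return 2.0 ** -(delta_ct_sample - delta_ct_ref)

# Hypothetical Ct values: a lower telomere Ct relative to the single-copy gene
# means more telomeric repeats, i.e. longer telomeres.
print(relative_telomere_length(16.8, 21.0, 17.5, 21.0))  # ~1.62, longer than reference
```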
