HoloGenomics News Archives for the Year 2010

(Dec 28) 23andMe lowers price from $499 to $199 permanently
(Dec 19) Genetic Tests Debate: Is Too Much Info Bad for Your Health?
(Dec 18) Key information about breast cancer risk and development is found in 'junk' DNA
(Dec 09) DIY DNA on a Chip: Introducing the Personal Genome Machine
(Dec 09) Break Out: Pacific Biosciences Team Identifies Asian Origin for Haitian Cholera Bug
(Dec 03) Which Is a Better Buy: Complete Genomics or Pacific Biosciences?
(Nov 27) A Geneticist's Cancer Crusade
(Nov 25) News from 23andMe: a bigger chip, a new subscription model and another discount drive [Grab it NOW - $99 Holiday Sale]
(Nov 17) This Recent IPO Could Soar [as money for "Analytics" makes "Sequencing" sustainable - AJP]
(Nov 16) BGI – China’s Genomics Center Has A Hand in Everything
(Nov 15) Most doctors are behind the learning curve on genetic tests - the $1,000 sequence and $1 M interpretation
(Nov 11) Forget About a Second Genomics Bubble: Complete Genomics Tumbles on IPO First Day
(Nov 11) The Daily Start-Up: Gene Tests Attracting Money & Scrutiny [23andMe C round with J&J]
(Nov 09) NIH Chief Readies the Chopping Block [ASHG in Washington ending on a sour note - AJP]
(Nov 09) Next Generation Sequencing
(Nov 08) Experts Discuss Consumer Views of DTC Genetic Testing at ASHG, Washington
(Nov 07) Complete Genomics plans Tuesday [November 9] initial public offering of stock
(Nov 02) Today, we know all that was completely wrong - Lander's Keynote at ASHG, Washington
(Nov 01) 1,000 Genomes Project Maps 16 Million DNA Variants: Why?
(Oct 29) Parody of Public’s Attitude Toward DTC Genetics
(Oct 27) UPDATE: Pacific Biosciences IPO Rises While First Wind Cuts Price
(Oct 26) UPDATE 1-Pacific Biosciences IPO prices at midpoint-underwriter
(Oct 26) Complete Genomics Sets IPO Price Range [Analytics is the key - AJP]
(Oct 24) IPO Preview: Pacific Biosciences [this week; a huge surge for Fractal Analytics - AJP]
(Oct 19) Benoît B Mandelbrot: the man who made geometry an art [censored - reinstated AJP]
(Oct 18) 'Fractalist' Benoît Mandelbrot dies [Long Live FractoGene ... AJP]
(Oct 16) Benoît Mandelbrot, Novel Mathematician, Dies at 85
(Oct 14) Going 'Beyond the Genome'
(Oct 09) Cold Spring Harbor Lab Says Benefits of ARRA Funding Will Outlast Stimulus Program
(Oct 08) New Research Buildings Open at Cold Spring Harbor Laboratory
(Oct 07) What to Do with All That Data?
(Oct 06) Pacific Biosciences Targeting $15-$17 Share Price for IPO
(Oct 06) The Road to the $1,000 Genome
(Oct 18) Revolution [was] Postponed [for too long, over Half a Century - AJP]
(Oct 01) The $1,000,000 Genome Interpretation
(Sep 27) Mastering Information for Personal Medicine
(Sep 20) Cacao Genome Database Promises Long Term Sustainability
(Sep 18) US clinics quietly embrace whole-genome sequencing
(Sep 17) The Broad's Approach to Genome Sequencing (Part II)
(Sep 10) Pellionisz Principle; "Recursive Genome Function" gets well over a quarter of a Million hits (261,000)
(Sep 08) Victory Day of Recursion over Junk DNA and Central Dogma COMBINED
(Sep 07) Complete Genomics to Sequence 100 Genomes for National Cancer Institute Pediatric Cancer Study
(Sep 01) Junk DNA can rise from the dead and haunt you [Comments]
(Sep 01) Pacific Biosciences Denies Helicos' Infringement Claims
(Aug 24) Will Fractals Revolutionize Physics, Biology and Other Sciences?
(Aug 19) Reanimated ‘Junk’ DNA Is Found to Cause Disease
(Aug 18) Life Technologies inks $725M deal for Ion Torrent
(Aug 16) PacBio files for $200 million IPO
(Aug 15) How Can the US Lead Industrialization of Global Genomics? [AJP]
(Aug 15) Francis Collins: One year at the helm [US gov. over the cliff in Genomics - AJP]
(Aug 15) BGI Americas [and BGI Europe] Offers Sequencing the Chinese Way
(Aug 15) Junk DNA: Does it Hold More than what Appears?
(Aug 12) Biotech is back [in Korea - AJP]
(Aug 12) CLC bio [of Denmark] and PSSC Labs [California] Deliver Turnkey Solution for Full-Genome Data Analysis
(Aug 10) GenomeQuest and SGI Announce Whole-Genome Analysis Architecture
(Aug 10) Pacific Biosciences Expands into European Union
(Aug 09) Pacific Biosciences launches PacBio DevNet at ISMB 2010 - [Partners]
(Aug 08) Illumina Inc. et al. v. Complete Genomics Inc.
(Aug 05) I was wrong ...
(Aug 04) "Recursive Genome Function" - winner takes it all
(Aug 03) Pellionisz' "Recursive Genome Function" supersedes both obsolete axioms of "Central Dogma" AND "Junk DNA"
(Jul 31) Mountain View's Complete Genomics to make Wall Street debut
(Jul 30) SPIEGEL Interview with Craig Venter: 'We Have Learned Nothing from the Genome'
(Jul 28) GenePlanet in Europe makes Genome Testing Global
(Jul 27) Pfizer to Study Liver Cancer in Korean Patients with Samsung Medical Center
(Jul 27) Lee Min-joo donates 3 billion won to genome project
(Jul 27) Working with regulators-the road ahead
(Jul 23) GAO Studies Science Non-Scientifically
(Jul 23) FDA's 'Out-of-The-Box' Plans
(Jul 22) DTC Genome Testing of SNP-s “Ready for Prime Time”?
(Jul 17) Message arrived ... "the scientific community had to re-think long held beliefs"
(Jul 15) A Proving Ground for P4
(Jul 15) Ion Torrent, Stealthy Company Tied to Harvard’s George Church, Nabs $23M Venture Deal
(Jul 15) PacBio Nabs $109M to Make Cheaper, Faster Gene Sequencing Tools
(Jul 14) Recursive Genome Function at the crossroads - Charlie Rose Panel on Human Genome Anniversary
(Jul 08) The Sudden Death of Longevity
(Jul 07) 23andMe Letter to Heads of FDA and NIH
(Jul 07) Amazon Sees the Future of Biology in the Cloud
(Jul 06) Calling GWAS Longevity Calls into Question [Gene(s)]
(Jul 04) IBM setting up cloud for genome research
(Jul 02) Scientists Discover the Fountain of Youth! Or Not.
(Jul 01) IBM DNA Decoding Meets Roche for Personalized Medicine
(Jun 30) How to Build a Better DNA Search Engine
(Jun 30) 'Jumping genes' make up roughly half of the human genome
(Jun 27) A coding-independent function of gene and pseudogene mRNAs regulates tumour biology
The Second Decade: Recursive Genome Function

(Jun 26) Business Models for the Coming Decade of Genome-Based Economy - the past and transition
(Jun 26) Business Models for the Coming Decade of Genome-Based Economy - the transition and future
(Jun 25) The Genome and the Economy
(Jun 24) 23andMe Publishes Web-Based GWAS Using Self-Reported Trait Data
(Jun 24) Francis Collins: the extended genome anniversary interview
(Jun 24) The Big Surprise of the First Decade - The Genome Affects You to Prevent Diseases, Before it Cures Diseases
(Jun 23) Sergey Brin’s Search for a Parkinson’s Cure
(Jun 23) Data-Driven Discovery Research at 23andMe
(Jun 22) ACI Personalized Medicine Congress in Silicon Valley postponed from June 23-25 to December 9-10, 2010
(Jun 20) The Genome, 10 Years Later
(Jun 16) The Path to Personalized Medicine
(Jun 15) FDA Cracks Down on DTC Genetic Testing
(Jun 15) FDA Did Not Crack Down on DTC Genetic Testing [AJP]
(Jun 11) Why the FDA Is Cracking Down on Do-It-Yourself Genetic Tests: An Exclusive Q&A
(Jun 11) Breaking: FDA Likely to Require Pre-Market Clearance for DTC Personal Genomics Tests
(Jun 11) The Gutierrez Letters from FDA to DTC Genome Testing Companies
(Jun 11) What Five FDA Letters Mean for the Future of DTC Genetic Testing
(Jun 10) Silicon Valley's Genome-Based Personalized Medicine Meeting Postponed to Dec 9-10
(Jun 09) Would Regulation Kill Genetic Testing?
(Jun 04) Stanford School of Medicine Launches Center for Genomics and Personalized Medicine
(Jun 04) Your Genome Is Coming [to where? - AJP]
(Jun 03) Illumina Drops Personal Genome Sequencing Price to Below $20,000
(Jun 02) The Journal Science Interviews J. Craig Venter About the first "Synthetic Cell"
(Jun 02) Scientist: 'We didn't create life from scratch'
(Jun 01) The Genome Project is 10 Years Old - Where is the Health Care Revolution?
(May 27) Get Your Genotype Tests Now Before Congress Makes Them Illegal
(May 26) Who Should Control Knowledge of your Genome
(May 25) 'Junk' DNA behind cancer growth
(May 24) Transparency First: A Proposal for DTC Genetic Testing Regulation
(May 24) Convey Computer Hails Genomics Search Record
(May 18) CVS Follows Walgreens Down Pathway of Least Resistance
(May 11) Company plans to sell genetic testing kit at drugstores
(May 22) Why The Debate Over Personal Genomics Is a False One
(May 21) Existence Genetics is Pioneering the Field of Predictive Medicine - Nexus Technologies Critical in Understanding and Preventing Deadly Disease
(May 21) Where to next for personal genomics?
(May 20) How Bad Can a House Investigation be for DTC Genomics?
(May 20) Joining The Genomics Revolution Early
(May 20) DTC Genomics Targeted by Congressional Investigation
(May 19) BGI Expands Into Denmark with Plans for $10M Headquarters, Staff of 150
(May 17) Potential of genomic medicine could be lost, say science think-tanks
(May 16) Effects of Alu elements on global nucleosome positioning in the human genome
(May 15) Rapid Rise of Russia
(May 12) Genomics goes beyond DNA sequence
(May 12) Walgreens To Sell Genetic Test Kits For Predisposition To Diseases, Drug Response
(May 11) Bio-informatics Springs Up to Place Genome in Neverland
(May 09) Hood Wins $100k Kistler Prize
(May 06) Crisis in the National Cancer Institute
(May 03) Stanford bioengineer [Quake et al.] explores own genome
(Apr 28) Joint research begins on individual-level mechanisms of gene expression [RIKEN and Complete Genomics]
(Apr 28) James Watson Just Can't Stop Talking at GET
(Apr 27) New Algorithmic Method Helps Elucidate Molecular Causes of Inherited Genetic Diseases
(Apr 26) Affymetrix Launches Axiom Genome-Wide ASI Array For Maximized Coverage of East Asian Populations
(Apr 25) Digitization Slashing Health IT Vendor Dominance
(Apr 24) When Reading DNA Becomes Cheaper Than Storing the Data [Not "Disposable Genome" - AJP]
(Apr 23) 23andMe Special Sale on DNA Day (Apr 23 only) - full service for $99
(Apr 22) Predictive, Participatory, Personalized Prevention (P4) Health Care [Chaired by International HoloGenomics Society Founder, Dr. Pellionisz]
(Apr 21) BioMerieux, Knome Team on Sequencing-Based MDx
(Apr 20) Eric Lander's Secrets of the Genome ["Mr. President, the Genome is Fractal!" - AJP]
(Apr 18) Malaysian Genomics Resource Centre Berhad Launches US$4000 Human Genome Bioinformatics Service
(Apr 15) Barcode app tracks allergies [to be tested with Nestle products]
(Apr 14) Human Genome Mapping’s Payoff Disappoints Scientists
(Apr 13) Big science: The cancer genome challenge
(Apr 12) Francis Collins: DNA May Be A Doctor's Best Friend
(Apr 05) Korean Scientists Discover Asian-Specific CNV Genome Catalog
(Apr 03) Middle East Healthcare News [Asia & Middle East Alliance - AJP]
(Apr 02) Genome Sequencing to Predict, Prevent, Treat Diseases [Samsung in Korea - AJP]
(Mar 31) Life is complicated [but complexity is in the eye of the bewildered; think FractoGene - AJP]
(Mar 31) Human Genome Mapping’s Payoff Disappoints Scientists
(Mar 27) Genome Maps of 10 Koreans Completed
(Mar 19) 'Junk' DNA gets credit for making us who we are
(Mar 18) Can a gene test change your life? [Yes, says Francis Collins, with the example of his life...]
(Mar 15) Why the State of Personal Genomics is Not as Dire as You Think
(Mar 11) "Personal" study shows gene maps can spot disease
(Mar 09) A Vision for Personalized Medicine
(Mar 03) Genome Service Available for Predicting Illness [in Korea - and Asia]
(Mar 01) It will not be a DNA Data-Deluge. Get ready for a Tsunami while the data-level is at a low-ebb
(Feb 28) Doctors ‘lack training in genetics to cope with medical revolution’
(Feb 27) Genetic testing may yield personalized health treatments
(Feb 26) Splash Down: Pacific Biosciences Unveils Third-Generation Sequencing Machine
(Feb 25) The Future Has Already Happened - How it might unfold by Complete Genomics and Pacific Biosciences?
(Feb 24) Pacific Biosciences Names First Ten Early Access Sequencer Customers
(Feb 24) Oral Cancer Study Shows Full Tumor Genome; Novel Method Speeds Analysis for Individualized Medicine
(Feb 23) Junk DNA could provide vital clues to heart disease
(Feb 22) Three YouTubes later: Is IT ready for the Dreaded DNA Data Deluge?


The Next $100 Billion Technology Business

Forbes
Matthew Herper
Dec. 30 2010

That headline is the cover language from the current issue of Forbes magazine – for a story I wrote about DNA sequencing and, particularly, about Jonathan Rothberg and his new Personal Genome Machine.

What we are declaring in this story is that DNA sequencing, the technology by which individual letters of genetic code can be read out, could be the basis for a $100 billion market that encompasses not only medicine, where sequencing is already being evaluated to help cancer patients, but also other fields like materials science, biofuels that replace petroleum, and better-bred crops and farm animals. There are even synthetic biologists who are talking about using biology to make buildings and furniture based on the idea that this will be better for the environment than current plastic and concrete.

Rothberg’s machine is important because it is the first attempt to lower the cost of DNA sequencing machines to bring them to a far wider audience. The cost of sequencing DNA has been dropping at a rate that rivals – and may surpass – the increases in speed seen with the microchip, but the machines used to do it cost $500,000 or more. The PGM is far less powerful, but it costs only $50,000, although you need other equipment to get it running. It is being made and sold by Life Technologies, the laboratory equipment firm that bought Rothberg’s company earlier this year.

We’re likening the PGM to the Apple Computer, which changed the world, but it could be more like the Altair, which fizzled. Right now, Illumina of San Diego holds the lead in the newer, faster segment of the DNA sequencing market, and it could very well keep it. There are also other new players, such as Pacific Biosciences, which can sequence a single molecule of DNA, and Complete Genomics, which is taking a factory-like approach to bring down cost. That adds to the excitement. Of course, as with all things biotech, this could all fall apart.

Starting on Monday – earlier if I can’t help myself — I’m going to be posting as much material as I can about the new science of genetics, looking at the companies, the science, the potential and the pitfalls. I’ll show you that there has already been a business revolution driven by genomics, even though you might have missed it. I’ll tell you what I think this means for privacy, for drug development, and for medicine, and tell you what books and blogs are good sources of information about the coming DNA wave. It will be Gene Week on The Medicine Show – sort of like Shark Week, but with alleles and sequencing by synthesis instead of sharp teeth and small brains.

And as I do that, I’d like to hear from you. After you’ve read my story, please tell me what you think and share your questions and criticisms in the comments or via email. I’ll publish the best commentaries I receive – and I promise not to hold back if they are critical of my work. I’ll try to answer questions, or to find sources who can. I think we’re on the cusp of a really big technological change. What about you?

[Pellionisz, called-out comment] Forbes has been pioneering the coverage of the first Decade of the “Genome Revolution”, and apparently of the second Decade, the “Industrialization of Genomics”. It is imperative to point out what went wrong in the first Decade (since 2001) and what is the sound basis for ramping up a “$100 Billion Technology” from 2011 – especially in view of the admission that “Of course, as with all things biotech, this could all fall apart”. In my opinion, an interesting historical parallel is how physics handled the blow-up of Newtonian axioms (that the atom would not split, that elements cannot be changed – even the philosophical foundation of determinism): it had to create quantum mechanics first, before plunging into the peaceful and not-so-peaceful applications of the nuclear industry. The Decade of the “Genome Revolution” revealed (officially with ENCODE, 2007) that the axioms of the Central Dogma, Junk DNA and genomic determinism were false – yet instead of introspection, Genomics turned toward an “Industrialization” starting with the necessary but not sufficient step of making full genome sequencing affordable. We may be in for the brutal scene of “Sequencing-based industrialization” falling apart if the “Dreaded DNA Data Deluge” (which I featured, for example, on YouTube in 2008) is not matched by our ability to interpret, by means of software-enabling theory (based on the sound informatics of The Principle of Recursive Genome Function). We must bear in mind that the very sustainability of the Industrialization of Genomics is at stake if it is (wrongly) assumed to be just “Technology”, without an algorithmic, software-enabling understanding of the genome-epigenome (hologenome). Sequences will be worth nothing, and their huge glut will destroy the ecosystem of investment in and industrialization of genomics, unless a mathematical understanding of the complex system of the hologenome emerges. As always, analytics of complex systems must start with identifying what the system is. The Principle of Recursive Genome Function holds that the system is fractal. Should there be a better idea, let’s hear about it.

[I entered my 2 cents in Matthew's blog - and will insert a pointer in my FaceBook page. The entry could thus be debated in Matthew's blog as well as in FaceBook of Andras Pellionisz]


23andMe lowers price from $499 to $199 permanently

[With the Holiday sale of $99 gone, 23andMe has made its change of business model permanent. Under the new model they provide a low-cost entry ($199 plus S&H) but charge a monthly pittance of $5, and the buyer of the kit must enroll in the monthly updates for at least 12 months. Or, with a one-time payment ($499), there is no monthly fee for the updates (apparently on a permanent basis). This new marketing will certainly generate a sizable enlargement of their pool of roughly 50,000 "before sales" customers (the official count is not public). This entry can be debated in FaceBook of Andras Pellionisz]


Genetic Tests Debate: Is Too Much Info Bad for Your Health?

Dec 19, 2010 | 8:31 AM ET

By Samantha Murphy
My Health News

Hoping to find any disease susceptibilities lurking in her genes, 21-year-old Lee — who goes by the nickname "Zlyoga" on YouTube — spit into a container and posted a video of her salivary sampling on the popular site in December 2008.

"I think this is the coolest thing in the entire universe," she giddily said to the camera. "This is one of the best gifts I ever received — so much better than the bike I was going ask for."

Lee's parents had given her a mail-in genetic test. She completed the forms in the kit — then priced at a few hundred dollars — from California genetic-testing company 23andMe, enclosed the sample of her saliva, and sent it on its way.

Direct-to-consumer genetic tests have become increasingly popular since they first hit the market several years ago — in fact, 23andMe alone boasts of having more than 50,000 customers. Although the company does not release statistics about its actual growth, a spokeswoman told MyHealthNewsDaily that "our database has grown steadily."

But there's been much debate over whether knowing the results is beneficial or harmful — and if they even give an accurate picture of a person's risk for certain diseases.

Not the whole picture

When Lee received her results, she took to YouTube once again: "In some ways, reading the results felt like a horoscope," she said, sounding half-satisfied.

The test revealed she indeed has blue eyes and, like both of her parents, a low tolerance for statin drugs, which are used to treat people with high cholesterol levels. However, she was surprised to learn that a few diseases that run in her family posed little to no risk for her.

"I know [genetic testing] is still in its infancy, but how do I know any of this legitimate?" she said.

Others who have completed the 23andMe test have expressed similar concerns on YouTube: "It's all so vague; what does [higher risk] even mean?" asked "MelissaMich," after receiving her 23andMe results.

Not only may the results seem vague, they are just a snapshot of someone's genetic makeup, based on single-nucleotide polymorphisms (or SNPs).

"Imagine that the genome is a huge jigsaw puzzle, with many more pieces than we have ever seen in a puzzle before," said Dr. Andras Pellionisz, founder of HolGenTech, a genome interpretation software company. "Now, suppose someone gives you only 10 percent of the pieces."

By knowing your SNPs, you may "get lucky" and precisely learn your risk of some diseases, said Pellionisz, who thinks the tests are a good idea. But, he said, the genetic picture is incomplete.

For example, the results don't factor in lifestyle choices. A person might be told he has a low risk of developing lung cancer, but if he smokes two packs a day, his chances of getting the disease increase.

Indeed, 23andMe is upfront about this on its site, stating that it provides genetic information and "not the sequence of your entire genome," nor does it perform predictive or diagnostic tests. The company acknowledges SNP information is difficult to interpret.

The uncertainty factor

UCLA sociology professor Stefan Timmermans, who studies the genetic testing of newborns, said that knowing too much puts stress on those who've taken the tests. In a recent study, Timmermans revealed how the newly mandated genetic screening of newborns for rare diseases is creating unexpected upheaval for families whose infants test positive for risk factors but show no immediate signs of any diseases.

"Although newborn screening undoubtedly saves lives, some families are thrown on a journey of great uncertainty," Timmermans said. "Rather than providing clear-cut diagnoses, screening of an entire population has created ambiguity about whether infants truly have a disease — and even what the disease is."

"Basically you're telling families of a newborn, 'Congratulations, but your child may have a rare genetic condition. We just don't know, and we don't know when we'll know,'" Timmermans said.

His study paints a picture of families caught in limbo as they wait months for conclusive evidence that their children are out of the woods for various conditions. In many cases, however, the test results never come, the study found. Instead, the children slowly outgrow their known risk factors for dozens of metabolic, endocrine or blood conditions. But the effects linger.

"Years after, everything appears to be fine, parents are still very worried," Timmermans said.

Some families are so traumatized that they follow unwarranted and complicated treatment regimens, including waking their children up in the middle of the night, enforcing restrictive diets and limiting their contact with other people for years.

And the same lasting worries come with direct-to-consumer testing, Timmermans told MyHealthNewsDaily.

"Those types of tests are planting seeds in people's minds for something there isn't a lot of firm data about. The genetic information provided by direct-to-consumer tests by itself isn't enough; they also have to look at family history and what has actually developed."

Understanding the results

Mike Spear, a 56-year-old communications director from Calgary, Alberta, didn't know exactly what to make of his results. He found he had a high risk of age-related macular degeneration, which causes vision loss in old age, and became very concerned.

"When I saw my results on paper, it seemed impossible to distance myself from the fact that it was just an experiment," said Spear, who works for Genome Alberta, a not-for-profit genetics research funding organization.

Spear contacted a genetic counselor to gain more insight into the results' meaning.

"The counselor explained that just because I'm at risk for certain conditions, it doesn't mean it's going to turn into anything," Spear told MyHealthNewsDaily. "I also started to be proactive and go to the eye doctor more."

The company 23andMe offers counseling services and gives customers access to its online community, where people can chat about their results, for an additional fee.

But not everyone who takes the tests reaches out to genetic counselors who can help explain the results, said Dr. Christopher Tsai, director of clinical informatics at Generation Health, a genetic-testing benefits firm. And this is another place where problems arise.

Further interpretation

The interpretation of the results is an increasingly central challenge to genetic testing, Tsai said, and the power to analyze the genome has outpaced the ability to interpret the results.

"Even trained geneticists disagree on how results should be interpreted and used to guide care," Tsai said. "The results can certainly be empowering if they lead to concrete actions that the patient can take. Even in terms of lifestyle, there is some evidence that genetics influence people's response to diet and exercise, and this information can guide their lifestyle changes."

According to a 2009 survey conducted by the National Institutes of Health, about 78 percent of respondents said they would change their diet and exercise habits if their results showed a higher risk of cardiovascular disease.

However, some diagnoses seem to only bring bad news, Tsai warned.

For example, a test can predict if someone will develop Huntington's disease — a devastating neurological condition with no cure. It's common for people to develop depression when given such results. [That is why 23andMe does not even check for this condition - so they have no idea - even if you'd click "I *DO* want to know" - AJP].

"Understanding the value of genetic tests, and when [and] where to use them in the health care system, is becoming an increasing focus of the health care industry," Tsai said.

FDA crackdown

It is for this reason the Food and Drug Administration is increasingly scrutinizing direct-to-consumer genetic-testing companies, Tsai said.

Critics of the tests worry about the safety of consumers who base important lifestyle or medical decisions on inaccurate or misunderstood test results.

"The risk is what people may do in response to the tests — some may suffer psychological harm or feel dread about their future health risks," said Barbara J. Evans, co-director of the Health Law & Policy Institute at the University of Houston Law Center.

"People sometimes pursue ill-advised medical interventions that may actually cause them harm. They may be unaware that these tests can produce false positives and false negatives, and even when people do have a 'bad' gene that does not necessarily imply that the gene will ultimately make them sick. People's futures depend on many, many factors other than their genes," Evans said.

Evans said the solution will require studies to be done before tests enter the market, and ongoing evaluations to see how well they perform once they are in use.

About 90 percent of genetic tests available in the United States have never been through a regulatory review of how safe they are or how much they improve health, she said. Most experts agree such review is needed, but solutions have been mired in the controversies surrounding the tests.

There are practical barriers to forcing all genetic tests to undergo the same sort of review the FDA requires for other medical products, such as drugs. The obstacles include lack of data, the short commercial lives of test products and the difficulty of assessing products that make long-term predictions.

Evans said making genetic tests as safe and effective as they can be will require close coordination among the FDA, state agencies, professional groups and other private-sector overseers.

"Resolving the lingering unknowns about genetic tests will require more data, and getting more data will require a commitment of resources," Evans said. "And make no mistake, having better data about genetic tests will only improve the public's health if the data are communicated in a timely and understandable way to the public."

[Bottom line: GO FOR IT WHILE IT LASTS (sale ends December 25) - the best gift you can ever give for the Holidays! It checks for 179 conditions - at roughly 50 cents per condition you can save the lives of loved ones (up to ten kits per order). This entry can be debated in FaceBook of Andras Pellionisz]


Key information about breast cancer risk and development is found in 'junk' DNA

EurekaAlert
December 16, 2010

A new genetic biomarker that indicates an increased risk for developing breast cancer can be found in an individual's "junk" (non-coding) DNA, according to a new study featuring work from researchers at the Virginia Bioinformatics Institute (VBI) at Virginia Tech and their colleagues.

The multidisciplinary team found that longer DNA sequences of a repetitive microsatellite were much more likely to be present in breast cancer patients than in healthy volunteers. The particular repeated DNA sequence in the control (promoter) region of the estrogen-related receptor gamma (ERR-γ) gene – AAAG – occurs in between five and 21 copies, and the team found that patients who have more than 13 copies of this repeat have a cancer susceptibility rate three times higher than that of those who do not. They also discovered that the repeat doesn't change the actual protein being produced, but likely changes the amount.
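
To make the repeat-length biomarker concrete, here is a minimal sketch, in Python, of how one might count the longest tandem run of AAAG in a promoter sequence and apply the paper's more-than-13-copies threshold. The toy sequence, function names and regex approach are illustrative assumptions, not the study's actual analysis pipeline.

    import re

    def longest_aaag_run(seq: str) -> int:
        # Largest number of consecutive AAAG copies anywhere in the sequence.
        runs = re.findall(r"(?:AAAG)+", seq.upper())
        return max((len(run) // 4 for run in runs), default=0)

    def elevated_risk(seq: str, threshold: int = 13) -> bool:
        # Flag sequences exceeding the >13-copy threshold reported in the study.
        return longest_aaag_run(seq) > threshold

    promoter = "TTGC" + "AAAG" * 15 + "GCTA"   # toy sequence with 15 copies
    print(longest_aaag_run(promoter))          # -> 15
    print(elevated_risk(promoter))             # -> True

In a real assay the copy number would come from sequencing or fragment-length analysis of the patient's ERR-γ promoter region, not from a string search on a finished sequence.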

The researchers from VBI's Medical Informatics and Systems Division (https://www.vbi.vt.edu/vbi_faculty/vbi_research_group/personal_research_group_page?groupId=6), the University of Texas Southwestern Medical Center and the University of Liverpool, United Kingdom, report their findings in an upcoming edition of the journal Breast Cancer Research and Treatment. The study is currently available online. The group sequenced a specific region of the ERR-γ gene in approximately 500 patient and volunteer samples. While the gene has previously been shown to play a role in breast cancer susceptibility, its mechanism was unknown.

"Creating robust biomarkers to detect disease in their early stages requires access to a large number of clinical samples for analysis. The success of this work hinged on collaborations with clinicians with available samples, as well as researchers with expertise in a variety of areas and access to the latest technology," explained Harold "Skip" Garner, VBI executive director who leads the institute's Medical Informatics and Systems Division. "We are now working to translate this biomarker into the clinical setting as a way to inform doctors and patients about breast cancer susceptibility, development, and progression. Akin to the major breast cancer biomarkers BRCA1 and BRCA2, this will be of particular benefit to those high-risk patients with a history of cancer in their family."

The majority of DNA is non-coding, meaning it is not translated into protein. The largest share of this type of DNA consists of microsatellites – specific repeated sequences of one to six nucleotides within the genome. There are over two million microsatellites in the human genome, yet only a small number of these repetitive sequences have previously been linked to disease, particularly neurological disorders and cancer.

"We've become increasingly aware that non-coded DNA has an important function related to human disease," said Michael Skinner, M.D., professor of pediatric surgery at the University of Texas Southwestern Medical Center and collaborator on the project. "Replication of this study in another set of patients is needed, but the results indicate that that this particular gene is an important one in breast cancer and they reveal more details about the expression of the gene. This kind of work could eventually result in the creation of a drug that would specifically interact with this gene to return expression levels to a normal range."

"Ninety percent of all the breast cancer patients we see aren't considered high risk patients, which means there wasn't any indication that they would be susceptible to breast cancer," said Dr. James Mullet, a radiologist at Carilion Clinic's Breast Care Center. "This compels us to screen everyone in some way. If we had a better test – one that is more robust and sensitive, but also specific – we could make sure the women with most risk are getting properly screened for breast cancer."

"One practical clinical application of this research is to have a test available that would allow us to tailor our screening better," Mullet said. "For example, we could lessen patients' time, expense, and worry if we could better determine which patients would need only a mammogram, as opposed to additional tests like ultrasound or screening breast MRI. This work may also give us genetic insight into the cause of the breast cancer that may develop in those 90 percent of patients who are not currently identified as high risk."

According to Garner, "There is a big gap between what is suspected and what is known about the genetics of cancer. While more work is needed to better understand how these changes play a role in cancer, these results can be used now as a new test for breast cancer susceptibility and, as our data suggests, for colon cancer susceptibility and possibly other types of cancer. We think this is just the beginning of what there is to be found in our junk DNA."

--

[Excerpts from the "Discussion" of the paper in Breast Cancer Research and Treatment - AJP]

There are at least five possible explanations for our results: (1) direct transcriptional influence of ERR-γ based on the length of the repeat, (2) linkage of an ancestral "lengthening" mutation with a cancer-causing mutation in/around ERR-γ, (3) the repeat resides in an uncharacterized biologically active RNA which is affected by the length of the repeat, (4) misregulation of splicing due to overexpansion of the polymorphic repeat, or (5) a spurious association due to various sampling errors or population issues (albeit unlikely).

["Runs" in intronic and intergenic regions (in the "Junk") have been associated with heredetitary syndromes - but his is one of the papers that pins the main "genome regulation disease" (cancer) on them. Thus, Crick's fear that collapse of his "Central Dogma" (in 1972 rescued by Ohno's nonsense "Junk DNA" theory) will necessitate "putting genomics on an entirely new intellectual foundation" (see The Principle of Recursive Genome Function), is now becoming an active field of identification of glitches in the recursion, leading to a collapse of genome regulation. It has been presented in Cold Spring Harbor, that e.g. the GAA repeat-run in Friedreich' Spinocerebellar Ataxia is a "fractal defect" that disrupts a FractoSet in the middle of an intronic fractal structure. This entry can be debated in FaceBook of Andras Pellionisz]



DIY DNA on a Chip: Introducing the Personal Genome Machine

Fast Company
BY ARIEL SCHWARTZ
Wed Dec 8, 2010

[Life/Ion Torrent Desktop Sequencer, $49k - AJP]

DNA sequencing technology isn't exactly accessible; a typical sequencing machine can easily cost $500,000. A startup called Ion Torrent aims to change that with a desktop sequencing machine for just $50,000--cheap enough for well-funded research projects to afford.

The key to Ion Torrent's Personal Genome Machine is a semiconductor chip that holds 1.5 million sensors, each of which can hold a single-stranded DNA fragment. The chip detects the DNA sequence electronically, unlike other sequencing machines that detect DNA optically with pricey lasers, microscopes, and cameras. It can sequence a DNA sample in a few hours, while other machines can take at least a week. And it can scale up fast. The company explains:

Because Ion Torrent produces its proprietary semiconductor chips in standard CMOS factories, we leverage the $1 trillion investment that has been made in the semiconductor industry over the past 40 years. This industry's huge manufacturing infrastructure enables Ion Torrent to meet any demand for our chips.

There are some caveats. Each $250 chip can only be used once. The chip also reads a small amount of DNA: 10 to 20 million bases per run, out of the 3 billion base pairs in the human genome. But that's enough for genetic diagnostic tests, according to Technology Review.
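
A back-of-envelope calculation shows why those caveats matter. Using only the figures quoted above (up to 20 million bases per run, a 3-billion-base-pair human genome, a $250 single-use chip), this sketch estimates the chip cost of even a single-pass human genome; the arithmetic is illustrative, not a vendor quote.

    # Throughput arithmetic from the article's own figures.
    bases_per_run = 20e6   # upper end of the quoted 10-20 million bases per run
    human_genome = 3e9     # ~3 billion base pairs
    chip_cost = 250        # each chip is single-use

    runs_for_1x = human_genome / bases_per_run
    print(f"Runs for 1x human coverage: {runs_for_1x:.0f}")            # ~150
    print(f"Chip cost alone for 1x: ${runs_for_1x * chip_cost:,.0f}")  # ~$37,500

At roughly 150 runs for a single pass (and many more for reliable coverage), the chip is clearly pitched at targeted diagnostic panels rather than whole human genomes, exactly as the article concludes.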

Ion Torrent's machine goes on sale this month. Soon enough, these semiconductor sequencing chips may start popping up in cash-endowed hospitals around the world. Could consumer DNA sequencing machines be far behind?

[Life Technologies bought Jonathan Rothberg's Ion Torrent for $725 M just a few months ago - for an extremely strong reason: "Democratization of Genome Sequencing" (beyond the Industrialization of Genomics, making it available to the masses, like the Ford Model T). Without question, "Leveraging the Semiconductor Industry" for Genomics is a formidable economic driver. However, since the one-time-use $250 chip (run on the $49k machine) is a "bridge" from present microarray technology (which is NOT a sequencer, but is suitable only for interrogating up to about 1.6 M single-letter variants) to "full sequencing of all 6.2 Bn A,C,T,G letters of a human genome" (like Roche's 454, Illumina's Genome Analyzer and Life's own SOLiD, with Complete Genomics in production and Pacific Biosciences in beta production), the "Personal Genome Machine" by Life/Ion Torrent caters to a very precious segment of the market. The "fractal defects" (peek at the Cold Spring Harbor presentation by Pellionisz) are much, much larger structural variants compared to single-letter SNP-s (typically, they are 150-350 letter oligos) - precisely in the range of the capabilities of the PGM. Moreover, the IP-owner of "The Fractal Approach", HolGenTech, Inc., is based on the dovetailing economic driver of "Leveraging the High Performance Computer Industry" for Genomics - by focusing on pure-play Genome Analytics software. Note that the Personal Genome Machine is a "Sequencer" - and it sorely needs another box (as a washer needs a dryer) that takes the raw sequence data and provides Analytics and Interpretation; either "diagnosis" in hospital settings (FDA permitting), or "Genome-based Product Advertisements" (see the YouTube "Shop for your Life") that require no clearance from the medical establishment - yet accomplish the kind of "democratization" that turns the PGM into a real "Consumer Sequencing Machine" by enabling consumers to use genomic information in their daily lives. This entry can be debated in FaceBook of Andras Pellionisz]



Break Out: Pacific Biosciences Team Identifies Asian Origin for Haitian Cholera Bug

By Kevin Davies
December 9, 2010

In a dramatic piece of ultra-quick genetic detective work, next-generation sequencing company Pacific Biosciences has decoded the sequence of the strain of bacteria responsible for the deadly cholera outbreak in Haiti. The findings, which confirm the putative Asian origin for the devastating disease, are published online in the New England Journal of Medicine today.

The project was led by physician scientists at Harvard Medical School and Massachusetts General Hospital (MGH), including Matthew Waldor, John Mekalanos, Stephen Calderwood and Morton Swartz. “This understanding has important public health policy implications for preventing cholera outbreaks in the future,” says Mekalanos.

Cholera was first detected in Haiti in mid October, spreading across the country and into the Dominican Republic. Nearly 2,000 people have died from the outbreak, with no end in sight. Shortly after the outbreak, Waldor contacted the Centers for Disease Control and Prevention (CDC) and offered to sequence the bacterial strain using Illumina’s technology. Waldor says the CDC initially said he could have the strain, but five days later, changed their mind (citing political reasons) and said they were going to do it themselves. “At that point, I thought we were out of the game,” says Waldor.

CDC subsequently announced that using pulse-field gel electrophoresis fingerprinting technology, the strain was consistent with a south Asian origin. "But from a pure scientific point of view, that's hearsay," Waldor, who is professor of medicine at Harvard Medical School and an investigator with the Howard Hughes Medical Institute, told Bio-IT World. "What are their controls? Pulse-field gel electrophoresis has nothing like the depth of a full genome sequence."

But by then, Waldor and colleagues were already putting the finishing touches to their manuscript. Two weeks earlier, two MGH physicians – pediatrician Jason Harris and Richelle Charles – returned from Haiti with samples they’d collected from a hospital. But who would do the sequencing?

That Was the Week

Two days earlier, on Saturday November 6, Waldor emailed a speculative inquiry to the PacBio website. “I knew they had some exciting technology, my understanding was it was very useful for resequencing bacterial genomes.” While he was fishing around on the PacBio Web site, Waldor noticed that one of his colleagues at the Brigham & Women’s Hospital – Joseph Bonventre – was on the PacBio advisory board.

“So I called him up,” Waldor continues. “He was in his office that Saturday, just like me, I told him the story, and he said, ‘let me make a phone call.’ Literally five minutes later, the CEO of PacBio, Hugh Martin, called me up, and said, ‘that sounds very interesting. Let me talk to Eric Schadt and my team.’ We got the strains on Monday November 8. Eric and the CTO called me that day and said they’d be interested in collaborating.”

“We’re going all in!” Waldor recalls Schadt telling him. “They went all in, I must say.”

Waldor’s team grew up the Vibrio cholerae strains on November 8, and the DNA samples arrived at PacBio in California on Wednesday, November 10. “We got a good idea of the [identity of the] two Haitian strains on the evening of November 12. We sent three other strains for comparison, including a true resequencing of the canonical strain.”

Each of the five strains took about one day to sequence to about 60X coverage. “They did an outstanding job in the analysis,” says Waldor. “Most of the credit for this project goes to Eric and his team.”

“The rapidity and depth of the sequence using this 3rd-generation sequencing technology has enormous potential to transform how we can analyze outbreaks of infectious disease and even the prediction of future outbreaks because of the power of their technology.”

According to PacBio, the five cholera genomes were sequenced on November 12 to 12-15X coverage in less than two hours. [This is the kind of speed I predicted in my 2008 Google Tech Talk YouTube as vital for deploying sequencing in real-life emergencies - AJP]. Further runs bumped up the coverage to 60X over the course of the day. Over the next three days, the sequence data were subjected to in-depth analysis, including genome assembly, annotation, and sequence comparisons, including comparisons to nearly two dozen published cholera genomes.
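
The coverage figures translate into raw sequence volume by simple arithmetic: average depth equals total bases sequenced divided by genome size. Here is a minimal sketch, assuming a roughly 4-megabase Vibrio cholerae genome (the genome size is my assumption; the 12-15X and 60X targets are from the article).

    # Depth-of-coverage arithmetic: depth = total bases sequenced / genome size.
    genome_size = 4.0e6   # V. cholerae genome, roughly 4 million bp (assumed)

    def bases_needed(depth: float) -> float:
        # Total sequenced bases required to reach a given average depth.
        return depth * genome_size

    for depth in (15, 60):
        print(f"{depth}x coverage needs ~{bases_needed(depth) / 1e6:.0f} Mb of reads")
    # 15x -> ~60 Mb of raw sequence; 60x -> ~240 Mb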

Subsequent bioinformatic analysis confirmed earlier hints matching the Haitian cholera outbreak to a variant of the “El Tor O1” variant from South Asia. This strain has never been documented in the Caribbean or Latin America, suggesting that a recent visitor to the island, possibly a volunteer or a United Nations peacekeeper helping relief efforts after the earthquake, could have inadvertently carried the bacteria to Haiti from outside Latin America.

“Our data strongly suggest that the Haitian epidemic began with the introduction into Haiti of a cholera strain from a distant geographic source by human activity,” is how Waldor puts it. The results disprove another possibility, namely that the strain arose from the local aquatic environment.

The identification of the Haitian strain has important implications regarding vaccination, says Waldor. “By showing this strain is closely related to a south Asian strain, and not close to Latin American isolates, it shows that human activities – food or water brought from South Asia – led to this epidemic, not a transfer from Latin America. That’s a conclusion that allows us to alter our policies in the future to prevent such a thing. For instance, relief workers or security forces should be deployed from where there is no domestic or endemic cholera. Otherwise, workers should be screened and/or vaccinated, so they can’t bring it in.”

Speed of analysis is crucial in such situations, says Jason Harris, requiring a technology “that could immediately provide comprehensive genomic information about this virulent strain and quickly get it into the hands of the global health and research community. In the initial stages of a major epidemic, real time is the speed we need to be working in order to have the greatest impact on saving lives.”

From PacBio’s perspective, Schadt says that “real-time monitoring” of pathogens opens the door to using his firm’s technology as “a routine surveillance method, for public health protection in addition to pandemic prevention and response.”

Warning Sign

Just last month, Waldor and colleagues published a perspective in the New England Journal advocating the establishment of a cholera vaccine stockpile in the United States to be used to counter outbreaks such as the one in Haiti. There are an estimated 3-5 million cases of cholera each year resulting in about 100,000 deaths.

“The resistance to vaccination is truly baffling,” Waldor said at the time. The Harvard/PacBio results raise another troubling possibility: expansion of the epidemic, with the currently endemic strains being replaced by much more threatening variants. “That would be a deeply troubling outcome,” says Mekalanos. “A cholera vaccination campaign might not only control the disease but also minimize the dissemination beyond the shores of Hispaniola.”

The scientific manuscript was drafted over a few days and submitted to NEJM on November 19. The paper was formally accepted on December 1 and published December 9. “That’s like my record,” says Waldor.

Further Reading: Chin, C-S. et al. “The origin of the Haitian cholera outbreak strain.” New England Journal of Medicine December 9, 2010.

[PacBio [PACB] stock shot up by about 10% on the news of this historical landmark in applying rapid genome sequencing to the control of pandemics. This is the first major example of how "time-critical" sequencing could make a global difference. This entry can be debated in FaceBook of Andras Pellionisz]



Which Is a Better Buy: Complete Genomics or Pacific Biosciences?

By Brian Orelli

November 29, 2010

Today's battle pits recent DNA sequencing IPOs Complete Genomics (Nasdaq: GNOM) against Pacific Biosciences of California (Nasdaq: PACB). Which one is the better buy? Let's have a closer look.

What they do

Technically, Complete Genomics and PacBio aren't direct competitors, but they're close enough. Complete Genomics specializes in sequencing DNA for researchers; PacBio sells DNA sequencing equipment to researchers. It comes down to whether researchers will outsource their sequencing or do it themselves, so the two end up competing directly for the funds researchers have to carry out experiments.

And the two have plenty of additional competition. Illumina (Nasdaq: ILMN), Life Technologies (Nasdaq: LIFE), and Roche have been selling DNA sequencers for a while. Illumina also offers a sequencing service direct to patients.

Both Complete Genomics and PacBio are still in the ramping-up stage. Neither is turning a profit nor should you expect one soon. Fortunately, they have an influx of cash from their IPOs to keep things going for a while.

As of the end of September, Complete Genomics had sequenced 400 genomes to date, 300 of which were completed in the third quarter. At the time, the company had a backlog of more than 800 genomes.

PacBio had sold seven of its limited production models as of the middle of September and had orders for four more that were scheduled to ship by the end of the year. We should get an update on the commercial launch of its machine when PacBio has its earnings conference call tomorrow after the stock market closes.

What investors think

One of the things David Gardner looks for when picking stocks for the Motley Fool Rule Breakers newsletter is strong past price appreciation. This isn't technical analysis mumbo jumbo, but there's something to be said for a company that other investors have confidence in.

Complete Genomics and PacBio don't have a very long history, but so far their ability to catch the fancy of investors has been fairly limited.

Company              Expected IPO Range   Actual IPO Price   Price Close Nov. 26
Complete Genomics    $12-$14              $9                 $7.76
PacBio               $15-$17              $16                $11.53

What this Fool thinks

Investors are rightfully timid about the DNA sequencing hype. Remember how the Human Genome Project was going to save the world? Human Genome Sciences (Nasdaq: HGSI) was worth more than $100 on a split-adjusted basis in early 2000. Ten years later, with the company on the verge of getting its first drug approved, the stock is trading at only $25.

Famous last words or not, I think this time it's different. We know a lot more about what genes do now than we did 10 years ago, and the price of sequencing has come down considerably. At some point, getting a DNA sequence will be a routine part of a newborn's first checkup, and everyone who is already alive is going to have to catch up. There's a lot of DNA to be sequenced and therefore a lot of money to be made.

But investors do need to be careful. The market may be huge, but it shrinks as more people get their genomes sequenced, since your genome doesn't really change.

That's different from, say, the software market, where the pool of potential customers remains constant because Microsoft can convince current customers to upgrade to newer software.

The best long-term hope for Complete Genomics and PacBio is probably to expand into other markets, just as Intuitive Surgical (Nasdaq: ISRG) has expanded the use of its robotic surgery machines into additional surgical procedures. Tumors often have genetic mutations, so they'll likely get sequenced to determine the best drugs to treat the cancer. And you could use DNA sequencing to identify viruses and bacteria.

Still, I think this is ultimately a boom-bust industry, albeit with the bust still many years away.

Which one?

If you're interested in trying to catch the boom and get out before the bust, both Complete Genomics and PacBio look like a good choice to benefit from an exponential increase in DNA sequencing.

It's too early to make a definitive call, but of the two, I like PacBio better because I'm not fond of the low-cost, high-volume business model. Sure, it's worked for Costco and Wal-Mart, but I like PacBio's razor and blade model -- sell the machine once and then continue to supply reagents year after year -- a little better.

Which one is your pick? Take the poll and let us know your reason in the comment box below.

[Actually, both in practice and in theory the answer is rather easy to tell. In practice, if one is limited to the price of a stock (an astonishingly narrow-minded approach), the cheaper stock is that of Complete Genomics. In theory, it is also easy to pick the winner among all entrants of the "Sequencing" technology companies. Since sequences alone are absolutely worthless without interpretation, the winner will be the one that secures the key to the algorithmic (thus software-enabling) theoretical high ground. Which will it be out of the five companies listed (and further runners-up)? It may be one of them, some sharing key IP - or perhaps another company that is not even listed above. This entry can be debated in FaceBook of Andras Pellionisz]



A Geneticist's Cancer Crusade

The discoverer of the double helix says the disease can be cured in his lifetime. He's 82.

By ALLYSIA FINLEY
The Wall Street Journal
November 27, 2010

'We should cure cancer," James Watson declares in a huff, and "we should have the courage to say that we can really do it." He adds a warning: "If we say we can't do it, we will create an atmosphere where we just let the FDA keep testing going so pitifully."

The man who discovered the double helix and gave birth to the field of modern genetics is now 82 years old. But he's not close to done with his life's work. He wants to win "the war on cancer," and thinks it can be won a whole lot faster than most cancer researchers or bureaucrats believe is possible.

Call it the last crusade of one of the nation's most indefatigable and productive scientists. In a long career, Dr. Watson was awarded the Nobel Prize in Physiology or Medicine (1962), garnered 36 honorary degrees and wrote 11 books, including the bestseller "The Double Helix" (1968), which recounts his dramatic quest with Francis Crick to determine the structure of DNA. He spent the early 1990s helping spearhead and direct the Human Genome Project to identify all human genes. And there's the 40 years he's devoted to transforming the Cold Spring Harbor Laboratory in Long Island, N.Y., from a ramshackle ruin into the elite cancer research institute it is today.

To hear Dr. Watson tell it, this determination began—at least formally—in Hyde Park at the age of 15. "The University of Chicago always used to be ranked in the U.S. News and World Report as the third most unpleasant college to go to in the United States," he chuckles. "It was a place that was knocking you down and expecting you to get up by yourself. Nobody was picking you up."

He says he's the better for it because it taught him how to be a leader, something he thinks there are too few of nowadays. "The United States is suffering from a massive lack of leadership. There are some very exceptional, good leaders. I'm not saying they don't exist, but to be a good leader you generally have to ruffle feathers," which Dr. Watson believes most people aren't willing to do.

He certainly is. Throughout his career, Dr. Watson has been a lightning rod for controversy, beginning with his unflattering portrayal of some fellow scientists as awkward and hostile in "The Double Helix." He later butted heads with fellow genetic researcher and founder of Celera Genomics, Craig Venter, over the commercialization of the human genome. Dr. Venter wanted to turn a buck for his firm by selling access to the human genome sequence. Dr. Watson thought the human genome database should be free to the public.

In 2003, Dr. Watson stirred up another academic kerfuffle when he joked that genetic engineering could be used to make all women beautiful and, more seriously, that gene therapy could one day cure stupidity. His 2007 book "Avoid Boring People: Lessons from a Life in Science" used the following words to describe former Harvard colleagues: "dinosaurs," "vapid," "mediocre" and "deadbeats."

But these days, Dr. Watson is sparring with the bureaucratic behemoth known as the FDA.

"The FDA has so many regulations," Dr. Watson says. "They don't want you to try a new thing if there's an old thing that might work. . . . So you take the old thing, but we know cancer changes over time and we would really like to get it whacked early, and not late. But the regulations are saying you can't do these things until we give you a lot of s— drugs," he snorts. "Shouldn't this be the patient's choice to say I would rather beat the odds with a total cure rather than just to know that I am going to have all my hair fall out and then after a year I'm dead? . . . Why should [FDA commissioner] Margaret Hamburg hold things up? There's the cynical answer it gives employment to lawyers."

Ah, the lawyers. "Right now America is being destroyed by its lawyers! Most of the people in Congress just want work for lawyers." He quickly adds: "I was born an Irish Democrat, so I wasn't born into a family which instinctively says these things. But my desire is to cure cancer. That's my only desire."

Dr. Watson may have been born an Irish Democrat, but he's more of a libertarian when it comes to scientific regulation. In his view, freer research enables greater innovation. "I do think one success of Northern Europe, which the United States came from, was its willingness to accept innovation in business practices like Adam Smith and the whole Enlightenment. It essentially made the merchant class free instead of controlled by the king and aristocracy. That was essential."

Another impediment to innovation today is funding. Dr. Watson thinks money is being spread around too much and not enough is going to the best brains. "Great wealth could make an enormous difference over the next decade if they sensibly support the scientific elite. Just the elite. Because the elite makes most of the progress," he says. "You should worry about people who produce really novel inventions, not pedantic hacks."

He also complains that too often government and private money help support scientists rather than cutting-edge science. "That's not the aim of our money—job research, job security. It should be job insecurity. Or hospital insecurity. Empty the breast cancer ward."

Dr. Watson's commitment to innovation is why most scientists at Cold Spring Harbor don't have tenure. Instead, they have security for five years. "We can't decide at the age of 40 that you're going to have a job for 30 years even though you're not producing much science."

Although Dr. Watson says leaders should think in the long term, he is critical of those who say we might find a cure for cancer in another 10 to 20 years. "If you say we can get somewhere in 10 to 20 years, there's no reason you shouldn't be saying 20 to 40, except then people would just give up hope. So 10 to 20 still maintains hope, but why not five to 10?" He adds that there's no reason we shouldn't know all of the genetic causes of major cancers in another few years.

"I want to see cancer cured in my lifetime. It might be. I would define cancer cured as instead of only 100,000 being saved by what we do today, only 100,000 people die. We shift the balance." Alas, modern research has merely reduced cancer mortality in the United States from about 700,000 per year to about 600,000. "We've still got 600,000, which is what the problem is."

The challenge now—at least by Dr. Watson's lights—is killing the mesenchymal cells that cause terminal cancer and figuring out why those cells have become chemotherapy-resistant. He says scientists and doctors are reluctant to tackle terminal cancer because there's so much that remains uncertain about its causes.

The treatment of early-stage cancer, however, is more certain since scientists have already pinpointed many of the genes that are associated with specific cancers. But they still don't know exactly which gene or gene mutations lead to terminal cancer [one reason they still don't know may be that NO GENE OR GENE MUTATION is the cause of cancer at all - but rather a derailed HoloGenome Regulation - AJP].

Dr. Watson points out that scientists are correctly looking at DNA before they treat early-stage cancer, since different drugs work on different genes. "If I had cancer I'd certainly want them to look at the DNA to see if there's a Ras gene or change in the Ras gene," which signals cell growth and proliferation.

He points to lung cancer as a case in point. Right now Dr. Watson says there are two types of treatments. The first is a new drug that treats cancers linked with the specific gene ALK, which has proven effective in trials. "I have no idea if it works beyond the first six months, but most drugs don't work in the first six months, so that's very good."

Then there's Tarceva and Iressa, two drugs that inhibit the epidermal growth factor receptor that causes cancer cells to divide. But "they only work on about 10% of people," who have specific mutations in their tumors. "And they work for about a year, and then you become resistant. And we don't have anything to treat the resistant cancer with."

So this is where we now stand in the war against cancer: at our own 20-yard line with a playbook full of untested, complicated plays. But Dr. Watson is optimistic that there could be a Hail Mary: a single drug that will work on all of the deadly mesenchymal cells. All of these cells, he notes, secrete a protein—interleukin-6—and in lab experiments, adding interleukin-6 to lung cancer cells that had been controlled by anti-cancer drugs made them resistant to the treatments.

Thus the key to curing cancer may be finding a drug that blocks interleukin-6. "While this would be wonderful if it turns out to be true," he says, he doesn't know if it is and he concedes, "it's not conventional wisdom."

Despite his crusade, it's not cancer that personally scares Dr. Watson. It's Alzheimer's disease. When he had his genome sequenced and published in 2007, he specifically asked that the doctors not reveal whether he had a gene that would make it virtually certain he would develop Alzheimer's. The mentally debilitating disease would make it impossible for him to continue his research—not to mention that it would estrange him from his family.

I ask Dr. Watson about the double-edged sword of DNA testing and its proliferation. As prices fall due to improved technology, the market for testing grows. Now companies like 23andMe are selling personal DNA tests for roughly $500. Simply spit in a tube, send it in, and in a few weeks you'll get back everything you've ever wanted to know about your genetic inheritance—and some stuff you'd probably rather not.

While such information might encourage some people to adopt healthier lifestyles or get more frequent check-ups, it could also cause undue anxiety. For example, what do you do when you learn at the age of 20 that you have a gene that makes you susceptible to Parkinson's disease—something that you can't do anything about?

To this question, Dr. Watson says that DNA testing "has to involve a lot of acquired common sense." But he doesn't think that common sense should come from government agencies. "I don't see how regulations can do it." Banning it because of potential negative repercussions would be futile.

Futile—now that's a word you won't often hear Dr. Watson use. "I'm going to look optimistically and of course sometimes it doesn't work," he says. But "you move forward through knowledge. You prevail through knowledge. I love the word prevail. Prevail!"

Ms. Finley is assistant editor of OpinionJournal.com.

[I am not going to mince words here - it is time for blunt talk, in order for Dr. Watson to renounce Crick's "Central Dogma" (to which Dr. Watson actually never subscribed). Jim Watson is a friend and a hero (I chose as my FaceBook icon a picture of myself standing next to him and his Double Helix at Cold Spring Harbor, where I was invited to present a "breakthrough idea": ditching both the JunkDNA and the Central Dogma obsolete axioms, to be superseded by The Principle of Recursive Genome Function). Jim Watson is far too clever, and he never subscribed to Crick's "Central Dogma" - Jim just stated the truth (DNA>RNA>PROTEIN), never claiming that "the information never recurses to DNA". However, now we need his leadership - not only to do away with the FDA's obsolete legal mandate of 1976, but more importantly for science to make a clean break and say with full force that cancers (the explosion of genome regulation) will never be "cured" unless we target the disease as it is: a derailment of Recursive Genome Function. Dear Jim, renounce Crick's "Central Dogma", "break down that wall"! Just look at cancerous, uncontrolled growth; some of it can be seen by the naked eye to be the result of Fractal Iterative Recursion gone out of control. This entry can be debated in FaceBook of Andras Pellionisz]

^ back to top


News from 23andMe: a bigger chip, a new subscription model and another discount drive

[GRAB IT NOW - $99 Offer Extended till December 25, or when supply runs out - AJP]

[$99 plus a monthly $5 covers 175 conditions - visit the 23andMe website to see the video testimonials on how a $99 gift may save the life of a loved one - AJP]

Category: 23andme • personal genomics

Posted on: November 24, 2010 8:45 AM, by Daniel MacArthur

Personal genomics company 23andMe has made some fairly major announcements this week: a brand new chip, a new product strategy (including a monthly subscription fee), and yet another discount push. What do these changes mean for existing and new customers?

The new chip

23andMe's new v3 chip is a substantial improvement over the v2 chip that most current customers were run on (the v2 was introduced back in September 2008). Firstly, the v3 chip includes nearly double the number of markers across the genome, meaning that it is able to "tag" a larger fraction of common genetic variants ("tagging" means that a marker on the chip is sufficiently highly correlated with other markers that it can be used to make a reasonable guess about someone's sequence at those other markers). Secondly, the chip now includes additional custom markers targeting specific variants that the company thinks will be of interest to its customers.

The technical details: the v3 chip is based on Illumina's HumanOmniExpress platform, which includes 733,202 genome-wide markers. The company has also added around 200,000 custom markers to the chip (vs ~30,000 on the v2 chip). We don't yet have full details on what those custom markers are, but there's a summary of the improvements over the v2 chip in the press release:

Increased coverage of drug metabolizing enzymes and transporters (DMET) as well as other genes associated with response to various drugs.

Increased coverage of gene markers associated with Cystic Fibrosis and other Mendelian diseases such as Tay-Sachs.

Denser coverage of the Human Leukocyte Antigen region, which contains genes related to many autoimmune conditions.

Deeper coverage of the HLA is particularly welcome - variants in this region are very strongly associated with many different complex human diseases (including virtually every auto-immune disease), and the v2 chip was missing several crucial markers.

The addition of more rare variants associated with Mendelian diseases like cystic fibrosis is entirely unsurprising, but the devil will be in the details: in the arena of carrier testing 23andMe is up against the extremely thorough and experimentally validated platform offered by pre-conception screening company Counsyl. It will be very interesting to see the degree to which 23andMe focuses on the carrier testing angle in their marketing of the v3.

More power for imputation

From the perspective of those of us simply interested in squeezing as much information as possible out of our genetic data, the v3 chip is a welcome arrival. The additional markers present on the chip will substantially improve the power of genotype imputation - that is, making a "best guess" of our sequence at markers not present on the chip using information from tagging variants.

The HumanOmniExpress platform has some decent power here: in European and East Asian populations, 60-70% of all of the SNPs with a frequency above 5% found in the 1000 Genomes pilot project are tagged by a marker on the chip (in this context, "tagged" means "has a correlation of 80% or greater"). In effect, that means that being analysed at the one million markers on this chip allows you to make a decent inference of your sequence at around another 4.5 million other positions in your genome.
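
[To make "tagging" and imputation concrete, here is a minimal Python sketch - the haplotype panel and the numbers it produces are illustrative toys, not 23andMe's pipeline; only the 80% r^2 threshold comes from the paragraph above:]

    # Toy illustration of LD "tagging": a typed marker can stand in for an
    # untyped one when the two are strongly correlated across population
    # haplotypes. Panel is invented; the threshold is the 80% quoted above.
    from collections import Counter

    # Phased haplotypes as (allele at typed tag SNP, allele at untyped SNP)
    panel = [(0, 0)] * 48 + [(1, 1)] * 47 + [(0, 1)] * 2 + [(1, 0)] * 3

    def r_squared(haps):
        """Squared correlation (r^2) between two 0/1-coded markers."""
        n = len(haps)
        pa = sum(a for a, _ in haps) / n                  # tag allele frequency
        pb = sum(b for _, b in haps) / n                  # target allele frequency
        pab = sum(1 for a, b in haps if a == b == 1) / n  # 1-1 haplotype frequency
        d = pab - pa * pb                                 # linkage disequilibrium D
        return d * d / (pa * (1 - pa) * pb * (1 - pb))

    def impute(tag_allele, haps):
        """Best guess at the untyped SNP: the most common partner allele."""
        partners = Counter(b for a, b in haps if a == tag_allele)
        return partners.most_common(1)[0][0]

    r2 = r_squared(panel)
    print(f"r^2 = {r2:.2f}")                              # 0.81 for this panel
    if r2 >= 0.8:
        print("tagged; best guess for tag allele 1 is", impute(1, panel))
    else:
        print("below the 80% threshold - imputation unreliable here")

Real imputation makes the "best guess" against reference panels of thousands of phased haplotypes (such as the 1000 Genomes data mentioned above) rather than a single pair of markers, but the principle is the same: strong correlation on known haplotypes lets a typed marker vouch for an untyped one.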

At the recent American Society of Human Genetics meeting, 23andMe presenter David Hinds suggested that the medium-term future for 23andMe rested not on moving to sequencing, but on expanding the role of genotype imputation. The new chip will certainly help with that. However, it's worth emphasising that imputation is not a replacement for sequencing: it is only accurate for markers that are reasonably common in the population, meaning that it will miss most of the rare genetic variants present in your genome.

However, improved imputation with the extra markers on the v3 chip will mean that 23andMe should be able to do a decent job of predicting customer genotypes at the positions we currently know the most about - those arising from genome-wide association studies of common, complex diseases. I expect that many customers will see changes to their disease risk profiles as a result of the move to the new chip.

Over at Genomes Unzipped, we've already been looking at various approaches to imputation from our 23andMe v2 data, and we'll put a post together soon looking at how this will improve with content from the v3 chip.

The new product strategy

There are two interesting things that 23andMe has done with the new product line: firstly, it has reversed the transient division of its products into separate Health and Ancestry components; and secondly, it has introduced a subscription model in which customers pay $5/month for updates to their account as new research findings become available (previously, customers paid a flat purchase fee and were then entitled to free updates).

The recombining of the Health and Ancestry products into a single Complete package is an extremely interesting move. As Dan Vorhaus notes, the previous separation of the two product lines was plausibly interpreted as a way for the company to pre-empt the possibility of a regulatory crackdown by the FDA: if regulators hammered the company's ability to offer health-relevant tests directly to consumers, 23andMe could easily switch to its Ancestry product to maintain a revenue stream.

In the currently uncertain regulatory environment, the decision to reverse this division is an unexpected one. It certainly appears that 23andMe - flush with cash following a successful $22M funding round - is somewhat more confident than I am about the regulatory future for health-relevant genetic tests; I hope that confidence turns out to be warranted. [I am much more of an optimist; since 23andMe could easily outsource their services to Asia or even Central Europe - AJP]

Subscription fees: good for customers

The decision to add a subscription fee may prove unpopular with customers (and has already received a qualified thumbs down from blogger Dienekes, albeit for perfectly sensible reasons). However, a business based on providing continuous product updates that customers don't pay for has never really looked viable in the long term.

I personally see a subscription model as a positive move: it provides a steadier revenue stream for personal genomics companies, which means less focus on splashy discount drives. It also provides more of a financial incentive for the company to improve the ongoing experience of customers: under the current deal customers are locked in for the first 12 months, but after that 23andMe will need to convince them that it's worth continuing to pay for additional content and features.

Other personal genomics companies (e.g. Navigenics) have long relied on some form of a subscription model, but typically at a higher cost. I think 23andMe is hitting a pretty reasonable price point here: I suspect $60/year would be seen by most customers as a fair price.

OMG discount!

That doesn't mean that 23andMe has abandoned the discount drive approach just yet, of course: they're currently offering v3 kits for just $99 (vs the retail price of $499), which must be purchased along with the previously mentioned 12-month subscription fee of $60. Non-US customers can also expect a ~$70 postage fee, based on comments on Twitter.

Anyone who missed out on the DNA Day sale and is keen to take advantage of the v3 content would be well-advised to get in quickly. The discount code is B84YAG.

[Terrific Holiday Gift - grab it TODAY (offer is still good till November 29, or when the supplies run out)! This entry can be debated in FaceBook of Andras Pellionisz]

^ back to top


This Recent IPO Could Soar [as money for "Analytics" makes "Sequencing" sustainable - AJP]

[This double IPO could soar? - Yes, when funds start pouring into "Analytics", to make "Sequencing" business sustainable - AJP]

Tuesday, November 16, 2010
Street Authority

The initial public offering (IPO) market continues to heat up with deals coming this week for GM (NYSE: GM), Booz Allen (NYSE: BAH), Caesars Entertainment (NYSE: CZR) and a half dozen other firms. The flurry of deals puts us on track for the most robust quarter for IPOs in more than two years. And looking at the pipeline of new deal registrations, the first quarter of 2011 may be even hotter.

I recently looked at a strategy that uses analyst research to find stocks about to pop. [See: "The Secret Way to Play IPOs"]

Yet that's not the only way to look for upside among recent new deals. You can also scan lists for "broken IPOs," which are firms that have been public for a short while and are drifting lower while investors focus on more established companies.

Last month, I took a look at top-performing IPOs. As I wrote back then, "many new IPOs take time to find their sea legs and only take off well after their debuts. In fact, every single stock [mentioned in that piece] came out of the gate with a whimper and only started rising many weeks or months after their debut."

The stocks in the table below are all broken IPOs; each is trading at least 15% below its IPO offering price. I've pored through the list and found the best rebound candidate.

Complete Genomics (Nasdaq: GNOM)

Any company that struggles to fetch a desired IPO price is a conundrum for investors. On the one hand, a lower-than-expected price is a sign that investor demand just isn't there. On the other hand, you've got a chance to buy a stock at a cheaper price than investment bankers have recently assessed. Case in point, Complete Genomics, which hoped to sell shares for $12 to $14, had to settle for a $9 offering price last Friday, and the stock is now down to $7. That's just half the high end of the expected range of pricing. The weak demand may be due to the fact that rival Pacific Biosciences (Nasdaq: PACB) had just pulled off an IPO weeks earlier, snatching the attention of any fund managers that buy these kinds of companies.

Complete Genomics is involved in DNA sequencing. While other firms like Illumina (Nasdaq: ILMN) and Pacific Biosciences sell equipment to scientists, Complete Genomics acts as a service bureau, performing third-party DNA sequencing services.

Why the tepid IPO reception? Complete Genomics is just starting to generate sales and investors fear that quarterly losses will continue for the next year or two, setting the stage for another round of capital-raising. Ideally, the company would have waited until sales started building and losses started shrinking, but its backers likely balked at putting any more money into the company.

Yet this stock has all the makings of an IPO rebounder, as the firm's underwriters, led by Jefferies, get set to publish initial reports on the company in early December. You can expect to see bullish forecasts of projected sales growth rates, and if you look out far enough, fast-rising profits.

Analysts are likely to note that Complete Genomics' DNA sequencing approach may prove to be very cost-effective and capable of winning high market share. Industry leader Illumina can sequence an entire human genome for around $10,000 in materials. Complete Genomics thinks it can do it for just $4,500. And over time, prices could drop well below that level, making DNA sequencing for the masses more feasible.

Action to Take --> Keep an eye on new IPOs. They often stumble out of the gate, giving the false impression that they are unworthy investment candidates. Of the recent crop of IPO laggards, Complete Genomics appears to have the greatest potential upside.

With a broken IPO and scant revenues, investors will need to focus on the value of the company's technology. Complete Genomics is valued at less than $150 million, roughly $20 million less than the money spent developing its technology platform. The revenue profile tells you that this is as risky as any biotech stock. But if the company can make headway in the space, investors may start to make comparisons to Illumina, which is valued at $7.2 billion -- 50 times more than Complete Genomics.

-- David Sterman

[While a mere $10 M Round A or M&A (with IP) into a "pure-play DNA Analytics Company" like HolGenTech, Inc. (which leverages HPC for Genomics) could yield a decisive advantage to a "Sequencer" company (if such a deal were exclusive), David's compelling argument above - that the two fresh "pure-play" Sequencer companies are grossly undervalued (by a factor of as much as 50x), thereby providing a historic investment opportunity - assumes that the long-public genome companies (Roche/454, Illumina, Life Technologies/Ion Torrent) will not try to take the high ground of "Analytics", and thus force Complete Genomics (whose auditor has warned its investors of this very risk) out of business before it could take off. That assumption may be mistaken - as may be the belief that a key "pure-play DNA Analytics Company" would agree to an "exclusive", rather than go for the easily $10 B opportunity of its Genome Computers catering to all Sequencers on a non-exclusive basis (see below the explosive global market of Sequencers, humming with an eye on $1,000 sequences - but in dire need of the $1 M interpretation). This entry can be debated in FaceBook of Andras Pellionisz]

^ back to top


BGI – China’s Genomics Center Has A Hand in Everything

Singularity Hub
November 11th, 2010 by Aaron Saenz

[See interactive (zooming) World Map of Sequencers here - AJP]

When it comes to genomics, China seems a little like the proverbial kid in the candy store – she wants a taste of everything. Of course, unlike the child, China might be making a bid to own the candy store outright as well. The Beijing Genomics Institute (BGI), now located in Shenzhen, is the leading genomics facility in China, and in all of Asia. BGI has striven to make a name for itself in every major international genome sequencing project of the last decade: the International Human Genome Project, the International Human HapMap Project, sequencing SARS, the Sino-British Chicken Genome Project, etc. It was also responsible for completely sequencing the rice genome, the silkworm genome, the giant panda genome…the list goes on and on. By the end of this year BGI will have 128 of Illumina's HiSeq 2000 platforms, 27 of AB's SOLiD 4 systems, and many other sequencing devices. At full capacity this means they will be capable of the equivalent of 10,000+ human genomes per year. And they are still growing. BGI may not be the largest genomics facility in the world, but it has phenomenal support from its government, ambition to expand quickly, and a hand in dozens of major sequencing projects. You can't talk about the future of genetics without talking about China.
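
[A rough sanity check of that "10,000+ genomes per year" claim, in Python - the assumed per-instrument throughput (~200 Gb per ~8-day HiSeq 2000 run, i.e. ~25 Gb/day) is a 2010-era vendor spec, not a figure from this article, and the SOLiD fleet is ignored:]

    # Rough capacity check: can 128 HiSeq 2000s deliver 10,000+ genomes/year?
    # GB_PER_DAY is an assumed 2010-era spec (~200 Gb per ~8-day run).
    HISEQ_COUNT = 128
    GB_PER_DAY = 25
    GENOME_GB = 3.1 * 30     # one human genome at 30x coverage, ~93 Gb

    genomes_per_year = HISEQ_COUNT * GB_PER_DAY * 365 / GENOME_GB
    print(f"~{genomes_per_year:,.0f} genomes/year")  # ~12,559 - consistent with 10,000+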

In late 1999 the Beijing Genomics Institute started to build China into a world leader in genetic research. In the decade that's elapsed since, they've put their name on some major developments...

Since its inception, BGI has had a very ambitious attitude when it came to participating in world genomics. Every time they were presented with a new project, they basically said, sure, we’ll be a part of that. They contributed 1% to the Human Genome Project’s reference genome, and 10% to the Human HapMap Project. It was like they never met a sequencing project they didn’t like.

That attitude hasn’t seemed to wane at all. BGI is spearheading efforts that will sequence a wide variety of organisms. There’s the 1000Genomes Project aimed at producing a wide database of human genomes from people all over the world. They are also working to sequence 1000 plants and animals, and have already completed 14+ of the former and around 50 of the latter. In 2009, BGI launched its effort to map the genomes of 10,000 microbes – they’ve managed 800 bacteria, 100 fungi, and 100 viruses so far, with more finished every day. They are looking for collaborators to sequence 1000 Mendelian Disorders in humans. Completion of large genetic databases like these will be part of what could empower genetic research to finally make the discoveries the public has been waiting for since the first human genome was sequenced a decade ago.

Even while BGI is a testament to Chinese ambitions in genomics, it also speaks to the prominence of the US in that field. BGI relies heavily, almost exclusively, on sequencing technology rooted in California. Illumina's HiSeq 2000 and Applied Biosystems' SOLiD 4 form the bulk of BGI's machine workforce. To be fair, most of the world has focused on using these systems as well, and BGI is working to expand its hardware horizons, collaborating with OpGen on new optical sequencing methods. Still, when one sees BGI's successes in genomics one also has to acknowledge that such capabilities weren't developed in a vacuum. China's sequencing projects, like every nation's sequencing projects, have worked as part of a larger global effort.

The only real question, then, is how much will China simply be a part of that worldwide phenomenon, and how much will it lead? Even if the hardware is largely developed by California companies, those companies themselves are international entities [suffice it to point out that Life Technologies' just-announced 5500 SOLiD sequencers are co-produced with Hitachi - AJP]. BGI is officially part of the sequencing club, recognized by Illumina as one of its associated world class facilities. BGI isn't some second tier group working its way to the top, it's already at the top, sharing space with the other lead genomics institutions around the world. If BGI and China continue to dedicate money, labor, and insight into genomics, they'll be able to set the agenda for many sequencing projects around the globe. Actually, they're already doing this with their various sequencing projects for microorganisms, plants, animals, and humans.

I know that many of us will view BGI’s growing importance through the lens of competitive national spirit. Yet no matter your feelings about China, you have to view BGI’s accomplishments as wonderful gifts to the global scientific community. Every genomics center around the world is going to have different specialties (Complete Genomics is dedicated to bringing down the costs of human whole genome sequencing, for instance) and it’s only through combining these disparate efforts that we’ll create the general understanding we need to move the field of genetics forward. It’s a team effort. Yay China, Yay us.

[If one had to point out an outstanding difference between BGI and the much older schools of the UK and USA, it is Genome Informatics. While it is true that both Illumina's and Life Technologies' Sequencers are based on biochemistry-technology of the USA, at BGI in Shenzhen (in the backyard of Hong Kong) 3,000 Genome Informatics specialists are working busily, with an average age of 27 (thus by definition none of them can be "old schoolers"). Hong Kong and Seoul have some of the very best Neural Network specialists of the World. Once they devote full attention to "The Principle of Recursive Genome Function", China will set PostModern Genomics onto an entirely new trajectory of hypergrowth. - This entry can be debated in FaceBook of Andras Pellionisz - AJP.]

^ back to top


Most doctors are behind the learning curve on genetic tests - the $1,000 sequence and $1 M interpretation

Updated 10/24/2010 8:37 PM

USA Today

By Rita Rubin, USA TODAY

GREENWICH, Conn. — It's ironic that Steven Murphy's medical practice is located in this town's Putnam Hill historic district.

His Maple Avenue building, the Dr. Hyde House, is a cozy hodgepodge of architectural styles, with stone and stucco walls, a double bay corner window and orange clay roof tiles. It has housed doctors' offices for a century.

Although Murphy's surroundings may be old-fashioned, his practice is not. Murphy, a board-certified internist who writes a blog called The Gene Sherpa, is one of a small minority of doctors who use genetic tests to help manage their patients' care.

"The majority of people we see have a very strong family history of X, Y or Z disease," says Murphy, who'll be 34 this week. He doesn't bring up genetic testing until after taking a detailed personal and family medical history and assessing such risk factors as cholesterol and blood pressure. "I tell them there are lots of ways to dig deeper. Then I also tell them the limitations."

Other patients show up with the results of personal genome tests, costing upward of $1,000, they had ordered online from companies such as 23andMe and Navigenics. They want to know what it all means. "We like to call it the thousand-dollar genome with the million-dollar interpretation," Murphy says.

Having trained in genetics as well as in internal medicine, he's much further on the learning curve than most doctors.

Since the Human Genome Project was completed in 2003, the introduction of new genetic tests has far outpaced the ability of doctors — who typically have little training in genetics — to figure out what to do with them. Some tests are marketed to help predict disease risk, others to determine how patients might respond to certain medications.

"This is going to become a very big part of mainstream medicine, and we really aren't ready for it," says human geneticist Michael Christman, president and CEO of the Coriell Institute for Medical Research, a non-profit research center in Camden, N.J.

A deluge of data

Eric Topol, director of the Scripps Translational Science Institute in La Jolla, Calif., cites what he calls "a really great paradox."

"Ask patients 'whom do you trust with your genomic data?' and 90% say their physicians," Topol, a cardiologist, says. Yet, when Medco Health Solutions, the pharmacy benefit manager, and the American Medical Association surveyed more than 10,000 doctors, only 10% said they felt adequately informed and trained to use genetic testing in making choices about medications.

That physician survey was conducted two years ago, but Topol, Christman and others in the field doubt much has changed.

Take the blockbuster drug Plavix. In March, the Food and Drug Administration added the strongest type of warning, a black box, to the label of Plavix, which is taken by millions of Americans who have had stents inserted to keep their coronary arteries open. Plavix is supposed to reduce the risk of blood clots in those stents, but, as the boxed warning notes, some patients might not effectively convert the drug to its active form in the body.

The warning points out that a genetic test can identify those patients, who might need a higher dose of Plavix or a different drug. Yet, Christman says, "even in tertiary academic medical centers, you don't have routine testing for Plavix efficacy."

On the other hand, Topol says, doctors have ordered 250,000 $100 tests for a gene called KIF6, tests that were aggressively marketed. One KIF6 variation was thought to raise heart disease risk by up to 55%, but, Topol says, a study this month in the Journal of the American College of Cardiology shot that down.

Considering that there are thousands of genetic tests, doctors might be forgiven for feeling overwhelmed, especially because so many questions remain.

"We have way more data than we have knowledge," says Clay Marsh, a lung and critical-care doctor who directs the Center for Personalized Health Care at The Ohio State University College of Medicine in Columbus. "The biology is struggling to keep up with the technology."

Though some diseases, such as sickle cell and cystic fibrosis, are caused by mutations in a single gene, many common conditions arise from the interplay of a variety of genes and lifestyle and environmental factors, not all of which have been identified.

"Having a family history of heart disease increases your risk of heart disease more than some of these (genetic) markers they test for," Murphy says. "Then, just because you have that marker doesn't mean that's what caused the heart disease in your family. That's one thing I teach residents: No gene is an island."

Right test for one patient

Murphy's first patient on a sunny fall afternoon sat in her home nearly 1,000 miles away, in the village of Niantic, Ill., smack dab in the middle of that state.

Wanda Conner, 72, never met a medication that agreed with her. She heard about the Genelex test for five genes involved in drug sensitivity from her dental hygienist and figured it might provide some answers. So she swabbed some cells from her cheek and mailed them to the company's lab in Seattle.

Besides seeing his own patients, Murphy reviews test results by phone for Genelex customers. He scrolled through Conner's on his computer. Turns out that she carried variations in three of the genes for which she was tested that could affect her response to certain medications.

Murphy touched on types of drugs that Conner wouldn't process normally if taken. He advised her to stay away from SSRIs, or selective serotonin reuptake inhibitors, a class of antidepressants, and cautioned her that she might experience side effects if she took beta-blockers, a heart medication. He promised to fax his report to her doctors.

"They're fascinated with this," Conner says of her doctors in Illinois, "but they don't know much about it. In fact, I probably know more than they do."

Christman's and Topol's organizations hope to change that. "The purpose of our study ... is to determine the best practices, from soup to nuts, in using personal genome information in clinical care," Christman says. "What are the best information technology systems to deliver this?"

For example, he says, when it comes to genetic factors affecting drug response, it probably makes more sense for pharmacists, not genetics counselors, to advise doctors or patients.

The Coriell Personalized Medicine Collaborative is halfway toward its goal of enrolling 10,000 people. Many are doctors. "We're measuring a lot of genetic information about them," Christman says. Genetics counselors explain the results, usually by e-mail or phone, which participants seem to prefer over a face-to-face visit.

The next 5,000 participants will have already been diagnosed with breast or prostate cancer or heart disease. The cancer patients will be recruited through Fox Chase Cancer Center in Philadelphia, the heart disease patients through Ohio State.

Coriell is sharing only results that patients or their doctors can do something about. Expert committees meet twice a year to review the latest findings about different genetic markers. "If somebody came out with an effective cure for Alzheimer's," Christman explains, "then we would report Alzheimer's risk."

In a related study, Coriell is investigating how best to educate doctors about genetic testing and how that affects what they do with results.

Meanwhile, Scripps plans to launch the College of Genomic Medicine, a free online physician training and accreditation program, early next year, Topol says. To become accredited, he says, doctors will spend five to eight hours reviewing materials developed by an international group of leaders in the field and then take a "highly interactive" test.

The genomic medicine college was born at last year's TEDMED, an annual medical technology and health care conference. There, Topol says, both he and Gregory Lucier, CEO of the San Diego-based Life Technologies, a leading supplier of gene-sequencing equipment to academic laboratories, delivered talks about the need to get the medical community up to speed.

As a result, the Life Sciences Foundation, the company's philanthropic arm, awarded Scripps a $600,000 grant to develop the genomic medicine college.

Topol expects interest in the program will be high.

"Consumers are coming into their physician with their genomic data," he says. "Physicians don't want to be trumped in their knowledge by the patient they're looking after. Instead of playing catch-up, they need to be in the leading front of knowledge."

[Who are we kidding? Will doctors spending "five to eight hours" be equipped to catch up with the output of, e.g., 3,000 full-time Genome Informatics specialists in just one place on the globe, the Beijing Genomics Institute??? This task is simply nonsense without doctors' use of results prepared by High Performance Genome Computers. At your annual check-up, does your doctor conduct your actual blood test? Nonsense! The sample is sent to the lab, where super-sophisticated machines conduct the tests and deliver only the results. The doctor does not even have to look at those "within range". This entry can be debated in FaceBook of Andras Pellionisz - AJP.]

^ back to top


Forget About a Second Genomics Bubble: Complete Genomics Tumbles on IPO First Day

Xconomy
Luke Timmerman 11/11/10

Super-fast, super-cheap DNA sequencing technologies have made big news in biotech the last couple years. But the early returns have made it clear that investors haven’t gone ga-ga for genomics like they did a decade ago.

The latest data point arrived today in the form of Complete Genomics (NASDAQ: GNOM). This Mountain View, CA-based company, which aspires to lead the way in the quest to sequence entire human genomes for as little as $1,000, got a ho-hum reception from IPO investors this week. The company pared back its forecasted price from a range of $12 to $14, ultimately settling for an initial price of $9 per share. And in its first day of trading as a public company—on an overall down day for the markets—Complete Genomics stock fell 11 percent, closing at $8.03.

The deal still provides Complete Genomics with a much-needed shot of $54 million in fresh capital, which it plans to use to pitch its commercial sequencing service to researchers. But the company was originally hoping to raise as much as $86 million, so Complete Genomics is going to have to pursue this market on a leaner budget.

One of Complete Genomics' archrivals—Menlo Park, CA-based Pacific Biosciences (NASDAQ: PACB)—has also seen some wind come out of its sails. PacBio broke out with a $200 million IPO late last month, commanding a hefty $800 million market valuation at an initial price of $16 a share. Investors initially bid up PacBio's shares to as high as $17.47, but the stock has since been on a downward slide, closing today at $12.51.

It’s nothing like the hype-driven period of 2000, in which first-generation genomics companies like Human Genome Sciences, Celera Genomics, Incyte, and Millennium saw their stocks enter triple-digit territory on notions that the genome would lead to new cures and personalized medicine, right around the corner. It didn’t happen, and for a nice little retrospective, check out this piece from Nature last March.

This will be a fascinating story to watch over the coming months and years, to see whether PacBio and Complete Genomics, as well as established players like San Diego-based Illumina (NASDAQ: ILMN) and Carlsbad, CA-based Life Technologies (NASDAQ: LIFE), will truly make gene sequencing so cheap and convenient that it’s really accessible to the average biologist and changes the way they think about running experiments. But it now looks clear that the two intriguing new entrants into the market will have to tap into this emerging arena without the luxury of being able to raise more cash at the snap of a finger.

[Some of us have warned against the unsustainability of the "Industrialization of Genomics" without proper "supply chain management". First, the Genome Based Economy is NOT about "running experiments" - "Sequencing" must be matched by "Analytics by Genome Computers", such that results can be useful for the ultimate markets of the ecosystem: Consumers should be directly involved in their own P4 Health Care, and Hospitals must be provided with both Sequencers and matching Genome Computers in order to be useful in diagnosis and personalized therapy, up to personalized cure. Biodefense, Agriculture and Synthetic Genomics are all branches of the "Genome Industry" - but (similar to the Nuclear Industry) it cannot be built on demonstrably false axioms of the underlying science. (Nuclear reactors or nuclear weapons were simply unthinkable under the obsolete axiom that "the atom neither splits nor fuses".) The predicted crash happened much earlier than projected, with the Co-Chair of the President's Council of Advisors on Science and Technology (Eric Lander) admitting to the hordes of workers at the ASHG convention in Washington, D.C. that the underlying axioms have been false (and both he and I went on record with an understanding of genome structure and function in the mathematical, and thus software-enabling, terms of fractals). Just as outlined in the YouTube videos "Pellionisz" (2008, 2009, 2010) and in the peer-reviewed science paper "The Principle of Recursive Genome Function" (2008), industries are ready to take off - both the Sequencing industry and Big IT (now involving Intel, Google, Microsoft, IBM, HP, Xilinx, Altera, Hitachi, Samsung, etc.), as well as major Consumer Companies (Procter and Gamble, Johnson and Johnson, Nestle, etc.) and Big Pharma and Big Agriculture Firms (such as Roche, Merck, Pfizer or Monsanto). However, without sound mathematical (thus software-enabling) genome informatics on one hand - to first understand how the hologenome functions before plunging into handling its malfunctions - and without industrial-strength supply chain management on the other - so that the supply of sequences does not glut our ability to process them - the Industrialization of Genomics will remain inherently wasteful and unable to bring out its tremendous potential. This entry can be debated in FaceBook of Andras Pellionisz - AJP.]

^ back to top


The Daily Start-Up: Gene Tests Attracting Money & Scrutiny [23andMe C round with J&J]

NOVEMBER 10, 2010, 10:56 AM ET
The Wall Street Journal
By Scott Austin

This morning’s roundup of the latest venture capital news and analysis across the Web:

The [hollowed out by the change of politics in Congress, making it extremely unlikely that the FDA will get an updated mandate for its 1976 legislation anytime soon - AJP] threat by the Food and Drug Administration to regulate direct-to-consumer genetic tests didn't stop Johnson & Johnson and two venture firms from investing more than $22 million in 23andMe, whose services are designed to help consumers better understand what their genetic information says about their ancestry and disease risk. In July a Government Accountability Office report [a deeply flawed, admittedly non-scientific analysis of science - AJP] raised questions about the accuracy of these services' results, and the FDA is moving to regulate them. Besides 23andMe, whose investors include Google Ventures and New Enterprise Associates, other venture-backed companies developing the genetic tests include deCODE Genetics, Navigenics and Pathway Genomics.

GroupMe is only a few months old, but the start-up that lets users send group text messages on their cellphones is already worth about $35 million, according to All Things Digital. That high valuation, set upon GroupMe’s $9 million Series B round led by Khosla Ventures, resulted from a bidding war among prominent venture firms, Business Insider previously reported, and reportedly acquisition interest from Twitter.

Two Groupon co-founders are sinking $1 million in a start-up that helps small businesses manage social-media tools like Twitter and Facebook, The Wall Street Journal reports. The money for Sprout Social comes from Lightbank, a seed fund managed by Eric Lefkofsky and Brad Keywell. Lightbank previously invested in Betterfly.com, which helps users find professional services, and Where I’ve Been, a Facebook application that lets people share travel information.

Rhode Island’s general treasurer-elect, Gina Raimondo, is officially cutting ties with Point Judith Capital, the firm she co-founded. The Providence Journal reports she will no longer make investments, and will set up a blind trust to manage her investment in the firm. Raimondo, who invested in health-care companies, resigned from her boards, which included GetWellNetwork, NABsys, Novare Surgical and Spirus Medical.

“The Deal Professor” explains why entrepreneurs should be sure to maintain control of their companies after raising venture capital. “The key for entrepreneurs in negotiations is to make sure that when they do raise V.C. money, they have options,” writes Steven M. Davidoff, a former corporate attorney who is now a professor at the University of Connecticut School of Law. “If they can get multiple term sheet offers, then they can negotiate to sell the smallest part of their company on the most lenient terms. If you only have one term sheet, you are not going to fare well.”

[This entry can be debated in FaceBook of Andras Pellionisz - AJP.]

^ back to top


NIH Chief Readies the Chopping Block
MedPage Today

Published: November 07, 2010

WASHINGTON -- The National Institutes of Health is considering ways to cut research grant funding in anticipation of possible budget restrictions, NIH Director Francis Collins, MD, PhD, said here Saturday.

"One area we have to look at more is whether our workforce is properly planned for," Collins said in a keynote address at the American Society of Human Genetics meeting.

"I don't think it's reasonable to assume that NIH is going to have another doubling [of its budget] anytime soon, and yet, we never tried to model what the workforce should look like in such [an economically difficult] environment."

One idea that has been raised is whether university faculty members receiving NIH grants should continue to have their salaries largely supported by those grants.

"One might make the case that it would be better for those funds perhaps to be available for other investigators," he said. "We at NIH are committed to looking at this in a careful way, and we're going to have a discussion about this at the NIH Institute Directors Leadership Forum coming up in three weeks."

He elaborated at a press conference after his address. "NIH is supporting an awful lot of salaries, and that seems fair to the degree that they are spending that amount of time on the research. Universities have also discovered that's a great way to build programs... but it may be in the long run that this may not be the best way... for research to be supported."

Despite his worries about the agency's future budget, Collins said he was excited by some of the new research opportunities at NIH. One example is the NIH's new Therapeutics for Rare and Neglected Diseases program, a collaboration among various NIH laboratories to take research findings and develop them to a point where private companies would be interested in getting the products to market.

The program currently has a budget of $24 million, but it is expected to soon grow to $50 million, Collins said. Diseases currently being worked on under the program include schistosomiasis/hookworm, Niemann-Pick Type C disease, hereditary inclusion body myopathy, sickle cell disease, and chronic lymphocytic leukemia.

One interesting compound being studied in the sickle cell project is 5-hydroxymethyl-2-furfural, which binds to the sickled hemoglobin and increases its oxygen affinity, thus allowing blood cells to hold onto oxygen. The work on this compound is now in the late pre-clinical stage, Collins said. "It's something fairly bold for a disease that hasn't attracted much private sector investment."

In general, the NIH has been sheltered from the winds of shifting political opinion, Collins said during the press conference. "For the most part people, regardless of their political party, are concerned about human health -- about themselves, their families, and their constituents. If one can pull medical research aside from hot-button issues like stem cells, most people regardless of political persuasion say 'Yeah, it's a good thing.'"

He acknowledged that the controversy surrounding human embryonic stem cell (hESC) research has been a problem for NIH. The agency is fighting litigation by medical researchers who allege that NIH's recent support of such research violates federal law and hurts adult stem cell research.

At the press conference, Collins noted that the controversy has put a damper on the NIH's efforts to hire a director for its new Center for Regenerative Medicine.

"We were hoping to recruit a world-renowned expert in stem cells to come and direct it [but] with the cloud over the whole field, it is difficult," Collins said. "It has been a factor in slowing down the process of trying to search for that director."

He added that his institution spends nearly three times as much money on adult stem cell research as it does on studies with hESCs -- in fiscal year 2009, the ratio was $397 million to $143 million.

Overall, Collins said, the NIH needs to do a better job of selling itself.

"Whether there is a general understanding that the government has been the main driver of medical research in the last five decades, I'm not sure," he said. "We have not done a good job of getting our brand name appreciated for what it does."

[It is certainly a fact that medical research was driven by the government for half a Century - just like the Internet was ramped up from nothing by initial support coming entirely from the government (defense). Now the time has come for Genomics to be integrated with Epigenomics in terms of Informatics (HoloGenomics) - and this will take off driven by private industries worldwide. When Intel, AMD, Xilinx, Altera, Hitachi, IBM, Samsung, Illumina, Life Technologies, Roche and Merck weigh in (with fresh IPO-s by Pacific Biosciences two weeks ago, and Complete Genomics tomorrow) - the landscape will change forever. - AJP.]

^ back to top


Next Generation Sequencing

BioCompare
Monday November 01, 2010

by Jeffrey M. Perkel

If you want to get a sense of the current state of the high-throughput sequencing market, look no further than this month's news.

First, the US Department of Energy's Joint Genome Institute mothballed the last of its fleet of Sanger chemistry-based sequencers, completing the transition to newer, faster, next-gen sequencers that has been in the works for several years.

"With these new sequencers incorporated into the production line over the last two years, our productivity has risen to 1 terabase in FY09; 5 Tb in FY10 and to a projected over 25 Tb in FY11," GenomeWeb quotes JGI Spokesman David Gilbert as saying. [1] "To put this in perspective, our total commitment to DOE in FY98 was 20 megabases, which we do now in a few minutes."

The second item was the initial data release from the 1000 Genomes Project Consortium, an effort to sequence the genomes of 1,000 humans and thereby get a handle on human sequence variation. In a report in the journal Nature, the Consortium detailed the sequencing and analysis of nearly 900 individual genomes (179 full genomes and 697 partial exomes), as part of the project's "pilot phase," using a blend of next-gen sequencers from Illumina, 454 (a Roche company), and Life Technologies. [2]

Remarkable as that achievement is, it represents just a fraction of next-gen sequencing output to date. According to an infographic accompanying the article, "at least 2,700 human genomes will have been completed by the end of this month [October 2010], and [the] total will rise to more than 30,000 by the end of 2011." [3]

The final news item: One of those 2,700 genomes belongs to none other than rocker Ozzy Osbourne, of MTV's The Osbournes and biting-the-head-off-a-live-bat fame, who wrote of the experience in the October 24 Sunday Times of London. According to Scientific American [4]:

"I was curious," he wrote in his column. "Given the swimming pools of booze I've guzzled over the years—not to mention all of the cocaine, morphine, sleeping pills, cough syrup, LSD, Rohypnol…you name it—there's really no plausible medical reason why I should still be alive. Maybe my DNA could say why."

If the JGI announcement and the 1000 Genomes Project data release speak to the fact that sequencing whole genomes is, as Jay Therrien, vice president of commercial operations for next-gen sequencing at Life Technologies, puts it, "basically routine," the Osbourne sequencing project attests to how far there still is to go.

"We're in this era at the moment of celebrity genomics," says Daniel MacArthur, a UK-based postdoctoral fellow who blogs [5] and tweets [6] extensively about the next-gen sequencing industry. "That will persist for a while until the cost goes down enough that ordinary people can actually afford to do it. And I guess that's when it will get really interesting."

Of course, from a technology point of view, the next-gen sequencing arena has been interesting for years—Harvard geneticist George Church estimates the industry's cost has plummeted about 10-fold per year for each of the past five years—and even if that pace is slowing a bit (Church estimates this year's improvement at between three- and five-fold), it remains so.

To wit: the rise of "personal" next-gen platforms. All three of the major sequencing companies, Life Technologies, Roche/454, and Illumina, have announced such devices, which provide a lower-cost, lower-throughput alternative for those researchers who would like to take advantage of next-gen sequencing, but have neither the resources nor the need for the industrial-scale equipment that previously was their only option.

"To keep the latest generation of Illumina fully loaded, you need to have a 400-gigabase-pair project. Most people don't have a 400-Gbp project," says Church, who is a scientific advisor for some 18 next-gen firms, including all six with commercial products (Dover Systems, Roche/454, Life Technologies, Illumina, Complete Genomics, and Helicos).

First out of the gate was Roche/454, which announced its GS Junior system late in 2009. Priced at around $100,000 (as compared to $500,000 for the company's top-of-the-line GS FLX), the GS Junior runs the same pyrosequencing chemistry as the GS FLX, but at a lower throughput: 100,000 parallel reactions, compared to one million on the GS FLX.

"It's a scaled down version of our big system," says Katie Montgomery, marketing communications manager at 454 Life Sciences.

At about 400 bases apiece, those reads currently lead the industry in terms of length. But in 2011, the company plans an upgrade to about a kilobase, says Montgomery, adding that this will be available to existing users as "a small hardware upgrade to accommodate the increased reagent volumes."

On Oct. 26, Illumina announced a new member of its sequencer line, as well. The HiSeq 1000 "is designed for researchers who want the ease of use, industry-leading cost per gigabase (Gb) and data rate of the HiSeq 2000 but do not currently require its throughput," the company said in a press release (Illumina was unavailable to comment for this article). [7]

This "single flow cell version" of the HiSeq 2000 "will deliver in excess of 100 Gb of data per run using paired 100 base pair reads, easily enabling the sequencing of a complete human genome in a single run," according to the release.

Finally, at the American Society of Human Genetics annual meeting this week, Life Technologies announced two new additions to its line of SOLiD sequencers. On the high end, the company is launching the SOLiD 5500xl. Built in collaboration with Hitachi, the 5500xl will generate twice the data of SOLiD 4 (200 Gbp per run) at half the cost ($6,000 per run) and in half the time (5-6 days vs 10-12), Therrien says.

"You can sequence an entire human genome at 30x coverage at a price of $3,000, which was unheard of just a year ago," says Therrien.

At the same time, the company also announced a personal option. Priced at $299,000 (vs $595,000 for the 5500xl), the SOLiD 5500 base system is essentially a single flow-cell version of the 5500xl.

Life Technologies is also gearing up to commercialize an entirely new sequencing technology this November. Built on technology from its recent acquisition of Ion Torrent Systems for some $725 million, the Personal Genome Machine (PGM) will provide up to 10 Megabases' worth of 100-base reads in just two hours for about $500, according to Therrien. (An upgrade to 100 Mbp per run is planned for release "early next year," he adds.)

That, Church notes, is 1,600 times more expensive per-base than the SOLiD 4. But the company, says Therrien, is positioning it as mid-way between a Sanger capillary electrophoresis-based instrument and the SOLiD, for applications such as bacterial and viral genomics and targeted amplicon sequencing. "What that gets you is a radical reduction in turnaround time for really what is a very large amount of sequence data," he says.
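
[Church's ~1,600x figure can be reproduced from the prices quoted in this article - note that the $6,000/200 Gbp numbers below are those given above for the 5500xl:]

    # Per-megabase cost: Ion Torrent PGM vs the SOLiD figures quoted above
    pgm_cost_per_mb = 500 / 10                 # $500 for 10 Mb  -> $50/Mb
    solid_cost_per_mb = 6_000 / (200 * 1_000)  # $6,000 for 200 Gb -> $0.03/Mb

    ratio = pgm_cost_per_mb / solid_cost_per_mb
    print(f"PGM ${pgm_cost_per_mb:.0f}/Mb vs SOLiD ${solid_cost_per_mb:.2f}/Mb")
    print(f"~{ratio:,.0f}x per base")          # ~1,667x, i.e. Church's ~1,600x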

The Ion Torrent consumable "is a computer chip that's been modified so we can flow biologics into it," Therrien says. Amplified DNA templates on silicon beads are flowed into that flow cell, where they sit in tiny wells. At the bottom of those wells is basically "the smallest pH meter in the world." As nucleotides are flowed into the reaction chamber one by one and added to the growing DNA chain by DNA polymerase, they release protons, causing a pH drop that registers as a change in voltage.
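
[A minimal Python sketch of the signal model just described - one nucleotide species per flow, with a signal proportional to the number of bases incorporated. The template and flow order below are invented for illustration:]

    # pH-based sequencing-by-synthesis: each incorporated base releases a
    # proton, so the per-flow signal counts how far the template extended.
    def simulate_flows(template, flow_order="TACG", n_cycles=8):
        """Return (flowed base, signal) pairs; signal ~ protons released."""
        pos = 0
        signals = []
        for _ in range(n_cycles):
            for base in flow_order:
                count = 0
                # the polymerase runs through any homopolymer of this base
                while pos < len(template) and template[pos] == base:
                    count += 1
                    pos += 1
                signals.append((base, count))  # 0 = no pH change this flow
        return signals

    def rebuild(signals):
        """Base calling: repeat each flowed base by its measured signal."""
        return "".join(base * count for base, count in signals)

    template = "TTACGGGAC"
    flows = simulate_flows(template)
    assert rebuild(flows) == template
    print(flows[:6])  # [('T', 2), ('A', 1), ('C', 1), ('G', 3), ('T', 0), ('A', 1)]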

It's a design that requires no optics, no fluorescence, and no imaging; "We call it 'post-light sequencing,'" Therrien says. It is also, for that reason, considerably less expensive than other next-gen platforms, costing just $52,000 for the hardware.
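To make the flow-and-detect cycle described above concrete, here is a toy sketch of sequencing-by-synthesis with proton counting (a simplified illustration, not Ion Torrent's actual signal processing; note how a homopolymer run incorporates in a single flow and yields a proportionally larger signal):

    # Nucleotides are flowed in a fixed cyclic order; each flow extends the
    # strand through any run of matching bases, one proton per incorporation.
    def flow_sequence(target, flow_order="TACG", max_cycles=16):
        """target: the strand being synthesized; returns (base, signal) per flow."""
        pos, signals = 0, []
        for base in flow_order * max_cycles:
            n = 0
            while pos < len(target) and target[pos] == base:
                n += 1                     # incorporation -> proton -> pH drop
                pos += 1
            signals.append((base, n))
            if pos == len(target):
                break
        return signals

    # Nonzero flows spell the sequence back out, homopolymers included.
    print("".join(base * n for base, n in flow_sequence("TTAGGC")))  # TTAGGC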

"It's a clever system," says MacArthur. "I think it's really quite elegant, but how well it actually works in the field will be the big test."

(In related news, Roche/454 announced Monday a partnership with DNA Electronics "for the development of a low-cost, high-throughput DNA sequencing system," according to a press release. Details are sketchy, but the described system bears certain similarities to Ion Torrent's technology. According to the release, the system will use "inexpensive, highly scalable electrochemical detection"—as opposed to optical detection—and "leverages 454 Life Sciences' long read sequencing chemistry with DNA Electronics' unique knowledge of semiconductor design and expertise in pH-mediated detection of nucleotide insertions, to produce a long read, high density sequencing platform.")

Of course, not all sequencing will be done on these platforms. Sanger sequencing remains a powerful force in the industry. "If you just want to check one fact in 24 hours, as biologists often do, it's $2 for a 700 bp run," Church says. "And that's a no-brainer."

At the same time, sequencing firms are also pursuing the "next-next" generation of instruments.

One such technology is Project Starlight, which Life Technologies discussed at the Advances in Genome Biology and Technology meeting in February: a sequencing approach based on single-molecule fluorescence resonance energy transfer (FRET) between a FRET donor-bearing DNA polymerase and FRET acceptor-bearing nucleotide triphosphates.

Unlike most commercial sequencing systems, which amplify a template prior to sequencing it, Starlight sequences individual molecules directly. (Helicos BioSciences' HeliScope commercial sequencer is also a single-molecule technology, as is Pacific Biosciences' in-development SMRT technology.) According to Therrien, the company is "currently targeting a commercial release in mid-2011," with 1-kb read lengths—about twice as long as any current next-gen technology—at launch.

Such long reads are sorely needed, MacArthur says. Short reads (such as the products of the SOLiD and Illumina chemistries) make it difficult to assemble a genome without a reference scaffold (that is, de novo), especially if the genome contains repetitive sequences. A few long reads could go a long way towards overcoming that problem, he says.

"Once you start pushing beyond about a kilobase or so, that starts giving you some real power," MacArthur says. "If you can sprinkle even a few of these kind of longer reads into a sequencing project that's already generating lots and lots of short reads, then that potentially can make a big difference to how well you can put the genome together."

Another next-next-gen technology in development is nanopore-based sequencing, in which DNA is "read" as it passes through nanometer-scale holes. Oxford Nanopore has been pursuing that approach for several years now. More recently, Roche/454, in partnership with IBM, entered the fray.

According to a press release [8] announcing the latter collaboration, IBM's in-development approach is based on the company's “DNA Transistor” technology. "The novel technology, developed by IBM Research, offers true single molecule sequencing by decoding molecules of DNA as they are threaded through a nanometer-sized pore in a silicon chip," the release explains.

"It's still an early-stage research project at this point, but Roche is very interested in the future of sequencing," says Montgomery. "I think there's still a lot yet on the horizon to come."

For the next-generation sequencing market overall, that surely is the case.

["Sequencing" became industrialized. Now is the turn towards industrialization of the "Genome Computing" - HolGenTech does this by leveraging defense computing for genomics. - AJP.]

^ back to top


Experts Discuss Consumer Views of DTC Genetic Testing at ASHG, Washington

GenomeWeb
November 08, 2010

By Andrea Anderson

WASHINGTON (GenomeWeb News) – Researchers are sifting through survey and other data that may eventually help discern consumers' attitudes about direct-to-consumer genetic tests and inform future oversight of such tests, experts explained at the American Society of Human Genetics meeting here today.

The ASHG's existing recommendations for DTC genetic testing call for transparency and evaluation of DTC tests by health and/or consumer organizations such as the US Food and Drug Administration or the Federal Trade Commission, along with education about the tests for consumers and healthcare providers and studies of consumers' views on and use of such tests, ASHG President-elect Lynn Jorde, chair of human genetics at the University of Utah, told reporters during a press briefing.

Jorde moderated a panel of experts who weighed in on DTC tests and outlined findings from their own studies of consumer attitudes toward DTC tests.

For instance, David Kaufman, director of research and statistics at Johns Hopkins University's Genetics and Public Policy Center, described results from a survey of more than 1,000 individuals that was designed to get at individuals' motivation for using DTC genetic testing services offered by 23andMe, Decode Genetics, and Navigenics — and their experiences and level of satisfaction with these tests.

Kaufman and his colleagues surveyed 1,048 DTC genetic test customers who had been tested by 23andMe, Decode Genetics, or Navigenics and received their test results between June 2009 and March 2010.

In general, they found that the earliest DTC genetic test adopters tended to be well educated and had significantly higher incomes than the average American. Most participants said they were motivated by factors such as general curiosity and an interest in assessing their ancestry and/or disease risk, Kaufman noted, though many also cited an interest in improving their health as a motivator for testing.

Although some 70 percent of consumers surveyed supported oversight by a consumer agency that would hold testing companies to their scientific claims, Kaufman and his team found that roughly two-thirds of those surveyed believe DTC tests should be available to the public without government oversight.

The researchers also gained insights into everything from participants' understanding of test results to their overall satisfaction with the tests.

Nevertheless, Kaufman cautioned, though the survey provided information on how DTC test results are interpreted by customers, it did not address the scientific rigor of the tests themselves or the clinical validity or utility of the test results.

Meanwhile, Barbara Bernhardt, co-director of the University of Pennsylvania's Center for the Integration of Genetic Health Care Technologies, and her co-workers surveyed 60 individuals who had been tested for risk variants associated with eight conditions through the Coriell Personalized Medicine Collaborative.

Again, the team found that participants tended to be well educated and motivated by factors such as curiosity and interest in improving their health.

While most of the individuals surveyed understood their general results — often interpreting them within the context of their own family history — they did not necessarily have a deep understanding of the relative risk information provided to them, Bernhardt explained. Though some were told they were at a heightened risk of certain diseases, the researchers found that none of the participants reported being very concerned about this increased risk.

Even so, Bernhardt said, nearly a third had at least somewhat changed their behavior or lifestyle based on their test results. And half had discussed their test results with their doctor.

Finally, Andrew Faucett, director of genomics and public health at Emory University School of Medicine, outlined some questions that consumers and clinicians should keep in mind when selecting, evaluating, and interpreting DTC genetic tests and results.

For example, Faucett said, consumers need to consider what they hope to learn by taking the test. For clinicians, meanwhile, issues such as the treatment implications of genetic findings are important, Faucett noted, as is an understanding of the population(s) in which a particular test has been evaluated.

Faucett also drew a distinction between DTC genetic tests that are regularly used in the clinic and those that aren't, explaining that while testing labs in general are doing a good job with test analyses, much less is known about the clinical validity and utility of some tests.

[During the week of ASHG in Washington, the American people spoke - and the lame-duck remainder of the mid-term legislative session, plus the ensuing division of Congress, make it an impossibility that the 1976 mandate of the FDA will be updated anytime soon. Most people think there is enough consumerism around for consumers to be given advice and then make their own free choices. The more choice, the better. - AJP.]

^ back to top


Complete Genomics plans Tuesday initial public offering of stock
Associated Press
11/05/10 5:11 PM EDT

INDIANAPOLIS — Complete Genomics Inc. expects to raise about $69.3 million in an initial public offering of 6 million shares Tuesday to help fund improvements and expansion of its DNA sequencing strategy.

The Mountain View, Calif., company said proceeds could rise to about $80.2 million if underwriters exercise an overallotment option for 900,000 shares. The company expects the stock price to range between $12 and $14 per share.

It plans to use the money to expand the sequencing and computing capacity at its Mountain View and Santa Clara locations, to fund more development of its technology and for sales and marketing and working capital, according to a registration statement filed with the Securities and Exchange Commission.

The company said it has developed and commercialized an innovative DNA sequencing platform and aims to "become the preferred solution for complete human genome sequencing and analysis," according to the statement. Complete Genomics believes its products will offer academic and biopharmaceutical researchers complete analysis without requiring them to invest in equipment like in-house sequencing instruments and high-performance computing resources.

"By removing these constraints and broadly enabling researchers to conduct large-scale complete human genome studies, we believe that our solution has the potential to revolutionize medical research and expand understanding of the basis, treatment and prevention of complex diseases," the company said.

Complete Genomics started operations in March 2006 and spent its first three years focused on research and developing its sequencing technology. It has piled up a $108.1 million deficit during its development stage.

The company plans to list its stock on the NASDAQ Global Market under the ticker symbol "GNOM."

[In the five-way horse race for Genome Sequencing, with 3 public companies (Roche, Life Technologies and Illumina) and 2 fresh IPO-s (Pacific Biosciences and, two weeks later, Complete Genomics), the crucial question, surprising as it is, will not be "Sequencing" - but "Analytics" - based on entirely new paradigms, since the decade after the finish of the "Human Genome Project" "was all wrong" (as Eric Lander, co-chair of the President's Council of Advisors on Science and Technology, publicly confessed on November 2nd, 2010). In the horse race, "leveraging" is expected to play a major role. For instance, PacBio leverages Big IT through Intel's venture investment, while Life Technologies leverages Big IT (Hitachi) in its new high-end SOLiD 5500 sequencer (announced at ASHG for next Spring). At the same time, for the "low end" ($49k) Ion Torrent sequencer, Jonathan Rothberg, now part of Life Technologies, leverages the entire $1 Trillion semiconductor industry for the sequencer chip. As for "Analytics", HolGenTech leverages "Defense Computing" (with its High Performance Computing Hybrid Platforms), combined with Pellionisz' Fractal Approach to Recursive Genome Function - AJP.]

^ back to top


Today, we know all that was completely wrong - Lander's Keynote at ASHG, Washington

Lander’s Lessons Ten Years after the Human Genome Project

Bio-IT World
November 3, 2010
By Kevin Davies

November 3, 2010 | WASHINGTON, DC – If anyone was capable of distilling the lessons learned in the ten years since the first draft of the Human Genome Project (HGP) in 2000, it was Broad Institute director Eric Lander. [Also co-chair of the President's Council of Advisors on Science and Technology - AJP]

Opening the annual American Society of Human Genetics (ASHG) convention in Washington, D.C., Tuesday evening, Lander tried to meet the organizers’ challenge to sum up “what’s come of it?”

From a technical perspective, the HGP produced “a scaffold onto which information can be put,” said Lander, including cancer genes, epigenomics, evolutionary selection, disease association, 3-D folding maps, and much more. As for intellectual advances, Lander made a series of startling comparisons of geneticists’ knowledge around the time of the HGP in 2000 and today.

In 2000, for example, only four eukaryotic genomes (yeast, fly, worm, and Arabidopsis) had been sequenced, as well as a few dozen bacteria. Today, those numbers stand at 250 eukaryotic genomes, 4,000 bacteria and viruses, metagenomic projects and many hundreds of human genomes. By the end of this year, Lander expects the Broad Institute to have generated 100,000 Gigabases (Gb) of sequence.

“The cost [of sequencing] has fallen 100,000 fold in past decade, vastly faster than Moore’s Law,” said Lander. But the question remained: “How will this get used in clinical medicine? The costs need to drop to $1,000 and then $100,” said Lander.

“I no longer think these things are crazy.”
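A back-of-the-envelope comparison puts Lander's remark in perspective (a sketch assuming the canonical Moore's Law doubling every two years; not Lander's own calculation):

    # Moore's Law over one decade vs. the quoted 100,000-fold sequencing gain.
    moore_gain = 2 ** (10 / 2)    # doubling every ~2 years -> ~32x per decade
    print(f"Moore's Law: ~{moore_gain:.0f}x; sequencing outpaced it "
          f"by ~{100_000 / moore_gain:,.0f}x")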

In 2000, Lander and his HGP consortium colleagues estimated there were about 35,000 protein-coding genes, with a few classical non-coding RNAs. Repetitive DNA elements called transposons were just parasites and junk.

“Today, we know all that was completely wrong,” said Lander.

Studying patterns of evolutionary conservation in some 40 sequenced vertebrates, the human gene count is “21,000, give or take 1,000,” said Lander. “There are many fewer genes than we thought. Much more information is non-coding than we thought . . . 75% of the information that evolution cares about is non-coding information.”

The study of 29 mammalian genomes shows some 3 million conserved non-coding elements, covering about 4.7% of the genome. Some of these have regulatory functions, he said. Another exciting area was the generation of genome-wide 3-D maps, which have revealed that the genome resides in ‘open’ and ‘closed’ compartments. There was much more work to be done in the coming decade, but with new next-generation sequencing tools, “it will happen.”

Mendel Redux

In 2000, the genes for about 1,300 Mendelian genetic disorders had been identified. Today, that number is about 2,900, leaving “another 1,800 Mendelian disorders to go,” said Lander. He noted the success of some whole-genome sequencing projects in identifying rare Mendelian disease genes, although the approach was not trivial. “We all have about 150 rare coding variants,” he said, in other words glitches in about 1% of a person’s genes. Those have to be carefully vetted and filtered, but in the case of recessive genes or a small number of patients, the whole-genome approach was very powerful.

Lander also reviewed the progress in genome-wide association studies (GWAS) of common inherited disease, where, he said, “an entire village came together” to develop the array tools, haplotype maps, and a catalogue of more than 20 million single nucleotide polymorphisms (SNPs). “The vast majority of common variation is known,” said Lander. The numbers are 1,100 loci associated with 165 common diseases/traits. For diseases such as inflammatory bowel disease and Crohn’s disease, 70-100 loci have been mapped, a pattern that Lander showed exists for lipid disorders, type 2 diabetes, height, and many other conditions.

Lander addressed the oft-publicized disappointment and criticisms expressed by some prominent geneticists, including ASHG president-elect Mary-Claire King, in the “missing heritability” and the net value extracted from GWAS papers. One widely voiced concern is that the effect size of individual GWAS “hits” is small. “I think that’s nonsense,” said Lander. “Effect size has nothing to do with biological or medical utility.” He pointed out that a drug acting on a target can have a much bigger effect than the common allele itself.

Some geneticists believe that the “missing heritability” so far untapped by GWAS must be explained by rare DNA variants. Not so fast, said Lander. For one thing, the proportion of heritability explained in disorders such as Crohn’s and diabetes is increasing. Population genetics theory suggests that for many common diseases, rare variants will explain less than common variants.

Lander also said that geneticists must take into account epistasis, the effects of modifier genes. Such effects cannot be found statistically in GWAS, he argued. Rather than moving from mapped loci to explaining heritability to understanding biology, Lander said we must understand biology first, and then explain the models of heritability.

Cancer Conclusion

In 2000, Lander said some 80 cancer-related genes were known. The tally is now 240 genes, with genome sequencing studies revealing mutational hotspots in colon, lung, and skin cancers with therapeutic implications. As an example, Lander said his Broad Institute colleague Todd Golub, studying multiple myeloma tumors, had discovered mutations in four well known cancer genes, but more excitingly, implicated a handful of new biological pathways, including protein synthesis and an extrinsic coagulation pathway.

The battle against cancer needed more sequencing. “We’ll need the equivalent of the 1 million genomes project. We better start thinking how to engage patients,” said Lander, suggesting social networking and other ideas had to be leveraged to get patients involved.

Lander concluded by presenting what he called “the path to the promise.” If the HGP provided the raw tools, scientists were still translating basic genome discoveries into more medically directed research. That’s how far we’ve progressed in ten years. But that still leaves the daunting tasks of clinical interventions, clinical testing, regulatory approval and widespread adoption.

[This public confession by Eric Lander - co-chair of the President's Council of Advisors on Science and Technology - that Genomics (derailed by Crick in 1956) was on a completely wrong track even in the decade since the HGP may still be stunning for a part of the huge crowd, though such a message was heralded in a peer-reviewed paper and widely disseminated on YouTube in 2008. Still, there were even companies at the ASHG convention that attempt to sell analytics with validity only for "genes" - when nobody can even know the number of genes (protein-coding exons amounting to far less than 1% of human DNA), since at this point there isn't even a scientifically valid, commonly accepted definition of what a "gene" is. Thus, the ASHG conference in Washington, November 2010, became the "Eye of the Cyclone". Along with his notion of turning the "upside down" approach "right side up" ("we must understand biology first, and then explain the models of heritability"), Dr. Lander published in Science a year ago that the structure of DNA is fractal. Meanwhile, Pellionisz gained a decade with his "Fractal Approach to Recursive Genome Function", on record since 2002 and reaching back to his seminal 1989 publication. Those who only see the "tip of the iceberg" of Pellionisz' Fractal Approach in publications are in for huge surprises now that all (formerly fierce) opposition has been wiped out from the top. Look for more coming here and also on Pellionisz' FaceBook and pellionisz_at_junkdna.com - AJP.]

^ back to top


1,000 Genomes Project Maps 16 Million DNA Variants: Why?

CBS News
October 28, 2010 11:47 AM

Remember the race to map the human genome? Science crossed that finish line in 2003. But it turns out that was just the beginning.

Scientists are now focusing on the small differences in our genomes, hoping to find fresh clues to the origins of many diseases.

The effort is called the 1000 Genomes Project and already it claims to have found 16 million previously unknown variations in human DNA, about 95 percent of all variations in our species. That's just from the 800 people who are part of the pilot study. The group hopes to catalog DNA from 2,500 people before they are done. [How do we know that after 16 M only 5% will be found? If 95% is already found from 800 people, why does the "1,000 Genomes" project plan for 2,500 Genomes? - AJP]

Why does it matter?

"What really excites me about this project is the focus on identifying variants in the protein-coding genes that have functional consequences," said Dr. Richard Gibbs, director of the Human Genome Sequencing Center at the Baylor College of Medicine, in a statement. "These will be extremely useful for studies of disease and evolution."

That's geek speak for finding cures for diseases that have genetic components, such as Alzheimer's, mental illness, cystic fibrosis and Huntington's Disease. The work may eventually also help with genetically linked cancers, such as breast and prostate cancer.

The research is being done at government, academic and corporate facilities around the world including the National Institutes of Health in America and is made possible by new high speed techniques for mapping genetic material.

The pilot results were published in Nature and are being shared freely to speed research.

[The Nature (hard-core science) paper concludes "The 1000 Genomes Project represents a step towards a complete description of human DNA polymorphism". With all due regard to the very distinguished line-up of the Consortium, IMHO a "complete description of human DNA polymorphism" (if completion of such a "brute force" approach is possible at all) may not be the best and most scientific approach. At the conception of the 1,000 Genomes Project, e.g. Francis Collins and George Church criticized the project's planning in a Nature article in 2008. Will there be reconsideration of the design in view of the initial results? (Looks like 1,000 has already changed to 2,500). The crucial question seems to be that no two genomes may be identical - yet chances are that the differences responsible for "human diversity" might hugely outnumber those structural variants that are the root causes of hereditary diseases. Some algorithmic approaches can tell the "parametric" and "syntactic" variants apart. This might be a burning question at the 60th meeting of the American Society of Human Genetics this week in Washington, D.C. - AJP.]

^ back to top


Parody of Public’s Attitude Toward DTC Genetics

October 27, 2010
Genomeweb

Daily Scan’s inbox has been teeming with announcements for various talks and workshops to be held at the upcoming American Society of Human Genetics annual meeting in Washington, D.C., though none have read quite like Blaine Bettinger’s at the Genetic Genealogist blog. Bettinger has posted a parody of a press release for a talk in which “a group of the nation’s top geneticists and ethicists” showed data that analyzed the public’s awareness of direct-to-consumer genetic testing services and their regulation. “The researchers, funded in large part by federal grants, interviewed over 10 people randomly chosen at the entrance to the nearest grocery store and asked them whether they were familiar with one or more of the five DTC genetic testing companies included in the study,” whether they had taken – or had ever considered taking – a DTC gene test, and whether they felt the public should be allowed access to their genetic information, the blogger writes in his satire. “Finally, to gauge the participant’s understanding of the basic principles of genetics, each was asked to briefly describe in 100 words or less the role of the replication fork in DNA replication,” Bettinger adds.

---

Submitted by sarahemily on Wed, 10/27/2010 - 13:24.

I'm getting a 404 error when I click the link to the parody and can't find it anywhere else on the site. I would love to read it - can you supply a new link?


---

Submitted by Jeff.Rosner on Wed, 10/27/2010 - 14:01.

Here is the text:

New Study Analyzing DTC Genetic Testing Released Today

October 26th, 2010 in Genealogy |

I received this news release yesterday via email. I’m probably breaking the embargo by publishing this, but I think it’s too important not to get it out there. Please be sure to read ALL the way to the bottom.

Nation’s Top Geneticists and Ethicists Release New Study of Consumer Perceptions of Direct-to-Consumer Genetic Testing and Announce New DTC Testing Guidelines

Leading up to the American Society of Human Genetics 60th Annual Meeting, which will be held November 2-6, 2010 in Washington, D.C., a group of the nation’s top geneticists and ethicists today released the results of a new study analyzing the public’s awareness and use of so-called “direct-to-consumer” genetic testing by companies such as 23andMe, deCODEme, and Pathway Genomics.

The researchers, funded in large part by federal grants, interviewed over 10 people randomly chosen at the entrance to the nearest grocery store and asked them whether they were familiar with one or more of the five DTC genetic testing companies included in the study. The participants were then asked if they had participated in DTC genetic testing, and whether they might be interested in doing so in the future. The participants were also asked whether they believed that members of the general public should be allowed to access their own genetic data without the assistance of a physician or genetic counselor. Finally, to gauge the participant’s understanding of the basic principles of genetics, each was asked to briefly describe in 100 words or less the role of the replication fork in DNA replication.

The results of the study indicate that 100% of the study participants were completely unfamiliar with these DTC testing companies, and none had any experience with DTC testing. The study also showed that while none were currently interested in performing testing on their own DNA, 90% believed that Americans should be allowed to access their genetic data without the assistance of a physician or genetic counselor. The results also showed that none of the participants in the study were able to competently explain even the basics of the DNA replication fork.

“Our study shows for the first time that the vast majority of the American public is completely unaware of even the most popular DTC testing companies,” reported Dr. David N. Anderssen, lead geneticist in the study. “Additionally, the inability of every single one of the study participants to explain one of the most basic aspects of genetics was, quite frankly, very disappointing, again suggesting that people are not equipped to handle genetic information.”

“While 90% of the participants stated that they should be able to access their own genetic information without a physician or geneticist’s assistance, we completely disagree with their opinions and took this opportunity to explain to each one of them just how dangerous their genetic information can be. We also explained to them that their erroneous opinions and beliefs don’t really matter anyway, since it is the role of certified geneticists and ethicists to determine for America exactly who should access genetic information.”

In light of the findings, Dr. Anderssen noted the group’s newly-issued guidelines on DTC testing: “We’re recommending that all DTC genetic testing companies immediately close up shop, or, alternatively, hire a staff of 25 or more genetic counselors. We also recommend that Congress immediately make it illegal to even look at an ‘A,’ ‘T,’ ‘G,’ or ‘C’ without a physician or genetic counselor within at least 5 feet; the danger of privacy violations and/or the misunderstanding DTC genetic testing results is just too great to ignore.”

“Indeed, the majority of the group [of ten people - AJP] believes that there is no role for genetics in health care, disease risk, genealogy, or anthropology, among other endeavors; the old-fashioned – but always informative – family history is really the only way to go here,” reported the geneticist. “However, since most of us need these jobs, we decided to approve the use of genetics for disease assessment in the new guidelines.”

Dr. Anderssen noted that the group is continuing to study this emerging area of genetics, and plans to expand the study to 25 more participants from the nearby gas station in the near future.

____________________________________

(This post is a parody only, meant as criticism of some of the glaring deficiencies in recent studies analyzing DTC claims. A reasonable person would not interpret this post to contain factual claims, and is within my First Amendment rights (isn’t it sad that I have to write this?)).

---

Submitted by tvence on Wed, 10/27/2010 - 14:51.

Hi Sarah, the post appears to have been removed from the site. We'll provide an updated link when -- and if -- it becomes available.

Jeff: Thanks for pasting the full text in the meantime.

[This is really a parody of an infamous, and explicitly "non-scientific", study of the science of DTC by a genomically illiterate lame-duck politician who had already announced his retirement - AJP.]

^ back to top


UPDATE: Pacific Biosciences IPO Rises While First Wind Cuts Price

Wall Street Journal
By Lynn Cowan, Tess Stynes And Christopher Zinsli
Of DOW JONES NEWSWIRES
OCTOBER 27, 2010, 4:30 P.M. ET

A genetics technology company on Wednesday became the first U.S. life sciences initial public offering this year to both price well and trade higher, while a wind farm operator cut its asking price ahead of its offering.

Pacific Biosciences of California Inc. (PACB), which has created an instrument platform to help scientists observe nucleotides being added to DNA in real time, offered up a strong data point for the initial public offering market, while First Wind Holdings Inc. showed that green energy companies continue to be a hard sell in America.

Pacific Biosciences closed at $16.44 a share on the Nasdaq, up 2.8% from its initial public offering price of $16. It sold 12.5 million shares at the midpoint of its $15 to $17 price range.

...

Even though it hasn't commercially released its DNA sequencing platform or generated any revenues from it, Pacific Biosciences has more going on than the typical early-stage life science IPO hopeful. It plans to begin commercial delivery in the beginning of next year, has an order backlog of $15 million, and could see recurring revenue from the consumable components that need to be re-ordered.

The platform is a new generation of DNA sequencing technology, one that allows longer nucleotide chains to be read in less time than existing systems, according to the company's prospectus.

"It's a disruptive technology," said Steve Brozak, a biotech and medical-devices analyst who is president of WBB Securities LLC. "It could be a building block for future innovation."

Not every deal this week seems destined for easy pricing and trading. Wind-farm developer First Wind Holdings Inc. on Wednesday cut the estimated price range of its 12-million share IPO to $18 and $20 each, $6 below the $24 to $26 that it had originally planned. The company, which is supposed to begin trading on the Nasdaq Thursday under the symbol WIND, operates 504 megawatts of wind farms in the Northeastern and Western U.S.

...

[While market conditions are still shaky, "Genome Informatics" is "IN", while "Green Tech" may be fading OUT, according to investors - AJP.]

^ back to top


UPDATE 1-Pacific Biosciences IPO prices at midpoint-underwriter

Tue Oct 26, 2010 7:22pm EDT

* Prices at $16 vs $15-$17 range-underwriter

* Sells 12.5 mln shares, raises about $200 mln-underwriter

* To trade on Nasdaq under symbol "PACB"

NEW YORK, Oct 26 (Reuters) - Pacific Biosciences of California Inc (PACB.O), which designs machines to speed up DNA sequencing in labs, priced shares in its initial public offering at the midpoint of the expected range on Tuesday, according to an underwriter.

The company sold 12.5 million shares for $16 each, raising about $200 million. It had planned to sell shares for $15 to $17 each.

Menlo Park, California-based Pacific Biosciences sells equipment that can be used for clinical, agricultural and drug research, food safety, biofuels and biosecurity applications.

The company has never been profitable and all of its revenue to date has come from government grants. Pacific Biosciences posted a net loss of $63.04 million on revenue of $1.17 million in the six months ended June 30.

Pacific Biosciences said it had a backlog of orders worth $15 million as of June 30. The U.S. Department of Energy Joint Genome Institute and Monsanto Co (MON.N) are among those that have ordered Pacific Biosciences equipment.

Underwriters were led by JPMorgan, Morgan Stanley, Deutsche Bank Securities and Piper Jaffray. The shares are expected to begin trading on the Nasdaq on Wednesday under the symbol "PACB."

[The Industrialization of PostModern Genomics has begun - AJP.]

^ back to top


Complete Genomics Sets IPO Price Range

XConomy
Luke Timmerman 10/26/10

Complete Genomics, the low-cost gene sequencing company in Mountain View, CA, has set a goal of pricing 6 million shares in its initial public offering at a price of $12 to $14, according to a filing today with the Securities and Exchange Commission. If the company can find demand from investors at the top of its range, and its underwriters buy an extra 900,000 shares, then the deal could bring in as much as $96.6 million. The company is scheduled to set the actual IPO price the week of Nov. 8, according to Renaissance Capital. Complete Genomics’ existing roster of investors includes OrbiMed Advisors, Essex Woodland Health Ventures, San Diego-based Enterprise Partners Venture Capital, Kirkland, WA-based OVP Venture Partners, and Palo Alto, CA-based Prospect Venture Partners. The company plans to begin trading under the symbol (NASDAQ: GNOM).

[As heralded on YouTube since 2008 (based on the peer-reviewed science paper The Principle of Recursive Genome Function), the key is Analytics - not only for the public investors, but for the sustainability of the Industrialization of the Genome Revolution. The paradigm-shift has been available since 2002 - a year before the START of ENCODE - AJP.]

^ back to top


IPO Preview: Pacific Biosciences [this week; a huge surge for Fractal Analytics - AJP]

Bloomberg BusinessWeek
October 22, 2010

Pacific Biosciences expects to offer up to $212.5 million in common stock in an IPO next week.

The company said it expects to price 12.5 million shares between $15 and $17 apiece. It is also offering underwriters 1,875,000 shares to cover overallotments. If all options are exercised, the company could have gross proceeds of just under $244.4 million.

The company, based in Menlo Park, Calif., makes genetic analysis technology focused on helping researchers investigating biochemical processes. Its initial focus is in the DNA sequencing market, with customers including research institutions and commercial companies focusing on agricultural research, drug discovery and development, biosecurity and bio-fuels.

The company said there are a significant number of competitors in the market, including Illumina Inc., Life Technologies Corp. and Roche Applied Science. Many of its competitors already have established manufacturing and marketing capabilities.

"We expect the competition to intensify within this market as there are also several companies in the process of developing new technologies, products and services," Pacific Biosciences said in its prospectus.

Those emerging competitors could include Complete Genomics Inc., Oxford Nanopore Technologies Ltd. and Ion Torrent Systems Inc., which is in the process of being acquired by Life Technologies [for $725 M in cash and stock - AJP].

Pacific Biosciences said it had $135 million in revenue in 2009. The fledgling company has yet to turn a profit.

The company said it expects net proceeds of about $210.4 million, after costs, depending on how the stock prices within the range and overallotment options. It said it would invest between $60 million and $70 million in current and future applications of its SMRT technologies [The $60-70 M looks most like the investment in Analytics - AJP], use $40 million to $60 million to fund anticipated future working capital needs, and use $20 million to $30 million to fund planned capital expenses. It would use between $40 million and $60 million for other general corporate purposes.

Underwriters in the offering include J.P. Morgan, Morgan Stanley, Deutsche Bank Securities, and Piper Jaffray.

The company plans to trade under the "PACB" symbol on the Nasdaq Global Market.
--

Among "Investment Risks" disclosed by PacBio on their S1 filing:

Adoption of our products by customers may depend on the availability of informatics tools, some of which may be developed by third parties.

Our commercial success may depend in part upon the development of software and informatics tools by third parties for use with our products. We cannot guarantee that third parties will develop tools that will be useful with our products or be viewed as useful by our customers or potential customers. A lack of additional available complementary informatics tools may impede the adoption of our products and may adversely impact our business.

[As the PacBio IPO happens in the coming week, there will be a huge surge of interest in the Fractal Approach to Analytics either way:

a) If the IPO is successful, there will be monies to make good on what PacBio claims to be: a DNA "analysis" company - "at the first step with the focus on sequencing". Smart public investors are keenly aware that analytics is missing - and that it would be a huge mistake (of the Silicon Genetics type, 2000) to invest in Analytics based on the totally wrong ancient axioms of Genome Informatics (Central Dogma and Junk DNA), at a time when the superseding replacement paradigm, "The Principle of Recursive Genome Function", is available. The "proof of concept" that the genome (structure) is fractal was provided over a year ago by Eric Lander (co-chair of the President's Council of Advisors on Science and Technology) et al.

b) If the PacBio IPO is less than successful (e.g. should the $15-17 list price drop to lower levels upon IPO), it will be a bitter lesson to all Sequencing Companies that public investors are just as aware of the "Dreaded DNA Data Deluge" - as I presented it in 2008 (in the Google Tech Talk YouTube, which by the way rises relentlessly, now over 9,000 views from all continents) - and would be reluctant to invest in Genomics where a potential glut of sequences that cannot be adequately analyzed may threaten sustainability. Many investors are aware of the present DTC "unsustainability" - though that is caused by the US government, rather than by initial supply-chain-management difficulties in the Industrialization of Genomics by the Private Sector, banking on public investment. Sequencing companies will have to do something rather quickly to embrace Fractal Analytics. AJP.]

^ back to top


Benoît B Mandelbrot: the man who made geometry an art [censored - reinstated - AJP]

Guardian.co.uk
October 19, 2010

[Excerpts - see full article linked to title - AJP]

Few recent thinkers have woven such a beautiful braid of art and science as Benoît B Mandelbrot, who has died aged 85 in Cambridge, Massachusetts. (The B apparently doesn't stand for anything. He just felt like adding it.) Mandelbrot was a provocative mathematician, a subversive geometer. He left a beautiful legacy in visual art, for Mandelbrot was the man who named and explained fractals – those complex, apparently chaotic yet geometrically ordered shapes that delight the eye and fascinate the mind. They are icons of modern understanding of the universe's complexity.

The Mandelbrot set, one of the most famous fractal designs, is named after him. With its fizzing fringe of crystal-like microforms blossoming out of a conjunction of black circles, this fractal pattern looks crazy but is the outcome of geometrical calculations.

.... Mandelbrot was not the first, but with his startling fractals concept he created a visual manifesto for a non-Euclidean universe.

Fractals – and I'd be delighted if mathematicians can give a better explanation below – are shapes that are irregular but repeat themselves at every scale: they contain themselves in themselves. Mandelbrot used the example of a cauliflower which, like a fern, is a fractal found in nature; if you look at the smallest sections of these vegetable forms, you see them mirroring the whole....

Artists have been fascinated by geometry for as long as mathematicians have. The studies of Euclid are reflected in the regularities of classical and Renaissance architecture, from the Pantheon in Rome to the duomo in Florence. But artists and architects were also thinking centuries ago about non-regular, curving geometries. You could argue that fractals give us the mathematics of the Baroque – they were anticipated by Borromini and Bach. I have a facsimile, given away by an Italian newspaper, of part of Leonardo da Vinci's Atlantic Codex, which contains page after page of his attempts to analyse the geometry of twisted, curving shapes.

Mandelbrot was a modern Leonardo, a man who showed the beauty in nature...

---

Comments [partial list - AJP]

singo111

19 October 2010 2:22PM

As others have already pointed out...

The beauty is not the 'pretty pattern' fractal. The beauty is in the fact that a dry and simple mathematical function can give rise to a fractal output of such complexity. That's what blows my mind anyway.

And as for:

Mandelbrot was not the first, but with his startling fractals concept he created a visual manifesto for a non-Euclidean universe - there isn't anything necessarily Non-Euclidean (as a mathematician would understand the term) about it at all.

Why isn't a science correspondent writing this?

RIP Benoit - you deserved better than this article.

---

Pellionisz

19 October 2010 6:36PM

This comment has been removed by a moderator. Replies may also be deleted.

Pellionisz

21 October 2010 4:33PM

Comment reinstated (see contents below).

--- [ end of excerpts from the Guardian - AJP] ---

[The comment by Pellionisz that was censored out - AJP]

19 October 2010 6:36PM

Mandelbrot defined himself by his book "Fractal Geometry of Nature" (B. B. Mandelbrot. W. H. Freeman, 1982) as his creative job title was "mathematical scientist". Though a highly artistic soul, Benoit left "the art part" to colleagues; e.g. The Beauty of Fractals (H. O. Peitgen and P. H. Richter, Springer 1986). His oeuvre is profoundly seminal, with a significance way over visual arts only. Suffice to mention his well-known fractal understanding of stock market prices.

Further, to illustrate how "seminal" his geometrical understanding of Nature became, suffice it to quote FractoGene (Pellionisz, 2002) and The Principle of Recursive Genome Function, where the fractal DNA governs growth, via fractal recursive iteration, of fractal organelles (e.g. brain cells, cardiac coronaries), organs (e.g. lung, kidney) and organisms (e.g. cauliflower romanesca). For those who prefer it over peer-reviewed science papers, a Google Tech Talk YouTube is available.

One might argue that the (fractal) universe, lunar surfaces, mountain ranges etc. had been around for mega-millions of years - Mandelbrot did not invent the fractality of lifeless and living Nature (just as Newton did not invent gravity), but discovered its mathematical principles. While he realized (2004) that the genome is fractal, when I tested his uncanny ability to tell with high precision the fractal dimension of roughness by asking in public at Stanford "what do you think the fractal dimension of the genome might be?", his honest answer was "I do not know" (implying that understanding the fractal nature of genome structure and function will dominate the genomics of the 21st Century). Just as John von Neumann could have arrived at the intrinsic mathematics of brain function (which, in his posthumous book "The Computer and the Brain", he stated to be certainly different from logical calculus), had beloved Benoit lived a decade more, his expressed interest in the most seminal biology (genomics) could have contributed breakthroughs beyond his realization of the challenge.

Pellionisz_at_JunkDNA.com

---

[Mandelbrot and Pellionisz at Stanford, 2004]

[Labeling Mandelbrot an artist, by an admittedly non-mathematician, is like celebrating Picasso as a mathematician; a flat journalistic mistake that censorship will not hide (the comment was reinstated in 48 hours) - AJP, FaceBook and Pellionisz_at_JunkDNA.com]

^ back to top


'Fractalist' Benoît Mandelbrot dies

New Scientist
Valerie Jamieson, chief features editor
21:08 18 October 2010

[View his last major presentation at TED, 2010 - AJP]

Benoît Mandelbrot, who died a month shy of his 86th birthday on Thursday, wanted to be remembered as the founding father of fractal geometry – the branch of mathematics that perceives the hidden order in nature.

He became a household name, thanks to the psychedelic swirls and spikes produced by the most famous fractal equation, the Mandelbrot set. (Recently, a 3D version of the set was discovered, called the Mandelbulb.)

Fractals are everywhere, from cauliflowers to our blood vessels. No matter how you divide a fractal, or how closely or distantly you zoom in, its shape stays the same. They have helped model the weather, measure online traffic, compress computer files, analyse seismic tremors and the distribution of galaxies. And they became an essential tool in the 1980s for studying the hidden order in the seemingly disordered world of chaotic systems.

By his own admission, Mandelbrot spent his career trawling the litter cans of science for fractal patterns and found them in the most unusual places. His job title at Yale University in New Haven, Connecticut, was deliberately chosen with this diversity in mind. "I'm a mathematical scientist," he told me. "It's a very ambiguous term."

I met Mandelbrot in 2004 when he was promoting The (Mis)behaviour of Markets, the book he'd written with financial journalist Richard L Hudson. After a long detour through other fields of science, Mandelbrot turned the tools of fractal geometry to financial data and had a stark warning for economists. "We have been mismeasuring risk," he said. "Brokers who ask why we should even think about 'wild events' where one bad event in the stockmarket can wipe out everything are misleading themselves."

Mandelbrot's hope was that by thinking about markets as scientific systems, we might eventually build a stronger financial industry and a better system of regulation. He also challenged Alan Greenspan, chairman of the Federal Reserve, and other financiers to set aside $20 million for fundamental research into market dynamics.

He called himself a maverick because he spent his life doing only what he felt was right and never belonging to a particular scientific community. And he enjoyed the reputation of someone who was happy to disturb ideas.

Back in 2004, Mandelbrot showed few signs of slowing down. He was writing his memoir – The Fractalist: Memoir of a Geometer – which was set to be published in 2012.

He worked every day except Sunday and enjoyed going to conferences (watch his talk at the 2010 TED conference ...). This year, he even co-authored two papers in the Annals of Applied Probability. "What motivates me is the feeling that these ideas may be lost if I don't push them any further," he told me of his desire to continue his research.

Mandelbrot may be gone. But the beauty of his fractals lives on. You only have to look around you to be reminded of his insights. In his own words: "Clouds are not spheres, mountains are not cones, coastlines are not circles, bark is not smooth, nor does lightning travel in a straight line." [and the Genome is not contiguous snippets of Gene-sequences in the vast sea of Junk DNA - but FractoGene, AJP]

^ back to top


Benoît Mandelbrot, Novel Mathematician, Dies at 85

By JASCHA HOFFMAN
New York Times
Published: October 16, 2010

[All this visual complexity is compressed into the Z=Z^2+C equation - AJP]
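That compression fits in a few lines of code; a minimal escape-time sketch (the grid resolution and 50-iteration cap are arbitrary rendering choices, not part of the mathematics):

    # A point c belongs to the Mandelbrot set if z -> z*z + c, started
    # from z = 0, never escapes |z| > 2.
    def in_set(c, max_iter=50):
        z = 0j
        for _ in range(max_iter):
            z = z * z + c
            if abs(z) > 2:
                return False
        return True

    # Coarse ASCII rendering of -2 <= Re(c) <= 1, -1.2 <= Im(c) <= 1.2.
    for im in range(12, -13, -2):
        print("".join("#" if in_set(complex(re / 20, im / 10)) else " "
                      for re in range(-40, 21)))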

Benoît B. Mandelbrot, a maverick mathematician who developed the field of fractal geometry and applied it to physics, biology, finance and many other fields, died on Thursday [Oct. 14, 2010 – five weeks before turning 86 - AJP] in Cambridge, Mass. He was 85.

The cause was pancreatic cancer, his wife, Aliette, said. He had lived in Cambridge.

Dr. Mandelbrot coined the term “fractal” to refer to a new class of mathematical shapes whose uneven contours could mimic the irregularities found in nature.

“Applied mathematics had been concentrating for a century on phenomena which were smooth, but many things were not like that: the more you blew them up with a microscope the more complexity you found,” said David Mumford, a professor of mathematics at Brown University. “He was one of the primary people who realized these were legitimate objects of study.”

In a seminal book, “The Fractal Geometry of Nature,” published in 1982, Dr. Mandelbrot defended mathematical objects that he said others had dismissed as “monstrous” and “pathological.” Using fractal geometry, he argued, the complex outlines of clouds and coastlines, once considered unmeasurable, could now “be approached in rigorous and vigorous quantitative fashion.”

For most of his career, Dr. Mandelbrot had a reputation as an outsider to the mathematical establishment [Received tenure from Yale University in 1999, at the age of 75 - AJP]. From his perch as a researcher for I.B.M. in New York, where he worked for decades before accepting a position at Yale University, he noticed patterns that other researchers may have overlooked in their own data, then often swooped in to collaborate.

“He knew everybody, with interests going off in every possible direction,” Professor Mumford said. “Every time he gave a talk, it was about something different.”

Dr. Mandelbrot traced his work on fractals to a question he first encountered as a young researcher: how long is the coast of Britain? The answer, he was surprised to discover, depends on how closely one looks [How many "Genes" does the human DNA have? It depends on how closely one looks - AJP]. On a map an island may appear smooth, but zooming in will reveal jagged edges that add up to a longer coast. Zooming in further will reveal even more coastline.

“Here is a question, a staple of grade-school geometry that, if you think about it, is impossible,” Dr. Mandelbrot told The New York Times earlier this year in an interview. “The length of the coastline, in a sense, is infinite.”

In the 1950s, Dr. Mandelbrot proposed a simple but radical way to quantify the crookedness of such an object by assigning it a “fractal dimension,” an insight that has proved useful well beyond the field of cartography.
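The textbook stand-in for that insight is the Koch curve, whose dimension is exactly log 4 / log 3; a small sketch of the shrinking-ruler measurement (the Koch curve is an illustrative proxy here, not Britain's actual coastline):

    import math

    # Koch curve: each time the ruler shrinks to 1/3, you find 4 pieces,
    # so its fractal dimension D = log 4 / log 3 (between line and plane).
    print(f"D = {math.log(4) / math.log(3):.3f}")   # ~1.262

    for level in range(6):
        ruler = 3.0 ** -level            # measuring-stick length
        length = 4 ** level * ruler      # sticks needed x stick length
        print(f"ruler {ruler:.4f} -> measured length {length:.3f}")
    # The measured length grows without bound as the ruler shrinks --
    # the "coastline paradox" that led Mandelbrot to fractal dimension.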

Over nearly seven decades, working with dozens of scientists, Dr. Mandelbrot contributed to the fields of geology, medicine, cosmology and engineering. He used the geometry of fractals to explain how galaxies cluster, how wheat prices change over time and how mammalian brains fold as they grow, among other phenomena.

His influence has also been felt within the field of geometry, where he was one of the first to use computer graphics to study mathematical objects like the Mandelbrot set, which was named in his honor.

“I decided to go into fields where mathematicians would never go because the problems were badly stated,” Dr. Mandelbrot said. “I have played a strange role that none of my students dare to take.”

Benoît B. Mandelbrot (he added the middle initial himself, though it does not stand for a middle name) was born on Nov. 20, 1924, to a Lithuanian Jewish family in Warsaw. In 1936 his family fled the Nazis, first to Paris and then to the south of France, where he tended horses and fixed tools.

After the war he enrolled in the École Polytechnique in Paris, where his sharp eye compensated for a lack of conventional education. His career soon spanned the Atlantic. He earned a master’s degree in aeronautics at the California Institute of Technology, returned to Paris for his doctorate in mathematics in 1952, then went on to the Institute for Advanced Study in Princeton, N.J., for a postdoctoral degree under the mathematician John von Neumann.

After several years spent largely at the Centre National de la Recherche Scientifique in Paris, Dr. Mandelbrot was hired by I.B.M. in 1958 to work at the Thomas J. Watson Research Center in Yorktown Heights, N.Y. Although he worked frequently with academic researchers and served as a visiting professor at Harvard and the Massachusetts Institute of Technology, it was not until 1987 that he began to teach at Yale, where he earned tenure in 1999.

Dr. Mandelbrot received more than 15 honorary doctorates and served on the board of many scientific journals, as well as the Mandelbrot Foundation for Fractals. Instead of rigorously proving his insights in each field, he said he preferred to “stimulate the field by making bold and crazy conjectures” — and then move on before his claims had been verified. This habit earned him some skepticism in mathematical circles.

“He doesn’t spend months or years proving what he has observed,” said Heinz-Otto Peitgen, a professor of mathematics and biomedical sciences at the University of Bremen. And for that, he said, Dr. Mandelbrot “has received quite a bit of criticism.”

“But if we talk about impact inside mathematics, and applications in the sciences,” Professor Peitgen said, “he is one of the most important figures of the last 50 years.”

Besides his wife, Dr. Mandelbrot is survived by two sons, Laurent, of Paris, and Didier, of Newton, Mass., and three grandchildren.

When asked to look back on his career, Dr. Mandelbrot compared his own trajectory to the rough outlines of clouds and coastlines that drew him into the study of fractals in the 1950s.

“If you take the beginning and the end, I have had a conventional career,” he said, referring to his prestigious appointments in Paris and at Yale. “But it was not a straight line between the beginning and the end. It was a very crooked line.”

[How mammalian brains fold in a fractal way was followed up (after the initial ideas of Grosberg 20 years ago) almost exactly a year ago by Dr. Lander et al., for the fractal folding of the DNA; "Mr. President, the Genome is Fractal!". The primary concept of FractoGene (2002, by Pellionisz) reaches back to the Fractal Geometry of Cerebellar Purkinje Cells (1989), based on a musing of Mandelbrot in his classic book "Fractal Geometry of Nature" - AJP]

^ back to top


Going 'Beyond the Genome'

Genomeweb
October 14, 2010

BioMed Central's Beyond the Genome conference in Boston this week — which was held in conjunction with Genome Biology's 10th anniversary — showcased the work of several researchers whose ideas go beyond just sequencing.

The University of Maryland's Steven Salzberg kicked off the conference with a keynote speech about the work he and others are doing to try to estimate exactly how many genes a person has. In 1964, F. Vogel wrote a letter to Nature estimating that humans have 6.7 million genes. He was way off, Salzberg said, but it hasn't gotten any easier over the years to make the estimate more accurate. In the mid-1990s, three different papers estimated the count to be 50,000 to 100,000, 64,000, and 80,000. Even after the draft genome was published, the estimates varied widely. The public consortium estimated the count to be between 30,000 and 40,000, while Celera and its private partners estimated 26,588, with 12,000 other additional "likely" genes. So far, the most accurate estimate is 22,333 human genes, Salzberg said, but much of the human genome remains poorly understood, and RNA-seq is still revealing a lot of new genes that may have previously been overlooked. In the end, Salzberg said, it's not as important to know how many genes there are as to know what they are and what they do.

George Church emphasized how important it is to continue to read the genome. About 2,000 genes are highly predictive and medically actionable, he said, and as the price of sequencing continues to drop, researchers will be able to find more genes they can work with to the benefit of human health. Church also stressed the importance of open-access data, and said there is a need for an open database that researchers can use to analyze each other's data.

Elaine Mardis spoke about her work with cancer genomics, and said that, in researching the way tumors work, validating tumor variants is important, especially for disseminating the information to the wider scientific community for further analysis. The speed of data generation is both challenging and enabling, she added.

The University of Washington's Jay Shendure talked about his lab's work with exome sequencing in autism studies. At least some percentage of autism is caused by coding mutations, and exome sequencing is useful in studies of the disorder because the technique can be used to focus in on a single gene instead of an entire region of the genome, Shendure said. He described a trio-based exome study done in his lab, where 60 exomes — from 20 autistic children and both of their parents — were sequenced, and then analyzed to identify Mendelian errors. The researchers found 16 de novo SNPs validated by Sanger from the 20 autism trios, and found two genes — GRIN2B and FOXP1 — which they think could be causative in autism.

The University of Colorado's Rob Knight and BGI's Jun Wang discussed their respective labs' work with microbes. Knight talked about the research he has done with obese and lean mice, trying to elucidate the relationship between an organism's weight and its gut microbes. Wang talked about some of the studies BGI has done with diabetic patients, and said one study of Chinese type II diabetes patients discovered more than 500,000 novel bacterial genes and found 1,306 bacterial genes associated with diabetic patients, though whether the genes were the cause or the effect of diabetes is not yet known.

Comments:

Submitted by S. Pelech - Kinexus on Thu, 10/14/2010 - 14:08.

It is intriguing that despite the human genome having been completely sequenced for many years now, it is still unresolved exactly how many human genes actually exist. Mass spectrometry studies have revealed several protein sequences that were not originally described in gene databases. In my own experience, with the assignment of over 90,000 phospho-sites in predicted human proteins for PhosphoNET (www.phosphonet.ca), I have noticed several hundred proteins that were originally documented in UniProt (www.uniprot.org) but whose entries have since been deleted without any replacements. Since these phosphoproteins were identified from cell lysates by mass spectrometry, the encoding genes obviously exist. Since UniProt has just over 21,000 distinct human proteins currently listed, perhaps 4 to 5 percent of human proteins are still not tracked in the best repository of protein information that we have. How well the 22,333 figure for the total number of human genes accounts for these anomalies identified by mass spectrometry analysis of proteins is also unclear.

reply

Submitted by andras on Thu, 10/14/2010 - 19:47.

A better title would be: "FractoGene Recurses to the Genome". Both "Going Beyond the Genome" and counting the exact number of genes in human DNA are exercises in futility - unless going beyond the genome is tracked through the full recursion from intrinsic and extrinsic proteins back through the DNA>RNA>PROTEINS> cycle and onward; unless contiguous sequences formerly defined as "genes" yield to the facts of "alternative splicing" (one gene acting as many different genes when spliced in different ways); and unless the newly found facts are acknowledged that a given contiguous sequence constitutes a functional unit together with sequences very far downstream or upstream, totally outside the boundaries of the (now obsolete) "gene" definition. The peer-reviewed science paper The Principle of Recursive Genome Function and the Google Tech Talk YouTube of our Genome Revolution define not just a one-way trip "beyond the Genome" (akin to the Russians in the early days of the Space Age, blasting a dog into space and leaving it there to perish), but something more like sending a man to the Moon and taking him safely back to Earth - repeatedly, again and again. It is within the context of recursive algorithms, such as fractal iterative recursion, that the seemingly scattered elements of genes make sense: FractoGene governs the growth of fractal organelles (such as brain cells), organs (such as the lung) and organisms (such as the Romanesco cauliflower) as guided by the demonstrably fractal genome, recursing through epigenomic channels back to the DNA. Pellionisz_at_JunkDNA.com (reprint requests to holgentech_at_gmail.com).

^ back to top


Cold Spring Harbor Lab Says Benefits of ARRA Funding Will Outlast Stimulus Program

October 08, 2010

By Alex Philippidis

NEW YORK (GenomeWeb News) - Cold Spring Harbor Laboratory says it expects to benefit from the stimulus funding it received through the American Recovery and Reinvestment Act of 2009 well past the program's end next year.

"For CSHL, the injection of ARRA funds has been very positive and will have an impact past the two years of the funding in that it is generating new data that will lead to new projects and new opportunities to pursue grant funding from public and private sources," a laboratory spokeswoman, Dagnia Zeidlickis, told GenomeWeb Daily News this week.

CSHL secured $23.4 million of stimulus funding in 19 awards. The largest award, at more than $4.7 million over two years from the National Cancer Institute, funded the creation of a Molecular Target Discovery and Development Center, with the goal of determining which of the hundreds of genes that are altered in cancer actually play a role in causing the disease.

The center - part of a network of five such centers established nationwide - is evaluating the torrent of data from recent human cancer genome projects, as well as validating candidate genes in mouse models. The center hopes that information can help in discovering and validating new cancer drugs targeting molecular changes in the disease seen in patients. Scott Powers, an associate professor, is the project's principal investigator.

Next largest, at just over $2.5 million over two years from the National Heart, Lung, and Blood Institute, is a study of the epigenetic dynamics of developing germ cells and early mouse embryos. Gregory Hannon, professor and Howard Hughes Medical Institute investigator, served as PI for the study, part of which compares epigenetic profiles in early embryos derived from normal mice to those of early embryos in hormone-treated, super-ovulated mice, since hormone treatments are believed to alter the epigenetic state of some genes.

The research is designed to help understand hormone-assisted attempts at conception undergone by up to 1 million women each year.

CSHL also used almost $1.3 million of ARRA funds over two years from the National Institute of Mental Health to hire a developmental neurobiologist with expertise in neural circuit development and plasticity. Zeidlickis said the laboratory won't disclose the faculty member's identity until the appointment is finalized.

That person will join CSHL's current 50-member faculty, which includes professors, associate professors, assistant professors, and fellows.

A less costly project, using $497,423 in ARRA funds from the National Science Foundation, consisted of renovations of the greenhouse at CSHL's Uplands Farm Research Field Station, which supports research into Arabidopsis and crop plants as well as the plant genetics teaching programs of CSHL's Dolan DNA Learning Center.

In an abstract of its grant application, CSHL concluded: "These facilities are inadequate to meet the demands of current genome driven plant biology research. The infrastructural improvements will provide appropriate growing conditions for a greater diversity of plant species and will increase the energy efficiency of the facilities."

"With new research project funding, upgraded infrastructure, and a new faculty position, CSHL will be able to continue to pursue the kind of innovative research that we are best known for," Zeidlickis said. "This research should lead to new opportunities for funding from Federal programs that are increasingly recognizing innovation and transformative research - like the TRO1 and Challenge Grants that we have been successful in securing - in addition to ARRA."

ARRA is the $814 billion measure signed into law by President Obama last year with the intent of stimulating the nation's economy. The law required NIH to spend, or commit to spend, all $10 billion available to the agency under the legislation by Sept. 30, 2010 - though ARRA money doesn't have to be in the hands of grant winners, generally, until Sept. 29, 2011.

^ back to top


New Research Buildings Open at Cold Spring Harbor Laboratory

Research at the $100 million Hillside Laboratories will address “grand challenges” facing science and society

Cold Spring Harbor, NY – Cold Spring Harbor Laboratory (CSHL) cut the ribbon on six new research buildings, collectively called the Hillside Laboratories, at a dedication ceremony on June 12. The $100 million complex represents the largest expansion in CSHL’s 119-year history and increases active research space by 40%. When fully occupied the buildings will house approximately 200 new research-related personnel, which will mark a 20% increase in employment at CSHL.

At the dedication ceremony, CSHL President Bruce Stillman said, “This expansion will allow Cold Spring Harbor Laboratory to do more of what it has always done best: perform pioneering research at the leading edge of biological science, particularly in the areas of cancer and neuroscience, but also in the emerging field of quantitative biology.” Dr. Stillman spoke before a distinguished audience of CSHL staff and supporters from the research, business, philanthropic and government communities, including Nobel laureate Philip Sharp.

In his dedication remarks, Dr. Sharp, perhaps best known as the co-discoverer of RNA splicing, suggested how research to be performed in the Hillside buildings “will help humanity surmount some of the great challenges of our time.” He recalled that the first public announcement calling for a national effort to sequence the human genome was made at the dedication of a new research building at CSHL in 1985. He then issued his own implicit challenge to the scientists who will occupy the gleaming new Hillside buildings. Sharp envisioned a future in which data collected in millions of patient electronic medical records will be merged with genome scans of the same individuals. This would serve as the basis for profound insights into cancer and mental illness, two of the foci of work in the new Hillside Laboratories. Such an effort, he said, might also help usher in an era of personalized medicine.

CSHL’s President Stillman in his remarks thanked gathered guests for their support of the expansion project, saying, “Such a significant addition to our research space was made possible by the generous contributions of private donors, philanthropic foundations and the New York State ‘Gen*NY*sis’ initiative, which provided a grant of $20 million. They had the foresight to understand the significance of this expansion to the Laboratory’s long-term mission to advance our ability to diagnose and develop more effective ways of treating cancers, neurological diseases and other major causes of human suffering.”

A capital campaign raised over $200 million to support the construction of the new research buildings, recruitment of new investigators, equipment for new research projects and endowment for research and graduate education. The project was also supported by a bond issued with the Nassau County Industrial Development Authority.

The Hillside Laboratories

Called the “Hillside Laboratories,” the six new research buildings total 100,000 square feet and include:

The Donald Everett Axinn Laboratory, for research on the neurobiological roots of mental illness;

the Nancy and Frederick DeMatteis Laboratory, for research on the genetic basis of human diseases, including autism, cancer, and schizophrenia;

the David H. Koch Laboratory, home to a newly established Center for Quantitative Biology, where an interdisciplinary team of top mathematicians, physicists, and computer scientists will develop mathematical approaches to interpret and understand complex biological data sets;

the William L. and Marjorie A. Matheson Laboratory, for research on the tumor microenvironment and metastasis;

the Leslie and Jean Quick Laboratory, for research on new therapeutic strategies for treating cancer; and

the Wendt Family Laboratory, for research on neurodevelopment and the wiring of complex circuits in the brain.

Designed to foster the progress of scientific discovery

Speaking at the opening ceremony, CSHL Board Chairman Eduardo Mestre said, “An important goal for the design of the Hillside Laboratories was to encourage collaboration among scientists and foster the progress of scientific discovery, while preserving the historic appeal of CSHL’s picturesque campus. Looking at this beautiful complex I believe we have succeeded brilliantly.”

The six new buildings are actually outcroppings of a single interconnected structure with an infrastructure that is integrated beneath ground level. Each of the laboratories rises from the ground in a different place, giving the appearance of six discrete buildings. Nestled into the hillside, the buildings are connected at various elevations and share a common utility grid that will make them 30% more energy-efficient than prevailing standards for laboratory facilities.

In order to preserve the idyllic nature and existing environment of the 115-acre campus, the Hillside Laboratories have been designed to complement rather than overpower CSHL’s smaller, historic buildings along the western shoreline of Cold Spring Harbor. In addition to the six research buildings, the new complex features the Laurie and Leo Guthart Discovery Tower, the tallest structure in the group, which serves to vent heat from the six buildings while providing an aesthetic “cap” for the ensemble.

Other unique features of the complex include a water element that threads like a mountain stream through its center; a 200-foot-long bridge; an award-winning storm water management system; meticulous landscape design; and spectacular new vantage points for viewing Cold Spring Harbor.

The Hillside Laboratories were designed by Centerbrook Architects and Planners LLP, which was selected for this project based on its history of award-winning designs of earlier CSHL buildings and its commitment to creating unique and uplifting designs that fulfill program and budget objectives while enriching the natural surroundings.

In the construction phase of the project CSHL emphasized the hiring of local Long Island craftsmen and -women. It is estimated that the project provided as many as 250 construction industry jobs on Long Island during the course of its nine-year planning and construction phases.

Hillside Laboratory Facts

The construction project is the largest ever undertaken by CSHL and will increase research space by 40%.

The expansion at CSHL will create 200 new high-paying, high-tech jobs on Long Island.

More than 250 project contractors, consultants, and craftsmen who worked on the project were from local Long Island companies.

Construction costs on the new 100,000-square-foot building complex totaled $100 million.

The laboratory buildings are designed to be 30% more energy-efficient than standards set for laboratories by ASHRAE (American Society of Heating, Refrigerating, and Air-Conditioning Engineers).

An innovative environmental design for storm-water management uses newly-created wetlands, rain gardens, and bio-swales to filter storm water runoff from the hillside before it makes its way into the harbor. This system has a capacity of 254,000 gallons, and was awarded the 2007 Project of the Year by the Nassau County Society of Professional Engineers.

Nearly 700 trees have been planted to reforest the approximately 11 acres of forest that were cleared to make way for construction.

All organic material from the site was retained for reuse. Trees were chipped and mulched for site restoration, and topsoil was scraped away from the building site, retained onsite, and reapplied during site restoration.

Approximately 200,000 cubic yards of excess earth were removed from the site. A sand mining operation was set up on-site, screening out rock, gravel, fine sand, and other high-quality construction material before the excess was removed. Sale of the construction material reduced the cost of excavation from $4 million to $2 million.

William H. Grover, FAIA, and James C. Childress, FAIA, are Centerbrook Architects’ Partners-in-Charge of the Hillside complex. Todd E. Andrews, AIA, is the Project Manager. Visit www.centerbrook.com for more information.

Art Brings is the Vice President, Chief Facilities Officer in charge of the project.

Cold Spring Harbor Laboratory is a private, nonprofit research and education institution dedicated to exploring molecular biology and genetics in order to advance the understanding and ability to diagnose and treat cancers, neurological diseases and other causes of human suffering.

^ back to top


What to Do with All That Data?

October 07, 2010

By Alex Philippidis

Recent technological advances in genomics have caused something both "terrifying" and "exciting," Mike the Mad Biologist says - "a massive amount of data." Mike says that genome sequencing is already fast and cheap, but it will become faster and cheaper; the problem is evolving from how to sequence genomes to get informative data to how best to use the information we already have. "We are entering an era where the time and money costs won't be focused on raw sequence generation, but on the informatics needed to build high-quality genomes with those data," Mike says. While it's great to be able to contemplate a $100 genome, the costs of storing and using the data could be upwards of $2,500. Researchers must find ways to store the data and analyze everything that's already been sequenced. "You have eleventy gajillion genomes. Now what? Many of the analytical methods use 'N-squared' algorithms: that is, a 10-fold increase in data requires a 100-fold increase in computation. And that's optimistic," he says.
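
To make the "N-squared" point concrete, here is a minimal Python sketch (the genome counts are illustrative, not from the post) showing why all-vs-all analyses scale quadratically:

# Any all-vs-all analysis (e.g., pairwise genome comparison) performs
# N * (N - 1) / 2 comparisons, so 10x more genomes means ~100x more work.
def pairwise_comparisons(n_genomes):
    """Number of distinct genome pairs in an all-vs-all analysis."""
    return n_genomes * (n_genomes - 1) // 2

for n in (100, 1_000, 10_000):
    print(f"{n:>6} genomes -> {pairwise_comparisons(n):>12,} comparisons")
# Prints 4,950 / 499,500 / 49,995,000: each 10-fold increase in genomes
# costs roughly a 100-fold increase in computation, as the post says.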

Submitted by Stephen.Craig.J... on Thu, 10/07/2010 - 16:47.

I am thinking there needs to be an independent organization, composed of experts, which continually analyzes and interprets the new information and translates it into a usable form for clinicians and other end users.

reply

Submitted by S. Pelech - Kinexus on Thu, 10/07/2010 - 18:41.

Unless there is a well-funded parallel program of biomedical research that can make sense of the genomics data from a proteomics perspective, the genome sequencing efforts will yield primarily correlative data that will offer limited risk assessment at best. In view of the complexities of cellular regulation and metabolism, it will not provide conclusive data about the actual cause and progression of an individual's disease and how best to treat it. Unfortunately, much of the current effort to understand the roles and regulation of proteins is undertaken in simple animal models that are attractive primarily because of their ease of genetic manipulation. However, such studies have little relevance to the human condition. Without a better understanding of how mutations in genes affect protein function and protein interactions in a human context, genome-based diagnostics will in most situations probably not be much more beneficial than phrenology.

Phrenology is an ancient practice that was extremely popular about 200 years ago. It was based on the idea that the formation of an individual's skull and the bumps on their head could reveal information about their conduct and intellectual capacities. Phrenological thinking was influential in 19th-century psychiatry and modern neuroscience. While this practice is pretty much completely ridiculed now, it is amazing how many people still use astrology, I Ching, Tarot cards, biorhythms and other questionable practices to guide their lives, including medical decisions. I fear that an even wider portion of the general population will put their faith in whole genome-based analyses, especially with the strong encouragement of companies that could realize huge profits from offering such services. The most likely consequence, apart from yet another way for the sick to be parted from their money, is a lot more anxiety in the healthy population as well.

While I am sure that many of my colleagues may view my comparison of gene sequencing with obvious pseudo-sciences as inappropriate, the pace at which such genomics services are being offered to the general population warrants such consideration. We know much too little about the consequences of some 15 million mutations and other polymorphisms in the human genome to make sensible predictions about health risks. For only a few dozen human genes, primarily affected in cancer, do we have sufficient data to make reasonable pronouncements about the cause of a disease and the means to do something effective about it in the way of targeted therapy.

While it is easy to become exuberant about the power and potential of genomic analyses, the limitations of this type of technology alone to improve human health will soon become painfully obvious. Ultimately, economics will be the main driver of whether it is truly worthwhile to pursue whole genome sequencing en masse. This will not be dictated simply by the cost of whole genome sequencing but, as pointed out by others, by the costs of storing and analyzing the data, and by whether significant improvements in health care delivery outcomes actually materialize.

I am much less optimistic about the prospects of this. When I grew up in the 1960s, there was excitement about human colonies on the moon and manned missions to Mars before the turn of the century. Nuclear power, including fusion, was going to solve our energy problems by this time. I believe that in 30 years, when we look back at current plans to sequence tens to hundreds of thousands of human genomes, we will be amazed at the naivety of the proponents of this undertaking.

^ back to top


Pacific Biosciences Targeting $15-$17 Share Price for IPO

October 05, 2010

By a GenomeWeb staff reporter

NEW YORK (GenomeWeb News) – Pacific Biosciences will make 12.5 million shares available at a price between $15 and $17 per share for its initial public offering, the company said in an amended preliminary prospectus filed with the US Securities and Exchange Commission today.

Today's amended S-1 document follows a similar filing last week with the SEC in which the company disclosed it had done a 1-for-2 reverse stock split in September and increased the amount it expects to raise from its IPO to $230 million from the $200 million originally targeted when the company announced its proposed IPO in August.

The Menlo Park, Calif.-based single-molecule sequencing firm also said in today's filing that it is making about 1.9 million shares available to its underwriters to purchase to cover over-allotments.

At the midpoint of its share price range, $16, PacBio's net proceeds from the offering would be about $182.5 million, or $210.4 million if the underwriters fully exercise their option to purchase additional shares, the company said.
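
As a quick arithmetic check on those figures (using only the numbers stated above; the implied underwriting costs are inferred, not disclosed in the filing), a minimal Python sketch:

# Back-of-the-envelope check on PacBio's stated proceeds at the $16 midpoint.
shares = 12_500_000
overallotment = 1_900_000
midpoint = 16.00

gross = shares * midpoint                          # $200.0M gross
gross_full = (shares + overallotment) * midpoint   # $230.4M with over-allotment
print(f"${gross / 1e6:.1f}M base, ${gross_full / 1e6:.1f}M with over-allotment")
# Against the stated nets of $182.5M and $210.4M, this implies roughly
# $17.5M-$20M in underwriting discounts and offering expenses.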

The underwriters on the offering are JP Morgan, Morgan Stanley, Deutsche Bank Securities, and Piper Jaffray.

PacBio previously had stated that through the first half of 2010 it had recorded almost $1.2 million in revenues, all from government grants, and a net loss of $63 million, or $99.58 per share. In H1 2009, it had no revenues and a net loss of $35.1 million, or $75.39 per share.

The company had cash and cash equivalents of $90.1 million as of June 30, it said.

[There is hardly any question that with the stock market easing back to 11,000, three potentially successful IPOs could clinch the return of a definite recovery. It is widely rumored that in the social Internet media sector FaceBook is poised for one. The other two could be guesstimated to be the Affordable DNA Sequencing offerings filed almost simultaneously by Complete Genomics (Mountain View) and Pacific Biosciences (Menlo Park). Thus the remaining crucial questions are "when", "which one first", and "which will be more successful than the other"? Answers to these heretofore open questions might harbor some very good news for Silicon Valley - and for the US Economy, as heralded in the YouTube "Personal Genome Computing" panel by the Churchill Club in early 2009. (FaceBook) / Pellionisz_at_JunkDNA.com]

^ back to top


The Road to the $1,000 Genome

Bio-IT World, September 28, 2010

SPECIAL REPORT

The term next-generation sequencing (NGS) has been around for so long it has become almost meaningless. We use “NGS” to describe platforms that are so well established they are almost institutions, and future (3rd-, 4th-, or whatever) generations promising to do for terrestrial triage what Mr Spock’s Tricorder did for intergalactic health care. But as the costs of consumables keep falling, turning the data-generation aspect of NGS increasingly into a commodity, the all-important problems of data analysis, storage, and medical interpretation loom ever larger.

“There is a growing gap between the generation of massively parallel sequencing output and the ability to process and analyze the resulting data,” says Canadian cancer researcher John McPherson, feeling the pain of NGS neophytes left to negotiate “a bewildering maze of base calling, alignment, assembly, and analysis tools with often incomplete documentation and no idea how to compare and validate their outputs. Bridging this gap is essential, or the coveted $1,000 genome will come with a $20,000 analysis price tag.”

“The cost of DNA sequencing might not matter in a few years,” says the Broad Institute’s Chad Nusbaum. “People are saying they’ll be able to sequence the human genome for $100 or less. That’s lovely, but it still could cost you $2,500 to store the data, so the cost of storage ultimately becomes the limiting factor, not the cost of sequencing. We can quibble about the dollars and cents, but you can’t argue about the trends at all.”
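
To see how storage could swamp a $100 sequencing cost, consider a minimal sketch (all figures below are hypothetical illustrations, not Nusbaum's):

# Hypothetical illustration of storage dominating sequencing cost.
genome_raw_gb = 300        # ~30x coverage with quality data; placeholder size
cost_per_gb_year = 1.00    # hypothetical replicated enterprise storage, 2010
years_retained = 8
print(f"${genome_raw_gb * cost_per_gb_year * years_retained:,.0f}")
# -> $2,400, the same order of magnitude as the $2,500 figure quoted above.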

But these issues look relatively trivial compared to the challenge of mining a personal genome sequence for medically actionable benefit. Stanford’s chair of bioengineering, Russ Altman, points out that not only is the cost of sequencing “essentially free,” but the computational cost of dealing with the data is also trivial. “I mean, we might need a big computer, but big computers exist, they can be amortized, and it’s not a big deal. But the interpretation of the data will be keeping us busy for the next 50 years.”

Or as Bruce Korf, the president of the American College of Medical Genetics, puts it: “We are close to having a $1,000 genome sequence, but this may be accompanied by a $1,000,000 interpretation.”

Arbimagical Goal

The “$1,000 genome” is, in the view of Infinity Pharmaceuticals’ Keith Robison, an “arbimagical goal”—an arbitrary target that has nevertheless obtained a magical notoriety through repetition. The catchphrase was first coined in 2001, although by whom isn’t entirely clear. The University of Wisconsin’s David Schwartz insists he proposed the term during a National Human Genome Research Institute (NHGRI) retreat in 2001. During a breakout session, he said that NHGRI needed a new technology to complete a human genome sequence in a day. Asked to price that, Schwartz paused: “I thought for a moment and responded, ‘$1,000.’” However, NHGRI officials say they had already coined the term.

The $1,000 genome caught on a year later, when Craig Venter and Gerry Rubin hosted a major symposium in Boston (see, “Wanted: The $1000 Genome,” Bio•IT World, Nov 2002). Venter invited George Church and five other hopefuls to present new sequencing technologies, none more riveting than U.S. Genomics founder Eugene Chan, who described an ingenious technology to unfurl DNA molecules that would soon sequence a human genome in an hour. (The company abandoned its sequencing program a year later.)

Another of those hopefuls was 454 Life Sciences, which in 2007 sequenced the first personal genome using NGS, Jim Watson's, at a cost of about $1 million. Since then, the cost of sequencing has plummeted to less than $10,000 in 2010. Much of that has been fueled by the competition between Illumina and Applied Biosystems (ABI). When Illumina said its HiSeq 2000 could sequence a human genome for $10,000, ABI countered with a $6,000 genome, dropping to $3,000 at 99.99% accuracy.

Earlier this year, Complete Genomics reported its first full human genomes in Science. One of those belonged to George Church, whose genome was sequenced for about $1,500. CEO Cliff Reid told us earlier this year that Complete Genomics now routinely sequenced human genomes at 30x coverage for less than $1,000 in reagent costs.

The ever-quotable Clive Brown, formerly a central figure at Solexa and now VP development and informatics for Oxford Nanopore, a 3rd-generation sequencing company, says: “I like to think of the Gen 2 systems as giant fighting dinosaurs, ‘[gigabases] per run—grr—arggh’ etc., a volcano of data spewing behind them in a Jurassic landscape—Sequanosaurus Rex. Meanwhile, in the undergrowth, the Gen 3 ‘mammals’ are quietly getting on with evolving and adapting to the imminent climate change... smaller, faster, more agile, and more intelligent.”

Nearly all the 2nd-generation platforms have placed bets on 3rd-gen technologies. Illumina has partnered with Oxford Nanopore; Life Technologies has countered by acquiring Ion Torrent Systems; and Roche is teaming up with IBM. PacBio has talked about a “15-minute” genome by 2014, Halcyon Molecular promises a “$100 genome,” while a Harvard start-up called GnuBio has placed a bet on a mere $30 genome.

David Dooling of The Genome Center at Washington University points out that the widely debated cost of the Human Genome Project included everything—the instruments, personnel, overhead, consumables, and IT. But the $1,000 genome—or in 2010 numbers, the $10,000 genome—only refers to flow cells and reagents. Clearly, the true cost of a genome sequence is much higher (see, “The Grand Illusion”). In fact, Dooling estimates the true cost of a “$10,000 genome” as closer to $30,000, by the time one has considered instrument depreciation and sample prep, personnel and IT, informatics and validation, management and overheads.
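
To illustrate Dooling's accounting, here is a minimal sketch; the individual line items below are hypothetical placeholders, and only the $10,000 reagent figure and the roughly $30,000 total come from his estimate:

# Hypothetical breakdown of why a "$10,000 genome" (reagents only) can cost
# closer to $30,000 all-in. Line items are invented for illustration.
cost_items = {
    "flow cells and reagents":        10_000,  # the advertised "genome cost"
    "instrument depreciation":         6_000,
    "sample prep and personnel":       5_000,
    "IT, informatics and validation":  6_000,
    "management and overheads":        3_000,
}
print(f"True cost per genome: ${sum(cost_items.values()):,}")  # -> $30,000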

“If you are just costing reagents, most of the vendors could claim a $1,000 genome right now,” says Brown. “A more interesting question is: ‘$1,000 genome—so what?’ It’s an odd goal because the closer you get to it the less relevant it becomes.”

Special Interests

This special issue of Bio•IT World contains a series of stories and essays that provide some useful perspectives on the march to the $1,000 genome, which some regard as a medical imperative and others a grand illusion.

We get an up-close look at sequencing operations at the Broad Institute, which has been the U.S. flagship genome center for a decade (see page 30). We also meet the leaders of BGI Americas, which aims to provide sequencing capacity and analysis for labs big and small, while managing editor Allison Proffitt gleefully visits BGI’s prized new sequencing center under construction in Hong Kong (page 42).

We look at the genesis of Solexa, the British company that provided the raw technology for Illumina, the best-selling NGS platform to date (page 52). We meet Kevin Ulmer, a man who has spent more than three decades trying to develop the killer app for the $1,000 genome (page 64). And we meet NABsys, a 3rd-generation technology taking aim at the myriad clinical applications of NGS (page 61).

Given that the costs of data analysis and storage will increasingly dominate the NGS equation, Alissa Poh reviews some of the latest software solutions on offer (page 58), while Allison Proffitt appraises some of the latest data storage technologies (page 38).

Finally, we meet some of the organizations—from bioinformaticians and medical geneticists to pathologists and software engineers—who are developing new ideas and resources for clinical genomic interpretation (page 48). And we profile Hugh Rienhoff, physician and founder of My Daughter’s DNA.org, and follow his inspirational quest to solve his daughter’s mystery condition (page 34).

Also in this report are invited commentaries from genomics experts at two big pharma companies—Amgen’s Sasha Kamb and Novartis’ Keith Johnson and colleagues—discussing the potential applications and adoption hurdles of NGS in pharma. We also have our regular columns, including BioTeam’s Michele Clamp and our colleague Eric Glazer on social media, and a preview of an exciting online community called NGS Leaders.

We hope you enjoy this special report on the road to the $1,000 genome as much as we have enjoyed reporting and preparing it.

—Kevin Davies, Mark Gabrenya and Allison Proffitt

[George Church has been talking about "the zero dollar DNA sequence" for years by now. The "Great Inflection" from emphasis on Sequencing to emphasis on Analytics of Data has been looming at least since the YouTube "Is IT ready for the Dreaded DNA Data Deluge?" 2 years ago. We almost agree with Stanford's Russ Altman that The Principle of Recursive Genome Function will keep us preoccupied for 50 years (I think 500 is more likely...). (FaceBook) / Pellionisz_at_JunkDNA.com]

^ back to top


Revolution [was] Postponed [for too long, over Half a Century - AJP]

Scientific American
By Stephen S. Hall
October 18, 2010

[The full 8-page article is to be purchased for $5.99 at Scientific American]

The Human Genome Project has failed so far to produce the medical miracles that scientists promised. Biologists are now divided over what, if anything, went wrong—and what needs to happen next

In Brief:

In the year 2000 leaders of the Human Genome Project announced completion of the first rough draft of the human genome. They predicted that follow-up research could pave the way to personalized medicine within as few as 10 years.

So far the work has yielded few medical applications, although the insights have revolutionized biology research.

Some leading geneticists argue that a key strategy for seeking medical insights into complex common diseases— known as the “common variant” hypothesis— is fundamentally flawed. Others say the strategy is valid, but more time is needed to achieve the expected payoffs.

Next-generation methods for studying the genome should soon help resolve the controversy and advance research into the genetic roots of major diseases.

---

Comment #3

Yehuda Elyada

06:26 PM 10/1/10

The complexity of the concept of a gene requires analytic tools far more sophisticated than the naive assumption that there exists a one-to-one correspondence rule between gene variation and phenotype traits. The DNA is not a "blueprint" in the simple metaphor borrowed from engineering drawings. A more fitting metaphor is a musical score, defining the timing and amplitudes of the series of notes expressed by various organs in the assemblage. Each musical instrument produces different waveforms due to its unique note expression mechanism, but music is made when all are controlled by a single set of notes and playing instructions and synchronized by the conductor. The waveforms combine to generate something "higher" than just more complex waveforms - just as life is more than metabolism. The musical metaphor suggests how to analyze the relationship between DNA and phenotype.

The "holistic" approach to appreciation of music is based on subjective, human-centric psychological response to various harmonics, note sequences, tempo, emphasis, etc. No "reductionist" approach can grasp the essence of what make music a different experience from noise. However, we do not possess a similar mental capacity to analyze DNA expression, so we have to develop a reductionist approach to enable analysis based on mathematical rigor. This is where physics can point the way.

From the point of view of physics, music is a time-varying complex waveform that can be broken into its simple components. By doing so, you move from the complex world of waveforms into the linear, "orthogonal" world of frequencies. You pay for this transformation by losing the ability to grasp the wholeness of the musical experience - a heavenly symphony becomes just flickering bars on your spectrum analyzer - but the technique of the Fourier transform is essential when you want to zero in on an acoustic trait of a musical instrument.

It is somewhat naive to assume that the same transformation that proved so useful and central in physics (not just in acoustics - where would quantum mechanics be without the Fourier transform?) will unlock the genotype-phenotype conundrum. But it is a promising first step in injecting some more sophisticated mathematics into genomics. To gain insight into the rules of the game you have to start with an overarching paradigm: our aim is to uncover a many-to-many transformation (perhaps expressed as a matrix) between two complementary world-views, the genome (the vector of DNA bases) and the phenotype (the vector whose components are traits).
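
Taking the commenter's matrix idea literally for a moment (a linear idealization that he himself flags as naive), the proposed many-to-many map could be written as:

% Idealized linear genotype-to-phenotype map, per the comment above.
% g: the genome vector; p: the phenotype (trait) vector; M: the map.
\[
  p = M g, \qquad p_i = \sum_{j} M_{ij}\, g_j ,
\]
% where M is generally neither square nor sparse: one base (or gene) can
% contribute to many traits, and one trait can draw on many bases.

Any recursive or nonlinear genome function would, of course, make M at best a local linearization rather than a global description.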

[The Scientific American article (though for copyright reasons we cannot reveal its contents in full) blows open for the public the core problem of "PostModern Genomics" - that scientists are split down the middle; some would not even admit that anything was deadly wrong with the "frighteningly unsophisticated" (and mathematically void) "theory" of (holo)genome function, oversimplified to "genes" (1.3%) and Junk DNA (98.7%), with both intronic and intergenic, as well as epigenomic, pathways not only neglected but their research discouraged by withdrawing ongoing government grants. As commenter #3 points out, the approach is fundamentally flawed, since if one does not consider overarching new paradigms (such as e.g. "Recursive Genome Function" - AJP) and sophisticated mathematics (such as e.g. the "Fractal" approach - AJP), in principle one cannot tell whether anything based on obsolete axioms is valid or not.

For a preview of this message, see YouTube "Is IT ready for the Dreaded DNA Data Deluge?" 2 years ago . Also, see The Principle of Recursive Genome Function (FaceBook) / Pellionisz_at_JunkDNA.com]

^ back to top


The $1,000,000 Genome Interpretation

Bio-IT World
By Kevin Davies
October 1, 2010

Groups of clinicians, academics, and some savvy software companies are crafting the tools and ecosystem to make medical sense of the sequence.

It is doubtful that the scientists and physicians who first started talking about the $1,000 genome in 2001 could have imagined that we would be on the verge of that achievement within the decade. As the cost of sequencing continues to freefall, the challenge of solving the data analysis and storage problems becomes more pressing. But those issues are nothing compared to the challenge facing the clinical community, which is seeking to mine the genome for clinically actionable information—what one respected clinical geneticist calls “the $1 million interpretation.” Judging from the first handful of published human genome sequences, the size of that task is immense.

Although it is early days, a number of groups are making progress in creating new pipelines and educational programs to prepare a medical ecosystem that is ill-equipped to cope with the imminent flood of personal genome sequencing.

Pathologists’ Clearing House

The pathology department at one of Boston’s most storied hospitals isn’t necessarily the place where one might expect to find the stirrings of a medical genomics revolution, but that’s what’s happening at Beth Israel Deaconess Medical Center (BIDMC) under the auspices of department chairman Jeffrey Saffitz.

“I see this as ground-breaking change in pathology and in medicine,” he says.

Together with Mark Boguski and colleagues, Saffitz has introduced a genomic medicine module for his residents (see “Training Day”). And under the stewardship of applied mathematician Peter Tonellato, he is building an open-source genome annotation pipeline that might pave the way for routine medical inspection once whole-genome sequencing crosses the $1,000 genome threshold.

All well and good: but why pathology? [In my 25 years in University Medical Schools I find it quite unheard of to teach Pathology first, followed by Physiology. For Pathology, taking Physiology is a prerequisite - AJP] “We are the stewards of tissue and we perform all the clinical laboratory testing. This has been our function historically for many years. But we have a sense that the landscape is changing,” says Saffitz. Genetic testing, he argues, must be conducted under the same type of quality assessment, regulatory oversight, and CLIA certification as provided by the College of American Pathologists (CAP), “and should be done by physicians who are specifically trained to do this. That’s us!”

“The brilliance of that,” says Boguski, a pathologist by training, “is that it removes a lot of the mysticism surrounding genomics and makes it just another laboratory test.” There’s really nothing magical or different about DNA, insists Saffitz. “We regard a file of sequence data as a specimen that you send to the lab, just like a urine specimen!”

BIDMC is a medium-sized hospital that conducts 7 million tests a year. Arriving in Boston five years ago, Saffitz began recruiting visionaries to shape “the future of molecular diagnostics” and help the discipline of pathology become a clearinghouse for genomic medicine in a way that is “going to revolutionize the way we do medicine.”

Boguski is best known as a bioinformatician who spent a decade at the National Center for Biotechnology Information (NCBI). He sums up the genomic medicine informatics challenge thus: “You have 3 billion pieces of information that have to be reduced to six bytes of clinically actionable information. That’s what pathologists do! They take in samples—body fluids and tissues—and we give either a yes/no answer or a very small range of values that allow those clinicians to make a decision.”

Increasingly, he says, pathology will become a discipline that depends on high-performance computing to extract clinically actionable information from genome data. That frightens many physicians, but Boguski cites a precedent. “Modern imaging technology would not be possible were it not for high-performance computing, but it’s built into the machine!” he says. “Most practicing radiologists don’t think about the algorithms for reconstructing images from the X-rays. Most pathologists in the future won’t think about that stuff either—it will just be part and parcel of their trade. Nevertheless, we have to invent those technologies.”

Math Lab

Mathematician Peter Tonellato has a deep interest in software systems for the clinic, and formulated the idea of a whole-genome clinical clearinghouse within pathology. “We have to start thinking about genetics as just another component of data information and knowledge that has to be integrated into the electronic health record. Stop labeling genetics as something different and new and completely outside the mainstream medical establishment and move it back into the fundamental foundational effort of medical activity.”

Come the $1,000 genome, it will simply make sense to sequence everyone’s tumor, he says. Just as pathologists study tissue biopsies under a microscope, “we’re going to be sequencing it in parallel and figuring out which pathways and targets are pertinent to that person’s condition.” Simply doing more specialized tests isn’t the solution. “How many tens of millions of dollars and how many years has it taken to validate [the warfarin] test?” asks Boguski. “Multiply that by 10,000 other genes and it simply doesn’t scale. We’re going to have to look at this in a whole new way.”

Tonellato has been funded by Siemens and Partners HealthCare to construct an open-source, whole-genome analysis pipeline. Although not commercially released, the pipeline is built and being used for some pilot projects. He is also partnering with companies—including GenomeQuest—who want to do the sequencing analysis in a best-of-breed competition to establish the most refined NGS mapping utilities and annotation tools. The goal is to annotate those variants in a clinically actionable way down to Boguski’s six bytes of information and the drug response recommendation. “We think we’re as far forward in terms of doing that in an innovative and pragmatic way as anyone,” says Tonellato.

Using the cloud (Amazon Web Services), his team has lowered the cost of whole-genome annotation to less than $2,000. “Everybody talks about the $1,000 genome, but they don’t talk about the $2,000 mapping problem behind the $1,000 genome,” he says. It takes Tonellato’s group about one week using five nodes for the resequencing, mapping and variant calling, while the medical annotation takes three people about a month. High-quality computer scientists have to be paid too, he says. “You can’t just talk about the sequencing costs.”
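
As a rough plausibility check on those cloud numbers (the hourly node rate below is a hypothetical placeholder, not from the article):

# Rough check on the "<$2,000 whole-genome annotation" compute figure.
nodes = 5
hours = 7 * 24              # "about one week" of wall-clock time
rate_per_node_hour = 2.00   # hypothetical large-instance rate, circa 2010
print(f"${nodes * hours * rate_per_node_hour:,.0f}")
# -> $1,680, consistent with "less than $2,000" (personnel time extra).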

Of course, it is most unlikely that hospitals will start running massive NGS and compute centers. “We envision a day where every clinical laboratory in every hospital in this country can do this testing,” says Saffitz. “They’re not going to do the sequencing, but there’ll be a machine where they can basically acquire the data, analyze it, and send a report to the doctor saying, ‘This is what we found, this is what it means, this is what you do.’” Where the sequencing is done isn’t of great concern. “We actually treat sequencing as a black box,” says Boguski. What’s important is that the hospital’s cost requirements and quality standards (and those of the FDA) are met. But Tonellato reckons it would be “very odd to have U.S. samples sent abroad for sequencing to Hong Kong or India... and then sit around and wait for the CLIA-certified, clinically accurate results to come back to us. That may happen in the future, but we have to get our own house in order first.”

Another problem is the current state of the gene variant databases, which Boguski calls “completely inadequate” in terms of clinical grade annotation. Where such a resource belongs is open to debate but Boguski is certain it does not belong with the government. “The government is not a health care delivery organization. Whatever that database is, it needs to operate under the same CLIA standards as the actual tests.”

Pathologists have traditionally interacted with patients when they are sick. “But more and more,” says Saffitz, “we’re going to be analyzing the genomes of people who are well, and I hope assuming a very prominent role in the preservation of health and preempting disease.”

Quake Aftershocks

The most comprehensive clinical genome analysis to date was reported in May 2010 in the Lancet. Stanford cardiologist Euan Ashley and colleagues, including Atul Butte and Russ Altman, Stanford’s chair of bioengineering, appraised the genome of Stephen Quake (see, “A Single Man,” Bio•IT World, Sept 2009). “This really needs to be done for a clinical audience to show them what the future is going to be like,” says Altman, who is also director of the biomedical informatics program and chief architect of the PharmGKB pharmacogenomics knowledgebase. The task of interpreting Quake’s genome involved more than 20 collaborators, including bioethicist Hank Greely and genetic counselor Kelly Ormond. When discussions turned to the risk of sudden cardiac arrest (Quake’s family has a history of heart disease), Ormond would invite Quake to leave the room until a consensus was reached.

Altman’s own group was able to predict Quake’s response to about 100 drugs. Some of it was imprecise, but he realized that, “especially for the pharmacogenomics, we are much closer [to clinical relevance] than I realized.” He said he would “bet the house” on the results dealing with statin myopathy, warfarin and clopidogrel dosing. The Stanford team also tried linking environmental and disease risk, but Altman admits that is farther from clinical practice. The Lancet study drew high praise from the BIDMC team. “As good as it gets,” is Tonellato’s verdict. “But go down to some town in the middle of America and say, ‘What are you going to do with this genome dataset for your patient?’... Is medicine ready for genetics yet or not? There is a long way to go.”

Since the publication, Altman has received inquiries from companies interested in doing similar “genomic markups” and licensing his group’s annotations. Altman intends to hire an M.D. curator to complement his Ph.D. curators, someone who can highlight the clinical significance of research data. Altman says he would be happy to have PharmGKB data included “in any and all pipelines.” Meanwhile, Ashley is leading a Stanford program to build a computer pipeline to reproduce the Quake analysis on a larger scale.

In a rational world, Altman says, it seems logical to sequence human genomes at birth and put the data in a secure database, querying it only when you know what you’re going to do with the results. That’s in an ideal world. In the United States, he notes dryly, some people do not trust governmental databases. “I could imagine if it’s cheap enough, that people will actually resequence the genome on a need-to-know basis, simply so they don’t have to store it. I think that’s a little bit silly, but in order to get genomic medicine effected, I’m not going to lose the fight over the database.”

Whoever ends up doing clinical genomic sequencing in the future, Altman says they will have to document high-quality data with a rapid turnaround. “We will then put [the data] through the pipeline—hopefully the Stanford pipeline or whatever pipeline seems to be winning—and then we will query it as needed and as requested by the physicians on a need-to-know basis.”

1,500 Mutations

Genome Commons was established by Berkeley computational biologist Steve Brenner to foster the creation of public tools and resources for personal genome interpretation. He wants to build an open access Genome Commons Database and the Genome Commons Navigator. He is also launching a community experiment called CAGI (The Critical Assessment of Genome Interpretation) to evaluate computational methods for predicting phenotypes from genome variation data (http://genomeinterpretation.org).

One notable private effort in clinical genome annotation is that of Omicia, a San Francisco-based software company founded by Martin Reese in 2002.

Omicia is taking genome data and extracting clinical meaning, focusing on DNA variation, rather than gene expression or pathways. “We have one of the best systems for interpreting the genome clinically,” claims Reese. He started with Victor McKusick’s classic Mendelian Inheritance in Man catalogue, which now lives online as OMIM, mapping a “golden set” of disease mutations to the reference genome. Omicia is also developing algorithms to predict the effect of protein-coding variants to better understand which mutations are medically relevant.

Reese sums up the goal: “You have 21,000 protein coding mutations compared to the reference genome. 10,000 of them are non-synonymous. We have 3,500 in disease genes. That’s roughly 15%. So 15% of 10,000 is 1,500 protein coding mutations. The goal is to interpret 1,500 mutations.”
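
Reese's funnel, restated in a few lines of Python (all numbers are his, from the quote above):

# Reese's back-of-the-envelope funnel for clinical genome interpretation.
coding_variants = 21_000       # protein-coding mutations vs. the reference
non_synonymous = 10_000        # those that change an amino acid
in_disease_genes = 3_500
fraction = in_disease_genes / coding_variants  # ~0.17; Reese rounds to 15%
print(round(non_synonymous * 0.15))            # -> 1500 mutations to interpret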

For the time being, Omicia is offering its services through collaborations. Reese has a three-year collaboration with Applied Biosystems, and was a co-author on the first NGS human genome paper using the SOLiD platform in 2009. Then there is the Life Alliance, a cancer genome alliance, featuring various medical centers and Life Technologies. “We’re doing their interpretation of these cancer genomes for 100 untreatable cancers,” says Reese.

Presenting the data for a physician is a challenge, says Kiruluta, but not as bad as the scant amount of time a physician has to see a patient. “The reporting is to help a physician make a decision quickly—green light, red light. Then there’s a much more detailed interface behind the scenes,” where other medical professionals can study the patient’s data in more detail.

Reese sees advantages to the commercial approach for genome software compared to academic solutions. “This will be a big play in the next few years as people make clinical decisions. So the quality of the software, the QC of the assembly, how transparent you are, the annotation, is critical. It will be a big problem for academia to do that—you know how it is when a postdoc writes something!” [Yes, I do. Government Software does not measure up, because it is not in touch with any real market. University Software does not measure up either, since when a postdoc writes a code - and graduates - the University Software either takes on a new life in Industry if the postdoc moves there, or quite simply dies without maintenance. For Genome Informatics software the only way to go is through the "Industrialization of Genomics" - AJP]

Reese has also been spearheading the effort to develop a new Genome Variation Format with Mark Yandell (University of Utah) and others, which was recently published in Genome Biology.

DNA Partners

The challenge facing the affable Samuel (Sandy) Aronson, executive director for IT at the Partners HealthCare Center for Personalized Genetic Medicine (PCPGM) and PCPGM’s clinical laboratory director, Heidi Rehm, is to deliver clinically actionable information to physicians in the Partners HealthCare network. “This challenge cannot be entirely solved by a single institution,” Aronson notes. “It takes a network of institutions working together.”

Rehm maintains a knowledge base of 95 genes that are routinely curated by the PCPGM’s Laboratory of Molecular Medicine and supplies information to physicians on the status of those genes in their patients in real time. The PCPGM’s GeneInsight suite, developed by Aronson’s team, has been in use for about seven years. There are two components—one for the laboratory, the other for the clinician. The lab section consists of a knowledgebase—the tests, genes, variants, drug dosing, etc—as well as an infrastructure to generate reports via the Genome Variant Interpretation Engine (GVIE).

On the clinical side is a new entity, the Patient Genome Explorer (PGE), which allows clinicians to receive test results from an affiliated lab and query patient records. “The PGE, without a doubt, is one of its kind,” says Rehm. “There’s no other system out there. There’s a lot of excitement about it. Labs are choosing us for testing because we offer that service.” When an update is made to the PCPGM knowledgebase on a variant that is clinically significant, the PGE proactively notifies the clinicians caring for patients with that variant. If there are 100 clinics with 10 patients each, and Rehm updates the knowledgebase, then 1,000 patient updates are dispatched automatically.

For inherited disease testing, the alert changes the variant from one of five categories to another: 1) pathogenic 2) likely pathogenic 3) unknown 4) likely benign, or 5) benign. The PGE made its debut last summer in the Brigham and Women’s Hospital Department of Cardiology. When the system launched, a dozen “high alerts” (meaning a variant has shifted from one major category to another) were immediately dispatched. The physicians’ response has been really positive, says Aronson. “There’s a significant disconnect between the level of quality of data being used for clinical purposes and the quality of data in the research environment,” says Rehm. “Our hope with the distribution of this infrastructure is to get more data validated for clinical use.”
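
A minimal sketch of the five-category scheme and the "high alert" rule described above (the major-category grouping and all names here are illustrative assumptions, not GeneInsight's actual implementation):

from enum import IntEnum

class VariantClass(IntEnum):
    PATHOGENIC = 1
    LIKELY_PATHOGENIC = 2
    UNKNOWN = 3
    LIKELY_BENIGN = 4
    BENIGN = 5

def major_category(c):
    # Illustrative grouping: pathogenic-leaning / unknown / benign-leaning.
    if c <= VariantClass.LIKELY_PATHOGENIC:
        return "pathogenic"
    return "unknown" if c == VariantClass.UNKNOWN else "benign"

def is_high_alert(old, new):
    """A 'high alert' fires when a variant shifts between major categories."""
    return major_category(old) != major_category(new)

# A reclassification from 'unknown' to 'likely pathogenic' would, in this
# sketch, trigger proactive notification of every clinician whose patient
# carries that variant -- 1,000 updates for 100 clinics of 10 patients each.
assert is_high_alert(VariantClass.UNKNOWN, VariantClass.LIKELY_PATHOGENIC)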

Core Challenges

The Partners effort is a worthy start, but the larger goal is to build a network where labs with expertise in other genetic disorders such as cystic fibrosis contribute their data, perhaps by offering attribution or a nominal transaction fee. “We can’t maintain data on every gene, but we’re willing to establish nodes of expertise,” says Rehm. As for the IT infrastructure, Aronson hopes to enable organizations to create a node on the network, link to the PGEs, and then operate under their own business models—whatever it takes to make the data accessible. The first external partner that linked to GeneInsight was Intermountain Healthcare (IHC) in Utah. “We believe this is the first transfer of fully structured genetic results between institutions so that they got into IHC’s electronic health record and are now available for decision support,” says Aronson.

Aronson anticipates a day when whole-genome sequencing for patients will be a clinical reality. “It’s very much on our radar,” he says, but he doesn’t appear unduly concerned. After all, he says, the PGE is designed to store highly validated clinical information, and he doesn’t expect the millions of variants in a whole genome to contain enough clinically actionable variants to overwhelm the database. The challenge will come in understanding complex/low-penetrance diseases, “where we’re more algorithmically dependent. That will require new infrastructure.”

A bigger problem is facilitating the business models that will solve personalized medicine challenges. “Our goal is to expand networking, adding labs, PGEs and going after a network effect,” says Aronson. “We have a structure that could present an answer to: how do you—in a true patient-specific, clinically actionable way that clinicians can use in their workflow—help interpret the data?”

[Comment (1)

Kevin has pointed to a most important development if genomics is to deliver on long-awaited promises. An available, affordable personal genome has little value without analysis and interpretation delivered in a consumable application.

Our "Genome Revolution" often invites analogy with the Space Age, e.g. comparing the Genome Project to the Moon Project.

The comparison is unfair to the Moon Project unless genomics delivers the full ride.

The Moon Project promised “to put a man on the Moon, and bring him safely back”. If we sequence the entire human DNA and fail to deliver the harder half of interpreting the sequence, the comparison is more akin to the Russians blasting a dog into space and leaving him there.

The ambitious approaches noted in this article all seem to be charting new territory and struggling to break from long-held erroneous beliefs. Trying to fix hereditary diseases without solid principles of recursive genome function as explained by (holo)genome physiology and biophysics is like trying to fix a broken television set without understanding how it works in the first place.

Thank you, Kevin, for addressing the key topic for our critical time.

---

For a preview of this message, see the YouTube "Is IT ready for the Dreaded DNA Data Deluge?" (2 years ago). Also, see The Principle of Recursive Genome Function (FaceBook) / Pellionisz_at_JunkDNA.com]

^ back to top


Mastering Information for Personal Medicine

Pathway
Sept. 27, 2010
Eric E. Schadt, Chief Scientific Officer of Pacific Biosciences

To make the most of new technologies, medicine must come to grips with the mountains of data it will produce.

Sometimes, mastery over the raging influx of information all around us has life and death consequences. Consider national security agencies charged with ensuring our safety by detecting the next big terrorist threat. Presented with worldwide email traffic, phone conversations, credit card purchase histories, video images from pervasive surveillance cameras and intelligence reports, the challenge for these agencies is to integrate all this abundant, disparate information and to present it to analysts in ways that help to identify significant threats more quickly.

Soon we will all be faced with a similar challenge that could have a dramatic impact on our well-being. In the not-too-distant future, average Americans will have access to detailed information about their genetic makeup, the molecular states of cells and tissues in their bodies, longitudinal collections of readings on their weight, blood pressure, glucose and insulin measurements, and myriad other clinical traits informative about disease, disease risk and drug response. Whereas classic molecular biology and clinical medicine offered only simple links between molecular entities and disease (for example, relating insulin levels and glucose levels to risks of diabetes), new technologies will provide comprehensive snapshots of living systems at a hierarchy of levels, enabling a more holistic view of human systems and the molecular states underlying disease physiologies. All this data—once appropriately integrated and presented—will allow us and our doctors to make the best possible informed decisions about our risks for disease; it will also help us to tailor treatment strategies to our particular disease subtypes and to our individual genetic and environmental backgrounds.

Powerful examples of how this new era of personalized medicine will change diagnosis and treatment are already available. A now-routine genetic test can indicate whether breast cancer patients will respond to treatment with the drug Herceptin, and testing for certain changes in DNA that affect blood-clotting can help doctors decide what dose of the anticoagulant warfarin would be safest for certain patients.

However, unlike the doctor of today, armed with a stethoscope and thermometer, tomorrow’s doctors will have access to a multitude of biosensor chips and imaging technologies capable of monitoring variations in our DNA and in the activities of genes and proteins that drive all cellular functions. They will be able to order scans with single-cell resolution for any organ in our bodies. How will such data be managed? How will it be analyzed and contrasted with similar types of data collected from populations, so that the totality of these data helps us to better understand our specific condition? How will the complex models derived from such data be interpreted and then applied to us as individuals by our doctors? Is the medical community prepared—and are individuals ready—for this revolution?

Managing Mountains of Data

The biomedical and life sciences are not the first to encounter this type of big-data deluge. Google, which is among the most sophisticated handlers of big data on the planet, aims for no less than “organizing the world’s information” by employing high-performance, large-scale computing to manage the petabytes of data available on the Internet. (A petabyte is one million gigabytes. By one estimate, 50 petabytes could store every written work in every language on earth since the beginning of recorded history.)

Companies such as Microsoft, Google, Amazon and Facebook have become proficient at distributing petabytes of data over massively parallel computer architectures (think hundreds of thousands of sophisticated, highly interconnected computers all working in concert to solve common sets of problems). Their technologies grab bits of those data on the fly, link them together and present them to users on request in fractions of a second.

However, the problems those companies have solved thus far are much simpler than understanding how the millions of variations in DNA distinguish us as individuals, how the activity levels of genes and proteins vary across all the cell types and tissues in our bodies, and how these relate to the physiological states associated with disease. The data revolution in the biomedical and life sciences is powered by technologies that provide insights into the operation of living systems, the most complex machines on the planet. But achieving such understanding will require that we tame the burgeoning information those technologies generate.

Within the next five to 10 years, for example, companies like Pacific Biosciences will deliver new single-molecule, real-time sequencing technologies that will enable scans of a person’s entire genome (DNA), transcriptome (RNA) and chemical “epigenetic” modifications of the genome in a matter of minutes and for less than $100. For a single individual, hundreds of gigabytes of this information could be gathered from many tissue and cell types, at multiple time points and under varying environmental stresses. Layer on top of that more data from imaging technologies, other sensing technologies and personal medical records, and one could produce terabytes (trillions of bytes) of data per individual, and well into the petabyte range and beyond for populations of individuals. Hidden in those collective data sets will be answers relating to the causes of disease and the best treatments for disease at the individual level.
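A back-of-envelope check of those orders of magnitude; every input below is an illustrative assumption, not one of Schadt's figures.

```python
# Illustrative assumptions only: a few hundred GB per multi-omic scan,
# sampled across tissues and time points, then scaled to a cohort.
GB, TB, PB = 10**9, 10**12, 10**15

scan_bytes  = 200 * GB    # genome + transcriptome + epigenome, one sample
tissues     = 10          # tissue/cell types sampled per person
time_points = 5           # longitudinal samples per tissue
per_person  = scan_bytes * tissues * time_points

print(per_person / TB)              # 10.0   -> terabytes per individual
print(100_000 * per_person / PB)    # 1000.0 -> petabytes for a cohort
```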

Integrating such data and constructing predictive models will require approaches more akin to those now employed by physicists, climatologists and workers in other strongly quantitative disciplines. Biomedical investigators and information specialists will need to develop tools and software platforms that can integrate the large-scale, diverse data into complex models; experimental researchers will need to be able to use these models and refine them iteratively to improve their ability to assess the risk and progression of disease and the best treatment strategies. In the end, these models will need to be able to move into clinical settings where doctors can employ them effectively to improve patients’ conditions without necessarily having to understand all of the underlying complexities that produced the models.

Only by marrying information technology to the life sciences and biotechnology can we realize the astonishing potential of the vast amounts of biological data that new generations of devices can gather and share. Such data, if properly integrated and analyzed, will enable personalized medicine strategies that could lead to everyone making better choices, not only on treating disease, but preventing it altogether.

Eric E. Schadt is chief scientific officer at Pacific Biosciences in Menlo Park, Calif. and co-founder and a director of Sage Bionetworks in Seattle, Wash.

[For a preview of this message, see the YouTube "Is IT ready for the Dreaded DNA Data Deluge?" (2 years ago) - (FaceBook) / Pellionisz_at_JunkDNA.com]

^ back to top


Cacao Genome Database Promises Long Term Sustainability

Chocolate-based Economy? (Potato, too... ) - AJP

Triple Pundit
September 20, 2010
by Leon Kaye

It is one of the oldest foods and is the subject of ancient texts and myths. One of the top ten most traded commodities in the world, this plant is a huge part of the economies of countries ranging from Cote d’Ivoire to Ecuador to Papua New Guinea. The finished product props up some of the world’s best known brands, and it exudes luxury while also contributing to a brutal way of life for some of the world’s poorest people—including hundreds of thousands of children. The cacao tree is also the subject of science, from botanists, agronomists, and now, geneticists.

Now scientists have decoded 92% of the cacao tree’s genome. Funded by MARS with the support of the US Department of Agriculture, IBM, and several universities, the Cacao Genome Database project is three years ahead of schedule. With the sequenced genotype, Matina 1-6, the project promises to solve such problems as pests and diseases that often plague cacao farmers, and in the long run, could improve both the production and sustainability of the cacao industry.

With this genome sequencing, cacao will join other commodities including rice, corn, and wheat, all of which have already gone through the process. Promises abound: improved crop yields, hardier cacao beans, and improved production within the entire supply chain from farmers to chocolatiers. The project also tackles the long-term sustainability of what some would call big chocolate: the demands of giants including Hershey, MARS, Cadbury, Nestle, Kraft, and Lindt. Over the past quarter-century, the growing global appetite for cacao has caused its global production to double, but the increase has come through more land use, not improved yields.

So will we all nosh on Matina Bars or Matina Kisses in the near future? It definitely could boost demand for organic and fair trade chocolate brands, as plenty of customers will add chocolate to the “non-GMO” shopping list. Large cacao farming operations stand to benefit as well. The environment possibly could be a winner, with a reduction in the use of pesticides and other chemicals. Plenty of long term questions, however, remain. Will less common varietals of cacao trees survive? What about smaller farmers, who could find themselves squeezed by the price of coveted pods and seedlings?

Or is this just the reality farmers, producers, and consumers must face in order for an industry to survive? The global cacao industry has lost US$700 million over the past 15 years to a trio of fungal diseases alone. Such losses do not affect only the bottom lines of companies like Nestle & Hershey: the livelihoods of many people who have limited economic opportunities may hang in the balance as well.

[As detailed in "Genome Based Economy", the present "Genome Revolution" is not at all the first chapter in entirely changing the global economy. Norman Borlaug (Nobel Prize, 1970) started the "Green Revolution", which, with the help of the genomics of its day, saved billions of people from starvation (and, by removing the reality of dying of hunger, turned India and China into the global powers they are now). With the global population exploding past 7 billion, there is universal agreement that a "Second Green Revolution" is needed. By means of full DNA sequencing, PostModern Genomics delivers; no wonder that food companies such as NESTLE and KRAFT pitch in with very substantial funds, and that agricultural giants like Monsanto are in the first 10 on Pacific Biosciences' "short list" of customers who have already paid cash for the PacBio SMRT sequencer. Another aspect of this news is that the already existing "Personalized Chocolate, best fitting to your Genome" (with technology shown here) will reach masses of consumers. (Without, and way before, full DNA sequencing, diabetics already have sugar-free chocolate...). Some might say "chocolate is insignificant" (it isn't). Potato certainly is one of the main sources of global nutrients (and look what the US did with rice for India in the First Green Revolution...). Now a high-protein potato is claimed to have been developed by India, for itself and perhaps for the World. - (FaceBook) / Pellionisz_at_JunkDNA.com]

^ back to top


US clinics quietly embrace whole-genome sequencing

Published online 14 September 2010 | Nature | doi:10.1038/news.2010.465

It may be small-scale and without fanfare, but genomic medicine has clearly arrived in the United States. A handful of physicians have quietly begun using whole-genome sequencing in attempts to diagnose patients whose conditions defy other available tools.

As hospitals and insurers battle over coverage for single-gene diagnostic tests, and the US Food and Drug Administration cracks down on the products of personal genomics companies, a growing number of doctors are relying on the sequencing of either the whole genome or of the coding region, known as the exome. [No, the FDA did not "crack down" on personal genomics - if anything, it blew its case with an incompetent (admittedly non-scientific) "analysis" of a science, and with the malicious and deplorable "From Gulf oil to Snake oil" soundbite of a lame-duck politician]

"If one hospital is doing it, you can be sure others will start, because patients will vote with their feet," Elizabeth Worthey, a genomics specialist at the Human and Molecular Genetics Center (HMGC) of the Medical College of Wisconsin in Milwaukee, said at the Personal Genome meeting at Cold Spring Harbor Laboratory in New York last weekend.

In May 2009, the genetic-technology provider Illumina, based in San Diego, California, launched its Clinical Services programme with two of its high-throughput genome analysers. The company now has 15 such devices dedicated to this programme.

Illumina provides the raw sequence data obtained from a patient's DNA sample to a physician, who passes it on to a bioinformatics team, which works to crack the patient's condition. However, Illumina is working to develop tools to help physicians navigate genomes and identify genes already associated with diseases, as well as novel ones.

So far, the company has sequenced more than 24 genomes from patients with rare diseases or atypical cancers at the request of physicians at academic medical centres. The standard US$19,500 price tag is typically covered by the patient, by means of a research grant, or with the help of private foundations, although one patient is currently applying for insurance reimbursement.

Steering treatments

Such efforts are having a direct effect on treatment decisions. For three years, physicians at the Children's Hospital of Wisconsin in Milwaukee had struggled to treat a child whose intestines had become swollen and riddled with abscesses. By the age of 3, he had undergone more than 100 separate surgeries and his colon was later removed, but his doctors were stumped.

They called on Worthey and her colleagues at the HMGC. The team obtained a completed exome sequence for the child and used in-house tools to identify the disease culprit as the protein XIAP, which inhibits a programmed-cell-death pathway called apoptosis. XIAP has a role in the immune system and is conserved across organisms including primates, flies and frogs.

The hospital's lab was then able to show that the child's cells were more sensitive than normal to apoptosis. On the basis of this diagnosis, the physician recommended a bone-marrow transplant in June 2010. By mid-July, the child was eating his first meal.

Such work demands substantial resources. That child's case took a team of 30, says Worthey, and included a 12-person bioinformatics team, three sequencing technicians, five physicians, two genetic counsellors and two ethicists. The hospital is already working on a handful of other whole-genome sequences, and plans to be analysing 90 per year by 2014.

During the past year, familial whole-genome and exome sequencing has identified gene variants with a role in disease at a rate of two to three per month. One major programme, the Undiagnosed Diseases Program at the National Institutes of Health in Bethesda, Maryland, has received more than 3,000 enquiries and reviewed 1,192 medical records, diagnosing 15% of the cases they have accepted. As of this month, the programme has also completed 59 exomes from 15 families. Thomas Markello, a geneticist and paediatrician on the project, says that the team has confirmed one genetic cause for a disease, and has a dozen new candidates to be validated.

Whole-genome sequencing is also affecting treatment choices for atypical cancers. Richard Wilson, director of the Genome Sequencing Center at Washington University in St. Louis, Missouri, spoke at the meeting of a 39-year-old woman who was thought, from a bone-marrow biopsy, to have acute promyelocytic leukaemia (APL).

However, when she was given the standard diagnostic test for the disease, this failed to demonstrate the expected exchange of a large piece of chromosomes 15 and 17, which causes two genes to fuse together.

But when Wilson and his colleagues sequenced and analysed cancerous tissue in the bone marrow, they found that a small chunk of chromosome 15 had popped out and been inserted into chromosome 17, fusing the two affected genes in a novel way. As a result, the woman was prescribed a drug known to improve survival in patients with APL. "We were able to assist oncologists in making an effective diagnosis and treatment, which — not trying to hype it at all — saved the patient's life," Wilson said.

Tim Aitman, a molecular geneticist at the UK Medical Research Council's Clinical Sciences Centre in London, says that cases in which whole-genome sequencing has directly benefited the patients involved are still rare. "The view is these are anecdotes and one-off occasions," he says, "but it is inescapable that within the next 10 to 20 years that will become much more routine."

[It is not up to the FDA or any red tape if We The People (who, by the way, pay for the FDA's "non-scientific experts" with their tax dollars) decide to go after the genomic culprits of deadly diseases. - Andras Pellionisz (FaceBook) / Pellionisz_at_JunkDNA.com]

^ back to top


The Broad's Approach to Genome Sequencing (Part II)
Bio-IT World | September 17, 2010

Since 2001, computer scientist Toby Bloom has been the head of informatics for the production sequencing group at the Broad Institute. Her team is the one that has to cope with the “data deluge” [see my 2008 YouTube "Is IT ready for the Dreaded DNA Data Deluge?" - AJP] brought about by next-generation sequencing. Kevin Davies spoke with Bloom about her team’s operation, successes and ongoing challenges.

BIO-IT WORLD: What’s the mission of your team?

TOBY BLOOM: Our goal is to be able to track everything that’s going into the lab from the time samples enter the building, through all standard processing of the data. My team is responsible for managing the sample inventory, keeping track of the projects, the LIMS (Laboratory Information Management System), which tracks processing events in the lab, the analysis pipeline, which at this point includes the standard vendor software, then alignment, various quality checks, generating metrics, and we do SNP [single nucleotide polymorphism] calling for fingerprinting. We have a data repository – a content management system that makes all that sequence data available to the researchers when it gets handed over to their side for further analysis.

Did your team build the LIMS?

We did, many times! We have a couple of times in the past looked at what’s on the market, but not recently. Because of the scale we’re at and how fast we change things, we usually find that any one product that’s out there is aimed at one of the things we do, but not all of them. We’ve had our own LIMS since before I got here, nine years ago, but for next-gen sequencing, we’ve had to rebuild much of that. We’re far enough ahead of the curve on most of this stuff that most of the LIMS out there wouldn’t be ready in time.

How big is your team and what is its chief expertise?

There are about 25 people. The vast majority are software engineers, but I have a couple of people in data management, and a couple who deal with the databases. Everybody else is writing code, mostly Java programmers.

What brought you to the Broad, or the Whitehead Genome Center as it was then?

I was just fascinated with what was going on! It was clear from as far back as ’95 they were bringing in computer scientists to deal with the data challenges. I didn’t get here until ’01. I was looking to change positions, and it’s just a fascinating place to be. I didn’t know a lot about the biology but it’s exciting to be a part of what Broad is doing.

What do you do in terms of downstream processing?

We do a number of things in the software. We handle a variety of different technologies. We built a pipeline manager that allows us to specify the workflows that should be used for various types of analyses or sequencing. So if we’re doing RNA-sequencing for example, we’re doing something a little different than whole-genome shotgun or targeted sequencing. It [the pipeline manager] lets us handle many pipelines at once. It lets us pull in information from our instruments, our LIMS, and our sample repository to decide what to do on the fly. All of those pieces are integrated.

We’re doing 0.5-1 Terabases a day. We’re doing a lot of processing! There’s a focus on high throughput and automation. Having it under our own control and being able to change rapidly is important.

Within the pipeline manager, we run the Illumina vendor software. What part runs off instrument changes over time. We started out doing the image processing off the instrument, but it got to the point where [Illumina] could do the image processing reliably enough on-instrument that we could use that. Then we started pulling the intensities from the instrument, instead of images. I hope that with the HiSeq 2000s, we soon get to the point where they do the base calling on the instrument. We’re still doing all the base calling in the pipeline right now, but maybe we’ll soon get to the point where we can take the base calls off the instrument and just do the downstream processing – creating BAM files, recalibration, alignment, deduplication, quality analysis, SNP calling, etc.
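Bloom's pipeline manager amounts to a per-protocol table of workflows plus a runner that executes each stage in turn. A minimal sketch follows, with stage names taken from her list (in the order quoted); the tables, protocol names and functions are invented for illustration and are not Broad's code.

```python
from typing import Callable

# Per-protocol stage tables; stage names follow the steps quoted above.
WORKFLOWS: dict = {
    "whole_genome": ["create_bam", "recalibrate", "align",
                     "deduplicate", "quality_analysis", "snp_calling"],
    "rna_seq":      ["create_bam", "align", "quality_analysis"],
    "targeted":     ["create_bam", "align", "deduplicate", "snp_calling"],
}

def run_pipeline(sample: str, protocol: str,
                 stages: "dict[str, Callable[[str], None]]") -> None:
    """Run the stages registered for `protocol`, in order, on one sample."""
    for name in WORKFLOWS[protocol]:
        stages[name](sample)   # each stage is a pluggable callable

# Usage: plug real tools in per stage; here, stubs that just log.
stubs = {name: (lambda s, n=name: print(f"{n}: {s}"))
         for names in WORKFLOWS.values() for name in names}
run_pipeline("sample-001", "whole_genome", stubs)
```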

Are you able to distribute Broad Institute resources such as the pipeline manager to the community?

We’re not proprietary about it, but because all our pieces are integrated, it’s sometimes hard to release pieces of code because they depend on our databases or other internal metadata. We’ve released some of our BAM file processing tools, the Picard tools, publicly. I’ve been trying to see if we could make our pipeline manager run in the Cloud, to make it available to other people. We’ve done some work to isolate our actual pipeline management from our database and internal structures, but it’s not ready to do that yet . . . The goal is to be able to flexibly create and change workflows, and run many at once. It’s focused on massively high throughput and very automated processing. We want to make sure that if something fails -- because we’re running 2,000 compute cores at a time, things will fail, servers will drop out – it can track where everything is, what’s failed, what’s stuck and hasn’t turned up. Our goal is to be able to restart from the last step automatically without a lot of human intervention. In some ways we do that better than others but we’re always working on it. The goal is to push all of this stuff through without a lot of people.

Do you function at all like a core lab?

We don’t function as a core informatics facility; in other words, we don’t provide a service for building custom software, but we will do specialized things for certain projects. We’re not here as a service, we’re here to support production, but we do get requests. We’re very much a production system, so we’re not [writing] research algorithms [This is very interesting. Although Eric Lander et al. have been exploring, in the cover article, that "Mr. President, the DNA is Fractal!" (see top diagram of this column), HolGenTech, Inc. is left to spearhead algorithmic approaches to DNA function - in part because of my trail of IP protection since 2002. Algorithms and software for "Recursive Genome Function" [presently at 269,000 hits on Google] are to emerge from the intellectual resources and facilities of HolGenTech, Inc. in Silicon Valley - not from Broad - AJP]. We do take research algorithms when they’re working well enough and optimize them to make them more robust to run in a pipeline. There are specialized cases where [Broad] research groups will come to us and say, we need specialized processing of the data in the pipeline. So we have to do something different, e.g. for epigenetics or RNA-seq, things like that. Sometimes they’ll ask us to write specialized code, and we can sometimes, but we don’t always have the bandwidth.

We often hear about the “data deluge.” [See above YouTube - AJP]. What’s it like facing the brunt of that?

From the informatics side, we were all prepared to see the data volume go up. We were ready for the hardware choices … same for the software. The big surprise wasn’t that or the amount of data coming off the machines. What had us playing catch up was the impact on the lab and the LIMS. It was more the change in the number of libraries we were making, the number of samples in the lab at once, than the actual amount of data – from my point of view. I’m not saying it was easy to scale the data, but we weren’t predicting the whole lab process would have to change so much because of the volume of samples. When we were doing large genomes on the [ABI] 3730s, a library would last a month or more. We didn’t have to worry about 1,000 samples in the same step in the lab at the same time and having to track them to make sure they didn’t get mixed up or lost. We built many layers of tracking in the LIMS that weren’t needed before. That’s been a major change.

Other parts of the Broad’s sequencing operation borrow from proven factory automation methods. Can you apply any of those methods on the informatics side?

Well it’s not quite the same -- you can’t put tape on your screen! It’s more of a focus on how we do rapid iterations reliably. Things change in the lab very quickly. Standard software engineering practices – gathering requirements and writing careful specifications and then doing careful design and building – aren’t a process that works in this environment, because by the time you get through all those steps, you’re building the wrong thing. We need to move rapidly, whereas that model is made for building software for processes that are well established and you’re automating them.

We are building software often ahead of the process. We’re trying to get enough working that they can function and get the data they need, before they really know what they need from us. They’re doing ongoing process improvement . . . We have to focus on how we can identify what they need most, and how to change along with their process changes. We need to build it in small pieces and then add to it without rebuilding what you already did. In many ways, it’s agile software development. It’s as close to agile as anything else, but it doesn’t follow a (typical) agile process.

How much are you involved in the decision to put machines or new platforms into production?

We’re definitely part of that. It’s not just that we have to be satisfied that it’s ready to go, it’s that “we’ve got the software changes in that you need.” The Illumina software that runs in our pipeline has changed several times. We had to get that software into production too. When there are changes in data types and metrics, we have to change all that in our system.

Do you communicate much with the vendors on the software?

We often take beta versions of their software before it’s out. We’re expected to find their bugs! We can’t wait for their official release. We’re the first ones to get the instruments often, so yes, there’s a very definite interaction on the informatics side as well.

We have weekly calls with [Illumina] about the informatics. On the HiSeqs, we don’t want to pull the intensity data if we don’t have to. We’re just validating we can get the same results using their on-instrument base-calling; we need to understand the failure modes in the integration so we can make sure we’re not going to lose data.

What impact will the third-generation sequencing technologies such as PacBio and Ion Torrent have on your team? [Note that the Broad Institute is on the short-list to receive shipment of PacBio's SMRT sequencer... - AJP]

The long reads will help with a number of kinds of analyses downstream, but they don’t affect the production software I build as much. We’re looking forward to having long reads.

I try to make sure I know what all of the things are on the horizon that might show up and what the informatics implications are. Sometimes it matters, sometimes we don’t have to do much to prepare. We initially thought the biggest difference might be the size of the data, the bytes/base might be different on different instruments. On the HiSeq, Illumina has gotten down to essentially one byte/base, so that difference -- where you didn’t have to pull images -- has gone away. There are some that have much higher volumes of data than others.

The differences in the [sample] prep process matter to us, because it matters what our LIMS can handle. We watch but we don’t jump into active building until we have a machine in house and we think it’s time to ramp up.

What have you been doing in the Cloud?

There are a couple of reasons for exploring the Cloud. One is small centers that don’t have the IT infrastructure to be able to handle the volumes of data and the complexity of the processing. That’s not an issue for us but it is for small centers. The other side is the big collaborative projects, where you have many, many centers sharing data -- many centers producing data, and many centers that are then processing the data, e.g. 1000 Genomes or TCGA. (TCGA we can’t yet put on the Cloud because of security concerns.)

For some of the groups that want to do analysis, getting the data back and forth from NCBI or other centers is a burden. If the data could be put in one place and everyone could move the compute to the data . . . When you get to these very big projects, moving the compute to the data seems very much more efficient than having to keep moving the data every time you want to compute. So that’s the model: can we put the data in one place and move the compute to the data as needed?

That’s the experiment. I’m not saying we’ve gotten there. It’s very different from some of the Grid models, which provide the compute: essentially, if you don’t have the compute you need, you can just borrow it, but you have to get your data there. … This is very much: let’s not move the data—the data is the problem. I have a grant to experiment with my pipeline. One of the experiments is: could we put a pipeline manager up there [Amazon EC2] with a lot of the standard analysis steps, and let people go to the Cloud and figure out how to use it, for their own pipelines and workflows? I don’t think we’re there yet. This only works if it’s easy to get the data into the Cloud. It’s clearly not the kind of application that was targeted originally by the public Cloud vendors.
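The intuition behind "move the compute to the data" is easy to quantify. A purely illustrative comparison of bytes shipped over the network under the two models (all numbers are assumptions):

```python
GB = 1.0                    # work in gigabytes
data_size = 100 * GB        # one deep whole-genome data set (illustrative)
job_size  = 0.005 * GB      # a few megabytes of analysis code and config
sites     = 20              # centers that want to run the analysis

move_data    = data_size * sites   # data travels:    2,000 GB on the wire
move_compute = job_size * sites    # compute travels: 0.1 GB on the wire
print(move_data / move_compute)    # 20,000x less network traffic
```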

Are putting the data and running the pipeline in the Cloud two separate issues?

They are. We all submit these data to NCBI or EBI, and those two data repositories are already putting some of their data up on the Cloud to make it more available. Whether it goes on a public Cloud or not is not the question. If we’re already sending data to NCBI and EBI, and those guys exchange and replicate all the data anyway, could that be the foundation for the data already being there? NCBI isn’t about to provide all the compute for the world. But the notion is we already have a central repository, if that were part of a Cloud-like infrastructure -- whether on the Amazon Cloud or another commercial Cloud, or a private Cloud -- is that kind of architecture useful?

The jury is still out on the Cloud and how to do it. Is the Cloud model a helpful model for the community? We won’t know that until we can run things enough to test it. Can we make it work on the existing Cloud infrastructure? The jury is still out on that one also.

[What has happened, and is happening, at Broad appears to be a major validation of the "Pellionisz" YouTubes, with HolGenTech, Inc. (not everything is on the website...) slated for the kind of advanced algorithm and software development that suits the (near) future of "Private Clouds" for DNA Analytics in hospitals' basements, next to their "traditional data labs" - Andras Pellionisz (FaceBook) / Pellionisz_at_JunkDNA.com]

^ back to top


Pellionisz Principle; "Recursive Genome Function" gets well over a quarter of a Million hits (261,000)

"Facts don't kill theories - only a better theory can surpass obsolete dogmas [AJP]". The "Death of Central Dogma" used to be a monthly event for some significant time by now, but Nobelist Crick's obsolete theory only became dead as a doornail by Pellionisz Principle of "Recursive Genome Function" actually replacing BOTH Crick's arbitrary "dogma" (that was quite laughable from its first utterance in 1956 e.g. by Nobelist Francois Jacobs) and Ohno's also doubted "Junk DNA" fatal mistake (1972).

"Recursive Genome Function" gets not only more hits (today, 261,000 on Google) than the two other COMBINED, but now is consistently well over quarter of a Million hits.

It may be particularly noteworthy that this is the second time that Pellionisz' biophysics came out ahead of Crick's failed efforts at addressing the solid intrinsic mathematics of living systems. Crick started from modest physics (a B.Sc.) and invoked Erwin Schroedinger's essay "What is Life?", where it was crystal clear to the physicist Schroedinger as early as 1944 that the ultimate secrets of "Life" could, and therefore must, be cracked by the power of mathematics. Yet Francis Crick, who got lucky with Rosalind Franklin's revealing crystallography of the double helix (not triple, as Linus Pauling almost had it), her photos cleverly obtained, and who so skillfully co-publicized the structure of the DNA scaffold with Jim Watson, failed to deploy any advanced biophysics to the actual coding of Schroedinger's predicted "covalent bondings". Instead, itchy because he was still not getting the Nobel, in 1956 Francis Crick scribbled his Central Dogma longhand, published it, and loudly promoted it until the end of his life in 2004 (cf. "I have never seen Francis in a modest mood" - a famous saying by Jim Watson). A setback of Genome Informatics, lasting over half a Century, ensued from his "Dogma" (he later confessed that he did not know what the Latin word "Dogma" meant - believable, since he had failed to gain his intended place at a Cambridge college, probably through failing their requirement for Latin).

The half-a-Century setback never bothered Crick much, since upon getting his Prize he changed fields - realizing that the sugar-phosphate double helix was mere scaffolding, he ventured into proteomics, only to find it too difficult. Soon after the "Central Dogma" nearly collapsed with the Nobel to Baltimore et al. (1975) - though Ohno had made an all-out effort in 1972, with his silly "Junk DNA" notion, to argue that even if there were a protein-to-DNA recursion it would only find "Junk", devoid of information - Crick elected to change fields again, from Genomics to Neural Networks (the paradigm shift away from AI's creating intelligent systems without, and before, understanding what intrinsic mathematical language actual neural networks use to produce brain function). My earlier victory over Crick on the turf of Neural Nets was easy: as a Neural Net pioneer I explained the function of actual cerebellar neural nets (spacetime coordination) in crisp terms of tensor geometry, by Tensor Network Theory, whereas Crick was aiming at "consciousness", largely devoid of mathematics. A direct comparison with my Tensor Network Theory (verified by independent experimentalists on basic sensorimotor systems) is provided by the same Pat Churchland, who collaborated with Crick to solve higher-order brain functions (consciousness) at the level of philosophy. Devoid of mathematics, that is clearly a futile exercise.

Let no one believe that I have any personal grudge against Francis Crick. I never met him (and by the time of my Google Tech Talk YouTube in 2008, when I claimed that "there was no Emperor", Crick had already been gone for 4 years).

My sole interest in destroying roadblocks, and thus getting us on the open road again, is to focus at this crucial time of inflection from data-overabundant, theory-void Genomics to the "Industrialization of Genomics". Let's get back to the solid biophysics path laid down by Schroedinger, since the very sustainability of the "Industrialization of Genomics" is at stake in its upswing as long as Genome Informatics is deadlocked by evidently mistaken paradigms. The road has been cleared, at least, by the overwhelmingly acknowledged "recursive genome function", peer-reviewed in June 2008 and popularized on YouTube in October 2008.

Andras Pellionisz (FaceBook) / Pellionisz_at_JunkDNA.com]

^ back to top


Victory Day of Recursion over Junk DNA and Central Dogma COMBINED

In a mere two years since its publication in "The Principle of Recursive Genome Function" (2008; presented in a Google Tech Talk YouTube the same year, the almost hour-long presentation viewed 8,639 times), "Recursive Genome Function" has clearly surpassed (with 238,000 Google hits) the obsolete "Junk DNA" and "Central Dogma" paradigms COMBINED (together 234,000 Google hits). Half a Century of Genome Informatics retarded by two demonstrably false paradigms is finally OVER.

Andras Pellionisz (FaceBook) / Pellionisz_at_JunkDNA.com]

^ back to top


Complete Genomics to Sequence 100 Genomes for National Cancer Institute Pediatric Cancer Study
Bradenton.com

Tuesday, Sep. 07, 2010

Complete Genomics Inc., a life sciences company focused on human genome sequencing, today announced a collaboration to identify and validate somatic mutations from 50 pediatric cancer cases from multiple research centers across the United States.

SAIC-Frederick, on behalf of the National Cancer Institute (NCI), will be using Complete Genomics’ sequencing, bioinformatics and scientific services to sequence and analyze 50 tumor-normal pairs. This analysis could enable researchers to identify patterns of tumorigenesis and ultimately lead to improved diagnosis and treatment of pediatric cancers.

SAIC-Frederick is the prime contractor for the NCI’s R&D facility in Frederick, Md. This project, which is being undertaken by Complete Genomics forms part of the NCI’s Therapeutically Applicable Research to Generate Effective Treatments (TARGET) Initiative. TARGET seeks to use genomics technologies to rapidly identify valid therapeutic targets in childhood cancers so that new, more effective treatments can be developed. It is currently focusing on five childhood cancers: acute lymphoblastic leukemia, acute myeloid leukemia, neuroblastoma, osteosarcoma, and Wilms tumor.

Complete Genomics will seek to identify and validate mutations found in the pediatric tumor genomes. These could include somatic single nucleotide polymorphisms, insertions/deletions, copy number variations, and structural variations. The sequenced data, as well as the assembled and validated data sets, are expected to be submitted to the National Center for Biotechnology Information's Sequence Read Archive database, as well as the TARGET Database.

Complete Genomics will be paid $1.1 million for completing this project, which is funded by the American Recovery and Reinvestment Act (ARRA) of 2009.

If this project is successful, the contract contains an option for SAIC, on behalf of the NCI, to have Complete Genomics sequence more than 500 additional NCI cancer cases (more than 1,000 genomes) over an 18-month period.

[This very interesting news item signals the turmoil of the (private-sector) "sequencing industry" as it searches for a sustainable business model - one in which the ecosystem does not crash under a glut of unprocessed (not fully interpreted) DNA sequences. Formerly (see my quotation of Complete Genomics in my Google Tech YouTube, 2008), the company promised not just sequencing but also a "Google-type Data Center" for Analytics. Later, CG declared itself (like PacBio) a "pure-play genome sequencing company" - facing an uncertain market for raw sequences and threatening a "Dreaded DNA Data Deluge". The price of a full DNA sequence was heralded to sink just under $5,000. Now, the new twist more than doubles the price to $11,000 per genome, to include some sort of (certainly less than full) Analytics - with the entire financial burden shouldered by the taxpayers (the government's National Cancer Institute) - which also means that both the sequences and the partial analytics are "public domain" for the entire World. This appears to be a transitional business model that makes US taxpayers blindly finance the New Chapter of Genomics for the entire World, with the direction governed by the Government's experts - the same experts who contributed to the over-half-a-Century setback during which Genomics could not rid itself of the obsolete axioms of "Junk DNA" and the "Central Dogma". With US government funds dwindling (ARRA is running out...), this business model is unlikely to be either fair to US taxpayers or sustainable in the long run. From a science viewpoint, cancer clearly seems to have to do with defective methylation in "recursive genome function" (Google hits today: 238,000). While this wide acceptance (within 10% of the COMBINED hits of "Junk DNA" (132,000) plus "Central Dogma" (128,000)) signals an essentially completed paradigm-shift, it is still questionable how and when (not if) the NCI, for example, will embrace the new paradigm to accelerate (and make less expensive, via the competitive private sector...) those long-promised medical breakthroughs that did not happen in the First Decade after the Human Genome Project, so that they are accomplished in the NEXT decade - Andras Pellionisz (FaceBook) / Pellionisz_at_JunkDNA.com]

^ back to top


Junk DNA can rise from the dead and haunt you [Comments]

By Boonsri Dickinson | Aug 20, 2010 |

[98.7% of her DNA doesn't look like "zombie genes" at all... - AJP]

Scientists have found that junk DNA can come back to life and can cause disease.

As it turns out, a region of the genome — that is hundreds of thousands of years old, mind you — can bring on trouble.

Soon after the zombie gene wakes up, people can no longer smile and their upper body muscles begin to waste away.

Those victims suffer from facioscapulohumeral muscular dystrophy (FSHD), one of the more common forms of muscular dystrophy.

1 in 20,000 people suffers from FSHD — what makes it different from diseases like diabetes is that inheriting the gene means the person will one day get the disease.

While scientists knew genetics was to blame, they didn’t exactly know why and how it caused disease. Now they know: zombies are to blame.

The researchers published their results in Science.

There’s a certain rhythm to this madness. To cause disease, the gene needs to be repeated a number of times and has to have the right sequence. The troublesome gene is found on chromosome 4 (a region scientists have eyed for several decades).

If the zombie gene is repeated more than 10 times, the person will not develop FSHD. Researchers believe the surplus copies change the structure of the chromosome so the zombie gene can’t attack. Otherwise, the gene (DUX4) is allowed to be made and becomes toxic to the muscle cells.
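Stated as a rule, the article's account fits in a few lines. A toy Python sketch that mirrors only the thresholds given here; actual FSHD genetics and clinical criteria are considerably more involved.

```python
def dux4_expressed(repeat_count: int, permissive_sequence: bool) -> bool:
    """Toy model of the article's rule: more than 10 repeats keeps the
    chromosome structure protective, so DUX4 stays silent; with 10 or
    fewer repeats AND the "right sequence", DUX4 can be made and harm
    muscle cells."""
    if repeat_count > 10:
        return False
    return permissive_sequence
```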

This finding fundamentally changes how geneticists think about simple genetic diseases. It’s clear that even though FSHD was thought to be a simple disease because having the gene meant the person would definitely get the disease, it’s actually much more complex than that.

In the future, researchers could knock out this dead gene and develop new treatments.

Scientists believe they will discover that other diseases will have similar causes.

Geneticist Dr. Francis Collins told The New York Times, “the first law of the genome is that anything that can go wrong, will.”

Comments--

1

Pellionisz

08/20/10

The "Junk DNA" misnomer was put forward as a "theory" (though totally wrong, and suspect after its first utterance) by Susumu Ohno (1972), an otherwise serious scientist; see "So much 'Junk' DNA in our genome" (full text in my junkdna.com domain). He mistakenly argued that the non-genic sequences are in the genome "for the importance of doing nothing" (p. 367).

It was only later that this catastrophic misnomer became widely accepted at face value - as an excuse for the establishment to (BTW negligently) ignore 98.7% of (human) DNA - while millions if not hundreds of millions were (and still are) dying of "Junk DNA diseases".

"The Principle of Recursive Genome Function" (2008), by retiring both the "Junk DNA" and "Central Dogma" obsolete axioms became the first full genome (hologenome) theory based on sound genome informatics, sweeping away science-nonsense that retarded genomics for over half a Century (starting from Crick's notion in 1956, that he dared to call "Central Dogma", that information "never" recurses from either RNA or from
PROTEINS to DNA).

Upon conclusion of ENCODE (2007), Francis Collins was the one to issue a public call that "the scientific community will have to re-think long-held beliefs". Some lucky few did not have to re-think, as they never believed the two false axioms in the first place - but now we have (really, not the first...) experimental evidence as a de facto basis to discard wrong assumptions that were totally absurd from the viewpoint of genome informatics. 1.3% of human DNA (in fact, much less, since "genes" may contain a big majority of non-coding introns) is simply not enough information to govern the growth of organisms as complex as humans.

FractoGene (2002) clinched it within a year of the Human Genome Project's finding that the expected 140-300,000 "genes" were nowhere to be found. Today, "recursive genome function" prevails with close to 200,000 hits on Google...

"Fractal defects" such as reported, with the recursion derailed and/or not supported with enough auxiliary information by sufficient number of recursion not only "have been expected", but theoretically predicted. - Pellionisz_at_JunkDNA.com

2

IMWeira

08/23/10

Dear Pellionisz; excellent post. I have not heard it put quite that way but some of us borderline science junkies have not accepted the junk dna designation nor could we understand why anyone would accept it.

I will mention your site to some of my friends and see you there.

3

JohnMcGrew@...

08/23/10

Interesting.

@Pellionisz, thanks for that info. Very interesting reading.

4

wizoddg

08/23/10

What Pellionisz said... is far better put than anything I've heard before.

But it is a symptom of the way we do research on nearly everything--if it's not seen to be part of the current problem, we discard it...only later finding that it is causing other problems or preventing them.

Part of this in medicine is the practice of studying diseases (disorders) while ignoring the study of properly functioning systems.

We've spent many, many decades studying disorders to find causes and cures--at the expense of ignoring the 'benign' organisms...many of which, once examined, turned out to be essential for our health--not merely not causing problems.

We finance medical research by popularity--rather than spending our time and energy where we as a society will get the most return for the effort, we routinely spend money based upon the ability to raise funding--filling the world with images of dying children and causing the public to mis-perceive the actual risk/benefit ratios.

Investing by emotion rather than analysis, while extremely human, is counter-productive. We end up spending money to treat 'horrific' problems, rather than the problems which kill or disable the most people.

The number one killer in the world today is heart disease and related circulatory issues--yet I routinely receive pleas for funding for many other conditions which affect far fewer people, and far too often those making the appeal believe that they are actually working to find a cure for something which kills "the most."

Much unexpressed DNA is like that in the article--old ills piggyback riding upon our genes.

Another large group are genes which are expressed only under certain circumstances.

In the past couple decades we've learned that such gene expression can continue generations after the triggering circumstance has disappeared from the environment.

With the discovery of what crystallized DNA looks like decades ago, the public assumed that it meant we understood DNA. This has happened anew with each new advance.

The latest, the mapping of the genome, is seen by the public as an end--as if mapping means understanding.

In fact, each major step we make is merely the beginning of understanding--often forcing us to toss out our favorite theories of how things work.

In a world where understanding changes rapidly, it is every bit as valuable to be able to 'unlearn' and let go of previously loved theories as new data arises.

The one advantage that the scientific method has over the dogmatic methods of our more distant past, is the recognition that knowledge [better said, "understanding", since knowledge constantly advances, but understanding leaps with paradigm-shifts -AJP] is not static.

[It is a delight to see that the public is far ahead of some ossified parts of the establishment in understanding that Genetics-Genomics-HoloGenomics is experiencing a far more profound "paradigm-shift" than previously imagined. We have to bring attention to the fact that in "What is Life?" Schroedinger questioned (1944) none of the "diseases" but the informatics axioms of how Life is encoded: he predicted that the covalent bondings of an aperiodical crystal encode life - a pioneering vision later acknowledged by Crick, when Watson, Crick, Wilkins and Franklin realized (1953) that the DNA crystal is aperiodical in its nucleotides, though periodical (helical) in the physical arrangement of the aperiodical bases. Crick attempted to provide a further axiomatic basis of Life (not of diseases...) by putting forward in 1956 the unfortunately totally mistaken claim, brazenly called the "Central Dogma of Molecular Biology", that "information never recurses from Proteins to DNA". This wrong axiom (shored up by Ohno's "Junk DNA", the second fatal axiom, 1972) set modern Genomics back by more than half a Century - a relatively short time compared to the setback that the suppression of attempts to dismiss the mistaken "Geocentric" axiom inflicted upon our understanding of the surrounding universe. The Vatican admitted, some 300 years after Giordano Bruno was burned alive and his ashes thrown into the Tiber, that his paradigm-shift was actually correct. No wonder that a "lucid heresy" on two counts - the dismissal of both mistaken axioms - has proven to take such a toll in the process of being brought to the status of the presently prevailing paradigm. The blogger is "right on the money" that the US Congress finances its Agencies based on "popularity" (meaning votes) - and (mostly through the NIH) The Human Genome Project, for example, deteriorated into "gene discovery", in which the surprisingly few genes found were (largely mistakenly) associated with particular diseases. Now this horrendously expensive detour (yielding not much, if anything, say its very leaders...) seems finally over. Mostly not because a rigorous scientific rationale necessarily results in automatic victory (see Giordano Bruno...). Two non-scientific factors appear more important. One is (as this blog proves...) that the general public (who are not only voters but actual taxpayers) realizes in massive droves that Agencies put on the auto-pilot of a mushrooming bureaucracy are heading into the void of a wrong direction, and thus looks at some Agencies differently than before (and may cast its votes differently on how its hard-earned tax dollars are spent). Most important, however, postmodern Genomics is already deep into its "Industrialization" (i.e. a massive migration from government-led R&D to the businesses of the global private sector), where "popularity" drops into the important but not so essential "marketing department" - while the primary drivers are validated science, technology, performance, cost-efficacy, proper supply-chain management, etc. The first segment where the transition from government R&D to private industry is practically complete is "Affordable Full Human DNA Sequencing".
The dollar billions already invested in the sequencing industry, however, will relentlessly drive postmodern Genomics away from the "popularity of Sick-Care" and toward valid, science-based predictive, participatory and personalized prevention; a new paradigm not only in Genomics but in genome-based Health Care. To maintain health, we must first understand how the healthy hologenome functions - not in less-than-exact biological terms, but in terms of software-enabling algorithmic explanations, like "recursive genome function". Thus the sequencing industry will become a driver, since its only product (DNA sequences) is virtually worthless without interpretation of its function by means of intrinsic algorithms turned into potent software - Andras Pellionisz (FaceBook) / Pellionisz_at_JunkDNA.com]

^ back to top


Pacific Biosciences Denies Helicos' Infringement Claims

September 01, 2010

By a GenomeWeb staff reporter

NEW YORK (GenomeWeb News) – Pacific Biosciences today said that it believes the patent infringement claims brought against the company last week by Helicos Biosciences are without merit and that it intends to vigorously defend against the claims.

Helicos filed the suit Friday in the US District Court for the District of Delaware. It claims that Pacific Biosciences is infringing claims in four of its US Patents: Nos. 7,645,596; 7,037,687; 7,169,560; and 7,767,400.

Those patents cover Helicos' methods for sequencing a single strand of DNA by synthesizing a complementary strand of DNA using labeled nucleotide bases. This sequencing-by-synthesis method underlies Helicos' single-molecule sequencing platform.

"Helicos' patents are directed to methods used in their second generation 'flush and scan' system, and even at that, do not represent the earliest publication of those concepts," Hugh Martin, Pacific Biosciences' chairman and CEO, said in a statement. "Our third generation SMRT technology observes single molecules in real time, a fundamentally different approach."

Menlo Park, Calif.-based PacBio is gearing up for a commercial launch in early 2011 of its RS sequencing instrument. Among customers who have placed orders for the system are Baylor College of Medicine, the Broad Institute, Cold Spring Harbor Laboratory, the US Department of Energy Joint Genome Institute, The Genome Center at Washington University, Monsanto, the National Cancer Institute, the National Center for Genome Resources, the Ontario Institute for Cancer Research, Stanford University, and the Wellcome Trust Sanger Institute.

[Molecular Sequencing Industry is thriving - IPO-positioning, major M/A, IP lawsuits are all clear signs of an Industry coming alive. The question is not if, but when, who and (most importantly) "what is in it for me?" by everyone. One important reminder: NOTHING is in mere sequencing for anybody (except patent attorneys, but what else is new?) until and after the algorithmic understanding of "recursive genome function" will have been achieved, even partially, and software-enabling algorithms, now with the escalating "cloud solutions" are properly wrapped into the game - Andras Pellionisz (FaceBook) / Pellionisz_at_JunkDNA.com]

^ back to top


Will Fractals Revolutionize Physics, Biology and Other Sciences?

[Look at the Romanesco also in The Principle of Recursive Genome Function - AJP]

A new discovery, reported in the latest Nature, hints at higher universal laws of the physical world, as well as new ways to approach and understand life in general. Even though the European discovery actually dealt with superconductors, it has an interesting twist with implications for the life sciences [predicted by FractoGene by Pellionisz, 2002 - AJP].

A group of physicists from the London Centre for Nanotechnology at UCL and their collaborators at the Sapienza University of Rome and the European Synchrotron Radiation Facility in Grenoble, France, were studying the properties of so-called high-transition-temperature (high-Tc) copper oxide superconductors. They were looking at the microstructures that these superconductors form as they are cooled down. To the surprise of the investigators, the microstructures, formed by oxygen atoms, seemed to organize into self-repeating fractals. Moreover, these fractal shapes, some extending almost to the millimeter scale, correlated with superconductivity: larger fractals went with higher superconducting transition temperatures.

What does this have to do with life? We think, plenty. Fractals, geometric morphologies made up of patterns that repeat themselves at ever smaller scales, were first described by mathematician Benoit Mandelbrot in the 1960s. Since then, they have taken the world of natural sciences by storm. As mathematicians and physicists discovered more and more interesting properties of these unique constructs, people started to notice fractals' ubiquitous presence in nature. Whether in the living world or in the inorganic one, they seem to pop up in unexpected places. Somehow, the laws of physics favor these structures, for reasons yet unknown.
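To make "patterns that repeat themselves at smaller scales" concrete, the canonical example is the set that bears Mandelbrot's name. A minimal escape-time sketch in Python (illustrative code, not from the Nature paper):

def escape_time(c: complex, max_iter: int = 100) -> int:
    """Iterations until z = z*z + c escapes |z| > 2; max_iter means bounded."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return max_iter

# Coarse ASCII rendering: '#' marks points that never escape.
for im in range(12, -13, -2):
    print("".join(
        "#" if escape_time(complex(re / 20.0, im / 10.0)) == 100 else " "
        for re in range(-40, 21)))

Zooming into the boundary of the printed shape reproduces ever smaller copies of the whole, which is exactly the self-similarity the article describes in the oxygen microstructures.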

To us, the discovery of fractal function is eerily reminiscent of polarization in pre-quantum-mechanical physics. Until Niels Bohr, Albert Einstein and others laid the foundations of quantum mechanics, the polarization of light remained a mystery. Now we have a new puzzle to answer: fractals are ubiquitous in the physical and living world for some unknown reason, and there is a function to them.

Paper in Nature: Scale-free structural organization of oxygen interstitials in La2CuO4+y

UCL press release: Fractals make better superconductors ...

More from Wired: Inexplicable Superconductor Fractals Hint at Higher Universal Laws...


Inexplicable Superconductor Fractals Hint at Higher Universal Laws

Wired
By Brandon Keim
August 11, 2010

What seemed to be flaws in the structure of a mystery metal may have given physicists a glimpse into as-yet-undiscovered laws of the universe.

The qualities of a high-temperature superconductor — a compound in which electrons obey the spooky laws of quantum physics, and flow in perfect synchrony, without friction — appear linked to the fractal arrangements of seemingly random oxygen atoms.

...

“Everyone was looking at these materials as ordered and homogeneous,” said Bianconi. That is not the case — but neither, he found, was the position of oxygen atoms truly random. Instead, they assumed complex geometries, possessing a fractal form: A small part of the pattern resembles a larger part, which in turn resembles a larger part, and so on.

“Such fractals are ubiquitous elsewhere in nature,” wrote Leiden University theoretical physicist Jan Zaanen in an accompanying commentary, but “it comes as a complete surprise that crystal defects can accomplish this feat.”

If what Zaanen described as “surprisingly beautiful” patterns were all Bianconi found, the results would have been striking enough. But they appear to have a function.

...

However, while the arrangement of oxygen atoms appears to influence the quantum behaviors of electrons, neither Bianconi nor Zaanen has any idea how that could be. That fractal arrangements are seen in so many other systems — from leaf patterns to stock market fluctuations to the frequency of earthquakes — suggests some sort of common underlying laws, but these remain speculative.

According to Zaanen, the closest mathematical description of superconductive behavior comes from something called “Anti de Sitter space / Conformal Field Theory correspondence,” a subset of string theory that attempts to describe the physics of black holes.

That’s a dramatic connection. But as Zaanen wrote, “This fractal defect structure is astonishing, and there is nothing in the textbooks even hinting at an explanation.”

[FractoGene (Pellionisz, 2002) attributes efficacy to the fractal coding of organelles, organs and organisms by means of fractal DNA. The "frighteningly unsophisticated" (quoting Venter) "Genes and Junk" primitive notion of genome function would have you believe the idiocy that the exon fraction, 1.3% of the DNA (the information content of a stamp-sized digital picture...), would suffice to generate, for example, a human body and brain. The second decade of the Genome Revolution (now with the Fractal Revolution...) is to rectify that historical insult. - Andras Pellionisz (FaceBook) / Pellionisz_at_JunkDNA.com]

^ back to top


Reanimated ‘Junk’ DNA Is Found to Cause Disease

New York Times
By GINA KOLATA
August 19, 2010

The human genome is riddled with dead genes, fossils of a sort, dating back hundreds of thousands of years — the genome’s equivalent of an attic full of broken and useless junk. Some of those genes, surprised geneticists reported Thursday, can rise from the dead like zombies, waking up to cause one of the most common forms of muscular dystrophy. This is the first time, geneticists say, that they have seen a dead gene come back to life and cause a disease.

“If we were thinking of a collection of the genome’s greatest hits, this would go on the list,” said Dr. Francis Collins, a human geneticist and director of the National Institutes of Health.

The disease, facioscapulohumeral muscular dystrophy, known as FSHD, is one of the most common forms of muscular dystrophy. It was known to be inherited in a simple pattern. But before this paper, published online Thursday in Science by a group of researchers, its cause was poorly understood.

The culprit gene is part of what has been called junk DNA, regions whose function, if any, is largely unknown. In this case, the dead genes had seemed permanently disabled. But, said Dr. Collins, “the first law of the genome is that anything that can go wrong, will.” David Housman, a geneticist at M.I.T., said scientists will now be looking for other diseases with similar causes, and they expect to find them.

“As soon as you understand something that was staring you in the face and leaving you clueless, the first thing you ask is, ‘Where else is this happening?’ ” Dr. Housman said.

But, he added, in a way FSHD was the easy case — it is a disease that affects every single person who inherits the genetic defect. Other diseases are more subtle, affecting some people more than others, causing a range of symptoms. The trick, he said, is to be “astute enough to pick out the patterns that connect you to the DNA.”

FSHD affects about 1 in 20,000 people, causing a progressive weakening of muscles in the upper arms, around the shoulder blades and in the face — people who have the disease cannot smile. It is a dominant genetic disease. If a parent has the gene mutation that causes it, each child has a 50 percent chance of getting it too. And anyone who inherits the gene is absolutely certain to get the disease.

About two decades ago, geneticists zeroed in on the region of the genome that seemed to be the culprit: the tip of the longer arm of chromosome 4, which was made up of a long chain of repeated copies of a dead gene. The dead gene was also repeated on chromosome 10, but that area of repeats seemed innocuous, unrelated to the disease. Only chromosome 4 was a problem.

“It was a repeated element,” said Dr. Kenneth Fischbeck, chief of the neurogenetics branch at the National Institute of Neurological Disorders and Stroke. “An ancient gene stuck on the tip of chromosome 4. It was a dead gene; there was no evidence that it was expressed.”

And the more they looked at that region of chromosome 4, the more puzzling it was. No one whose dead gene was repeated more than 10 times ever got FSHD. But only some people with fewer than 10 copies got the disease.

A group of researchers in the Netherlands and the United States had a meeting about five years ago to try to figure it out, and began collaborating. “We kept meeting here, year after year,” said Dr. Stephen J. Tapscott, a neurology professor at the University of Washington.

As they studied the repeated, but dead, gene, Dr. Tapscott and his colleagues realized that it was not completely inactive. It is always transcribed — copied by the cell as a first step to making a protein. But the transcriptions were faulty, disintegrating right away. They were missing a crucial section, called a poly (A) sequence, needed to stabilize them.

When the dead gene had this sequence, it came back to life. “It’s an if and only if,” Dr. Housman said. “You have to have 10 copies or fewer. And you have to have poly (A). Either one is not enough.”
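Dr. Housman's "if and only if" is, in effect, a Boolean condition. A minimal Python sketch of the reported mechanism (the function name and example values are illustrative, not from the paper):

def fshd_risk(repeat_copies: int, has_poly_a: bool) -> bool:
    """True when both conditions of the reported mechanism are met."""
    return repeat_copies <= 10 and has_poly_a

assert fshd_risk(8, True) is True       # few copies plus poly(A): disease
assert fshd_risk(8, False) is False     # few copies, no poly(A): protected
assert fshd_risk(25, True) is False     # many copies silence the whole region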

But why would people be protected if they have more than 10 copies of the dead gene? Researchers say that those extra copies change the chromosome’s structure, shutting off the whole region so it cannot be used.

Why the reactivated gene affects only muscles of the face, shoulders and arms remains a mystery. The only clue is that the gene is similar to ones that are important in development.

In the meantime, says Dr. Housman, who was not involved in the research but is chairman of the scientific advisory board of the FSHD Society, an advocacy group led by patients, the work reveals a way to search for treatments.

“It has made it clear what the target is,” he said. “Turning off that dead gene. I am certain you can hit it.”

The bigger lesson, Dr. Collins said, is that diseases can arise in very complicated ways. Scientists used to think the genetic basis for medical disorders, like dominantly inherited diseases, would be straightforward. Only complex diseases, like diabetes, would have complex genetic origins.

“Well, my gosh,” Dr. Collins said. “Here’s a simple disease with an incredibly elaborate mechanism.”

“To come up with this sort of mechanism for a disease to arise — I don’t think we expected that,” Dr. Collins said.

[Susumu Ohno (1972), an otherwise serious scientist, put forward his (totally wrong) theory - see the full text free at junkdna.com - that the non-genes are in the human genome "for the importance of doing nothing". It was only later that this catastrophic misnomer became widely accepted at face value, as an excuse for the establishment to feel free to ignore 98.7% of (human) DNA, while millions if not hundreds of millions were (and still are) dying of "Junk DNA diseases". "The Principle of Recursive Genome Function" (2008), by retiring both the "Junk DNA" and "Central Dogma" obsolete axioms, became the first full-genome (hologenome) theory based on sound genome informatics, sweeping away the nonsense that retarded genomics for over half a Century (starting from Crick's 1956 notion, which he dared to call a "Central Dogma", that information "never" recurses from Proteins back to either RNA or DNA). Upon the conclusion of ENCODE (2007), Francis Collins was the one to issue a public call that "the scientific community will have to re-think long-held beliefs". Some lucky few did not have to re-think, as they never believed the two false axioms in the first place - but now we have (really, not for the first time...) factual evidence to discard wrong assumptions that were totally absurd from the viewpoint of genome informatics. 1.3% of human DNA (in fact much less, since most of a "gene" consists of non-coding introns) is simply not enough information to govern the growth of organisms as complex as humans. FractoGene (2002) clinched this within a year after The Human Genome Project established that the expected 140-300 thousand "genes" were nowhere to be found. Today, "recursive genome function" prevails with close to 200,000 hits in Google... - Andras Pellionisz (FaceBook) / Pellionisz_at_JunkDNA.com]
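The information-content argument above fits on the back of an envelope. A short Python sketch, assuming the standard ~3.2 billion base pair genome size and 2 bits per base (assumptions of this sketch, not figures quoted in the comment):

GENOME_BP = 3.2e9          # approximate haploid human genome size, in base pairs
BITS_PER_BASE = 2          # A, C, G, T encode to 2 bits each
CODING_FRACTION = 0.013    # the ~1.3% exon fraction cited above

total_mb = GENOME_BP * BITS_PER_BASE / 8 / 1e6
coding_mb = total_mb * CODING_FRACTION
print(f"whole genome: ~{total_mb:.0f} MB, coding fraction: ~{coding_mb:.1f} MB")
# -> whole genome: ~800 MB, coding fraction: ~10.4 MB
# roughly the size of a single modest digital photograph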

^ back to top


Life Technologies inks $725M deal for Ion Torrent

August 18, 2010 — 11:23am ET | By John Carroll
FierceBiotech

Arming itself for a looming showdown with Illumina over the booming market for second-generation gene sequencing technologies, Life Technologies struck a deal to buy Ion Torrent for $375 million in cash and stock with $350 million more on the line based on a series of milestones.

The deal gives Life Technologies a shot at introducing a gene sequencing device later this year that will be sold for less than $100,000, according to the San Diego Union-Tribune. A variety of companies have been angling for the fast track in the race to market new, faster and far cheaper sequencing technologies.

"We believe Ion Torrent's technology will represent a profound change for the life sciences industry," said Gregory Lucier, chairman and chief executive of Life Technologies. "This technology will usher in a new era in science, one in which DNA sequencing can be done easier, faster and more cost effectively than ever before."

- check out the San Diego Union-Tribune story [below]
- here's the Life Technologies release [below]

SignOn San Diego

Life Technologies of Carlsbad said Tuesday it will acquire Ion Torrent, a Guilford, Conn., company that has developed a new way of sequencing genes, in a deal worth as much as $725 million.

The move is Life Technologies’ latest effort to boost its position in the increasingly competitive and potentially lucrative genomic sequencing technology business.

It also intensifies the rivalry between Life Technologies and Illumina, another San Diego genetic sequencing technology company, said Daniel MacArthur, a genetic researcher at the Wellcome Trust Sanger Institute in Cambridge, England.

“Life Technologies already owns a second-generation sequencing platform [SOLiD - AJP], but has been struggling to compete against the current market-dominating technology from Illumina [with their Genome Analyzer - AJP],” MacArthur wrote Tuesday on his blog, Genetic Future.

Life Technologies is paying $375 million in cash and stock for Ion Torrent, a privately held company created in August 2007 by high-speed DNA sequencing developer Jonathan Rothberg. The sellers will receive up to $350 million in additional cash and stock if certain milestones are reached by 2012.

“We believe Ion Torrent’s technology will represent a profound change for the life sciences industry,” said Gregory T. Lucier, chairman and chief executive of Life Technologies. “This technology will usher in a new era in science, one in which DNA sequencing can be done easier, faster and more cost effectively than ever before.”

Standard DNA sequencing devices use a chemical process to attach fluorescent tags to individual pieces of DNA, which are then illuminated by lasers and photographed with a high-tech camera. Supercomputers interpret the light signals and convert them into usable genetic data. The process is time consuming and costly.

With the technology developed by Ion Torrent, tiny pH meters measure the change in acidity that occurs in a solution when DNA base pairs are formed. Those pH changes are then translated into the alphabet code that identifies genetic strands.
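As a rough illustration of that flow-based scheme, here is a toy Python sketch; the template, flow order and homopolymer handling are illustrative assumptions, not Ion Torrent's actual pipeline:

TEMPLATE = "TTACGG"    # hypothetical single-stranded template being read
FLOW_ORDER = "TACG"    # the cyclic order in which nucleotide species are flowed

def sequence_by_ph(template: str, cycles: int = 4) -> str:
    pos, read = 0, []
    for _ in range(cycles):
        for base in FLOW_ORDER:
            incorporated = 0
            while pos < len(template) and template[pos] == base:
                incorporated += 1    # each incorporation releases H+ ...
                pos += 1             # ... nudging the pH meter reading upward
            read.append(base * incorporated)
    return "".join(read)

print(sequence_by_ph(TEMPLATE))      # -> TTACGG

The size of each pH step tells the instrument how many identical bases were incorporated in a row, which is how a flow-based detector reads homopolymer runs without any optics.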

Life Technologies will launch the Personal Genome Machine, the first commercial sequencing device to use Ion Torrent’s process, later this year at a cost of less than $100,000, the San Diego company said. [Life Technologies, with its SOLiD, may not be known as best in "on-rig" software - thus this business move exposes the "Big Four" (PacBio, Complete Genomics, Oxford Nanopore and Ion Torrent) to the vulnerability of software, especially algorithmic IP, as the critical factor for users - AJP]

Life Technologies announced the purchase after stock markets closed Tuesday.

In after-hours trading, the company’s shares were up 36 cents, or nearly 1 percent, to $45.31.

[Business Wire - Company Release]

CARLSBAD, Calif. & GUILFORD, Conn., Aug 17, 2010 (BUSINESS WIRE) -- Complements Existing Sequencing Solutions Portfolio

Life Technologies Corporation, a provider of innovative life science solutions, today announced a definitive agreement to acquire Ion Torrent for $375 million in cash and stock.

The sellers are entitled to additional consideration of $350 million in cash and stock upon the achievement of certain technical and time-based milestones through 2012. Life Technologies' Board of Directors has approved an additional share repurchase program in order to repurchase its shares associated with the stock portion of the consideration. The impact on total share count is expected to be neutral.

Formed by life sciences pioneer Dr. Jonathan Rothberg, founder of CuraGen, 454 Life Sciences and co-founder of Raindance Technologies, Ion Torrent has revolutionized DNA sequencing by enabling a direct connection between chemical and digital information through the use of proven semiconductor technology. Ion Torrent's proprietary chip-based sequencing represents a new paradigm in DNA sequencing by using PostLight(TM) sequencing technology, the first of its kind to eliminate the cost and complexity associated with the extended optical detection currently used in all other sequencing platforms.

The first product using this technology will be the Personal Genome Machine (PGM), an easy-to-use, highly-accurate benchtop instrument optimal for mid-scale sequencing projects, such as targeted and microbial sequencing. The instrument is currently available through an early access program and will be launched later this year at an entry cost of less than $100,000. Subsequent products will benefit from cutting edge semiconductor fabrication technologies that can expand throughput at an accelerated pace, thereby dramatically lowering the cost to sequence a genome.

Gregory T. Lucier, Chairman and Chief Executive Officer of Life Technologies, said, "We believe Ion Torrent's technology will represent a profound change for the life sciences industry, as fundamental as the one we saw with the introduction of qPCR. This technology will usher in a new era in science, one in which DNA sequencing can be done easier, faster and more cost effectively than ever before.

"By leveraging the cumulative $1 trillion already invested in semiconductor research and development, we believe that Ion Torrent will drive unprecedented scalability, delivering the solution required for future generations of sequencing," Lucier continued. "With a heritage of more than 25 years as a leader in sequencing, Life Technologies is perfectly suited to bring such an innovative technological breakthrough to market."

Mark Stevenson, Life Technologies' President and Chief Operating Officer, said, "This transaction enhances our strategy of providing a complete sequencing offering to our customers across the research and applied markets. Ion Torrent's technologies are highly complementary to our existing portfolio of sequencing CE and SOLiD platforms."

Dr. Rothberg said, "The entire Ion Torrent team is excited to be joining the talented people at Life Technologies. Our products and mission make this an ideal and logical strategic fit for both companies. Both Ion Torrent and Life Technologies share rich cultures of innovation and excellence, and I firmly believe that Life Technologies is the right partner to bring such revolutionary technology in the sequencing arena to market."

Dr. Rothberg will continue to lead Ion Torrent with the support of the Ion Torrent leadership team. Life Technologies intends to retain Ion Torrent's presence in Guilford, CT and South San Francisco, CA where it has established R&D centers of excellence.

Life Technologies will finance the transaction with cash on hand, available lines of credit, and stock. Including the impact of specific cost saving initiatives, the transaction is expected to be 2 cents dilutive to Life Technologies' earnings per share in 2010, neutral in 2011, and accretive in 2012 and beyond. Earnings per share guidance for 2010 remains unchanged at $3.35 to $3.50. Life Technologies expects to deliver double-digit earnings-per-share growth in 2011 including the impact of this transaction. Upon closing, Life Technologies expects to benefit from synergies created by combining Ion Torrent's proprietary technologies, product pipeline and R&D capabilities with Life Technologies' commercial channel, sample preparation, sequencing automation, informatics, and reagent expertise.

Life Technologies remains committed to a strategy of balanced capital deployment, including the execution of the previously announced $350 million share repurchase. In addition, Life Technologies reaffirms its goal of reaching 10% ROIC by 2012.

The transaction, which is expected to close in the fourth quarter, is subject to customary closing conditions, including regulatory approval.

About Life Technologies

Life Technologies is a global biotechnology tools company dedicated to improving the human condition. Our systems, consumables and services enable researchers to accelerate scientific exploration, driving to discoveries and developments that make life even better. Life Technologies customers do their work across the biological spectrum, working to advance personalized medicine, regenerative science, molecular diagnostics, agricultural and environmental research, and 21st century forensics. Life Technologies had sales of $3.3 billion in 2009, employs approximately 9,000 people, has a presence in approximately 160 countries, and possesses a rapidly growing intellectual property estate of approximately 3,900 patents and exclusive licenses. Life Technologies was created by the combination of Invitrogen Corporation and Applied Biosystems Inc., and manufactures both in-vitro diagnostic products and research use only-labeled products. For more information on how we are making a difference, please visit our website: http://www.lifetechnologies.com.

About Ion Torrent

Ion Torrent has developed a DNA sequencing system that directly translates chemical signals (A, C, G, T) into digital information (0, 1) on a semiconductor chip. The result is a sequencing system that is simpler, faster, less expensive and more scalable than any other technology available. Because Ion Torrent produces its proprietary semiconductor chips in standard CMOS factories, it leverages the $1 trillion investment that has been made in the semiconductor industry. Ion Torrent uniquely and directly benefits from four decades of exponential improvement in semiconductor technology, expressed as Moore's Law. Ion Torrent will launch the Ion Personal Genome Machine sequencer in 2010. Ion Torrent was founded in August 2007 by Dr. Jonathan M. Rothberg, who pioneered high-speed, massively parallel DNA sequencing. Ion Torrent is based in Guilford, Connecticut, with an office in South San Francisco. For more information about Ion Torrent, visit www.iontorrent.com.

[Molecular DNA sequencing is in full swing with Life Technologies' deal for Ion Torrent (a grand total of $725 M). Illumina invested in Oxford Nanopore, and both Complete Genomics and Pacific Biosciences filed for IPO. Quite a foursome! The net result is that sequencing technology is in a "home run" - thus the emphasis has shifted to Intellectual Property in Full DNA Analytics and to Consumerism of the results (a HolGenTech profile). An interesting question is why Jonathan Rothberg "gave away" his company Ion Torrent for under a $ Billion. An answer may be "bis dat qui cito dat" (he gives twice who gives quickly) - Jonathan's R&D is to be kept both in CT and in CA (South San Francisco), and instead of going through the agonizing public scrutiny of an IPO (e.g. full disclosure of all the "risk factors", such as IP) he can plunge into his next major venture (his sixth, counting CuraGen, 454 sold to Roche, Clarifi, Raindance, and now Ion Torrent sold to Life Technologies). Brilliant technologies, brilliant business moves... - Andras Pellionisz (FaceBook) / Pellionisz_at_JunkDNA.com]

^ back to top


PacBio files for $200 million IPO

FierceBiotech
August 16, 2010 — 10:39am ET | By Maureen Martino

Money-raising machine Pacific Biosciences--a 2009 Fierce 15 winner--has decided to take its act public. The company has filed for a $200 million IPO, the proceeds of which will help fund R&D of its products and its SMRT technology, which uses nanofabrication, biochemistry, molecular biology, surface chemistry and optics to enable real-time analysis of biomolecules.

Just last month, PacBio closed a $109 million Series F, bringing its total haul to $370 million, and in June it landed a $50 million investment from Gen-Probe. In addition to covering R&D expenses, PacBio will use the proceeds to boost its sales and marketing ahead of commercial launch, to expand its manufacturing operations, and for general corporate purposes. PacBio added in its filing that it may be in the market to acquire complementary technologies or businesses that would boost its own operations, but that no acquisitions are planned at this time.

In the short term, PacBio will focus its technology on clinical, basic and agricultural research. It hopes to expand into molecular diagnostics, drug discovery and development, food safety, forensics, biosecurity and bio-fuels. "We believe that our SMRT platform represents a new paradigm in biological science...that has the potential to significantly impact a number of areas critical to humankind, including the diagnosis and treatment of disease as well as efforts to improve the world's food and energy supply," the company boasts in its SEC filing.

----

PacBio's SEC filing stipulates (verbatim quote):

RISK FACTORS

Investing in our common stock involves a high degree of risk. You should consider carefully the risks and uncertainties described below, together with all of the other information in this prospectus, including our financial statements and related notes, before deciding whether to purchase shares of our common stock. If any of the following risks is realized, our business, financial condition, results of operations and prospects could be materially and adversely affected. In that event, the price of our common stock could decline, and you could lose part or all of your investment.

Risks Related to Our Business

We are a development stage company with limited operating history.

We may never achieve commercial success and have not yet commercially launched our first product. We have no historical financial data upon which we may base our projected revenue. We have limited historical financial data upon which we may base our planned operating expense or upon which you may evaluate us and our prospects. Based on our limited experience in developing and marketing new products, we may not be able to effectively:

• drive adoption of our products;

• attract and retain customers for our products;

• comply with evolving regulatory requirements applicable to our products;

• anticipate and adapt to changes in our market;

• focus our research and development efforts in areas that generate returns on these efforts;

• maintain and develop strategic relationships with vendors and manufacturers to acquire necessary materials for the production of our products;

• implement an effective marketing strategy to promote awareness of our products;

• scale our manufacturing activities to meet potential demand at a reasonable cost;

• avoid infringement and misappropriation of third-party intellectual property;

• obtain licenses on commercially reasonable terms to third-party intellectual property;

• obtain valid and enforceable patents that give us a competitive advantage;

• protect our proprietary technology;

• provide appropriate levels of customer training and support for our products;

• protect our products from any equipment or software-related system failures; and

• attract, retain and motivate qualified personnel.

In addition, a high percentage of our expenses is and will continue to be fixed. Accordingly, if we do not generate revenue as and when anticipated, our losses may be greater than expected and our operating results will suffer. You should consider the risks and difficulties frequently encountered by companies like ours in new and rapidly evolving markets when making a decision to invest in our common stock.

[About two weeks after Complete Genomics filed for IPO - and was almost overnight challenged by an Illumina intellectual property infringement lawsuit - PacBio moved to deny Complete Genomics the unique position of capturing public funds. This move will no doubt re-vamp the competitive landscape - perhaps most critically in intellectual property matters - Pellionisz_at_JunkDNA.com or Andras Pellionisz at FaceBook]

^ back to top


How Can the US Lead Industrialization of Global Genomics? [AJP]

[Francis Collins, Head of NIH, USA (left) and BGI co-founder Wang Jian (right) - AJP]

There is no question that a Global Industrialization of Genomics is taking place - in which the USA is hard pressed to maintain its lead with a somewhat antiquated and fragmented R&D system. As we recall, the shock of the Soviet satellite "Sputnik" (1957 - over half a Century ago...) triggered a successful re-vamping of the entire US education and R&D system. Khrushchev's Soviet Union was provocative and arrogant - while (as the brilliant write-up by Kevin Davies shows below) China deliberately underplays its cards. Nonetheless, a somewhat similar re-adjustment every half a Century might be needed if the US is to maintain her lead - this time in the Global Industrialization of Genomics. I illustrate this thesis by a contrasting triad of write-ups. One is from Nature on the NIH, the other is in Bio-IT World, and the third is my conclusion that "Industrialization of Genomics is not possible either by the brute force of government administration of orthodox Biochemistry Research or by massive deployment of the sheer force of Big Information Technology (a convergence I have predicted since 2004, and disseminated on YouTube in 2008 - by now viewed over 8,500 times) - without a theoretical understanding of recursive genome function" - Pellionisz_at_JunkDNA.com or Andras Pellionisz on FaceBook.

^ back to top


Francis Collins: One year at the helm [US government over the cliff in Genomics - AJP]

Nature 466, 808-810 (2010) | doi:10.1038/466808a
Published online 11 August 2010 |

Meredith Wadman

Having taken on the biggest job in biomedicine - leading the US National Institutes of Health - Francis Collins must now help his agency over a funding cliff. Meredith Wadman looks at his record so far, and his plans to cushion the fall.

There were three scans of Francis Collins's genome, and all showed the same thing: the geneticist and physician has an increased risk of developing type 2 diabetes. After Collins received the results from the genetic-testing companies in the spring of 2009, shortly before he became director of the US National Institutes of Health (NIH), he hired a personal trainer and began working out three times a week. He jettisoned his favourite junk food - honey buns and oversized muffins - in exchange for yoghurt, granola bars and broccoli. The 60-year-old now dead-lifts 48 kilograms, chest presses 43, and has lost more than 11 kilograms himself. [So much for some bureaucrats who consider DTC "useless"; if it changed the lifestyle of one of the most knowledgeable of men, perhaps its relatively small impact elsewhere is because others are not yet at his level - AJP] "It has helped me a lot in terms of being able to take on the intensity of the job," he says.

That salubrious slimming is nothing compared with the crash diet that Collins's US$31-billion-a-year agency is about to go on. Collins took control of the NIH - the world's largest biomedical-research funder - in the middle of a feast: a $10.4-billion, two-year boon delivered in 2009 by the American Recovery and Reinvestment Act, as part of the US government's effort to revive a moribund economy. Next month, the last of that money will go out of the door, and its recipients will have spent the bulk of it by September 2011. "The Recovery Act provided an enormously timely and appropriate stimulus for the community after five years of flat funding," Collins said in an interview with Nature at the NIH's Bethesda, Maryland, campus last month. "But now we face this potential of falling off a cliff. That's the biggest challenge" of his job, he says.

Collins comes equipped for challenges, intellectually and temperamentally. From his co-discovery of the gene for cystic fibrosis 21 years ago, to his 15 years of leadership of the NIH's National Human Genome Research Institute - and, with it, the Human Genome Project - he has proved that he combines serious scientific know-how with a leader's vision (see 'Francis Collins: in sequence'). With his boy-scout manners and folk-guitar habit, he is also a decided contrast to his immediate predecessor, the sharp-suited Elias Zerhouni, a radiologist whom many bench investigators viewed warily for not being a scientist's scientist.

Collins's exceptional self-discipline extends well beyond dieting. By the time he started the job, he had already formulated a 'pocket list' of 22 goals for his first year in office, from hosting a visit to the NIH by President Barack Obama to hiring a new cancer-institute director. Now, he proudly hands over the list of mostly ticked-off accomplishments: Obama visited the NIH last September, and Harold Varmus, a former NIH director, took the reins of the National Cancer Institute in July. "He's in a hurry," says Susan Shurin, the acting director of the NIH's National Heart, Lung and Blood Institute (NHLBI). "He moves fast and he likes to be surrounded by people who are going to make things happen."

Collins has detractors as well as fans. When he was appointed, some scientists voiced loud scepticism that he could separate his very public Christian faith from his policy decisions. There were also fears that his roots heading the Human Genome Project would lead him to favour NIH-initiated mega-projects over proposals by individual scientists. Others scolded him - and still do - for what they call his perennial overpromising on the fruits of the genomic revolution. "He is still leading people to believe that genetics is the key to everything," says Neil Greenspan, an immunologist at the Case Western Reserve University School of Medicine in Cleveland, Ohio. If, five or ten years from now, only a handful of therapies emerge as a direct result of the genome project, "you could end up with a lot of people [in Congress] getting upset and cutting the NIH because they are not producing what they claimed".

Such concerns do not worry a lean, list-checking Collins. "My job it seems to me is not to spend my time apologizing for being optimistic. But rather to try to take that optimism and turn it into reality," he says.

Morning to night

On a sultry morning in mid-July, Collins straps on his black motorcycle helmet and rides his Harley-Davidson the 15 minutes from his suburban Maryland home to the NIH campus. Collins had grabbed his usual, abbreviated night of sleep, after recording an interview for the Charlie Rose Show, marking the tenth anniversary of the draft sequencing of the human genome, and then staying up until nearly midnight to watch the popular talk-show air. In between, he had participated in a conference call with senior government officials, discussing how to enrol 20,000 subjects in a long-term study of the health effects on workers cleaning up the Gulf of Mexico oil spill. Having risen at his usual time of 5:00 a.m. - "that is a protected time, before all hell breaks loose, when I can actually try to think and plan," he says - he is now on his way to a 7:45 a.m. interview with a candidate to head the NHLBI.

Collins wasted no time on his first day as NIH director either, when he announced five 'themes' - areas of what he calls "exceptional opportunity" - that would receive special priority during his tenure (see Nature 460, 939; 2009). Collins targeted translational medicine, health-care reform, global health and "empowering and energizing the research community". And he said he wanted to apply high-throughput technologies including genomics and proteomics to answer, as he puts it, questions with 'all' in them, such as "what are all of the major pathways for signal transduction in the cell?"

He also had to deal with some of the issues left over from Zerhouni's watch. He was faced with the delicate job of making new human embryonic stem-cell lines available for federal funding fast enough to suit a community that was hankering for them after eight years of drought - without any missteps that would provide ammunition to opponents of the research. Between December and June, the agency approved 75 new stem-cell lines. (Collins points to the approvals as evidence that he "will not allow my own personal spiritual beliefs to interfere with decision-making or priority setting".) But the agency has also drawn criticism for rejecting scores of disease-specific cell lines because of the broad legal language used in patient-consent forms (see Nature 465, 852; 2010).

Collins also faced the aftermath of several scandals in which NIH-supported academics had flouted reporting rules by failing to disclose five- and six-figure sums that they had collected from drug companies. In May, the NIH published proposed changes that would tighten the rules governing financial-interest reporting by its grantees.

Still, nothing Collins has faced so far comes close to the budget straits that the agency now confronts as the government struggles to control ballooning deficits, fight two wars and deal with the detritus of a major economic crisis. As NIH director, "what happens to you is going to depend on things beyond your control", says Anthony Fauci, director of the National Institute of Allergy and Infectious Disease since 1984. "I hope that circumstances beyond his control start leaning towards helping him rather than hindering him."

Slim chances

Already, this year, success rates for scientists applying for the agency's research-project grants have dipped to an estimated 19%, down from 21% in 2009 and far lower than the comfortable 32% of a decade earlier (see 'Grant applications to the NIH'). The worsened odds partly reflect an increase of about 10% in the number of applications, many of which are recycled from failed stimulus grant proposals. In 2011 and 2012, the grant success rates are expected to fall further as stimulus funding runs out and its recipients attempt to extend support for their projects.

The NIH's baseline budget is also approaching dangerous waters. Although agency supporters were heartened last month when key subcommittees of the Senate and House of Representatives approved Obama's request for a 3.2%, $1-billion boost that would bring the budget to $32 billion in 2011, the increase is not guaranteed to survive final congressional wrangling this autumn or winter. And it does no more than match the government's predicted biomedical inflation rate. Things could be even bleaker in 2012: this June, Collins, like every other federal agency director, was asked by the White House's Office of Management and Budget, as part of its planning process for the 2012 US budget, to identify cuttable programmes amounting to 5% of the agency's budget. This is hardly a calamity compared with the deep research cuts occurring in some European countries, but still a shock to the NIH, which has faced only one absolute funding cut since 1970, and that only a 0.1% shave (see 'NIH budget'). Late last month, Collins collected from the directors of the NIH's 27 institutes and centres a list of targeted programmes, constituting 7% of their budgets - the 7% giving him some flexibility to cut less here and more there. The final list is due to the White House in mid-September.

The initial response of the institute directors to his request was "full of angst", says Collins. "But there has also been a sense of 'We need to look hard at everything we are doing at a time like this'." He remains hopeful that given Obama's emphasis on science, "when the dust all settles and they [the White House] decide exactly what to do, we will be at some level a bit protected, but we don't know that".

All or none

All this has been a growing cloud on the horizon even as Collins has been fleshing out his five themes. He has emphasized translational research, throwing his weight behind a programme aimed at speeding treatments for rare and neglected diseases towards human trials. He has embraced health reforms by overseeing the spending of $400 million in Recovery Act money earmarked for research into the 'comparative effectiveness' of medical treatments. And he has promoted his global health priority with initiatives such as a collaboration involving Britain's Wellcome Trust medical charity, in which the NIH will contribute $25 million over five years to study the genetic and environmental underpinnings of chronic diseases in sub-Saharan Africa.

Collins has also been launching high-tech assaults on the 'all' questions, committing $175 million in Recovery Act money to accelerate The Cancer Genome Atlas - a five-year-old effort to develop a detailed catalogue of all of the mutations associated with 20 common cancers. [Results of these programs will, of course, be available to the World, for free - AJP]. Collins's emphasis on these types of ambitious projects has led some to question his commitment to the individual investigator and the mainstay, multi-year 'R01' grants that fund many such scientists. But his defenders say there is no evidence that Collins is advancing the first at the expense of the second. "Francis fully gets the importance of funding some of the larger efforts that can be so transforming. But I think he's also paying very close attention to maintaining a vigorous pipeline of R01-funded research," says Levi Garraway, a cancer biologist at Harvard Medical School and Dana-Farber Cancer Institute in Boston, Massachusetts, who holds investigator-initiated NIH grants and also participates in The Cancer Genome Atlas project.

Collins says that big-team science is the only way to produce some tools that greatly benefit individual investigators. [Craig Venter, who single-handedly matched, at least to a tie, the $3 Bn "Human Genome Project", is likely to question the rationale of this statement - AJP]. But he says that the individual lab "is where almost all of the discoveries of the present and the future are going to come from". [It is interesting that Dr. Collins uses the term "individual lab" as the source of advancement of science. Albert Einstein, who never ran any lab of physics, might question the rationale of this statement, since advancements of science have come in the past from the "individual brains" of Newton, Heisenberg, Planck, Schroedinger, Einstein, etc. - AJP]. And these labs are at the centre of his push to "energize and empower" the research community by addressing peer review, training and other workforce issues. Anaemic success rates for research-project grant applicants have created "a terribly stressful circumstance, particularly for early-stage investigators", says Collins, noting that the average age for winning a first R01 award has now crept above 42 years old. As a partial response, he has been planning the launch in 2011 of an award that will allow promising young investigators to skip postdoc positions entirely, giving them five-year funding to launch independent labs.

As for the immediate concerns of thousands of NIH grantees edging towards the funding cliff, Collins says that the agency will be "sympathetic" in allowing Recovery Act-funded grantees to spend their money over more than two years, "making it more of a ramp instead of a cliff". [Does it matter if US leadership slides or falls into demise? - AJP] "We will be doing other things which may assist the ability to give new grants, but hurt the people who already have them," he adds. Those will include cutting individual grant budgets "as we have to, in order to keep as many researchers going as possible".

These measures bring cold comfort for many in postdoc purgatory with little prospect of securing independent funding. "I didn't think it would be some Glory Hallelujah moment when Collins was appointed," says one 35-year-old scientist in his second postdoc, who asked to remain anonymous. He would like Collins to make it possible for those more than five years beyond their PhDs to secure transition funding such as a coveted 'K99' award, which supports postdocs in the shift to independent positions. "To be brutally honest, I haven't noticed any difference in his tenure after the first year compared to Zerhouni," he says.

But if Collins hasn't impressed some struggling bench scientists, his skill as a public communicator may nonetheless help to improve the NIH's prospects — or at least lessen its immediate peril. William Talman, president of the Federation of American Societies for Experimental Biology, attributes the White House's request for a $1-billion boost for the NIH - even in a stark funding climate - to Collins's persuasive powers. "He has been a superb advocate for the NIH with the administration and with Congress." Collins has the rare gift of being able to translate complex concepts into simple language, leaving his audiences - including all-important congressional audiences - feeling brilliant about their grasp of his material. (In one typical analogy he describes a haplotype, a group of genetic markers that are inherited together, as being like a neighbourhood of houses that moves together - with a causative mutation residing at one street address.)

"The most important thing he has done really is his public outreach," says Shurin, who recalls as typical Collins's May guitar performance for patient advocacy groups affiliated with her institute. Set to the tune of Del Shannon's hitRunaway, his lyrics described the anxieties raised by confronting a readout of one's own genome "I'm a walking through the genes/Don't know what all this means/Oh what can the meaning be?/Behind that G and T?/And I wonder …" He received a standing ovation. [Others worked their butts off to focus not on a song but on a breakthrough theory.. - AJP]

Collins is going to need all of that support and more to help those funded by the agency over the cliff - or down the ramp - ahead. "I don't have any magic here," says Collins. "I wish I did." [For believers in God, there is always a hope for a miracle ...]

[NIH got $40 Bn "for a song" - and, as in 2007 when DECODE was wrapped up, Genomics confessed, in science papers as well as by public whining, "Don't know what it all means". We simply can't make it without a "theory of genome function" - Andras Pellionisz (FaceBook) / Pellionisz_at_JunkDNA.com]

^ back to top


BGI Americas [and BGI Europe] Offers Sequencing the Chinese Way

By Kevin Davies
Bio-IT World
August 11, 2010

25 years ago, Shenzhen was a tiny fishing village in southern China, just one hour north of Hong Kong. Today, [Shenzhen] is the country’s second largest port after Shanghai, a booming technology haven and, since 2007, home to BGI, formerly known as the Beijing Genomics Institute.

With 3,000 employees currently, rising to an expected 5,000 by the end of this year, and a fleet of more than 150 Illumina and Life Technologies next-generation sequencing instruments, most of which are being installed in a former printing press in Hong Kong, BGI is poised (if it isn’t already) to become the world’s largest genome sequencing center. And it wants to share its extraordinary resources and expertise with, well, everybody.

Last April, BGI Americas was officially incorporated in Delaware as the official interface for BGI in North America. BGI Europe followed suit the next month (see “European Union”). From a small office in an incubator space overlooking Boston’s Charles River, a stone’s throw from the Broad Institute, the husband-and-wife team of Paul Tu (president) and Julia Dan (CEO) are reaching out to potential academic and commercial partners and customers. By the end of 2010, BGI Americas will have as many as 20 sales representatives spanning the continent in search of partners who wish to avail themselves of BGI’s prodigious sequencing capacity.

“We’re an interface representing BGI to collaborators in America and to promote the BGI brand,” says Tu. “That means finding collaborators working on different interesting projects, or fee-for-service projects, to support our operations.” He smiles: “3,000 people need to eat!” [American scientists need to eat, too - AJP]

Tu graduated from MIT’s Sloan School of Management and worked in venture capital for ten years before meeting BGI co-founder Wang Jian and “drinking the Kool-Aid”. Indeed, Tu and Dan abandoned their own start-up plans in China to sign on with BGI Americas. Tu’s wife is also his boss: Dan previously worked in corporate development for Genzyme. She lets Tu do most of the talking, but corrects him occasionally, just like any happily married couple.

Lucky Numbers

The growth and data output at BGI is nothing short of astonishing. The institute currently employs 3,000 staff in Shenzhen, including 1,500 working in bioinformatics, among them programmers and IT staff. As of July 2010, BGI had 40 Illumina HiSeq 2000 instruments installed in its new facility in Hong Kong (a former printing press), growing to 100 by the end of 2010. When Illumina introduced its new state-of-the-art sequencer in January 2010, BGI immediately ordered a total of 128 machines - Tu explains that 128 is a lucky number in Chinese. (The number eight sounds like ‘wealth.’)

“God forbid it was 124,” he adds dryly. “Four would sound like ‘death!’ ” [2, 6, 8 or 9 are also "lucky numbers" in China; why do they need two orders of magnitude bigger numbers? - AJP]

The new facility in Hong Kong will greatly facilitate the shipment of samples from the rest of the world. [Yes, it is "lucky"... AJP]. “It’s a British system: one China, two systems,” says Dan about Hong Kong. “It’s the same thing with BGI.” For investigators leery of sending samples to Hong Kong, Dan hopes to give them the option of shipping to a sample receiving lab attached to BGI Americas headquarters in Boston, which will then handle the paperwork and shipment. That could be ready as early as September 2010.

By the time the Hong Kong facility is fully operational at the end of 2010, BGI will have a total sequencing output of 5 Terabytes/day—the equivalent of 1500x human genome/day. The data center now boasts 50,000 CPUs, 200 Terabytes of RAM and will reach a whopping 1,000 Petabytes—1 Exabyte—of data storage by year’s end. “It’s an awesome machine to play games on,” jokes Tu.
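Those throughput figures are easy to sanity-check. A one-line Python check, assuming ~3.2 Gbp per genome stored at one byte per base call (an assumption, not stated in the article):

TB_PER_DAY = 5e12       # 5 Terabytes of sequence output per day, in bytes
GENOME_BYTES = 3.2e9    # ~3.2 billion bases at one byte per base call

print(f"~{TB_PER_DAY / GENOME_BYTES:.0f} genome-equivalents per day")
# prints ~1562, consistent with the quoted "1500x human genome/day"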

Such infrastructure comes at a price. BGI spends an estimated $10 million on electricity annually. “We cannot be a non-profit organization without any external support,” says Tu.

That is where BGI Americas and BGI Europe come in. “We’d love to work with [principal investigators] around the world, not just the U.S., on any interesting projects,” says Tu. Back in China, a committee of animal, plant and disease experts will select which projects BGI takes on. “BGI can be flexible—give us the samples, we can fund everything, and then we co-author the publication,” says Tu. A variant of that model would have BGI split everything—the costs, authorship and IP—with its partners.

“Not everyone can offer that. We don’t just do human, we do animals, plants, bacteria, complex diseases,” says Tu. “That’s the non-profit aspect. We want to sequence 1,000 plants and animals and have set aside $100 million for this initiative… It’s all about the science.”

But BGI is also offering a fee-for-service option. “We are a contractor,” says Tu. “Every single profit generated by the fee-for-service division will be returned to BGI to support the non-profit research agenda.” As a contractor, BGI will take any specs and deliver what the client wants. “If you want your data via FTP, or hard disk, we’ll do that. We give you a report, annotation, mapping, analysis. Not just sequencing, we also do all the back end as well,” says Tu.

Cost and Competition

Tu turns very diplomatic when asked about potential competition to BGI’s sequence service plans. “Personally, and throughout the organization, we don’t view anybody as our competitor. This field is extremely nascent. In science, what we know today may be only 2-5% of what it will be later. The science keeps advancing, we keep discovering new things.” As an example, he cites the recent UC Berkeley/BGI publication in Science that described a highly selected gene variant associated with altitude adaptation.

Dan says that, unlike a commercial service provider such as Complete Genomics, BGI’s value proposition is the flexibility to offer a pure fee-for-service as well as a collaborative model. “We don’t want to be restricted by funding for which research we can do. That’s the reason we do fee-for-service, and we love to do collaborations. We spend a lot on the collaboration side.”

Diplomacy turns to downright evasion when the subject is cost. “It depends,” says Dan, not unpredictably, “on coverage, analysis, volume, and so on.”

“All these players—Complete Genomics, Broad Institute, etc.—are just collaborators for us,” says Tu. “When it comes to fee-for-service, we’re at the mercy of what Illumina charges us for reagent costs. We have gigantic overheads… We’ll eat some of the overhead, but the variables, somebody has to cover.”

“Can we compete head-to-head with Illumina and Complete Genomics, where this is all they do? They don’t even do exomes, they only do whole genome humans? They make their own machines and reagents, how can we compete with that?”

Tu marvels at the drop in sequencing prices over the past 12-24 months. “I’ve never seen such price erosion! This is like, Whoosh!... We’re a service provider, how can we compete with that? We compete with the back end, our bioinformatics. That’s where we’re good. Who else has 1,500 staff?” BGI already makes its popular software SOAP (Short Oligonucleotide Alignment Program) freely available (see http://SOAP.genomics.org.cn). Any data or tools built for the SOAP platform (using C++) are being donated to the public sector. [This may not apply to everything beyond SOAP... - AJP]
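For readers wondering what that bioinformatics "back end" actually does: the core task of a short-read aligner such as SOAP is to place millions of short reads onto a reference genome. A minimal seed-and-verify sketch in Python (illustrative of the general idea only; real aligners add mismatch tolerance and far more compact indexes):

from collections import defaultdict

def build_index(reference: str, k: int = 4) -> dict:
    """Map every k-mer in the reference to its start positions."""
    index = defaultdict(list)
    for i in range(len(reference) - k + 1):
        index[reference[i:i + k]].append(i)
    return index

def align(read: str, reference: str, index: dict, k: int = 4) -> list:
    """Return reference offsets where the read matches exactly."""
    hits = []
    for pos in index.get(read[:k], []):             # seed: look up the read's prefix
        if reference[pos:pos + len(read)] == read:  # extend: verify the full read
            hits.append(pos)
    return hits

ref = "ACGTACGTGGTACCA"
idx = build_index(ref)
print(align("ACGTGG", ref, idx))                    # -> [4]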

The average age of the BGI staff is just 24.7. [Compare this to the average age of US researchers getting their FIRST R01 NIH grant - at the tender age of 42 - AJP]. Tu calls the legions of bioinformatics workers “the young and the brightest,” drawn from the top tiers of mathematicians and scientists from the top universities around the country, supplemented with operations people who have worked abroad. “They work around the clock,” he continues. “If they come to BGI, they get to work on real projects. Plus you get to program all day, with these toys in the background! It’s like a video game, they love it!” New recruits cannot rest on their laurels, however: every month for the first six months there is a test. Fail it, and it’s bye-bye BGI.

Tu and Dan have only been on the job a few months, but they too are working pretty much around the clock. Inquiries are already flooding in—mosquito genomes from Brazil, palm oil from Costa Rica, ancient DNA from the University of Massachusetts Medical Center. Tu was preparing to visit researchers at the Children’s Hospital of Philadelphia, and is already discussing projects with partners and clients at the Dana Farber Cancer Institute, Harvard Medical School, and the Broad Institute. He hasn’t had time to speak to all of the U.S. genome centers yet.

“We want to be a trusted scientific partner and research collaborator,” stresses Tu, speaking on behalf of 3,000 BGI scientists and counting.

European Union

BGI Europe was registered in Copenhagen, Denmark, in May 2010, and officially launched in June at the European Society of Human Genetics. The plan is to invest $10 million and to recruit 20 local staff in the organization’s first year alone. The CEO of BGI Europe is Mason Mak, who joined BGI earlier this year, although he is based primarily in Shenzhen.

Given BGI’s historic ties with Denmark, it is no surprise that BGI Europe headquarters is at the University of Copenhagen, Faculty of Life Sciences. The president of BGI, Yang Huanming, obtained his PhD from the University of Copenhagen in 1988. BGI’s director, Wang Jun, is a visiting professor at the University of Copenhagen and Aarhus University.

A new Copenhagen research institute on metabolic diseases, funded by a $170-million donation from Novo Nordisk, will strengthen an ongoing collaboration with BGI, led by diabetes researcher Oluf Pedersen. He says the alliance with BGI will create “an international powerhouse in the field of medical genetics.” (Diabetes and obesity are a growing health concern in China.) “Genomics cannot be done alone,” says BGI director Wang Jun. He says the Sino-Danish collaboration harnesses the superb medical infrastructure in Denmark with “Chinese genomics muscle” in the study of type 2 diabetes and obesity.

According to BGI Europe’s business development director, Danish-educated Wang Xuegang (he prefers to go by the name ‘Greg’), BGI Europe will offer European clients two models—collaboration or fee-for-service—just like its American counterparts. BGI Europe has six sales people already, but will be recruiting additional staff specializing in different fields such as agriculture, pharmaceutical, and biotech, spread across key regions including the UK, Germany, and Scandinavia.

As for what BGI’s key selling point is, Wang Xuegang says it is not necessarily the cost of sequencing. “Price is not what we sell on,” he says. A bigger selling point is BGI’s “very strong bioinformatics team,” with its immense experience in genome data analysis and de novo sequencing.

BGI Europe has an even more ambitious agenda than its American counterparts. The current plan is to establish a sequencing facility, probably in Copenhagen, within a couple of years, while growing the local staff to around 100 people. It would be futile for BGI Americas to set up a sequencing operation in the Broad Institute’s backyard, but BGI Europe may see an advantage to establishing a local production base in Copenhagen.

“Our vision is to make BGI Europe one of the largest centers of sequencing services and bioinformatics,” says BGI Europe’s Xuan Min. “We’re trying to set up a sequencing lab in Copenhagen,” adds Wang Xuegang, likely in collaboration with a biotech partner or partners. The availability of a local facility might appease some potential biotech clients worried about data security and privacy. “We can set up a pipeline where everything is under control by the customer,” says Wang.

[Hope that this masterful interview by Kevin Davies will ring some bells - Andras Pellionisz (FaceBook) / Pellionisz_at_JunkDNA.com]

^ back to top


Junk DNA: Does it Hold More than what Appears?

Junk DNA: The secret key

What is the mystery behind so-called “junk DNA”, after all? Is it really “junk” merely because so little has been done to reveal its exact function? Yet these sequences may hold it all: this particular variety of “junk DNA” regulates the behavior of the normal coding genes. At present, researchers [except the FractoGene explanation of Recursive Genome Function by Pellionisz] are unable to say anything with certainty about the supposed functions of these “non-coding” regions.

For the same reason, scientists shrink from the practice of inserting foreign genes into the normal gene sequence: they fear it could trigger the production of hitherto unknown proteins through uncontrollable spread. Still, numerous possibilities exist, given the striking resemblance of these sequences to normal genes. [That is, synthetic genomics will not really take off until regulation of recursive genome function is mathematically understood - AJP]

The future

The possibilities are endless. In fact, scientists speculate that these sequences could contain some kind of coded information for the future. While some ascribe dreaded diseases such as cancer to “junk DNA”, others, like Haig H. Kazazian, chairman of the genetics department at the University of Pennsylvania, have linked it to the evolution of new species.

A June 2004 Harvard Medical School report, from work on a “junk DNA” gene in yeast, presents evidence for a new find, the gene SRG1. It is believed to physically block the transcription of the adjacent “coding” gene, SER3. Studies done elsewhere suggest that non-coding DNA aids in the regulation of gene expression during development.

Over 700 studies in this arena point to a role for “junk DNA” as an enhancer of the transcription of proximal genes, while around 60 studies have shown that non-coding DNA can act as a silencer, suppressing the transcription of proximal genes. Yet others describe a function for non-coding DNA in regulating the translation of proteins.

The twist in the tale

Russian researchers such as Gariaev suggest that the non-coding genes may act at the quantum level. Studies by Gariaev and his group claim a chromosomal ability “to gyrate the polarization plane of its own radiated and occluded photons”.

The very idea that about 98% of the DNA belongs in the junk category is now seen as a misnomer. Biologists have long ignored the fact that the majority of biological species — complete energy economies in themselves — function on the principle of minimal energy expenditure. It would therefore run completely against the energy-saving character of these organisms to carry truly functionless “non-coding DNA” in their biological machinery.

Researchers now accept it as a reality that the supposedly unused DNA does have a function. What that function is, however, remains anybody's guess.

The mathematical link

What is emerging now is a new branch of genomics itself, supported by the likes of Andras Pellionisz, a biophysicist [formerly] at New York University. He has done some ground-breaking work in the field of biological neural networks and, based on the fractal geometry of cellular development, addresses the decisive role of recursive genome function [both at the level of peer-reviewed science publication and, having secured IP, by widely distributing the concept of "Recursive Genome Function" in a Google Tech Talk YouTube video (presently fetching 181,000 hits on a regular day, see below, with occasional peaks close to a million). The FractoGene concept was also embraced in the Churchill Club YouTube, as well as, upon invitation of George Church (a BoA of holgentech.com), at Cold Spring Harbor, 2009.] According to him, protein structures act back upon their genetic code, which is supported by many an observation and analysis of genome sequences.

This is undoubtedly a vast subject. But the amazing strides made in recent times have left everyone dumbfounded as, one after another, pieces of information unlock themselves. The above principles could find use in the customization of foods, drugs, cosmetics, chemicals, materials, etc., which could then be matched to our genome. This could prevent diseases from occurring, or even halt adverse conditions from progressing in their lethality.

[Algorithmic approach by A. Pellionisz, HolGenTech, Inc.]

[Cloud-solutions (this is by DNAnexus - in addition to the slew of such services) rushing to the market created by the "Dreaded DNA Data Deluge" need algorithms, e.g. for targetable structural variants (fractal defects) - AJP]

^ back to top


Biotech is back [in Korea - AJP]

JoongAng Daily
Korea
August 11, 2010

The [Korean - AJP] nation’s biotechnology and life sciences industries are hitting one milestone after another. Many of the biological and genetic experiments undertaken in laboratories more than a decade ago are finally ready to see daylight and hit drug store shelves.

According to the Korea Food and Drug Administration, a local biotechnology enterprise has completed clinical safety testing of a drug aimed at treating acute myocardial infarction - or heart attacks - and has filed for approval with health authorities. If it passes the screening, the company will be able to release the world’s first legitimate drug derived from stem cell technology.

Other biotechnology companies are also actively involved in developing disease-specific stem cells into drugs to treat and repair knee cartilage and spinal cord injuries. These companies may open the door to entirely new medical treatments and prop up the health care industry by releasing blockbuster drugs using human embryonic stem cells.

News of breakthroughs in the local bioengineering industry is coming from many areas. Treatment of single-gene maladies using an individual's genome is ready for commercialization. Thanks to advances in genome analysis technology, the cost of having a hospital study your gene sequence is expected to fall to $1,000 within two to three years. We may not be far from the day when we commonly receive accurate diagnoses of ailments and individually tailored treatments based on our genetic makeup.

Competition in the area of low-cost generic products marketed after the expiration of patents is also getting fierce. Large business groups like Samsung, LG, Hanwha and CJ are jumping into the fray to sell drugs with similar properties to biotech drugs whose patents are about to expire.

The local biotechnology industry has groped its way through a dark tunnel over the last 10 years. The gold hunt in the Kosdaq market and controversy over scientist Hwang Woo-suk’s fabrication of his stem-cell research also scarred and crippled the industry.

But news of technological headway is triggering excessive competition and reckless scouting of scientists and bioengineers. Authorities must now step in to redefine the industry and revisit the nation’s strategy for biotechnology.

The industry is a high-end market that can fuel the country’s future growth. The government should administer state projects in key areas and promote strategic alliances among companies. A new business model can be created by fostering partnerships between large corporate giants and upstart biotech companies.

[Wishing for some specifics - Andras Pellionisz (FaceBook) / Pellionisz_at_JunkDNA.com]

^ back to top


CLC bio [of Denmark] and PSSC Labs [California] Deliver Turnkey Solution for Full-Genome Data Analysis

Business Wire
Aug 12, 2010

AARHUS, Denmark, Aug 12, 2010 (BUSINESS WIRE) -- Today, CLC bio and PSSC Labs announced a new turnkey solution, CLC Genomics Factory, for assembly, read mapping, and subsequent downstream analysis of very large amounts of high-throughput DNA and RNA sequencing data. Built as a high-performance bioinformatics appliance, CLC Genomics Factory comes in three different sizes with varying numbers of compute nodes, capable of processing the data output from up to 10 Illumina HiSeq2000 or 7 Life Technologies SOLiD 4 systems.

Vice President Bioinformatics Solutions at PSSC Labs, Alex Lesser, states, "As we're already working with the leading instrument providers such as Roche 454, Life Technologies, and Illumina, it was a natural step for us to partner with the leading software provider within high-throughput sequencing data analysis, CLC bio. Based around their enterprise platform, we have tailored an extremely powerful turnkey solution for analyzing the vast amounts of data coming off all the different high-throughput sequencing instruments, including upcoming technologies such as Ion Torrent and Pacific Biosciences."

CEO at CLC bio, Thomas Knudsen, continues, "It was obvious for us to combine our expertise on high-performance bioinformatics algorithms and user-friendly software, with PSSC Labs' extensive experience in cluster solutions for the life science industry. We now provide the first and only turnkey solution for full-genome analysis of data from all types of high-throughput sequencing instruments. Our customers don't have to invest in a new cluster for each technology they wish to adopt: CLC Genomics Factory handles them all - now and in the future!"

CLC Genomics Factory is built around CLC bio's enterprise platform, including the award-winning CLC Genomics Server, as well as all of CLC bio's accelerated algorithms for full-genome assembly and analysis. Multiple licenses for CLC Genomics Workbench enable users to interface with the server software through either a user-friendly graphical user interface or, optionally, a command line interface. CLC Genomics Factory also includes support for CLC bio's Software Developer Kit, for those wishing to integrate third-party systems and software.

CLC Genomics Factory includes a master node, multiple job nodes, as well as varying amounts of storage. Read more about CLC Genomics Factory, including full technical specifications, at http://www.clcfactory.com [The specs reveal that in its complete configuration the system can analyze 32 full human genomes per week, i.e. ~5 hours per individual genome - about an order of magnitude more than will be required in biopsy-sequencing AND analysis - AJP]
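[A quick back-of-the-envelope check of the throughput arithmetic quoted above, assuming the cluster runs around the clock - Ed.]

```python
# Sanity check of "32 full human genomes per week ~ 5 hours per genome",
# assuming the cluster runs 24/7.
GENOMES_PER_WEEK = 32
HOURS_PER_WEEK = 7 * 24          # 168 hours

hours_per_genome = HOURS_PER_WEEK / GENOMES_PER_WEEK
print(f"{hours_per_genome:.2f} hours per genome")   # 5.25 -> "~5 hours"
```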

CLC Genomics Factory is sold by CLC bio and CLC bio resellers. The computer hardware is assembled, tested, distributed, and supported by PSSC Labs, which has extensive experience in delivering complex computer setups, with more than 1,000 running installations of its clusters across 35 countries around the world. CLC bio handles support for the software.

[The California-based hardware manufacturer, because of competitive funding requirements, is again forced to be quite specific, but we don't learn a lot here about the software created in Denmark - Andras Pellionisz (FaceBook) / Pellionisz_at_JunkDNA.com]

^ back to top


GenomeQuest and SGI Announce Whole-Genome Analysis Architecture

PRWeb
August 4, 2010

Immediate Availability of World's First Whole Genome Analysis Services for Researchers

Westborough, MA and Fremont, CA (Vocus) August 4, 2010

GenomeQuest and SGI (NASDAQ: SGI) today announced the immediate availability of the world’s first whole-genome analysis (WGA) services for researchers. As a result, pharmaceutical companies, core labs, biotechs, government agencies, and clinics now have direct access to whole-genome processing previously found only inside genome centers combined with comprehensive, self-serve analysis.

The WGA services allow whole-genome/exome research teams to:

Store, manage, and compare their sequences and annotations

Assemble and align sequences from any instrument

Interactively query and analyze their runs and projects

Merge and re-analyze with findings from colleagues and public studies

Use standard workflows, including Variant Detection, RNA-Seq, and ChIP-Seq

Build and query enterprise-wide variant archives (see the sketch below)
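[The "variant archive" idea in the last item can be made concrete with a minimal sketch; the table and field names below are illustrative assumptions, not GenomeQuest's actual data model - Ed.]

```python
# Minimal sketch of an enterprise-wide variant archive: one normalized row
# per variant call, queryable across all samples. Schema and names are
# illustrative assumptions, not GenomeQuest's actual data model.
import sqlite3

conn = sqlite3.connect("variants.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS variant (
        sample_id TEXT,
        chrom     TEXT,
        pos       INTEGER,
        ref       TEXT,
        alt       TEXT,
        PRIMARY KEY (sample_id, chrom, pos, ref, alt)
    )
""")

# Archive a variant call (coordinates here are placeholders).
conn.execute("INSERT OR IGNORE INTO variant VALUES (?, ?, ?, ?, ?)",
             ("patientA", "chr7", 55259515, "T", "G"))
conn.commit()

# Query across the whole archive: which samples carry a given variant?
for (sample,) in conn.execute(
        "SELECT sample_id FROM variant WHERE chrom=? AND pos=? AND alt=?",
        ("chr7", 55259515, "G")):
    print(sample)
```

A real archive would add genotype, quality, and annotation columns, but the essential design choice is the same: one normalized row per variant call, so that queries can cut across every sample in the enterprise.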

"Data analysis is recognized as the bottleneck of whole-genome research. Traditionally, researchers receive static reports for their sequence runs which, at today's volumes, are impossible to analyze and increasingly siloed,” said Jean-Jacques Codani, GenomeQuest Chief Scientific Officer. “From its inception, the GQ Engine has provided researchers with rich, interactive reports and the ability to integrate and re-analyze with other work. Now, with SGI's longstanding experience in high-performance computing, we have found the bow that best fits our arrow for WGA-scale services.”

GenomeQuest and SGI co-developed a software and hardware architecture that is optimized for next generation sequencing and enables whole-genome scale and performance. Based on this architecture, the WGA services are available through the just-upgraded GenomeQuest data center or deployed directly into a customer data center, as may be required by larger accounts, core labs, and clinics.

“Clearly, the storage and computational needs of WGA are massive and unique,” said Dr. Eng Lim Goh, senior vice president and chief technology officer at SGI. “Given the complexity of the algorithms and the scale of the data, success in this area requires a careful factoring then optimization across four key parts of the system -- software, computational and I/O capabilities, and burstability. We are very excited about the new GenomeQuest center, and applying this enabling blueprint for life science organizations.”

The GenomeQuest center was upgraded on May 18, 2010. A major investment, it radically improved the user experience and performance for over 2000 existing commercial and academic users.

“We have observed a radical change in the use of GenomeQuest since the opening of the new center,” comments Ron Ranauro, GenomeQuest CEO. "Our personalized medicine research application is processing thousands of exomes per month and will soon scale to over 1000 full-human genomes at deep-coverage.”

The SGI-based upgrade includes:

Multiple, load-balanced head nodes to service high-volume user requests and interactions

Rackable XE server stack of high-performance compute nodes to service highly parallelized sequence database comparisons

Storage solution featuring high-performance I/O subsystem scalable to Petabytes

Housed in a Type II SAS 70 compliant data center with fully redundant hardware and software for 24/7 availability

In related recent news at BIO-IT WORLD, GenomeQuest also announced GQ-PMR, the world’s first genomic reference system for personalized medicine-based research. With GQ-PMR, pharmaceutical companies can integrate raw data from public whole genome studies, such as the 1000 Genomes Project, directly into their private research. The combination allows research organizations to massively expand sample sizes at virtually no cost and accelerate their transition to molecular-based personalized medicine.

As background, in a July 8, 2010, feature story in USA Today titled "The human genome: Big advances, many questions", Vivien Bonazzi, head of computational biology for National Human Genome Research Institute at the National Institutes of Health, comments, "We're really crying out for the ability to analyze this (output of genome sequencing machines) efficiently and effectively."

[This illustration of a US (MA-based) software company (GenomeQuest) working with the Silicon Valley-based SGI (California) is meant to contrast the transparency of USA technologies (also applying to HolGenTech's "Genome Computing Architecture", which - having filed for IP protection - has even been broadcast in YouTube videos). Such dangerous transparency is necessitated in the USA by competitive funding pressures - and contrasts with the largely elusive (at times evasive) nature of certain comparable Asian efforts - Andras Pellionisz (FaceBook) / Pellionisz_at_JunkDNA.com]

^ back to top


Pacific Biosciences Expands into European Union

Pacific Biosciences - release
August 3, 2010

Terry Pizzie Appointed Vice President, Europe

UK-based Wellcome Trust Sanger Institute Becomes Early Access Customer

Menlo Park, Calif. – August 3, 2010

Pacific Biosciences, a private company developing a disruptive technology platform for real-time detection of biological events at single molecule resolution, today announced it has expanded its operations into the European Union with the appointment of Terry Pizzie as Vice President, Europe and with its first European customer, the Wellcome Trust Sanger Institute. Mr. Pizzie has more than 20 years' experience in a broad range of international commercial life sciences industry management positions. Prior to joining Pacific Biosciences he was Director of Global Commercial Operations (a board position) for Genetix (now part of the Leica Division of Danaher Corp.). Prior to joining Genetix in 2007, he was Senior Vice President of Commercial Operations from 2005-2007 at Biacore (now part of GE Healthcare), a market leader in protein interaction analysis in research, drug discovery, development and manufacturing. Before joining Biacore, he was Vice President Europe of Applied Biosystems (now part of Life Technologies Corp.) from 2002-2004, with responsibility for all European commercial operations including strategy, performance and operational efficiency. He joined Applied Biosystems in 1988 as a sales engineer and advanced through the organization in increasingly responsible positions, including Vice President of Sales and Marketing for Europe from 2000-2002. Mr. Pizzie holds a degree in Physiology and Biochemistry from the University of Reading.

“Terry has an exceptional track record of strategically managing the commercial success of leading life science organizations in Europe, and we are delighted that he will lead the establishment of our European operations,” said Hugh Martin, Chairman and Chief Executive Officer of Pacific Biosciences.

Pacific Biosciences also announced that the Wellcome Trust Sanger Institute has purchased a PacBio RS 3rd generation sequencing system as part of the company’s early access program. Earlier this year, Pacific Biosciences announced the first 10 customers as part of its North American early access program. These sites, which represent genome centers, cancer research institutions, commercial organizations, and universities, have begun receiving their instruments.

Today’s announcement reflects the company’s expansion of its early access program to a limited number of sites outside of North America.

[While PacBio has already made Full DNA Genome Analytics GLOBAL by establishing partnerships with Canadian and Danish developers (see DevNet below), the actual physical delivery of the now "lead-technology" nanosequencer equipment (by PacBio) onto foreign soil, starting with the UK, completes the globalization of the spread of sequencing machines (and, therefore, puts the produced full human DNA sequences outside the jurisdiction of the USA). For the previous generation of machines (Illumina Genome Analyzer, Roche/454 and Life Technologies' SOLiD), the global spread, especially to China, has already been a fact for quite a while - Andras Pellionisz (FaceBook) / Pellionisz_at_JunkDNA.com]

^ back to top


Pacific Biosciences launches PacBio DevNet at ISMB 2010

July 9, 2010 08:29
The Medical News

As part of its commitment to introducing third generation DNA sequencing technology to the market, Pacific Biosciences today announced the launch of a software developer's network - the PacBio DevNet - at the Eighteenth International Conference on Intelligent Systems for Molecular Biology (ISMB 2010).

Throughout the development of the company's Single Molecule Real Time (SMRT™) technology platform, Pacific Biosciences has been working closely with members of the informatics community to develop and define standards for working with single molecule sequence data. Now, as the company prepares for the commercial launch of the PacBio RS, it is launching a more formal program to support the needs of the informatics community.

The PacBio DevNet was created to support the ecosystem of academic informatics developers, life scientists, and independent software vendors interested in creating tools to work with PacBio's third generation sequencing data. Interested parties can sign up for the PacBio DevNet at www.pacbiodevnet.com, a hub for data sets, source code for algorithms, application programming interfaces (APIs), conversion tools to industry standard formats, and documentation related to SMRT sequencing.

Eric Schadt, Ph.D., Chief Scientific Officer for Pacific Biosciences commented: "Single Molecule Real Time sequencing introduces entirely new dimensions to data, such as a time component, that are unlike anything the bioinformatics community has encountered to this point. Therefore, in addition to a strong internal focus on informatics development, we are committed to supporting third-party software development and facilitating the rapid adoption of this new data type into the scientific community where the really exciting 'big science' can begin [already begun in 2002 - AJP] to happen."

At the ISMB conference, PacBio scientists and collaborators will present results from some of their informatics development efforts to date, including new algorithms tailored to the unique characteristics of the SMRT data, such as its long reads. Pacific Biosciences has also developed a suite of data management and analysis software tools that match the granularity, scalability and functionality of the PacBio RS. These informatics solutions are designed to integrate efficiently with the user's LIMS, making them accessible not only to high-end informatics researchers, but also to biologists and clinical researchers.
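[One concrete example of why long reads call for their own algorithms: assembly-oriented summary metrics such as N50 are dominated by the longest reads. A minimal sketch in plain Python; no PacBio-specific API is assumed - Ed.]

```python
# N50: the read length L such that reads of length >= L contain at least
# half of all sequenced bases. Long-read platforms are often summarized
# with this metric; plain Python, no PacBio-specific API assumed.
def n50(read_lengths):
    total = sum(read_lengths)
    running = 0
    for length in sorted(read_lengths, reverse=True):
        running += length
        if 2 * running >= total:
            return length
    return 0

# Toy read-length distribution (illustrative numbers only).
print(n50([900, 1200, 1500, 3000, 5000, 8000]))   # -> 5000
```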

"The release of PacBio's tools under an open source license and the launch of its Developer's Network will foster the creation of tools that maximize the value of SMRT sequencing," said Reece Hart, Chief Scientist, Genome Commons. "We look forward to contributing to a robust analytical ecosystem that allows more scientists to exploit the new possibilities enabled by this technology."

[The key to "industrialization of genome analytics" is precisely how some "open source license" system could be brought into a productive synthesis with "professional (and for-profit) private entrepreneurs". Our "Internet Boom" in Silicon Valley provided some very important lessons; e.g. the Internet browser started with Jim Clark founding Netscape - but, having failed to develop a secure business model around it, Netscape reverted into Mozilla (open source) - while Microsoft and now Google ran away with the software profit. (With Cisco to be mentioned as the winner in Internet hardware.) PacBio's DevNet requires registration, accepting their terms & conditions, but even the public interface reveals their "Partnership" with Amazon Web Services, BioTeam, CLC bio, GenoLogics, GenomeQuest and Geospiza - yet none of them is the Silicon Valley HQ'd "genome informatics pure-play" with a proprietary algorithmic approach to Recursive Genome Function, like HolGenTech, Inc. - Andras Pellionisz (FaceBook) / Pellionisz_at_JunkDNA.com]

^ back to top


Illumina Inc. et al. v. Complete Genomics Inc.

1:10-cv-00649; filed August 3, 2010 in the District Court of Delaware

• Plaintiffs: Illumina Inc.; Solexa Inc.

• Defendant: Complete Genomics Inc.

Infringement of U.S. Patent Nos. 6,306,597 ("DNA Sequencing by Parallel Oligonucleotide Extensions," issued October 23, 2001), 7,232,656 ("Arrayed Biomolecules and Their Use in Sequencing," issued June 19, 2007), and 7,598,035 ("Method and Compositions for Ordering Restriction Fragments," issued October 6, 2009) based on defendant's manufacture and sale of its Complete Genomics Analysis Platform products and services.

[This three-pronged lawsuit, aimed at core technologies, is an obvious setback not only for Complete Genomics, but for the entire "Full Human DNA Sequencing" industry. Immediately, it may adversely affect Complete Genomics' determination to go public at this time, since in some investors' eyes a relatively young company being sued by one of the leading and well-established companies (Illumina) would at the least lessen the financial success of a Complete Genomics IPO. A quick settlement looks unlikely, as Complete Genomics is capital-hungry, and Illumina, by demanding a "jury trial", looms large, possibly inflicting a multi-year and expensive process. Given that Complete Genomics has twice altered expectations (first, as quoted in my October 2008 YouTube, promising a "Google-type Data Analytics Center", and second, the service seems to be open to batches of hundreds of sequences, in a wholesale mode to R&D, rather than to the public) - Illumina (and Life Technologies) may stay longer as established providers with their "pre-nanosequencing technology" (the Genome Analyzer by Illumina and SOLiD by Life Technologies). This altered dynamics is likely to trigger a ripple of business decisions - Andras Pellionisz (FaceBook) / Pellionisz_at_JunkDNA.com]

^ back to top


I was wrong ...

At the Churchill Club event "Personal Genome Computing: Breakthroughs, Risks and Opportunities" on January 22, 2009, the question was asked in the Q&A period when the first genome computer game would appear. I said "In Two Years". I was wrong. In exactly One Year (January 22, 2010, that is, precisely half the time predicted) the following paper was submitted to Nature (accepted June 30, 2010):

Predicting protein structures with a multiplayer online game

Seth Cooper, Firas Khatib, Adrien Treuille, Janos Barbero, Jeehyung Lee, Michael Beenen, Andrew Leaver-Fay, David Baker, Zoran Popović & Foldit players

Nature 466, 756–760 (05 August 2010) doi:10.1038/nature09304

Received 22 January 2010 Accepted 30 June 2010

People exert large amounts of problem-solving effort playing computer games. Simple image- and text-recognition tasks have been successfully ‘crowd-sourced’ through games [1, 2, 3], but it is not clear if more complex scientific problems can be solved with human-directed computing. Protein structure prediction is one such problem: locating the biologically relevant native conformation of a protein is a formidable computational challenge given the very large size of the search space. Here we describe Foldit, a multiplayer online game that engages non-scientists in solving hard prediction problems. Foldit players interact with protein structures using direct manipulation tools and user-friendly versions of algorithms from the Rosetta structure prediction methodology [4], while they compete and collaborate to optimize the computed energy. We show that top-ranked Foldit players excel at solving challenging structure refinement problems in which substantial backbone rearrangements are necessary to achieve the burial of hydrophobic residues. Players working collaboratively develop a rich assortment of new strategies and algorithms; unlike computational approaches, they explore not only the conformational space but also the space of possible search strategies. The integration of human visual problem-solving and strategy development capabilities with traditional computational algorithms through interactive multiplayer games is a powerful new approach to solving computationally-limited scientific problems.
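[The "optimize the computed energy" loop at the heart of Foldit and Rosetta can be illustrated with a toy Metropolis / simulated-annealing walk on a one-dimensional energy landscape. This is only a sketch of the general search idea, not Foldit or Rosetta code; the landscape and move size are arbitrary assumptions - Ed.]

```python
# Toy Metropolis / simulated-annealing walk: the general "optimize the
# computed energy" loop that Foldit players and Rosetta protocols build on.
# The 1-D landscape and move size are arbitrary stand-ins for a real
# protein force field and conformational moves.
import math
import random

def energy(x):
    return 0.1 * x * x + math.sin(5 * x)   # rugged, many local minima

def anneal(x=4.0, temp=2.0, cooling=0.999, steps=20000):
    best_x, best_e = x, energy(x)
    e = best_e
    for _ in range(steps):
        x_new = x + random.gauss(0, 0.2)    # propose a random move
        e_new = energy(x_new)
        # Metropolis criterion: always accept downhill, sometimes uphill.
        if e_new < e or random.random() < math.exp((e - e_new) / temp):
            x, e = x_new, e_new
            if e < best_e:
                best_x, best_e = x, e
        temp *= cooling                     # slowly cool the search
    return best_x, best_e

random.seed(0)
print(anneal())   # ends near a deep minimum of the toy landscape
```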

^ back to top


"Recursive Genome Function" - winner takes it all

[Screenshots of Google, Aug.4, 2010 - AJP]

The astonishing fact overnight is not that "Recursive Genome Function" inched forward from 160,000 by two thousand hits.

"The Winner Takes It All" is shown by the obsolete axioms dropped overnight by almost thirty thousand hits (combined). An almost 10% daily drop of the stock market index halts trading. A more common example: You never want to take away toys, even old ones, from those (not scientists) playing in a sandwalk. They scream from the top of their lungs. The thing to do is to offer a new (and better) toy. The obsolete ones are quietly dropped by those grabbing the novelty, while those who drop the old but fail to grasp the new show all the signs of hard times - FaceBook: Andras Pellionisz, or email Pellionisz_at_JunkDNA.com

^ back to top


Pellionisz' "Recursive Genome Function" supersedes both obsolete axioms of "Central Dogma" AND "Junk DNA"

[Screenshots of Google, Aug.3, 2010 - AJP]

At the tenth Anniversary of The Human Genome Project, there is unanimous agreement among scientists worldwide that there is a tremendous intellectual void between our ability to read the Full Human DNA (all 6.2 Bn A,C,T,G-s of a diploid human genome) on one hand, and our (limited) ability to write new sequences, on the other. It is not the greatest problem that both our reading (e.g. precision) and our synthesis (size) need improvement. E.g. synthetic sequences presently must be small enough in size, and in difference from existing forms of life, to remain "stealth" to Genome Regulation, such that they can be synthesized and even start to function (nobody knows through how many generations, since they have been put into a freezer...).

By far the most critical void is the present, almost complete, lack of a professional theoretical foundation for genome informatics. This coming intellectual void was actually predicted by Francis Crick when he attempted to temporarily shore up his collapsing "Central Dogma" (upheld from his promulgation in 1956 till his passing in 2004): "If it were shown that information could flow from proteins to nucleic acids, he said, then such a finding would 'shake the whole intellectual basis of molecular biology'" (Crick, 1970). Ohno (1972) rushed to save the establishment with his misnomer that even if there were such recursion, it would only find "Junk DNA" in the 98.7% non-genic part of the genome, which is there, he falsely claimed, for "the importance of doing nothing" (p.367). By now, the primitive and obsolete "genes-plus-Junk" school is what Venter calls flatly "a frighteningly unsophisticated view of genome function". What happened to Dr. Collins' call, upon concluding ENCODE in 2007, that "the scientific community has to re-think long-held beliefs"?

A lucky few did not have to re-think "Central Dogma" and "Junk DNA" - since they never believed them in the first place. Thus I had a decade-long lead to build The Principle of Recursive Genome Function (2008) - which elaborates my FractoGene concept (2002). Nobody seems to argue "no, the genome function is not recursive" - that would be pretty foolish. If some think "yes, it is recursive, but not fractal", one would recommend the recent Science cover issue (Oct. 9, 2009): "Mr. President, the Genome is Fractal!" (Lander et al., 2009). Though the Pellionisz Principle was quoted by a dozen or so authors within 6 months of its publication in a peer-reviewed science paper, totally independent authors are joining the fray (Jean-Claude Perez, France, 2010; Borros M. Arneth, Germany, 2010) - before it will (soon) be everywhere "that we have all been saying that the genome function was recursive, and in fact was based on my thoughts of fractal recursive iteration...". People might forget that it was my double "lucid heresy" to put to rest the two ruling false axioms that hindered genome informatics for over half a Century - by discovering and publishing (now applying...) the more advanced Principle of "Recursive Genome Function".
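[What "fractal iterative recursion" means computationally can be illustrated with a toy L-system: a compact rewrite rule unfolds, by recursion alone, into an ever more detailed self-similar branching structure, of the kind used to model plant and neuron-like arborizations. A purely illustrative stand-in, not the FractoGene method - Ed.]

```python
# Toy L-system: one compact rewrite rule, applied recursively, unfolds into
# an exponentially more detailed self-similar branching string (as used to
# model plant and neuron-like arborizations). Purely illustrative; NOT the
# FractoGene method.
RULES = {"F": "F[+F]F[-F]F"}   # a single branching rewrite rule
AXIOM = "F"

def grow(axiom, rules, generations):
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

for gen in range(4):
    print(f"generation {gen}: length {len(grow(AXIOM, RULES, gen))}")
# Lengths 1, 11, 61, 311: structured detail grows from a tiny rule set
# by recursion alone.
```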

Some may ask why I make such a strong point about the theoretical foundation of full (holo)genome function (including the epigenomic pathway through which extrinsic proteins make the hologenome "an open loop" - The Circle of Hope).

The primary of the two strong reasons is that most of the general public (and unfortunately, even some scientists) maintain that "our genome is our destiny"; there is nothing we can do about it, and it deterministically defines our future. Clearly, this totally mistaken attitude stems from Crick's "Central Dogma", which sees a straightforward DNA>RNA>PROTEIN genome function that "dead-ends" with proteins, and assumes the genome lacks even the possibility of a PROTEIN>DNA recursion. The deterministic philosophy also has deep roots in natural sciences, since e.g. in physics it was a mere 83 years ago (a footnote in the over 2,000 years of physics) that the Heisenberg Uncertainty Principle was put forward. Physics today is probabilistic (rather than deterministic), but such a profound change of attitude (in fact, philosophy) certainly does not happen overnight in postmodern genomics - or even biology. Most unfortunately, the old and gloomy (and false) legacy often prevails, e.g. in discouraging people from taking advantage of genomic tests: "why should I know, if nothing can be done". Once I published The Principle of Recursive Genome Function (2008), I rushed to disseminate "the Circle of Hope" widely over Google Tech Talk YouTube - such that the public picks up a new wave of positive motivation based on rock-solid new science.

Unfortunately, the public's view greatly affects both government R&D programs and the sustainability of private-domain R&D and applications. Those who paid $3 Billion in taxes for sequencing a single Full Human DNA, only to hear the same scientists openly declare a decade later that they don't have the slightest idea how to understand it, could significantly tone down government funding enthusiasm. Even at today's "point of inflection", when the Private Sector is taking over, the stakes are enormous: either the Genome Informatics Industry gears up for "Brute Force Approaches" (which can always be done, albeit at horrific expense) - or the processing of experimental data is guided (actually, predicted) by the best theories. No sane person (or Country...) would build enormous accelerators if nuclear physics were not available to interpret the trajectories. Yet there are $Billions invested in "DNA sequencing technologies" - while theories doing away with half-a-Century-old dogmas often go overlooked everywhere but on the Google search engine, where "Recursive Genome Function" is beating (at some peaks by close to a million hits...) both the stone-age unsophistication of "Central Dogma" and "Junk DNA" - Pellionisz_at_JunkDNA.com

^ back to top


Mountain View's Complete Genomics to make Wall Street debut

Silicon Valley Mercury News
By Scott Duke Harris

Posted: 07/30/2010 07:39:49 PM PDT
Updated: 07/30/2010 09:03:44 PM PDT

A Mountain View startup that aims to drive the cost of sequencing a human genome to below $1,000 in hopes of advancing health care is preparing for a Wall Street debut and staking its claim to the ticker symbol GNOM.

In documents filed Friday with the Securities and Exchange Commission, Complete Genomics signaled its intent to make an initial public stock offering on the Nasdaq, hoping to sell more than $86 million in stock.

Startups that register for Wall Street debuts often postpone or withdraw such plans. Given the uncertain market for IPOs and excitement surrounding the commercial potential of DNA research, Complete Genomics' effort figures to attract considerable attention.

All life-forms carry a genome, a set of chromosomes that is a full reflection of their hereditary traits. Complete Genomics is among a group of Silicon Valley startups at the cutting edge of DNA research — an endeavor that promises to advance human health while also playing a role in agriculture and biofuel development.

Complete Genomics' proprietary, assembly-line-like laboratory operation began sequencing human genomes about a year ago. The company has focused on doing large studies on human genomes for customers such as Genentech, the pharmaceutical giant Pfizer and major research institutions.

Sequencing the human genome was pioneered a decade ago at a cost of about $500 million for one genome. The expense has been an obstacle for researchers who are trying to decode patterns that correspond to maladies known to have genetic roots. Driving down the cost — the central mission of Complete Genomics' technology and business model — has been critical to performing large studies.

"It's all about scale. Sequencing one human genome is a scientific curiosity," Clifford Reid, co-founder and CEO of Complete Genomics, said in an interview with the Mercury News in September. That month, when the company announced it had sequenced a batch of 14 genomes, "we probably doubled the number of known genomes in the world," Reid said.

Last May, when Complete Genomics started commercial operations and after medical researchers published a report in the journal Science based on genomes it had sequenced, Reid was quoted as saying that his company was processing 500 genomes a month and had dropped the cost to $4,000.

In its filing with the SEC, Complete Genomics asserted that it has "optimized" its technology platform and is "able to achieve accuracy levels of 99.999% at a total cost that is significantly less than the total cost of purchasing and using commercially available DNA sequencing instruments."

It also touted "significant competitive advantages" as an independent lab: "Because our technology resides only in our centralized facilities, we can quickly and easily implement enhancements and provide their benefits to our entire customer base. We believe that we will be the first company to sequence and analyze high-quality complete human genomes, at scale, for a total cost of under $1,000 per genome."
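[To put the quoted 99.999% accuracy in perspective: at whole-genome scale even that error rate leaves tens of thousands of miscalled bases. A one-line calculation, with the diploid genome size rounded to the 6.2 billion bases cited elsewhere in this archive - Ed.]

```python
# What 99.999% per-base accuracy implies at whole-genome scale
# (diploid size rounded to 6.2e9 bases, as cited elsewhere in this archive).
DIPLOID_BASES = 6.2e9
ACCURACY = 0.99999

expected_errors = DIPLOID_BASES * (1 - ACCURACY)
print(f"~{expected_errors:,.0f} erroneous base calls per genome")   # ~62,000
```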

[Assuming that the uncertain economic conditions allow the IPO to happen, this is wonderful news - both for Complete Genomics and the rest of the world. While there are public companies that do DNA sequencing (best known are Illumina and Life Technologies), Complete Genomics aptly celebrates the 10th Anniversary of The Human Genome Project by becoming the first "pure play" human DNA sequencer company, using "assembly-line" mass-production by molecular nanosequencing. The planned IPO seems to be timed to pre-empt the company's main rival, Pacific Biosciences (of Menlo Park, CA), flooding the market with its sequencing machines. Whether the market will validate Craig Venter's view (see below) that full human DNA sequences provide "useless information" (without Analytics), or whether the ramping up of full human DNA interpretation (also by HolGenTech, Inc.'s stealth-mode breakthrough) will contribute to a successful IPO, is critical e.g. for Silicon Valley jobs. - "Andras Pellionisz" (FaceBook), Pellionisz_at_JunkDNA.com]

^ back to top


SPIEGEL Interview with Craig Venter: 'We Have Learned Nothing from the Genome'
Spiegel
07/29/2010

[This interview deserves to be amply commented upon, especially since Dr. Venter (not Mr. Venter...) recently made this statement: "We're at a frighteningly unsophisticated level of genome interpretation." I will offer my comments as my personal opinion of Dr. Venter's statements below, based on my long-expressed overall assessment that "Venter is the Tesla of PostModern Genomics" - Comments on my remarks can be posted at my FaceBook ("Andras Pellionisz") or emailed to Pellionisz_at_JunkDNA.com]

---

Ten years ago, Craig Venter had plenty of reason to feel triumphant. Standing at the White House together with his rival Francis Collins of the National Institutes of Health as well as then-President Bill Clinton and former British Prime Minister Tony Blair, he announced the successful sequencing of the human genome. The historic press conference marked the end of a bitter race between Venter's firm Celera and the Human Genome Project, a government-sponsored consortium of around 1,000 scientists from around the world. Both groups had technically mapped the genome, but Venter's team had done it faster and cheaper. Since then, multimillionaire Venter, 63, has established a reputation within the scientific community for being a rebel. It's an image he appears to relish, and he stuns the world again and again with one brash victory after another. He is currently sailing around the world in his Sorcerer II research yacht documenting the genetic diversity of the world's oceans. He recently departed Valencia, Spain, to begin an expedition in the Mediterranean Sea. In May, he announced that his team had produced the world's first bacteria with a synthetic genome.

[Dr. Venter's perspective from sailing around the Globe might be motivated by the famous saying: "Be careful what you wish for, since it might come true"... Craig might feel that glorious emptiness, now for the second time, of having accomplished the unthinkable, and "there is nothing simpler than a problem solved" (Faraday). Ten years ago he successfully competed, in a single-handed manner (except computers...), with a world-wide consortium in the full DNA sequencing of a human. Now he has succeeded, after 15 years of effort - much longer than he anticipated - in synthesizing a functioning genome that was not the exact duplication but a slight modification of a tiny DNA, and could implant it in a cell that accepted the "newcomer" DNA. Success is, like arriving at the mountain-top, a funny feeling. Glorious, yes, "mission accomplished" - but from the hilltop the next mountain presents a perhaps even more daunting challenge. Accomplishing the unthinkable can thus also be disappointing; a problem solved may reveal that the questions it opened are even more formidable than what has just been answered. Venter said recently: "We're at a frighteningly unsophisticated level of genome interpretation". He acquired the entire character-sequence of the "big Russian novel" (quote from Esther Dyson) but realizes, much more keenly than most, that our "100-word dictionary" (which may be Chinese-English...) fails miserably to understand, let alone enjoy, the meaning of the Russian novel that myriads of characters tell us in an unknown language.

Craig is undoubtedly very aware of what's missing - actually both in making sense of the sequences and in synthesizing new, complex and big synthetic genomes. We cannot go from "reading" to "writing" without "understanding". Booting the modified, synthesized, tiniest free-living organism could not succeed while the basic principles of genome function regulation remained "frighteningly unsophisticated" for most. Since Craig is not (yet?) frontally attacking the "next big problem" (of the analytics of genome regulation), some of his comments perhaps unduly play down the significance of what has actually been achieved - and his self-contradiction can at times be easily spotted. Maybe the journalistic title "we have learned nothing from the genome" picked up on a temporary disappointment, a "let-down" after yet another major accomplishment. Venter says below that (a truly frighteningly unsophisticated "genome function theory") expected maybe up to 300,000 "genes" in the human genome. We absolutely did learn from his (and the Consortium's) results that this primitive assumption was totally wrong, didn't we?

Within a year after the human DNA was sequenced, the mouse DNA showed that 98% of the (mere 20,000 or so) "genes" are functionally identical in human and mouse (and by now we realize that the "basic building materials" are astonishingly similar in practically all species, from the roundworm to human). For a biophysicist who devoted his oeuvre to the intrinsic mathematics of biology (and explained brain function for cerebellar neural nets by tensor geometry), the misnomer of 98.7% "Junk DNA" was dead on arrival by 2002. With sequencing techniques becoming widespread and available, we absolutely did learn, too, that the DNA>RNA>PROTEIN "forward model" of the genome is just absurdly incorrect, didn't we? Genomics and Epigenomics, therefore (particularly through the experimental investigation of methylation), are almost universally accepted as two sides of the same coin. Thus, the "Central Dogma" (having died a thousand deaths) was put to rest, along with JunkDNA - leading to The Principle of Recursive Genome Function - explaining e.g. the fractal growth of cerebellar Purkinje neurons as governed by the fractal DNA, through the epigenomic channels methylating auxiliary information in the "non-coding DNA" upon perusal in fractal iterative recursion. Perhaps we should have listened to Crick: "If it were shown that information could flow from proteins to nucleic acids, he said, then such a finding would 'shake the whole intellectual basis of molecular biology'" (Crick, 1970). What happened to that earthquake? What happened to Dr. Collins' call (wearing his scientist's hat, upon concluding the ENCODE project in 2007 that he put together wearing his administrator's hat) that "the scientific community has to re-think long-held beliefs"? We did learn from the genome, did we not, that the genome itself (because of the epigenomic "open loop" - the "Circle of Hope") provides us with "probabilities" rather than certainties - but who says that "probability theory" equals "useless information"? When will the philosophy sink in (as it did with the Heisenberg Uncertainty Principle) that sophisticated science at times must proceed in a way that shakes the whole intellectual basis?

In the afterglow following a second accomplishment of the unthinkable, one may be overly pessimistic on particular details. A good example is the statement that, knowing his genome, Dr. Venter could not use it for anything. He is on record, however, that as early as during the process of his full DNA sequencing (since he was one of the five DNA donors) he started to take statins, as the genomic signature of the sample in which he was dominant warranted controlling his cholesterol. The "close to zero" medical benefit from genomic knowledge is demonstrably untrue for individuals who showed an elevated probability of e.g. colon cancer, followed up with colonoscopy, and had small, pre-cancerous polyps removed.

Because of his double towering accomplishments, Dr. Venter is entitled to vocalize that "the more we know, the more we realize what we don't know". Those with lesser achievements cannot afford to be less than optimistic. - You can comment at my FaceBook wall "Andras Pellionisz" or email Pellionisz_at_JunkDNA.com]
---

The world's first bacterium with a synthetic genome was even coded with an e-mail address. [Is SPAM-ing a genome a brand-new epigenomic effect, or what? - AJP]

In a SPIEGEL interview, genetic scientist Craig Venter discusses the 10 years he spent sequencing the human genome, why we have learned so little from it a decade on and the potential for mass production of artificial life forms that could be used to produce fuels and other resources.

SPIEGEL: Mr. Venter, when the elite among gene researchers undertook the decoding of the human genome, you were their greatest enemy. They called you "Frankenstein," "blood sucker," "Darth Venter" and even "asshole." Why do you attract so much hostility?

Venter: Well, nobody likes to be beaten -- by superior intelligence, planning and technology. That gets people upset.

SPIEGEL: Every area of science is competitive. But it doesn't lead to that kind of hostility in all areas.

Venter: The human genome project was completely different, it was supposed to be the biggest thing in the history of biological sciences. Billions in government funding for a single project -- we had never seen anything like that before in biology. And then a single person comes along and beats scientists who have been working on it for years. It is no wonder they didn't like that.

SPIEGEL: Wasn't it more the case that your opponents were afraid that you, as a profit-oriented entrepreneur, would make the human genome your own private property?

Venter: That is totally absurd; and you know it. Initially, Francis Collins and the other people on the Human Genome Project claimed that my methods would never work. When they started to realize that they were wrong, they began personal attacks against me and made up these things about the ownership of the genome. It was all absurd.

SPIEGEL: So it was all just propaganda?

Venter: At the end of the day, it is an argument over nothing. But this battle between common good and commerce -- that is the kind of story that sells newspapers.

SPIEGEL: Was the importance of gene patents, which fueled the dispute, exaggerated?

Venter: First of all, nobody has made any serious money off patents on human genes except patent attorneys. Second, I do not hold any patents on human genes. You can do a patent search. Then you can convince yourself.

SPIEGEL: On June 26, 2000, you had a major event -- you met with Francis Collins at the White House …

Venter: … yeah, it was obviously a big historic event. It was pretty stunning, making an announcement at the White House to the entire world. It was a big triumph for me and my team because it proved that we had won.

SPIEGEL: At the time, none of you had won. Nobel Prize recipient John Sulston, one of the researchers of the government-funded genome project wrote …

Venter: … What was his quote? That he and his people were a bunch of phonies who had nothing?

SPIEGEL: In essence, he wrote that you both had nothing.

Venter: He had no idea what we had. Sulston has proven he is not the most credible source on anything other than his own data. He said they were a bunch of phonies, we have to take him at his word on that.

SPIEGEL: It seems to have been the only time in history that a new scientific discovery was announced officially by the government. How did that unusual agreement in the White House take shape?

Venter: It was a political compromise because the people at the public Human Genome Project were afraid we would announce what we had. And we were afraid they would use the White House to make it look like they had won.

SPIEGEL: It appeared at the time that you had agreed to be undecided. Do you now view yourself as the winner of the race?

Venter: I don't think it really matters.

SPIEGEL: The New York Times later declared the public Human Genome Project to be the victor. Can you really claim that you don't care?

Venter: Oh, the New York Times! How do you define the "winner" in this case? What is decisive is that it is our data that is in the databases -- not the data the consortium put together back then.

SPIEGEL: The genome project has been called the Manhattan Project or Moon Landing of its era. It has also been said that knowledge of the genes will change the future of humanity and become a "main driver of the world economy."

Venter: Who said that? I didn't. That was the people at the consortium.

SPIEGEL: You're wrong. You made all those statements in an interview with DER SPIEGEL in 1998.

Venter: Really? Those are Francis Collins' lines. So I may have said that that's how he describes it. I, on the other hand, have always said, "This is a race from the starting line to the finish."

SPIEGEL: The genome project hasn't just raised hopes -- but also worries. Do you understand those concerns?

Venter: Yes. There are two groups of people. People either want to know the information or they prefer to live like an ostrich with their head in the sand, not knowing anything. The fear is based on the ill-founded belief that those who know the DNA sequence also know every aspect of life. This nonsense has been spread by the same geneticists who were afraid of the commercialization of this stuff. From the time of the first few discoveries of gene defects -- Huntington's disease, for example, everybody thought that if you knew your genome, you would know when you would die and what you would die from. That is nonsense.

SPIEGEL: So the significance of the genome isn't so great after all?

Venter: Not at all. I can tell you from my own experience. I put my own genome on the Internet. People had the notion this was the scariest thing out there. But what happened? Nothing.

SPIEGEL: Nevertheless, Jim Watson, the co-discoverer of the DNA double helix, has said he doesn't want to know which variant of the so-called ApoE gene he has -- it could say something about his risk for developing Alzheimer's, and he's afraid of that …

Venter: That was silliness. At that age? Watson is over 80.

SPIEGEL: Are you interested in finding out what ApoE variant you have?

Venter: I know it. And according to it, I have a slightly increased risk for Alzheimer's disease. But it impresses me little because I could have dozens of other genes that counteract it. Because we do not know that, this information is meaningless.

Part 2: 'We Couldn't Even Be Certain from my Genome What My Eye Color Was'

SPIEGEL: And what about the fears about the abuse of gene data through insurers or employers, for example? Do you see that as sheer hysteria?

Venter: Abuse is not a question of whether the data is available. It is an issue of laws. You can't do anything to change the availability of genetic data. Look at this bottle that you have touched -- that's all I need to obtain your entire genetic information.

SPIEGEL: How much would you be able to learn about us by doing so?

Venter: If anything, we don't really know how to read the genome and it can't tell us very much right now. So what's the ethical debate about?

SPIEGEL: The decoding of your personal genome has so far revealed little more than the fact that your ear wax tends to be moist.

Venter: That's what you say. And what else have I learned from my genome? Very little. We couldn't even be certain from my genome what my eye color was. Isn't that sad? Everyone was looking for miracle 'yes/no' answers in the genome. "Yes, you'll have cancer." Or "No, you won't have cancer." But that's just not the way it is.

SPIEGEL: So the Human Genome Project has had very little medical benefit so far?

Venter: Close to zero, to put it precisely.

SPIEGEL: Did it at least provide us with some new knowledge?

Venter: It certainly has. Eleven years ago, we didn't even know how many genes humans have. Many estimated that number at 100,000, and some went as high as 300,000. We made a lot of enemies when we claimed that there appeared to be considerably fewer -- probably closer to the neighborhood of 40,000! And then we found out that there are only half as many. I was just in Stockholm for the 200th anniversary of the Karolinska Institute. The first presentation was about the many achievements the decoding of the genome has brought. Then I spoke and said that this century will be remembered for how little, and not how much, happened in this field.

SPIEGEL: Why is it taking so long for the results of genome research to be applied in medicine?

Venter: Because we have, in truth, learned nothing from the genome other than probabilities. How does a 1 or 3 percent increased risk for something translate into the clinic? It is useless information.

SPIEGEL: There are hundreds of hereditary diseases that can be traced to defects in individual genes. You can determine a lot more than just probabilities through them. But that still hasn't led to a flood of new treatments.

Venter: There were false expectations. Take Ataxia telangiectasia, for example, a horrible disease. The nervous system degenerates, and people who have it often die in their early teens. The cause is a defect in a single gene, but it is a developmental gene. If your body is built in the wrong way, then you can't just take a magic pill to rebuild it. If your brain is wired wrong, then it is wired wrong.

SPIEGEL: Who is to blame for those false expectations?

Venter: We were simply always looking at single genes because they were the only genes we had. When people lose their keys at night, they look under the lamp post. Why? Because that's where you can still see something.

SPIEGEL: But the keys are really located in the dark?

Venter: Exactly. Why did people think there were so many human genes? It's because they thought there was going to be one gene for each human trait. And if you want to cure greed, you change the greed gene, right? Or the envy gene, which is probably far more dangerous. But it turns out that we're pretty complex. If you want to find out why someone gets Alzheimer's or cancer, then it is not enough to look at one gene. To do so, we have to have the whole picture. It's like saying you want to explore Valencia and the only thing you can see is this table. You see a little rust, but that tells you nothing about Valencia other than that the air is maybe salty. That's where we are with the genome. We know nothing.

SPIEGEL: Do you think there will be a time when you can extract all this information to yield real medical results?

Venter: For that to happen we need a lot more information: Information about your body's chemistry, your physiology, your complete medical history, your brain and your entire life. We would need to do that a million times on different people and correlate that data with their genetic information.

SPIEGEL: Will that lead in the end to the kind of personalized medicine that genetic researchers have always touted? Each person would get his or her own personal treatment that is tailored precisely to that person's genetic make-up?

Venter: That was another one of these silly naïve notions that was out there. It's not, 'Oh, we know your genome, we're going to make this drug for you.' That will never happen. It is more important that you use the information in the genome about your personal risks and reduce them through intelligent behavior.

SPIEGEL: You have complained about how naïve genome researchers were in the beginning. Will future generations eventually make fun of us in the same way for how naïve we still are today?

Venter: Only time will tell. Nevertheless, we now have what is going to be one of the most important tools for interpreting the human genome: the first synthetic cell. It will enable us to ask questions that would have been inaccessible before.

SPIEGEL: When no progress was made through the reading of the genome, you shifted to rewriting it. You synthesized the entire genome of a bacterium and used it in another cell. How is the microbe you created doing today?

Venter: It's sitting in a freezer, doing extremely well. We'll keep it for the historians.

SPIEGEL: You stored a code in the genome of this cell. Has anyone decoded it?

Venter: Yes, it is the first genome in the world to include an e-mail address. So far, 50 scientists have cracked the code and answered us.

Part 3: 'We Don't Need Any More Neanderthals on the Planet'

SPIEGEL: Many fear what might happen if humans craft new life forms. They repeatedly say that you are playing God …

Venter: Yes, and I find them frightening. I can read your genome, you know? Nobody's been able to do that in history before. But that is not about God-like powers, it's about scientific power. The real problem is that the understanding of science in our society is so shallow. In the future, if we want to have enough water, enough food and enough energy without totally destroying our planet, then we will have to be dependent on good science.

SPIEGEL: Some scientists don't rule out a belief in God. Francis Collins, for example …

Venter: … That's his issue to reconcile, not mine. For me, it's either faith or science -- you can't have both.

SPIEGEL: So you don't consider Collins to be a true scientist?

Venter: Let's just say he's a government administrator.

SPIEGEL: When can we anticipate seeing the next tailor-made microbes from your laboratory?

Venter: Well, the goal is multifold. We have to start by creating minimal cells. A human cell is too complex -- we have no idea how any human cell works. We don't even know how the simplest bacterial cell works. We want to learn what the minimum cellular components are, so we're going to be taking out all the non-essential genes. But we're also trying to design new life forms for energy production, capturing carbon dioxide or to produce chemicals.

SPIEGEL: Wouldn't it be easier to modify existing bacteria using the established methods of biotechnology?

Venter: It isn't that simple. For example, there is no other way of creating a minimal cell. You can only add or take out genes at will if you have built the genome from scratch.

SPIEGEL: How long does it take to create such new forms of cells?

Venter: Right now we have the technology to make several a day, and the goal is to make a million a day.

SPIEGEL: How long will it be until the life forms you have created start producing fuel for our cars?

Venter: Not only gasoline. Plastic, asphalt, heating oil: Everything that we make from oil will at some point be made by bacteria or other cells. Whether that is in five, 10 or 20 years is unclear. Why don't we have fuel now other than alcohol from microbes? It's because nothing evolved that can produce great amounts of biofuel out of CO2. That's why we have to make it.

SPIEGEL: ExxonMobil, at the very least, appears to be convinced by your vision …

Venter: … yes, they are investing $600 million in the project, with half going to our partnership. It's a good round number. It's the same money that PerkinElmer gave me to decode the human genome. With it, we sequenced the human genome in nine months instead of many, many years. The public money that flowed into the Human Genome Project, above all, created an enormous, inflexible bureaucracy. And it is only because of private money that we can now sail across the ocean with this sailboat and discover 40 million genes -- there are only 41 million genes known to all of science. All you need are a few innovative ideas and independent funding to allow you to do things that other people can only dream about.

SPIEGEL: It took eight years from the time the first bacterial genome was decoded until the human genome was completed. How much time will elapse between the creation of the first synthetic bacterium and the creation of the first synthetic human?

Venter: There is currently no reason for us to synthesize human cells. I am, for example, a fan of the work that was done a short time ago that led to the decoding of the Neanderthal genome. But we don't need any more Neanderthals on the planet, right? We already have enough of them.

SPIEGEL: Mr. Venter, we thank you for this interview.

Interview conducted by Rafaela von Bredow and Johann Grolle

^ back to top


GenePlanet in Europe makes Genome Testing Global

GenePlanet.com
Headquarters:
Gene Planet Limited
Upper Pembroke Street 28
Dublin
Ireland

Logistics centre and research:
GenePlanet d.o.o.
Technology park Ljubljana
Teslova 30
Slovenia

The GenePlanet genetic testing service starts with us sending you a saliva collector by mail. In the laboratory we extract your genetic material, which is used to perform the analysis. As a result you will find out which diseases you are susceptible to, what effect certain medications have on you, what your talents and special abilities are, and who your ancestors are.

Your personal genetic testing is now available at $549 / €399.

A list of diseases, talents and medications, tested by GenePlanet

Genetic testing for disease susceptibility

ARMD
Alzheimer's disease
Ankylosing spondylitis
Asthma
Coronary artery disease
Atrial fibrillation
Bipolar disorder
Breast cancer
Celiac disease
Crohn's disease
Depression
Dyslexia
Endometrial cancer
Gallstones
Hypertension
Long QT interval
Lung cancer
Multiple sclerosis
Peripheral arterial disease
Prostate cancer
Psoriasis
Restless leg syndrome
Rheumatoid arthritis
Type 1 diabetes
Type 2 diabetes

Genetic testing for medication response

Beta blockers and heart
Beta blockers and tension
Efficacy of Aspirin
Headache and triptans
Statins against heart attack
The effect of antidepressants
The secret of Viagra

Genetic testing for traits and talents

Alcohol flush reaction
Avoidance of errors
Birth weight
Bitter taste perception
Earwax type
Effect of breastfeeding on IQ
Episodic memory performance
Eye colour
Fat intake and BMI
Freckles
HDL cholesterol level
Lactose intolerance
Malaria resistance
Measures of intelligence
Memory of older people
Muscle explosiveness
Norovirus resistance
Odour detection
Pain sensitivity
Skin colour
Nicotine dependence

With GenePlanet genetic testing, you can reveal what is written in your genes; it helps you understand the effect of genes on your life and advises you how to make the most of your genetic advantages.

[GenePlanet released the Genome Testing genie completely out of the bottle. Silicon Valley is losing jobs left and right. Meanwhile, the Seoul (Korea) DTC picked up the Asian market. Now, after DeCodeMe in Iceland, GenePlanet has metastasized the Genome Testing business to Europe, with its headquarters in Ireland and labs in Slovenia -- a country some politicians can't even distinguish from Slovakia. Whether US inventors (etc.) like it or not, Genome Testing DTC is beyond the control of the US - Pellionisz_at_JunkDNA.com]

^ back to top


Pfizer to Study Liver Cancer in Korean Patients with Samsung Medical Center

GEN
Jul 14 2010

Pfizer formed a research partnership with Samsung Medical Center to generate gene-expression profiles of tumors from Koreans with liver cancer. The hope is that the findings will lead to targeted therapeutics that can be used not just in Korea but also in the rest of Asia.

A research team led by Samsung Medical Center scientists including Prof. Park Cheol-Guen, Prof. Im Ho-Young, and Prof. Paik Soon-Myung, director of the cancer research center, will conduct research in Seoul. Neil Gibson, Ph.D., vp of oncology research, will be responsible for the joint research program at Pfizer.

Samsung Medical Center has built a base of specimens in the liver cancer area. “This partnership will serve as a great opportunity to combine Pfizer's know-how in drug development and Samsung Medical Center's extensive genome information and technology in the liver cancer area,” says Dr. Gibson. “We further plan to share the ownership of collected and analyzed data with Samsung Medical Center.”

Pfizer signed a memorandum of understanding with the Ministry of Health and Welfare in 2007, agreeing to invest $300 million in R&D in Korea. As part of its commitment, the company also formed a strategic partnership with the Korea Research Institute of Bioscience and Biotechnology and has been leading joint research since then.

In February Pfizer linked up with Eli Lilly and Merck & Co. to set up the Asian Cancer Research Group (ACRG) to concentrate on drug R&D for the most common cancers in Asia. The nonprofit company will initially focus on lung and gastric cancers, two of the most common.

The aim for ACRG is to generate a pharmacogenomic cancer database comprising data from about 2,000 lung and gastric cancer tissue samples. The resulting data will be made publicly available to researchers and expanded through the addition of clinical data from a longitudinal analysis of patients.

The ACRG will initially establish collaborative relationships throughout the Asian region to collect tissue samples and data. “The ACRG is about sharing information for the common good,” stresses Kerry Blanchard, M.D., Ph.D., vp and leader of drug development in China for Lilly.

^ back to top


Lee Min-joo donates 3 billion won to genome project

2010-07-26 16:45

Lee Min-joo, 62, chairman of Atinum Partners, a Seoul-based privately held investment company, donated 3 billion won ($2.5 million) Friday to the Asian Genome Road Project being conducted by the Genomic Medicine Institute at Seoul National University.

The Asian Genome Road Project aims to establish an Asian-specific genome database by sequencing individuals from across Asia, including South Koreans, Mongolians and Turks. Genome analysis will make it possible to individually tailor medical treatment.

The donation pledge was a result of year-long discussions between Seo Jeong-sun, director of GMI-SNU, and Lee about the significance of the project.

Deputies of Atinum Investment Chairman Lee Min-joo deliver his donation to the Genomic Medicine Institute, Seoul National University on Friday. From left are GMI-SNU director Seo Jeong-sun; SNU College of Medicine assistant dean for academic affairs Choi Jin-ho; Atinum Partners President Chung Kyung-soo and Atinum Investment President Shin Ki-chun.

GMI-SNU

Lee has shown keen interest in the health care business. A Yonsei University graduate who majored in applied statistics, he started a toy company in 1975 and grew it remarkably. Around the 1997 financial crisis, he began a regional cable television business. In March 2008, he sold C&M, one of the largest cable TV networks in the Seoul area, to a group of investors led by Macquarie, an Australian financial company, for about 1.45 trillion won.

Lee then began to invest more boldly. He became the first private businessman in Korea to acquire a U.S. oil company, buying Sterling Energy plc for $90 million in December 2009. Along with increasing his investments, he reportedly has donated about 30 billion won to Yonsei, KAIST, Myongji University and Seoul Women's University. Atinum Partners has assets under management of over $1.5 billion.

Lee is known to have a business philosophy that investment and donation should be based on a vision of a future society and the changing needs of people.

The low-profile businessman, who refrains from public appearances, did not attend the donation ceremony held at a meeting room of the College of Medicine, Seoul National University. Instead, he sent his deputies to the event.

Seo is a pioneer in Asian genome study; his research group recently published its work on Korean whole-genome analysis, the fourth group in the world to publish a whole-genome study in Nature using next-generation sequencing technology. His group also completed the whole-genome analysis of a Korean female, which will be the first Asian female whole-genome sequence published.

[The two articles above (shipping potential US jobs to Asia, according to the economics of global business) and the two articles below (shipping potential US jobs to Asia, if an overzealous "regulation" were to happen in the US) could be viewed through the looking glass of Andy Grove (former CEO of Intel and a major donor to Parkinson's research). Andy Grove argues in Bloomberg BusinessNews that unless the USA finds ways to create jobs, we face a grim picture. It is known that some venture philanthropists set as a requirement that jobs must be created with their money. Suppose those who donated so generously to Parkinson's research, and e.g. another industry giant who might contemplate donating new funds to liver cancer research, would unite and seed a "Full DNA Analytics Center" in Silicon Valley - where two of the leading sequencer companies are about to deluge genomics with sequences, yet it is questionable whether "IT is ready for the dreaded DNA data deluge" - with the requirement that a certain percentage of their funds must create Silicon Valley jobs? - Pellionisz_at_JunkDNA.com]

^ back to top


Working with regulators—the road ahead

July 27, 2010
Navigenics,

Validity. Accuracy and quality. Clinical relevance. Security and privacy. These were among the top themes highlighted over and over when federal officials convened a series of meetings and hearings last week in the Washington D.C. area to discuss the prospects for personal genomics services and other innovative types of health-related tests.

For long-time readers of this blog, these ideas are nothing new. When Navigenics launched its personal genome service more than two years ago, we issued a 10-point proposal for a set of industry standards to ensure the integrity of this new field of science and health and safeguard consumers. We reiterated the need for these principles again early last year, when we helped the Personalized Medicine Coalition convene a meeting on standards for personal genomics services.

So when last week’s events kicked off with a two-day meeting called by the U.S. Food and Drug Administration, we were pleased that the need for industry standards has been acknowledged at a high level. At the gathering of experts in health, genetics, science, and the law, many good points were raised and excellent ideas exchanged. Navigenics was among a group of leading personal genetics companies that presented a company overview to the gathering, and our CEO, Vance Vanier, M.D., was the only executive from a personal genomics service given the opportunity to speak on a panel. In its inclusiveness, broad discussion, and scientific rigor, the FDA meeting reflected the type of approach and expertise that will be required to develop effective standards for personal genomics.

The next day, however, saw a very different – and less productive – atmosphere come to light. On Capitol Hill, a subcommittee of the House of Representatives’ Committee on Energy and Commerce held a hearing on “Direct-To-Consumer Genetic Testing and the Consequences to the Public Health.” A key part of this hearing was a report by the Government Accountability Office, or GAO, on 15 personal genetic testing companies.

The ultimate aim of the GAO report was to inform and protect consumers. At its best, the report sheds further light on an important and well known issue in the personal genomics field – how the current lack of regulatory standards can lead to very different approaches between personal genetics companies. But as the writers of the report acknowledged, they “did not conduct a rigorous scientific study.” As a result, many of the report’s findings are anecdotal, partially informed, or incomplete.

We would have been happy to work with the authors of the report to answer any questions or provide further information along the way. Our CEO testified at the hearing, and we filed thousands of pages of informational documents with the committee before the hearing. But Navigenics was not permitted to see the report before its release. Nor were our company’s representatives even allowed to see a copy at the hearing itself. As a result, we could not always fully address questions from Congressional representatives during the hearing, and regret not having been given the opportunity to prepare all the answers that were sought.

Furthermore, the report makes assertions using incomplete information. We have made a formal request to the GAO for the detailed information behind these assertions. Should that detailed information be forthcoming, we are confident we can address any issue arising from the report. We are also appreciative of the fact that Congressional representatives, realizing the many questions left open by the report, extended the period of time to submit additional information for another 10 days. We look forward to submitting additional input to provide a more complete, more accurate picture of our company and our industry.

In the meantime, we will continue to pursue the path we started on more than two years ago. Our discussions with the FDA began even before our service first launched, and we most recently met with FDA officials in May of this year. We look forward to working further with the FDA to develop regulations for our industry at our next meeting with the agency next month. We also look forward to upcoming scientific studies of personal genomics services conducted by researchers at institutions such as the Scripps Translational Science Institute, the Mayo Clinic, and Johns Hopkins. These studies, conducted with scientific rigor and involving participants whose only agenda was better understanding of themselves through their personal genetics, will provide useful, informative insights into how consumers interact with genetic information.

As plans for regulation unfold, we stand by our science, our service, and the standards we first proposed in 2008. The FDA, along with other federal officials, is making productive steps towards a new framework for our industry. We look forward to continuing to be part of the discussion.

[The question is not whether GAO will provide an answer - bureaucrats are paid (by our tax dollars) to produce papers. More interesting is the question of whether Dr. James P. Evans, M.D. will continue lending his reputation to "certify the uncertifiable"; i.e., to make an admittedly "non-scientific" set of assertions of the GAO insinuation of "collective guilt" "scientific" by his standards. The prediction is clearly "NO". In some sense the outcome of the Congressional Subcommittee's investigation was predictable, once it became evident that the Chairmanship was assigned to a "lame duck" politician who had announced his retirement from politics weeks before this (inconsequential to him) "final act" - and who thus will not be available to rectify a historical insult to a class of pioneer scientists, e.g. by his careless sound bite "From Gulf Oil to Snake Oil". In a larger sense, it has become increasingly evident that a multi-agency expert panel would have to propose new legislation to bring the 1976 mandate of the FDA up to date - but the political will is simply not there to see through the many years (or decades...) in which Congressional legislation would entangle itself in the futile exercise of trailing an escalating Genome Revolution that Congress does not even pretend to understand today. Thus, the Congress "won a battle" - but the FDA "lost the war". The extremely blunt Congressional "final act" now forces the FDA to come up with some solution; it will have to go out on a limb by resorting to an "out-of-the-box solution" - without the kind of legislative background whose absence forced it into inaction thus far. It seems logical, therefore, that the FDA will seek some sort of "consensus" that practically makes a lot of sense - except that the FDA would either take the heat of legal challenges (forcing the Judiciary to become Experts in Genomics, a very unlikely scenario) or - much more likely - make FDA conclusions just watered-down "recommendations" for a largely self-regulated industry. It is worth mentioning that e.g. the software industry is "self-regulated", as the market screens out inferior software without any need (or possibility) of regulators plunging into source codes. It is important to emphasize the difference between "regulation" and "punishment of criminal actions"; e.g. if largely unregulated software is "pirated" or "falsely advertised", such an inadmissible act certainly could and should be penalized. The careful observer may note that the most conspicuous allegations by GAO against unnamed SNP-DTC companies may fall into the across-the-board category of "false advertisement" (by some marketing and sales people), and not at all pertain to the underlying science - AJP]

^ back to top


GAO Studies Science Non-Scientifically

July 23, 2010
Published by 23andMe at 11:53 am

As we posted here on the Spittoon yesterday, the Subcommittee on Oversight and Investigations of the House of Representatives Committee on Energy and Commerce held a hearing on “Direct-To-Consumer Genetic Testing and the Consequences to the Public Health.”

Central to this hearing was an investigation by the Government Accountability Office (GAO) into 15 DTC genetic testing companies.

The GAO refused to discuss its concerns or its report with 23andMe, and now that the report is public and we have had a chance to review it, we are troubled and find that the report is deeply flawed. We note that while such an exercise as the one conducted by GAO has the potential to raise questions, it does not provide the answers that a more rigorous scientific study would provide. This report raises questions, but leads to few conclusions because of its unscientific nature. The GAO itself recognizes this, writing, “It is important to emphasize that we did not conduct a rigorous scientific study.…”

...We are confident in our service’s accuracy and reliability. It is widely accepted that the technology we are using is sound. We understand that GAO did not find any problem with the underlying data that we provide – the As, Cs, Ts and Gs. What is at question is whether or not one part of the information about that data that we provide is of value, and we believe strongly that it is.

The GAO report focused only on disease risk probabilities. It did not focus on ancestry or the trait reports we offer. It also failed to address the fact that we provide information about carrier status for single gene diseases such as cystic fibrosis and Tay-Sachs disease, as well as information about a customer's likely response to certain prescription medications that have been shown in clinical trials to have differing effects and side effects depending on a person's genetic make-up. This suggests that GAO found no problems with these parts of our service.

Carrier status and drug response information are clearly useful. In fact, Dr. James Evans, the Director of Adult Genetics Services at the University of North Carolina and the Editor-in-Chief of Genetics in Medicine, admitted during the Congressional hearing that drug response information would be of great interest to him as a physician. (He was specifically referring to results pertaining to a patient’s sensitivity to the anti-viral medication abacavir.) It should be noted that during the hearing it was not clear that Dr. Evans had been the primary consultant to GAO regarding the scientific and medical relevance of the results provided by DTC genetics testing companies.

The remarks made by 23andMe Co-Founder Anne Wojcicki and General Counsel Ashley Gould at the FDA public meeting on July 20, 2010 about laboratory developed tests demonstrate the importance of the work we are doing and our commitment to ensuring that members of the public are provided unfettered access to their DNA information in a responsible manner. We embrace the ideas that the FDA offered today about stepping in to provide a regulatory framework and help set scientific and transparency standards across the industry. We look forward to helping with this process.

Read on for discussion of some of the problems with the GAO report.

One of the most unfortunate parts of the GAO report is that it unfairly lumps together reputable and well known companies such as 23andMe with un-named companies making verifiably untrue endorsement claims, spurious scientific claims, and also selling potentially fraudulent supplements in addition to genetic testing services. Some of the most troubling of these interactions between the GAO and genetic testing companies can be found in a table on pages 15-16 of the GAO report, and in a YouTube video the Office has posted.

It must be noted however, that although the companies are not identified in the video or the report, at the hearing it was revealed that 23andMe is Company 1. Other than saying that we believe customers should consult with their physician or other healthcare professional when they have questions about their data, 23andMe/Company 1 is not implicated in any wrongdoing.

GAO seems to believe that directing consumers with questions about their genetic information to their health care professionals (a stance we continue to stand behind) is “misleading” because of a pronouncement by the Department of Health and Human Services’ Secretary’s Advisory Committee on Genetics, Health and Society that physicians “cannot keep up with the pace of genetic tests and are not adequately prepared to use test information to treat patients appropriately.”...

We agree with the idea that consumers should be able to compare the risk predictions they might receive from different test providers. This is an issue that deserves serious attention and one that we believe can be solved by the implementation of broad standards throughout the industry. We have approached both the NIH and FDA for assistance in this matter (see this letter sent to the heads of both agencies and posted on our blog, The Spittoon). Instead of constructively adding to these efforts, GAO has implied that because results differ between companies, they are simply wrong. Their report fails to provide all relevant information, and perpetuates the misunderstandings of genetics in particular and science in general that 23andMe has been dedicated to changing since the very beginning....

In conclusion, 23andMe is extremely disappointed that we did not have the opportunity to address all of these concerns at the Congressional hearing, or sooner, due to our lack of access to the report. These are serious issues that deserve serious and thoughtful discussion. Standards are needed in the genetic testing industry. We have been working towards these since the inception of our company, and we were pleased to hear FDA say that they are interested in developing a new type of regulatory framework that can deal with the many special aspects of direct-to-consumer genetic testing while still providing consumers with the protections they need and deserve. 23andMe is meeting with the FDA today and looks forward to fruitful discussion.

[Politics, while at face value it won the "battle of SNP-DTC", has by its glaring inability lost the war to keep US political control over the emerging global genome industry. It has already leaked out that a "completely out-of-the-box" (legal?) "solution" is emerging to save the FDA from having to rely on non-scientific studies of science (see below).

The huge question for the SNP-DTC industry, therefore, is how to mitigate the sizable political damage inflicted in the public's eye (made much worse by sensationalist parts of the media further distorting reality, since most of the public may not have the time or expertise to sort out the complex science issues). Since the SNP-DTC industry is unanimously committed to advancing to analytics of full human DNA sequences anyway, an acceleration might be the answer: make SNP-DTC transitory to full DNA analytics, interoperable with digitized health records, and all overridden by personal decisions. The damaged market appeal of "SNP-DTC alone" could thus regain ground, and market demand could even spectacularly increase through a "genome computing architecture" - featured on the solid scientific ground of "recursive genome function" as early as 2008, embraced by a panel in 2009, developed for PMWC2010 this January, and now, in stealth mode, breaking through just as SNP-DTC has suffered a major public relations setback. - Pellionisz_at_JunkDNA.com]

^ back to top


FDA's 'Out-of-The-Box' Plans

...Some lawmakers, including Committee Chairman Bart Stupak (D- Mich.), asked Shuren if FDA decided it wanted to regulate DTC genetics companies how it would go about doing so.

Shuren said that, currently, FDA is considering taking "a completely out-of-the-box approach on genetic testing" that would carve out a unique space in its regulatory tableau. This approach could involve looking at subsets of validation data on genetic variants, enabling regulators to make assumptions and trust the industry partners on some of the others.

"What we're thinking about is FDA, along with [the National Institutes of Health], pulling in from the healthcare community, pulling in from patient groups, and actually sitting there and going through the science," Shuren said.

He explained that such a regulatory council could set standards based on the available science and on the breadth and scope and veracity of the claims that the company is making in its marketing.

"And when we set the standards of what's good enough and when it's ready, then [we would] allow those claims. Those companies then would not have to come back in the door with a new application. We'd say, 'You already have a validated test, you can now make this claim,'" he said.

Shuren said that such a regulatory approach would "allow for a lot of tests to be out there" and that it "would actually be a much less expensive way of doing it for these companies as well."...

^ back to top


DTC Genome Testing of SNP-s “Ready for Prime Time”?

Of course not. "What is the practical value of a baby?" In his interview with Charlie Rose, Francis Collins quoted this answer of the "inventor of electricity" (Faraday), given when the British Prime Minister, visiting his laboratories, naively asked: "Is there any practical value of electricity?" Had Faraday had today's hyper-hyped salesmanship and marketing frenzy around him, he could have (rightly) answered that "the practical value of electricity can not be overestimated." Instead, he gave the disarmingly smiley answer about the utility of a newborn baby…

As I commented in this column on the original assessment by Boonsri Dickinson, who years ago tested the three leading DTC genome testing SNP-interrogation companies (DeCodeMe, 23andMe, Navigenics), her summary (that "they were not ready for Prime Time") was absolutely correct.

Interrogation of up to about one million single nucleotide polymorphisms (SNP-s, out of the 6.2 Bn nucleotides in the diploid full human DNA) ab ovo can not be "Prime Time News" - perhaps just the "First News of the Day at 5 A.M." As HolGenTech, Inc. presents in its 7:44-minute YouTube video, "Prime Time" will be a show for a future "Boonsri Dickinson" where "SNP interrogation" (presently done by DTC, using microarray technology) is interoperable with personal "full DNA sequences" (already in production at $5k per full genome, wholesale, by Complete Genomics, and PacBio is about to deluge the world with full DNA sequences of 6.2 Bn bases for about the price of "1 million bases in present DTC"), as well as with "digitalized health data" (on servers like Microsoft HealthVault and/or Google Health), all overridden by "personal preferences" (on Personal Genome Computers).
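To make the scale gap concrete, here is a back-of-the-envelope sketch in Python. The genome size, the SNP count and the $5k full-genome price are the figures quoted above; the $500 SNP-chip price is my illustrative assumption, not a quoted figure.

# Scale gap between SNP interrogation and full DNA sequencing,
# using the figures from the paragraph above.
SNPS_INTERROGATED = 1_000_000          # ~1 million SNPs on a DTC microarray
DIPLOID_GENOME_BASES = 6_200_000_000   # 6.2 Bn nucleotides, diploid genome
FULL_GENOME_PRICE_USD = 5_000          # Complete Genomics, wholesale
DTC_SNP_PRICE_USD = 500                # assumed SNP-chip price (illustrative)

coverage = SNPS_INTERROGATED / DIPLOID_GENOME_BASES
print(f"Fraction of the diploid genome a SNP chip touches: {coverage:.4%}")
# -> roughly 0.016% -- the "5 A.M. news", not "Prime Time"

print(f"Full sequencing, per base: ${FULL_GENOME_PRICE_USD / DIPLOID_GENOME_BASES:.2e}")
print(f"SNP chip, per base:        ${DTC_SNP_PRICE_USD / SNPS_INTERROGATED:.2e}")
# Per base interrogated, the SNP chip is several hundred times more expensive.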

We'll need some time and money to get there. PacBio alone absorbed about $350 M in funding, full digitalization of health data will cost billions of dollars, and Genome Computing will need substantial resources (HolGenTech has already accomplished a major - stealth - breakthrough with its solution since the YouTube video was made half a year ago).

Maybe it is the 7 o'clock News in the Morning?

Sorry to say, we've just been set back. Those watching the Congressional Investigation of current DTC had to witness today an ugly "wet blanket" thrown in the face of some testifying DTC company officials, their smiles dampened by an embargoed surprise report by undercover agents of the Government Accountability Office (GAO).

Back to 5:30 A.M.?

Not really. The report admits (the obvious) in its opening "Highlights" that "GAO did not conduct a scientific study". Normally, since the topic is the utility of genomics as a science in a nascent stage, a judge would throw out right away such glaringly and admittedly "inadmissible evidence". However, the Congressional Subcommittee is neither a Forum of Science, nor part of the Judiciary. It can do politics (easy job) or create new legislation to update e.g. the FDA mandate of 1976 (hard job). Clearly, the entire set-up by GAO to frame DTC verbally as "not ready for prime time" (though this - copyrighted? - coinage, "borrowed" from Boonsri Dickinson's remark made years ago, nowhere appears in the written Report) was not at all about science, but about (also well known) deficiencies of marketing and sales. It may be sobering for Congress to realize that its laws are also often misrepresented by "marketing and sales" forces (in their profession called "the press") for political purposes.

The community of researchers may wonder if hard-working pioneer scientists deserve better; e.g. the use of (available, see below) peer review instead of the questionable practices of non-scientist undercover agents of "Big Brother", who not only admit but brag about using fictitious profiles of users (and thus getting confusing answers - a well known effect in science: "garbage in, garbage out"). Are you surprised that a Caucasian (non-scientist) customer giving the deliberately false profile that she is "Asian" gets "off the chart" results? I am not surprised. Nor is anyone who knows how different the genomes of the Chinese (at least 9 main tribes) are from the various European genomes (dozens).
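A minimal sketch, assuming a typical report design, shows why. DTC risk reports commonly scale an ancestry-specific baseline prevalence by a genotype relative risk; all numbers below are invented for illustration, and no specific company's actual model is implied.

# "Garbage in - garbage out": absolute risk = ancestry-specific baseline
# prevalence x genotype relative risk. Prevalences and the relative risk
# below are invented for illustration only.
BASELINE_PREVALENCE = {
    "European": 0.050,   # hypothetical lifetime prevalence
    "Asian":    0.005,   # hypothetical lifetime prevalence
}

def absolute_risk(ancestry, genotype_relative_risk):
    """Scale the self-reported ancestry's baseline by the genotype effect."""
    return BASELINE_PREVALENCE[ancestry] * genotype_relative_risk

rr = 1.3  # one and the same genotype, one and the same relative risk
for ancestry, baseline in BASELINE_PREVALENCE.items():
    print(f"{ancestry}: baseline {baseline:.1%} -> reported risk {absolute_risk(ancestry, rr):.2%}")
# The identical DNA maps to a tenfold different "risk" once the self-reported
# ancestry -- and with it the baseline -- is deliberately false.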

It is too bad that the general media, in their rush reporting (not always scientist-checked), further distort the misimpression. Bloomberg reports:

“Gene-Test Services Mislead Public, U.S. Report Says” [Would it not be a better title: "U.S. Report Misleads Gene-Test Services"? - AJP]

“Gregory Kutz led a probe for the Government Accountability Office by setting up customer accounts with Navigenics of Foster City, California, 23andMe Inc. of Mountain View, California, Pathway Genomics Corp. of San Diego and DeCode Genetics Inc. of Reykjavik, Iceland, and sending them DNA from five people. The companies’ reports assessing health risks were “medically unproven and so ambiguous as to be meaningless,” Kutz said in his report, presented today at a hearing in Washington”

This brutal error is easy to spot. The Report (both in its "Highlights" and on its Page 1) clearly states that the findings of "medically unproven and so ambiguous as to be meaningless" were in the previous GAO Report of four years ago (2006), referring to four "Nutrigenetic Testing" websites (not the same set as currently investigated), and states at the outset of the (2010) "Highlights" that "new companies have since been touted as being more reputable". Still, in total confusion, the general news media attribute the obsolete statement about one set of companies four years ago to the present state of the art of a different set of companies!

A third glaring error is that it was verbally stated at the hearing, as if an accusation, that DTC provides "medical advice". Such a statement actually never appears in writing in the Report - moreover, all (good) mothers provide "medical advice" to their kids: "eat more vegetables", "drink more water", "exercise", "use sunscreen lotion on the beach" (etc.). A good mother might be keenly aware of the medical fact that a fair-skinned child might develop (potentially deadly) melanoma if exposed in a prolonged manner to the intense California or Arizona solar radiation. Should she refrain from transmitting "medical advice" to her loved ones? Clearly absurd. The web, too, is teeming with all kinds of advice and recommendations, often quite medical. This is not a problem. The problem would be if mothers, friends (etc.) provided medical advice while pretending to be licensed medical doctors. To the contrary, all DTC websites clearly indicate that their recommendations are not to be confused with, and not a substitute for, prescriptions and/or diagnosis and/or therapy and/or cure by licensed medical doctors!

At the hearing, the available "peer review" was conspicuous by its absence. Kari Stefansson, Founder and CEO of the first-ever DTC genome testing company, DeCodeMe in Iceland (warned by a US letter) - actually not only a world-leading genomist but also a Medical Doctor and Ph.D. - was not invited to testify to Congress in addition to (or instead of) the non-scientist undercover agents. Nor was the "peer" who literally wrote the exceptionally lucid yet scientifically rock-solid book on Genome Testing as a basis of Personalized Medicine - Francis Collins, M.D., Ph.D., Head of NIH and formerly head of the "Human Genome Project" - called as a witness, either. This even though Dr. Collins, in his capacity as Head of NIH, just weeks ago co-authored with FDA Commissioner Margaret Hamburg, M.D., the "laying down of the path towards Personalized Medicine" in the New England Journal of Medicine.

We can all hope - the real witnesses have not been summoned yet. However, the damage inflicted upon US innovation, jobs and the economy is outrageous and calls for emergency repair.

[Further coverage is here, here, here - 241 in all, before the end of the day - AJP]

^ back to top


Message arrived ... "the scientific community had to re-think long held beliefs"

Perhaps the best coverage of the 10th Anniversary of "The Human Genome Project" was the interview by Charlie Rose with skeptical NYT reporter Nicholas Wade, and two top scientists of the establishment, Drs. Francis Collins (NIH Chief) and Eric Lander (Director of the Broad Institute and Science Advisor to the President).

As seen from the screen-shot taken on July 16 eve (PT), there was a huge peak of close to a million hits for "recursive genome function". Apparently, the message of the Rose interview arrived; the Second Decade will be about "recursive genome function". To remind readers, a short clip from the copyrighted transcripts is here:

[Excerpts are available in full on Charlie Rose's website. Here, only the brief "conclusion" is reproduced from the copyrighted material - AJP]

"... CHARLIE ROSE: Have there been any operating hypotheses that have been proven to be not the best route to go?

FRANCIS COLLINS: Oh, goodness.

ERIC LANDER: That’s a good question. What do you think, Francis?

FRANCIS COLLINS: I think perhaps our original expectations about what was going on in the genome that was not coding for protein may have underestimated the complexities of what’s going on there. New discoveries about micro RNAs and something from Eric’s group called link RNAs, a whole new category of RNA transcripts that have really opened our eyes to regulation and sophisticated complexities. That’s been exciting and I don’t know that we anticipated that. [Some do know to have anticipated. Prior to even NIH asking Congress for money for ENCODE I publicly announced FractoGene (2002) - AJP]

ERIC LANDER: I think that’s an interesting and important point. The genome has a lot more secrets in it. But by laying out the whole sequence of the genome, we were able to find that, oh, one percent of the genome encoded for the proteins that we’d been focusing on before, the hemoglobins and collagens and things, and three or four times as much of the genome is devoted to other things. [Four times? Why not say forty-four times of the 1.3% of protein-coding "genes"? - AJP]

We knew this because in fact evolution conserved those sequences telling us they had to be functional. So shining a light for us on things we’d never known about, gene regulation ..."

^ back to top


A Proving Ground for P4

GenomeWeb
July/August 2010
By Matthew Dublin

Personalized medicine — the crossroads at which biotechnology, genomics, and medical treatment meet — is a concept that is often touted, though rarely seen in action. As with any radical idea, there needs to be a proving ground before it achieves wider acceptance among professionals and the public. The concept of a healthcare system that can someday provide predictive, preventive, personalized, and participatory medicine, or "P4 medicine" — a term coined by Leroy Hood of the Institute for Systems Biology back in 2003 — is being put to the test.

In early May, The Ohio State University Medical Center and ISB announced a partnership to establish the P4 Medicine Institute, a nonprofit consortium based in Seattle. The new institute's mission is the delivery of a healthcare model with the four-pronged "P" approach, through which patients can be treated proactively throughout their lifetime, instead of the current model of reactive clinical treatment.

While ISB is positioned to bring its biotechnology expertise to the table, OSU's clinical delivery infrastructure, including its own insurance company OSU Health Plan, will give the P4 a chance to try out lots of new ideas in a closed system with roughly 55,000 university employees enrolled in the campus health plan. By creating a matrix of genomics, protein metabolics, and molecular-based diagnostic information tailor-made for each patient, the leaders of the P4 Medicine Institute hope to map out individualized plans for health maintenance, wellness evaluation, and the diagnosis and treatment of illness.

"We're working on trying to come up with an understanding of how the healthcare system can be changed from one that is disease-based care, without an understanding of the real deep, precise biology underlying the health and wellness, to one which really looks at predicting and preventing disease by focusing on wellness in a very personal way," says Clay Marsh, executive director of Ohio State's Center for Personalized Healthcare. "We see the P4 Medicine Institute as a conduit for connecting the best people in the world, for really transforming how we do things. Our goal is to try and connect with the best people and, as a team, really define where the healthcare ecosystem is today — what elements are needed to create a tipping point or create a culture change that will transform medicine — and work as a group to do that."

Putting it together

Part of the new institute's strategy to bring genomics-based healthcare to the clinic is to connect researchers working in biology and medicine with those working in computer science and IT, as well as bringing in thought leaders from the legal, business, and medical device manufacturing sectors. "P4 is really about combining the systems biology-driven innovations and insights into human biology of ISB and the translational research and clinical delivery capabilities of one of the largest academic medical centers in the country," says Frederick Lee, the executive director of the P4 Medicine Institute. "It's really about that pragmatic and tangible bringing together of the things that we are learning about not only human biology itself, but really driving those into how the care is going to be delivered — health, wellness, and disease management — in a real P4 manner."

Lee says that ISB's decision to partner with OSU came only after a serious survey of the academic medical landscape of the United States. "ISB had spent some time evaluating and meeting with a lot of the top academic medical centers in the country, but we found that OSU had really embraced the concepts of personalized healthcare and the leadership was already very open to this approach to medicine as the way to really shine," Lee says. "Back in 2005, OSU established their own personalized healthcare center. They have also been having for the past few years a personalized medicine conference. All of these things were something that we just didn't see at any [of the] other universities."

The P4 Medicine Institute's first project will focus on using novel molecular modalities, including organ-specific proteomics, mRNA analysis, and deep sequencing of genomes in the context of families. They aim to combine that data with more conventional clinical data to establish an individual's base level of health and to determine when he is veering off that baseline into illness. A series of wellness clinics will also be established at OSU from a population base of their employees. The institute will then apply the molecular modalities and technologies from ISB along with the clinical delivery models that they will design together with OSU. "This is really interesting because OSU can and will be a closed-loop system — it's the payor for that large employee base, it's the provider of care, and it's also the patient population itself," says Lee. "We can really rewrite the rules for how care is paid for, provided, and received, and then draw out pragmatic things so that industry partners can develop the technical infrastructure to power this new type of healthcare."
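[To illustrate the "veering off the baseline" idea in code: the minimal sketch below flags a new measurement against an individual's own history rather than against a population norm. The function name, the sample readings and the 3-sigma threshold are illustrative assumptions - the P4 Medicine Institute has not published such code. - AJP]

# Flag a measurement that veers off an individual's own baseline.
from statistics import mean, stdev

def veers_off_baseline(history, new_value, n_sigmas=3.0):
    """True if new_value lies more than n_sigmas standard deviations
    from this individual's own baseline (not a population norm)."""
    baseline = mean(history)
    spread = stdev(history)
    return abs(new_value - baseline) > n_sigmas * spread

# e.g. a year of monthly fasting-glucose readings (mg/dL) for one person
readings = [88, 91, 87, 90, 89, 92, 88, 90, 91, 89, 90, 88]
print(veers_off_baseline(readings, 92))    # False: within personal baseline
print(veers_off_baseline(readings, 120))   # True: veering off into illness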

The researchers at OSU have already launched a pilot project focused on wellness, a buzzword that is the cornerstone of any discussion about the P4 concept. The wellness project will include programs in exercise, nutrition, and biorhythms in order to better understand patient needs and how P4 can create "modules" to help the patient achieve optimal health. "We're very interested in looking to stratify people into groups according to exercise, nutrition, stress, sleep, and age, and merge that with genetic data and other molecular profiles so that we can then start to assign a predictive and preventive approach," OSU's Marsh says. "We'd like to give people feedback into how to improve and make changes to their lifestyle."

OSU will also initiate a chronic disease pilot project, which Marsh says will look at ways to build a team of clinicians around chronically ill patients using molecular profiling and other genomic elements to improve their quality of treatment.

Finding partners

Marsh says that one of the biggest challenges facing the P4 Medicine Institute at this early stage is making sure that additional partners are chosen carefully. Because this is still very much an experimental endeavor contained within the confines of the OSU healthcare system, those institutes looking to join in the hopes of making a profit from this new brand of healthcare are barking up the wrong tree. "We want to produce wins for everyone involved, but we're looking for people who understand the long-term opportunity and also understand that if you're looking for some sort of committed return on an investment in a relatively short period of time, this is probably not the right partnership," he says. "We're spending a huge amount of time to make sure we're assessing the partners that are interested in what we're doing and making sure we have a level of compatibility with them so that we don't have any problems down the road because we failed to express our goals to each other clearly. That's a key element that we need to make this run smoothly."

The Breakdown

Members: The Institute for Systems Biology and The Ohio State University Medical Center

Funding: ISB and OSUMC internal funding, with each institute contributing $500,000 annually for the next two years.

[LeRoy Hood's brainchild, P4 Medicine (Predictive, Preventive, Personalized, Participatory) is his "brand" - but he has gracefully consented to its generic use (like Mercedes-Benz gives away all their safety patents, royalty-free, in the interest of the public at large). Thus, at the "P4 Conference in Silicon Valley" (December 9-10, 2010) Fred Lee will uphold the flag, while others, like the Conference Chairman (AJP), will explore - particularly because of the location - the Personal (Holo)Genomics and Information Technology aspects, much needed for P4. - Pellionisz_at_JunkDNA.com]

^ back to top


Ion Torrent, Stealthy Company Tied to Harvard’s George Church, Nabs $23M Venture Deal

Xconomy
Luke Timmerman 11/6/09

Ion Torrent Systems, a company advised by Harvard University genomics pioneer George Church, has raised $23 million in new capital to develop what it calls on its website “groundbreaking and highly disruptive technology” and to hire people who “want to do what it takes to put a dent in the universe.”

The company, which has a location near Yale University in Guilford, CT, and one in San Francisco, has raised $23 million in equity out of a financing round that could be worth as much as $26 million, according to a regulatory filing released today.

The document doesn’t say who invested, and Ion Torrent didn’t immediately respond to a request for comment. But the new company is associated with some big names, including Church and Stanford University’s Ron Davis, who serve on the company’s scientific advisory board, and CEO Jonathan Rothberg, who was the founding CEO of 454 Life Sciences before that company was sold to Roche two years ago for $140 million in cash.

Ion Torrent Systems' website is pretty vague about what it is really up to, although its job postings offer some clues. It says it is looking to hire molecular biologists and biochemists to do the aforementioned universe denting, and that it offers the opportunity to work with top scientists "and have a profound impact." It is also looking to hire software developers and "evangelists" who want to "create the biotech software platform of the future and share it with the world. Build powerful tools and create a tight-knit community that will use and develop them for years to come."

GenomeWeb speculated back in March, based on a patent application filed by Ion Torrent Systems, that it is working on new DNA sequencing technologies, although the company wouldn’t confirm that. Major players in the field—such as Carlsbad, CA-based Life Technologies, San Diego-based Illumina, and Roche—have been in a competitive frenzy to lower the cost of sequencing full human genomes. One Mountain View, CA-based startup, Complete Genomics, raised $45 million in venture capital earlier this year to support its new model for sequencing entire genomes for as little as $5,000 apiece or less.

[While this piece of news is somewhat dated, it seems important for providing a perspective on the "lead group" of nanosequencing, especially in light of the news below on PacBio's F-Round - Pellionisz_at_JunkDNA.com]

^ back to top


PacBio Nabs $109M to Make Cheaper, Faster Gene Sequencing Tools

Luke Timmerman 7/14/10

The idea of sequencing entire human genomes for $1,000 or less, in a matter of minutes, has never been hotter, if the flow of venture capital to Pacific Biosciences is any indication. The Menlo Park, CA-based developer of super-fast gene sequencing machines has raised another $109 million in a Series F round of financing.

The company, known as PacBio for short, has now snagged a total of about $370 million in venture capital since it was founded in 2004. The company didn’t say who invested in the latest round other than San Diego-based Gen-Probe (NASDAQ: GPRO), which said it pumped in $50 million last month. But PacBio has a long list of existing investors that include Kleiner Perkins Caufield & Byers, Mohr Davidow Ventures, Alloy Ventures, and Intel Capital—as well as few names more familiar on the public-company circuit—Morgan Stanley, T. Rowe Price, Deerfield Management, and AllianceBernstein.

The vision at PacBio is to develop a new gene sequencing instrument powerful enough to deliver the complete genome sequence from a human being in about 15 minutes and for a few hundred dollars. It's the kind of technology that could enable basic researchers to run all sorts of experiments about the subtle variations in DNA that make people different, and about which differences in genetic coding might be related to disease and wellness. San Diego-based Gen-Probe invested in PacBio's technology with an eye toward using this sequencing technology as a tool doctors could use to diagnose disease.

“These funds will be used to support our operations as we begin ramping production capabilities for the commercial launch of our PacBio RS system,” said Hugh Martin, PacBio’s chairman and CEO, in a statement.

The company didn’t say in today’s statement when the machine would be commercially available.

PacBio is facing intense competition in the field of gene sequencing. Mountain View, CA-based Complete Genomics is pursuing a different business model, in which it asks researchers to send samples to a central facility instead of asking them to buy an expensive machine and run it themselves. Guilford, CT and San Francisco-based Ion Torrent Systems wowed researchers at a meeting in March when it unveiled a benchtop sequencing machine that can perform a lot of basic experiments and costs only $50,000. The incumbents in the field, whose machines generally cost 10 times that much—San Diego-based Illumina (NASDAQ: ILMN) and Carlsbad, CA-based Life Technologies (NASDAQ: LIFE)—have been launching pre-emptive strikes against the upstarts with continual improvements that have brought the cost of sequencing a human genome down to about $10,000 today.

[As the "Dreaded DNA Data Deluge" is upon us, perhaps it is worthy to look up the 2008 "Pellionisz" YouTube (about 8,333 views) if our theoretical (algorithmic) preparedness is up to the scientific challenge. My central argument was (and is) that "Big IT" will rise to the challenge, as it represents major business opportunities. However (as e.g. the Charlie Rose interviews show below) the scientific paradigm-shift towards recursive algorithms still calls for more support - Pellionisz_at_JunkDNA.com]

^ back to top


Recursive Genome Function at the crossroads - Charlie Rose Panel on Human Genome Anniversary

Human Genome Anniversary with Nicholas Wade of "The New York Times," Francis Collins of the National Institutes of Health, Eric Lander of MIT and the Broad Institute
Monday, July 12, 2010


[My FaceBook entry: The Panel is remarkable for not saying that without the theory of recursive genome function - where genomics and epigenomics are mathematically treated as the hologenome - we will remain frustrated at a medicine cabinet half-empty (or half-full?) of genomic medicines. Every panelist is correct in what he does say. Wade, from the skeptical viewpoint, implies that the meager results show we overspent, since the paradigm that sequencing automatically translates into understanding (recursive) genome function (and disease) has failed; no journalist can pinpoint what is missing, as that is up to advanced theory. Drs. Collins and Lander, defending the establishment, dwell on the positives: how half-FULL our "medicine cabinet" is, and how spectacular the technological (sequencing) and medical (genomic medicine) results are. True; the end alludes to "Junk DNA" (98.7%), acknowledging that it hides keys to regulation - yet no one announces the paradigm-shift of the principle of recursive genome regulation. - AJP]

[A further comment contrasts the "stand-off" by pointing out that "recursive genome function" has broken 100,000 hits on Google (see Table of Contents). There can be debate over whether it is a fractal iterative recursion, or someone may publish a contending theory, but Eric Lander et al. (2009), for example, featured the fractal nature of the genome on the cover of Science. - Pellionisz_at_JunkDNA.com]

^ back to top


The Sudden Death of Longevity

The Little Flaw in the Longevity-Gene Study That Could Be a Big Problem
Newsweek, 7/7/2010

How a faulty DNA chip [an Illumina 610-Quad microarray - AJP], lax editorial review, and a few skipped steps turned a good study into bad science.

Remember that Science study from last week linking a whole bunch of genes—including unexpectedly powerful ones—to extreme old age in centenarians? NEWSWEEK reported that a number of outside experts thought it sounded too good to be true, perhaps because of an error in the way the genes were identified that could cause false-positive results. Since last Thursday, they’ve been trying to figure out what might be lurking in the data, and now there’s a suspect: a DNA chip called the 610-Quad, which is used to identify and sequence the chemical letters of DNA [not really "sequencing" but interrogating SNPs in oligos - AJP], and which has an apparent tendency to get some small but critical details wrong. The flaw with the chip and the way it was used could cast serious doubt on the study’s strongest results, suggesting that they stem from a lab mishap rather than a real link to long life.

The flaw in question could be easily addressed with a little follow-up research. In very simplified terms, all that’s needed is for someone to rerun the analysis using a single different DNA chip. But this should have been done already, before publication. The fact that it wasn’t raises the question of how a paper with a missing piece like this got approved and published by Science.

The paper—which identified 150 genetic variants that might increase a person’s chances of making it to age 100, apparently by protecting the body against disease—had as one of its two principal components a type of research called a genome-wide association study (GWAS). These surveys, the bread and butter of modern genomics, use chips to analyze large amounts of DNA in many people, looking for variants that are more common in “cases” (here, that means centenarians) than “controls” (regular people). The variants that turn up more often in “cases” are the ones linked to the trait the scientists are curious about. The studies are usually very thoughtfully designed and reliable. What happened with this one?

The first thing to know is that not all gene-identifying chips are created equal. They [microarrays - AJP] occasionally identify letters of DNA incorrectly, and—to complicate things further—each type of chip makes different errors at different points in the genome. That phenomenon can lead to false-positive results if it's not well-controlled by experimental design, says David Goldstein, the Duke University geneticist who first raised this issue here last week. “Unfortunately, different chips have their own little problems for specific [genetic variants],” he says. The key to keeping false positives at bay is to ensure that cases and control groups are analyzed using exactly the same techniques. If you use one type of chip to analyze your cases and a different type to analyze your control group, “you can see any [variants] that are genotyped differently on the different chips ‘lighting up’ as apparently associated with the trait,” says Goldstein, when in fact that pattern is just an experimental artifact.

All of the chips used in the Science study came from the same manufacturer, Illumina, but they weren’t identical. According to a brief description in the paper, the researchers used two different chips to look at their centenarians, analyzing most people with a 370 chip that examines 370,000 genetic variants and a smaller fraction of people with a different chip (the 610-Quad) that examines 610,000 variants. The reason, says Paola Sebastiani, the Boston University biostatistician who led the study, is that at one point the 370,000-variant chip went off the market and the 610-Quad “was the best option for us in terms of costs and coverage.” The controls involved an even more varied assortment of technology—some were analyzed with the 370, some with the 610, and some with two other types of chips.

Kári Stefánsson, the Icelandic geneticist who founded deCode Genetics, knows something about the 610-Quad—his company has used it too. He says it has a strange and relevant quirk regarding two of the strongest variants linked to aging in the BU study, called rs1036819 and rs1455311. For any given gene, a person will have two “alleles,” or forms of DNA. In the vast majority of people, at the rs1036819 and rs1455311 locations in the genome, these pairs of alleles consist of one “minor” form and one “major” form. But the 610-Quad chip tends to see the wrong thing at those particular locations. It always identifies the “minor” form but not the “major” form, says Stefánsson—even if the latter really is present in the DNA, which it usually is. If you use the error-prone chip in more of your case group than your control group—as the BU researchers did—you’re going to see more errors in those cases. And because what you’re searching for is unusual patterns in your cases, you could very well mistake all those errors (i.e., false positives) for a genetic link that doesn’t actually exist.
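
To see how a chip-specific miscall can masquerade as a genome-wide "hit", consider a minimal simulation in Python. This is a sketch, not the study's analysis: the error mode is a caricature of Stefánsson's description, the 10 percent figure comes from the article, and all other numbers are illustrative assumptions.

    # Minimal sketch: a chip that miscalls one locus, used for 10% of cases
    # but no controls, fabricates an "association" where none exists.
    import random
    from scipy.stats import chi2_contingency

    random.seed(1)
    MAF = 0.20  # true minor-allele frequency, identical in cases and controls

    def true_genotype():
        # two alleles per person: 'm' = minor, 'M' = major
        return ['m' if random.random() < MAF else 'M' for _ in range(2)]

    def called_genotype(person, on_faulty_chip):
        # Assumed error mode (a caricature of Stefansson's description):
        # the faulty chip reports the minor form and misses the major form,
        # so every sample it touches gets called minor/minor.
        return ['m', 'm'] if on_faulty_chip else person

    def minor_allele_count(n_people, frac_faulty):
        minor = 0
        for i in range(n_people):
            calls = called_genotype(true_genotype(), i < n_people * frac_faulty)
            minor += calls.count('m')
        return minor

    n = 1000
    case_minor = minor_allele_count(n, 0.10)  # 10% of cases on the bad chip
    ctrl_minor = minor_allele_count(n, 0.00)  # controls on good chips only

    table = [[case_minor, 2 * n - case_minor],
             [ctrl_minor, 2 * n - ctrl_minor]]
    _, p, _, _ = chi2_contingency(table)
    print(table, f"p = {p:.1e}")  # p near 1e-8 despite identical true frequencies

Set both fractions equal - genotype cases and controls with the same technique - and the spurious signal disappears, which is exactly Goldstein's point about matching techniques.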

Stefánsson says he is “convinced that the reported association between exceptional longevity and most of the 33” variants found in the Science study, including all the variants that other scientists hadn’t already found, “is due to genotyping problems.” He has one more piece of evidence. Given what he knows about the 610-Quad, he says he can reverse-engineer the math in the BU study and estimate what fraction of the centenarians were analyzed with that chip. His estimate is about 8 percent. The actual fraction, which wasn’t initially provided in the Science paper, is 10 percent, the BU researchers tell NEWSWEEK. That’s close, given that Stefánsson’s calculations look at just two of the variants found in the study and there may be similar problems with others.

One of the oddest things about this potential error is how much it stands out in an otherwise carefully designed study. The BU researchers made a serious attempt to deal with confounding factors—a challenge given that centenarians are by definition different from any possible control group, because they were born earlier—and, Sebastiani says, the team “conducted extensive quality-control procedures and cleaning of the data.”

What the group apparently didn’t do, however, is obtain a third-party analysis of their centenarians’ and controls’ DNA using a single chip for everyone. There’s “nothing in the world simpler to do,” says Goldstein. “We would do this for any ‘discovery’ we had in this kind of a situation, but when the results themselves are a bit improbable, as the results are here with the exceptional genetic control, then there is all the more necessity for that quality-control step.” Goldstein adds that such a step is standard practice for most GWAS research. That's why you can trust many other GWAS papers while withholding judgment on this one. Yes, it’s tempting to look at this study and wonder what other flaws may be hiding in other GWAS papers, even those in top-flight publications. But this episode shouldn’t be read as evidence that genome-wide association studies are untrustworthy as a rule, because the rigor that seems to be missing from this study is almost always found in others that haven’t yielded such dramatic results.

It’s possible that when that replication study is done, the genetic associations in the longevity study will hold up. (At least a few of them, such as the link between long life and APOE—which is also linked to Alzheimer’s—surely ought to, since they have been found in other studies.) The BU paper’s critics aren’t out-and-out saying it’s wrong. They’re just saying it could be.

Still, one has to wonder how the paper wound up in Science, which, along with Nature, is the top basic-science journal in the world. Most laypeople would never catch a possible technical glitch like this—who reads the methods sections of papers this complicated, much less the supplemental material, where a lot of the clues to this mystery were?—but Science's reviewers should have. It’s clear that the journal—which hasn't yet responded to the concerns raised here—was excited to publish the paper, because it held a press conference last week and sent a representative to say as much.

The BU scientists are holding a public Web chat today at 1 p.m. ET. Most of the questions they take probably won’t concern highly technical stuff like this. Sebastiani would prefer that critics’ questions be addressed directly to her in journals rather than, say, relayed to her by NEWSWEEK writers: “So far we were not approached by any of these investigators directly, only by reporters,” she says, which is “rather surprising and disappointing to me.” The Web conference, Sebastiani adds, is being held primarily to address the fact that several companies are already thinking about selling a test based on the Science paper, a notion that the study’s authors abhor and are trying to prevent. “We strongly feel that results of such a test should continue to be for research use and that it is not at all ready for use in the public domain,” says Sebastiani. “There are just too many opportunities for misuse and misinterpretation at this very early point.” Not at all ready for use in the public domain: that’s the one thing that everyone involved with this paper does seem to agree on.

One more important thing: the BU researchers put together a model for predicting whether a given person would live to 100 or not, and it was widely reported that the model had 77 percent accuracy. That was true in the study, where the researchers were applying the model to people from groups of roughly equal size—they had about the same number of centenarians as they did controls. In reality, however, only 1 in every 6,000 people lives past 100, so the real-life sample sizes, if you will, are very different. Both Stefánsson and David Altshuler, a geneticist who leads GWAS research at the Broad Institute, say that fact renders the model much less useful than you might think, because it actually tells you only that your chance of living to 100 is either really small (much less than 1 percent) or really, really small (even less than that). “For most practical purposes,” Stefánsson notes, this “makes no difference for an individual.” It’s a good reason not to rush out and get your longevity genes tested, although at this point, you don’t need another one.
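
The base-rate arithmetic deserves to be spelled out. A back-of-the-envelope sketch, assuming for illustration that the reported 77 percent acts as both sensitivity and specificity (which the paper does not state in those terms):

    # Bayes' rule with a 1-in-6,000 base rate (illustrative numbers).
    prevalence = 1 / 6000   # fraction of people who live past 100
    sensitivity = 0.77      # P(model says "centenarian" | centenarian)
    specificity = 0.77      # P(model says "not" | non-centenarian)

    p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    ppv = sensitivity * prevalence / p_positive
    print(f"P(reach 100 | positive signature) = {ppv:.3%}")  # about 0.056%

Under these assumptions, a "positive" signature moves the chance of reaching 100 from about 0.017 percent (the base rate) to about 0.056 percent, and a "negative" one to about 0.005 percent - all far below 1 percent, which is Stefánsson's point.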

UPDATE: Within an hour of this story's publication, the Science study's authors released a statement which a BU spokeswoman described as appearing "because of your inquiry and a similar one from the New York Times concerning methodology used to test 2 of the 150 genetic variants." Here is what the statement says: "Since the publication of our study in Science, which was extensively peer-reviewed, a question has been raised about two elements of the findings. One has to do with two of the 150 genetic variants included in the prediction model, while the other is related to the criteria used to determine the significance of the individual variants. On the first concern, we have been made aware that there is a technical error in the lab test used on approximately 10% of the centenarian sample that involved the two of the 150 variants. Our preliminary analysis of this issue suggests that the apparent error would not effect the overall accuracy of the model. Because the issue has been raised since the publication of the paper, we are now closely re-examining the analysis. Another question that was raised concerns the criteria used to determine if an association between a genetic variant and exceptional longevity was statistically significant. We used standard criteria for the analysis, and we are confident that the appropriate threshold was used."

[I was greatly impressed by the sub-title of the first report: "Scientists Discover the Fountain of Youth! Or Not." Apparently, Mary is just as skeptical about the non-existent "cancer gene", "happiness gene", "fountain of youth gene", etc. as leading scientists like Kári Stefánsson are. "Genes" (with as many definitions as the number of scientists you ask) don't produce complex phenotypes such as "longevity" - they just turn out the "basic building materials" (amino acids for proteins) as called for by the design (in intergenic and intronic sequences). True, if e.g. the concrete is defective, the "longevity" of the architecture may be reduced to the sudden death of a quick collapse. It may be time to realize that "the genes failed us": the old-time genomics of "genes" cannot explain the recursive genome function (hologenomics) of today. It is hard to find imperfection in Mary's science writing. With the "DNA chip" of IBM (for sequencing) coming, perhaps we should distinguish Illumina's "chip" (a bead array, otherwise called a "microarray") from a "chip" that usually means either a piece of semiconductor - or a slice of potato. - AJP]

^ back to top


23andMe Letter to Heads of FDA and NIH

The Spittoon
June 24, 2010

At 23andMe, our goal is to give people the best information possible about their personal genetic data. [This brilliant sentence focuses on the key issues, virtually assuring that both DTC Genome Testing and a renewed FDA will prevail. Hopefully, the process will be cut blazingly short, to avoid the US being deprived of one of her few global competitive advantages that could generate jobs and rescue health care by prevention. Contrary to the astonishing belief that some people should be protected against information about their own bodies - Esther Dyson, a board member at the company, has even called the FDA's position "appallingly paternalistic" - NIH Chief Dr. Collins appears to concur with sharing the information with its owners (see below in this column): "I would be very uncomfortable with a system that says no, we know better than you do, you won't understand this information so we're not going to let you have it. There's something that doesn't feel right about that." Thus, the eventual outcome of new legislation for the FDA (updating its 1976 mandate only to safeguard quality, not to infringe on citizens' civil right of accessing [genomic] information about their bodies) seems assured: "to give people the best information possible about their personal genetic data" - AJP]

We believe that this goal is shared by all genetic testing providers. [Another brilliant sentence, uniting the entire US DTC Business - AJP]

You may be aware that different personal genetic testing companies, while providing completely accurate test results, may provide differing risk estimates for some diseases and conditions. While there are valid scientific reasons for such different estimates, they might nevertheless cause confusion for some consumers and physicians.
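
One mechanism behind such differences is easy to make concrete: two providers can read the same variant correctly yet anchor it to different baseline (population-average) risks. A toy sketch, with all numbers hypothetical:

    # Toy sketch (all numbers hypothetical): one correctly-read genotype,
    # two assumed baseline risks, two different absolute risk reports.
    def absolute_risk(baseline_risk, odds_ratio):
        # convert average risk to odds, scale by the genotype's odds ratio,
        # then convert back to a probability
        baseline_odds = baseline_risk / (1 - baseline_risk)
        odds = baseline_odds * odds_ratio
        return odds / (1 + odds)

    genotype_odds_ratio = 1.3  # hypothetical per-genotype odds ratio

    # Companies A and B assume different population-average lifetime risks
    # (different reference populations or age windows):
    for company, baseline in [("A", 0.05), ("B", 0.08)]:
        risk = absolute_risk(baseline, genotype_odds_ratio)
        print(f"Company {company}: lifetime risk = {risk:.1%}")
    # Company A: ~6.4%; Company B: ~10.2% - same genotype, same odds ratio.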

23andMe has sent a letter to Dr. Margaret Hamburg, Commissioner of the Food and Drug Administration, and Dr. Francis Collins, Director of the National Institutes of Health, asking for their respective agencies’ help in developing broadly applicable standards and guidelines to achieve consensus regarding how to provide information on genetic test results and risk estimates. The contents of this letter are reprinted below (links to the referenced article and blog post are provided here instead of attachments):

June 24, 2010

Dr. Margaret Hamburg
Commissioner of Food and Drugs
Food and Drug Administration
10903 New Hampshire Ave
Silver Spring, MD 20993-0002

Dr. Francis Collins
Director
National Institutes of Health
9000 Rockville Pike
Bethesda, Maryland 20892

Dear Drs. Hamburg and Collins,

We are writing to ask your assistance in resolving an issue of concern to 23andMe and, likely, all genetic testing companies, whether they report their results to physicians or to consumers. As you are aware, though results from 23andMe and other genetic testing companies are typically consistent, there have been reports of inconsistencies (Ng et al. 2009). We believe that it is important to emphasize that different genetic testing companies can report inconsistent results even when based on tests with proven analytical validity. The reasons for this may include: companies employ slightly different statistical models for making risk estimates; companies establish different criteria for the inclusion of associations in their reports; new associations are being discovered at a faster rate than companies’ development cycles; companies may test for an imperfectly overlapping set of genetic variants for reasons including the ability of different genotyping technologies to assay certain variants.

Although inconsistent results may have a scientifically valid basis, we recognize that they may be confusing to physicians and consumers. However, we, as an individual company, cannot address this issue alone. Therefore, we are writing to ask your two agencies to engage with us on this issue, to work towards solutions that can be broadly applied. We offer the following ideas as a starting point for discussion. An organization or group of organizations could develop:

• guidelines for acceptable analytical validity;

• standards for the positive and negative predictive value of all tests;

• “best practices” for companies, for instance necessitating transparency in reporting the positive and negative predictive values of their tests, so that results could be readily compared across companies.

We note that any framework developed for genetic testing companies must consider the multiple high throughput technologies on the horizon, including genome, exome and transcriptome sequencing. For this reason, the set of ideas we present above does not include having an organization define a specific set of markers as an acceptable genetic test.

The issue of inconsistent results was one of several discussed in an Opinion piece by Pauline C. Ng, Sarah S. Murray, Samuel Levy and J. Craig Venter that appeared in the October 8, 2009 issue of Nature. A joint response from 23andMe and Navigenics submitted to Nature but not published by Nature was posted on our web site on November 19, 2009. We have attached our open letter to Nature, and a response from one of the authors for your consideration.

Our goal is to provide the best possible information to consumers and health care practitioners. We would appreciate the opportunity to work with you towards this goal and, more broadly, to promote innovation in personalized medicine. We will follow up with a call to your offices.

Sincerely yours,

Ashley Gould

on behalf of Anne Wojcicki

^ back to top


Amazon Sees the Future of Biology in the Cloud

By Luke Timmerman, Xconomy.com

July 6, 2010

The future of biology, if Amazon.com (Nasdaq: AMZN) has its way, will be in the cloud.

The Seattle-based online retailer has generated buzz the past few years with its foray into cloud computing through Amazon Web Services. This is the model in which customers rent server space on a pay-as-you-go basis, and get access to their data anytime via the Internet. It's supposed to allow small businesses, governments, and anybody else to save cash and hassles by not having to buy and maintain their own in-house servers. The model is credited with enabling a new generation of lean tech startups to build businesses using far less capital. [The easiest metaphor for understanding "cloud computing" is "car rental" instead of buying/leasing your own automobile. It immediately follows that no car rental company will tell you where and how to go (it might provide a map if you don't have a GPS), and it will certainly not drive the car for you. If you want to be actually driven to a destination, think of a limousine service with driver included (or, most simplistically, a "taxi cab"). For genomics, those do not exist - and ultimately they are rather expensive solutions for the long haul - AJP]

Biological researchers haven't embraced the new model as quickly as their tech brethren, but the cloud computing wave is coming to life sciences, says one of Amazon's biotech liaisons, Deepak Singh. The trend is coming out of necessity. [Like when you fly abroad: renting a car or taking a cab are frequent options, chosen out of necessity - AJP] Gene sequencing has been on a breakneck pace of innovation over the past few years, as instrument makers like San Diego-based Illumina (Nasdaq: ILMN) and Carlsbad, CA-based Life Technologies have lowered the cost of sequencing an entire human genome to as little as $10,000. Upstarts like Mountain View, CA-based Complete Genomics seek to sequence entire genomes for as little as $5,000, while a rival, Pacific Biosciences, is aiming to sequence genomes in 15 minutes. Since every human genome has 6 billion chemical units of DNA, this faster and cheaper form of sequencing is creating enormous datasets that somebody will need to store, analyze, compare, and visualize. Without that capability, it's just a vast pile of data that doesn't really lead to valuable new insights for medicine. [Moreover, the entire sequencing industry might become unsustainable and sequences worthless unless someone actually knows what to do with the data to transform them into understanding - the cloud has intelligence as limited as a cab driver's: if you tell it the address, it might get you there (often, purposefully, not along the most economical path FOR YOU) - AJP]

Computing challenges have become a "serious blocker" to people trying to make sense of the genomic wave, Singh says. And Amazon has made it a high priority over the past couple years to become the company that stores genomic data in a cheaper and more accessible way for researchers. Customers, Singh says, "have started looking at the cloud very seriously as a possible option. Over the last year or so, that curiosity has turned into serious adoption."

Amazon's pay-as-you-go, rented server model has attracted partners and customers all over the country. The Broad Institute of MIT and Harvard in Cambridge, MA, is a user, along with Harvard Medical School. Life Technologies, an instrument maker, and Seattle-based Geospiza, a bioinformatics software company, have a partnership to use Amazon's servers to store genomic data. Palo Alto, CA-based DNAnexus, an intriguing bioinformatics startup, has built its business model around using Amazon Web Services. And one of the leading evangelists for cloud computing in genomic research is C. Titus Brown, a computer science and microbiology professor at Michigan State University, who is teaching students how to use Amazon Web Services to store the data for their experiments.

Precisely how important this is to Amazon, a company with $24.5 billion in revenue in 2009, is hard to say. In keeping with Amazon's close-to-the-vest culture, Singh would only offer vague adjectives when I asked for specifics on the number of customers, the percentage of Amazon Web Services business that comes from life sciences customers, the number of employees devoted to this effort, and the size of the market that Amazon ultimately sees for cloud computing services in life sciences.

There are still major barriers to be cleared before this can become a real earnings driver for Amazon. Much sequencing is done at centralized labs around the country that already have invested in expensive servers to store their data in a secured place on campus. So there's incentive to keep using those tools to get the most value out of them. Some labs are generating so much data from the instruments that they aren't always sure they have enough bandwidth to transmit it all to Amazon's servers. The raw experimental data is so precious to a biologist's career that it can be hard to just send it away to a vendor for safe-keeping, rather than have it under lock and key on campus.

And many researchers struggle with how to analyze, visualize and interpret the data being spit out by the sequencing machines. The software that needs to run on top of Amazon's storage capacity and databases -- the bioinformatics piece of the puzzle -- is still a cottage industry with home-made programs, piecemeal open source alternatives, and a lot of researchers using old-school spreadsheets like Microsoft Excel.

Yet even before companies like DNAnexus achieve major market traction with simple and easy-to-use bioinformatics software, many researchers still feel compelled to store the data in anticipation of the day when it will be easier to sift through. And Amazon isn't the only company wooing them. Microsoft and Google have their own cloud computing services to offer. At least one competitor, Seattle-based Isilon Systems, is making visible inroads in the life sciences market by selling clustered servers. Isilon now generates 15 to 16% of its revenues from life sciences customers, up from 2% in early 2008, CEO Sujal Patel said at a recent Xconomy forum. Isilon's customer roster includes a lot of heavy hitters, like Merck, Genentech, Sanofi-Aventis, Bristol-Myers Squibb, Illumina, Complete Genomics, the Broad Institute of MIT and Harvard, Stanford University, and Johns Hopkins University.

Amazon's Singh knows this terrain well himself. He got his doctorate in chemistry from Syracuse University, and spent eight years of his career in the biotech industry, including stints at San Diego-based Accelrys and Seattle's Rosetta Inpharmatics. The past two years, he's been working as a business development manager for Amazon Web Services, with a particular emphasis on getting to know what the life sciences market wants from cloud computing.

Amazon has done a number of things to ease the transition for customers to cloud-based storage, Singh says. It has worked to obtain public databases and make them available to researchers. One example last month came from three recently completed pilot projects for the 1,000 Genomes Project. Putting that data out there for researchers, and enabling them to share it, has generated a lot of interest. "There's been a lot of demand for 1,000 genomes pilot data in Amazon Web Services," Singh says.

Showing the scientific value is one part of the equation, but the business proposition is just as important. Amazon's case is a pretty simple one. A lab can make a big capital expenditure upfront, but it usually has peaks and valleys of computing power needs. That means the lab isn't really fully utilizing its server capacity all the time. Plus, the new breed of sequencing instruments is getting so cheap and fast that there's no way a lab can really anticipate its server capacity needs in the future. Instead of buying expensive equipment and risking getting it maxed out in a year or two, the argument goes, why not lean on Amazon and its basically limitless capacity and flexible pay-as-you-go pricing model?

That may make sense for a lot of labs, but Amazon has found it needs to tailor its product a bit more for life sciences customers. The feedback prompted Amazon to make one curious old-economy concession to help with cloud computing. For some customers who don't have the bandwidth to efficiently communicate with Amazon's cloud, the company has set up what it calls an "import/export" service, which allows the lab to save their data on disk and physically ship it to Amazon via FedEx. This helps with some customers who want to know they can get their data to and from Amazon in a reliable way, on a predictable time schedule, without hogging up too much bandwidth on campus.
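
The arithmetic behind "FedEx as bandwidth" is easy to check. A rough sketch, where the dataset size and uplink speed are illustrative assumptions, not Amazon's figures:

    # Courier vs. network for bulk genomic data (illustrative assumptions).
    dataset_tb = 5        # raw output of a few sequencing runs, in terabytes
    uplink_mbps = 100     # sustained campus uplink, in megabits per second

    bits = dataset_tb * 1e12 * 8
    transfer_days = bits / (uplink_mbps * 1e6) / 86400
    print(f"Network transfer: {transfer_days:.1f} days")  # about 4.6 days

    # An overnight courier takes ~1 day regardless of size, so shipping
    # wins as soon as the dataset outgrows the pipe: its "bandwidth"
    # scales with the number of disks in the box.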

As much as academic labs might be interested in what Amazon is offering, they can only do as much as their funding agencies really allow. And there are rumblings that the U.S. National Institutes of Health, the world's primary funding agency for biomedical research, might be facing budget cuts in the not-so-distant future.

Budget cuts at NIH could actually benefit Amazon, Singh says. It could put pressure on labs to be more careful with their capital spending, think a little harder about whether to build their own server clusters, and look more closely at alternatives like cloud computing. Even though the cloud is supposed to be cheaper, Amazon has felt the need to offer customers discounts on price. The company has started offering a cloud infrastructure service with less backup capability for the data, or "redundancy." That offering is still a more durable backup option than a lab can build on its own, Singh says, and it comes at a lower price than Amazon's regular cloud offering. The "Reduced Redundancy" service makes sense, he says, for a dataset that can be quickly reproduced (like another sequencing run on a blood sample, if the data is lost, for example).

Most of the interest in Amazon's offering is from academic labs, rather than from biotech companies and Big Pharma, Singh says. If Amazon can gain a toehold first in academia, it will almost certainly look to continue the momentum in Big Pharma, which spent an estimated $45.8 billion on R&D last year. Big Pharma spends all that money, and is still living with an abysmal success rate, in which only one out of 10 drugs that enters testing ever becomes an approved product.

Sequencing of individual human genomes, and analysis of how they differ, is one of the ways researchers are hoping to someday lower the cost and increase the odds of success in developing new medicines. It's a long-term trend that Amazon wants to be in position to reap.

"Over the past 12 months, there's been significant interest. All you have to do is look at conference agendas," Singh says. "The nature of the conversation has shifted from 'What should we do?' to 'What are we doing, and how should we do it?'"

[If we knew what we were doing, we wouldn't call it "research" - Albert Einstein - AJP]

^ back to top


Calling Longevity GWAS Findings into Question [Gene(s)]

GenomeWeb
July 06, 2010

According to Nature's The Great Beyond blog, the GWAS published in Science last week that identified SNPs for "exceptional longevity" has generated some criticism within the community. Critics are calling the findings of Paola Sebastiani et al. into question amid questions about "subtle biases" and the team's use of "different versions of the SNP chips" from Illumina, according to Nature. The Wellcome Trust Sanger Institute's Jeffery Barrett told the Guardian that "some of the genetic variants in this study are claimed to have much, much stronger effects on longevity than we've seen in similar studies of diabetes, heart disease, and cancer. For instance, the strongest single effect makes someone 10 times more likely than average to be extremely long-lived, compared to other complex diseases where typical variants only make someone, say, one and a half times as likely to be diabetic," highlighting his skepticism. According to Nature, Sebastiani says that the variants they've reported have larger effects since becoming a centenarian is a much rarer condition than having diabetes.

Last week, the New York Times' Nicholas Wade reported that DeCode Genetics' Kari Stefansson, who has run a similar experiment using a larger group of centenarians, did not find any of Sebastiani et al.'s 150 variants in his cohort.

Nature reports that Sebastiani and her team are considering "holding an online chat next Wednesday to answer questions" and to quell any confusions that their work has generated.

[We have spent dollar billions (probably trillions) and close to half a Century trying to find the "cancer gene", "epilepsy gene", etc., etc. - only to come to the realization, pinpointed by the principle of recursive genome function, that complex phenotypes (such as longevity - or even cancer) are likely to lurk in (holo)genome regulation (via epigenomic channels) rather than to be tied to a single (or even a small, identifiable number of) "genes". Maybe the time is ripe to spend monies on the much-neglected (holo)genome regulatory functions; e.g. to research how hereditary and/or epigenomic damage to DNA (not just to "genes", but mostly damage that derails recursive regulation) leads to an accelerated aging process and untimely death. - Pellionisz_at_JunkDNA.com]

^ back to top


IBM setting up cloud for genome research
July 2, 2010 10:35 AM PDT
by Lance Whitney

IBM is looking to help genome experts further their research by providing a cloud where they can better share information with their colleagues.

IBM and the University of Missouri announced Friday a new initiative to develop a cloud-computing environment where universities and medical professionals could work together on genome research on a large-scale, regional basis.

Tapping into Big Blue's high-performance computers, the joint IBM-Missouri cloud would let researchers share their findings and discoveries with each other more quickly and efficiently than they do now. Such an advancement would push the university's current bioinformatics research even further, potentially improving people's lives, IBM said.

As one example, specific genetic changes in cancer cells help doctors decide how best to treat their patients for breast cancer, colon cancer, lung cancer, and leukemia. To detect those changes, DNA samples must currently be sent out to labs for sequencing and analysis, a process that can take weeks. But by accessing IBM's genomics cloud, medical staff could sequence and analyze those samples in just a few minutes, according to IBM. [Wait a minute ... there are actually three bottlenecks. Presently, as the statement acknowledges, the first bottleneck is "sequencing" ("can take weeks"). The second bottleneck, storage and transfer to the sites of "DNA Analysis and Interpretation", usually does not go through the net (it requires too much bandwidth and too much time); UPS-ing physical hard drives, laughable as it is, is still "a preferred choice" and takes a few days. The third bottleneck is the actual analysis and interpretation: presently some projects take forever, and some limited analyses can be produced in weeks, using pre-genomics computing architectures. In the future (a rough estimate is 5 years), all three bottlenecks should add up to 30-60 minutes COMBINED, to be practical e.g. in a hospital environment for a biopsy, full DNA sequencing, and full DNA analysis and interpretation (and a personalized recommendation, e.g. of the most suitable cancer treatment, as predicted in my 2008 YouTube). Forget UPS. Forget even Internet-2. Hospitals will demand e.g. Ion Torrent desktop sequencers ($50k) with on-rig serial/parallel (FPGA) hybrids, to cut out storage and transfer and to accelerate analysis and interpretation of low-bit, huge-scale strings (DNA); also forecast in the above YouTube. - Pellionisz_JunkDNA.com]
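
To see why the comment above dismisses even fast networks for the hospital scenario, a rough sketch of the transfer leg alone (the raw-read size and the time budget are illustrative assumptions):

    # What a 30-60 minute total budget implies for data transfer alone
    # (illustrative assumptions: ~120 GB of raw reads per genome, and
    # ~10 minutes of the budget available for moving the data off-site).
    genome_gb = 120
    transfer_budget_s = 10 * 60

    required_gbps = genome_gb * 8 / transfer_budget_s
    print(f"Sustained link required: {required_gbps:.1f} Gbit/s")  # 1.6 Gbit/s
    # Few sites sustain that to an off-site cloud, which is the argument
    # for on-rig (e.g. FPGA) analysis sitting next to the sequencer.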

"This collaboration with IBM provides our researchers, and those being trained to become tomorrow's researchers and educators, access to critical high performance computing resources needed to process massive data sets and apply increasingly more sophisticated bioinformatics tools and technologies," Gordon Springer, scientific director of the University of Missouri Bioinformatics Consortium, said in a statement. "The availability of these resources will enable discoveries that will benefit mankind and the environment."

In the first phase of the cloud project, IBM said it will offer Missouri an iDataPlex high-performance computer and software that will tie in the university's existing computers and speed up the DNA sequencing and analysis of humans, plants, and animals. In the second phase, Big Blue and the university will work together to create a prototype of the cloud environment. The final phase should see the genomics cloud become fully operational and expand to a regional scale. [Sounds like 5 years to me - AJP]

No specific time frame was given as to when the project would formally get off the ground or how long it might take to reach the final phase.

This isn't Big Blue's first foray into the world of genomics research.

Last year IBM announced new research into technology that can quickly and relatively cheaply conduct genetic testing. In the past the company has also donated hardware and software to remote areas to further study human DNA around the world. And the original job of IBM's Blue Gene supercomputer was to predict how chains of biochemical building blocks described by DNA fold into proteins.

[The "IBM/Roche DNA Transistor" and the "IBM Genome Cloud Computing". One might believe (as most did think so when the "IBM CP and its OS/2" came out, that this time "IBM cornered genomics". The bottom line is that nobody can tell the future - it may or may not happen to IBM - but "the race is on" with IBM/Roche having produced the Big One earthquake (the tectonic plates of "Big IT" and "Big Pharma" piled upon one another (that I predicted half a Century ago). Indeed, the very same IBM already "almost did it" with the World's fastest supercomputer (at that time), Blue Gene under historical governance of Caroline Kovac - upon completion of The Human Genome Sequencing Project. Also, presently a Seoul-based microarray - followed by Full DNA Sequencing Institute is backed by SAMSUNG. In the USA, Microsoft HealthVault, Google Health, DELL Life Science Division, Oracle's foray into Personalized Medicine (etc) are also "Big Players". Not everybody is "sold on the Cloud", however (e.g. see Larry Ellison going ballistic in a 4:25 minute YouTube on "Cloud Computing" in an a Churchill Club's excerpt , see full 1:26 hour YouTube , yet you can also download "Oracle Cloud Computing"... Now take Christensen's book "The Innovator's Dilemma;- When New Technologies Cause Great Firms to Fail" and factor in what happened to "IBM PC/OS2". It looks like Genomics needs a Microsoft-type "pure-play genome software start-up" for the fastest growth possible ... Pellionisz_at_JunkDNA.com]

^ back to top


Scientists Discover the Fountain of Youth! Or Not.

Mary Carmichael
Newsweek, July 1, 2010

They say getting old is better than the alternative, and it's much better if you manage to do it like Florrie Baldwin, who died in May after 114 years of great health. (She was still climbing ladders at 104 and almost never took any medication.) Baldwin attributed her long, hale, and hearty life to a daily fried-egg breakfast, but her true advantage was probably in her DNA. Scientists have long suspected that she and others who grow very old have genetic variants that protect against the molecular ravages of age.

The trick, though, is finding those genes [and genetic variants, more often than not, in the intergenic and intronic regulatory regions - AJP]. Much of the research so far looks more clear-cut in mice or worms or fruit flies than it does in humans. And because the topic of old age and genes in general inspires so much excitement, it's often hard to tell where any given study falls on the continuum between brilliant (last year, the Nobel Prize for medicine went to three scientists who studied the relationship between DNA and aging) and ridiculous (this year, the cosmetics company Lancôme launched an eye cream that purports to "boost genes' activity and stimulate the production of youth proteins," which is about as believable as Baldwin's fried-egg theory).

What, then, to make of the new headline-grabbing paper that identifies somewhere between 33 and 70 genes (depending on how definitive you like your results) associated with extreme longevity—and also introduces a model that claims to predict with 77 percent accuracy whether you're going to be one of those ripe old folks? If the study's findings are correct, they are a very big deal, with the potential to overhaul how scientists think about aging and genetics. Indeed, they're so striking that some of the world's top geneticists think they can't possibly be right. More on that in a bit, but first, a point that even the study's authors have to concede: the research doesn't actually describe normal aging. It concerns only genes that may govern the process in people who make it to 100 or more. "The question is, of course, do the findings apply to the general population? Can we apply your model and predict the average lifespan?" says Paola Sebastiani, the Italian researcher who led the team. "And the answer is no, we can't." So what exactly can we learn from the new study?

The paper, published today in Science, has two basic parts. The first is what's called a "genome-wide association study," or GWAS. Researchers obtained genetic data (300,000 variants) from about 1,000 very old people (those over 100 years of age) enrolled in the New England Centenarian Study, and then compared the readouts to results from a same-size group of average people often used as a standard control in genetic studies. They found 70 genes that were more common in the centenarians. Then they repeated their study with smaller groups and confirmed 33 of those. They also looked at known disease-causing genes in both centenarians and regular subjects. It turned out that the centenarians had just as many dangerous variants as everyone else, which suggests that the 70 longevity genes (or 33, if you prefer the confirmed ones) were actively protecting them against illness. In other words, very long life isn't just about not having genes that make you sick—it's also about having genes that keep you well.
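
In computational terms, the GWAS half of the study is one contingency test per variant, repeated across the whole chip. A minimal sketch on synthetic data (scaled down from the study's ~300,000 variants for runtime; all numbers are illustrative):

    # Minimal GWAS-style scan on synthetic data simulated under the null:
    # no variant truly differs between cases and controls.
    import numpy as np
    from scipy.stats import chi2_contingency

    rng = np.random.default_rng(0)
    n_variants, n_cases, n_controls = 50_000, 1000, 1000

    maf = rng.uniform(0.05, 0.5, n_variants)      # per-variant frequencies
    case_minor = rng.binomial(2 * n_cases, maf)   # minor-allele counts
    ctrl_minor = rng.binomial(2 * n_controls, maf)

    alpha = 0.05 / n_variants  # Bonferroni-style multiple-testing threshold
    hits = []
    for i in range(n_variants):
        table = [[case_minor[i], 2 * n_cases - case_minor[i]],
                 [ctrl_minor[i], 2 * n_controls - ctrl_minor[i]]]
        _, p, _, _ = chi2_contingency(table)
        if p < alpha:
            hits.append(i)

    # With identically-genotyped groups this prints ~0; real signals (or
    # chip artifacts) appear as variants surviving the corrected threshold.
    print(f"{len(hits)} variants pass the corrected threshold")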

Next, the researchers employed some unusual math to build a model that described the combined effects of 150 variants they had found in the centenarians. (Some of those variants were related to the same genes, so the final number of suspected genes was still 70.) Then they applied the model to each of their study subjects, blinding themselves as to whether an individual was a centenarian or a member of the control group. Seventy-seven percent of the time, the model rightly predicted which group a given person belonged to—a success rate that is not only statistically significant but unprecedentedly high for a model that predicts a complex trait. It also pointed to 19 different genetic "signatures," or combinations, that seemed to confer long and healthy life in the centenarians.

Here's the weird thing: 15 percent of the control group had those signatures, too. That means that, genetically speaking, 15 percent of us should be living to 100 or more, when in reality only about 1 in 6,000 people do. What's happening to the rest of the would-be centenarians? Thomas Perls, a co-author of the paper and a geriatrics researcher at Boston University, says that maybe they're getting bumped off by things no gene can prevent. "Remember, this generation [in the New England Centenarian Study] lost a quarter of its population to childhood infectious diseases," he says. "And just because you've been handed the genetic blueprint for long life doesn't mean you're going to get there if you smoke a lot or you get killed in World War I or you're hit by a bus."

These are provocative ideas, and that may explain the wide variety of reactions that scientists had upon seeing the paper. Some said it was groundbreaking and could lead to drugs that mimic the protective genes in those who aren't blessed with them naturally. "What this paper answers, which I think is very important, is how many variants are there that can assure longevity," says Nir Barzilai, director of the Institute for Aging Research at the Albert Einstein College of Medicine in New York City. "Now scientists can start tracking those genes down and leading the findings toward drug development." Barzilai, who has collaborated with the study's authors on previous projects, has done some of that work already, probing a gene that influences how the body processes a hormone called IGF-1.

But other researchers were concerned about the new study's methods, calling the results everything from "somewhat surprising" to "preposterous." The problems start, they say, with the size of the group the researchers examined. The New England Centenarian Study is the largest cohort of very well-seasoned people in the world, but compared to the numbers typically used in GWAS-style research, it's actually quite small. Modern GWAS techniques usually involve groups of tens of thousands or hundreds of thousands of people. To attain statistical significance in a GWAS as small as the one in the Science paper, any gene would have to have a hugely strong effect in the body. It's odd that the Science paper finds not just one strong gene but a whole raft of them, especially since common diseases are usually caused not by strong genes but by weak genes acting together. (Why is a bit of a mystery, though it's possible that many strong genes have been weeded out over time by natural selection.) Aging, of course, is humanity's most common ailment. "I am very surprised that in a cohort of this size they have found 33 variants of genome-wide significance in extreme longevity," says Kári Stefánsson, the Icelandic researcher whose company, deCODE Genetics, has led many GWAS efforts. "We haven't seen any of them in our work with a different kind of cohort, but [a] much larger [one]." Other work at deCODE has also suggested that extreme old age is influenced by just a few genes—certainly not as many as 33 or 70.

The study's authors have a response to that. Yes, they admit, the sample is smaller than you'd need to do a GWAS of a common disease (which isn't really their fault, since there aren't many centenarians available for study in the first place). But common diseases, with their panoply of weak genes, aren't the right comparison to make, they say. Centenarian status isn't just an extreme form of the common condition known as aging, they argue; it's a rare condition of its own, and rare conditions are often caused by genes with powerful effects.

There are other potential problems with the new study. David Altshuler, a leading geneticist at the Broad Institute (a collaboration between MIT and Harvard), says that "one has to be cautious in interpretation, because the cases and controls were drawn from different times and places, and the DNA from cases and controls was measured using different technologies, which could lead to false apparent relationships." Duke University's David Goldstein, also a prominent geneticist, echoed those concerns. (And Altshuler and Goldstein are renowned for rarely agreeing on anything.) The control data in the Science study came from a standard set of numbers that Goldstein's lab has used, too. "We have found when we compare samples run at Duke to the [standard] control panel, there are a lot of [variants] that appear significant just because the samples were run in different ways," he says. Until the data has been replicated using identical technologies for the centenarians and controls, he adds, "I think we've got to hold judgment on this."

The authors of the Science paper are respectable researchers, and they're not trying to sell anyone genetically enhanced snake oil. (Tom Perls, in fact, has been such a vocal critic of anti-aging hype that he was once sued by the American Academy of Anti-Aging Medicine for defamation. They settled.) Sebastiani, Perls, et al. will be doing lots of follow-up research, including, they hope, a whole-genome sequencing project that will shed more light on their findings.

But for now, their research isn't ready to be translated from the lab to the clinic. You don't need to rush out and get tested for all the genes found in the Science study (although someone somewhere is surely making plans right now to sell you such a test). You can probably get a good idea of your risk just by looking through your family scrapbooks. "Using this technology might be helpful," says Robert Marion, a clinical geneticist at Montefiore Medical Center in New York. "But I would think that by just asking how old the person's parents and grandparents were at their time of death, the accuracy would probably be higher." So, if you want to know your chances of making it to 100, you should learn your family history and eat right and exercise while you're at it. Surely you didn't need a highly technical Science paper to tell you that.

[I got fairly technical in my Google Tech Talk YouTube - 8,230 views, rising steadily since late 2008 - about aging as the result of recursive genome function running out of auxiliary (regulatory) information by de novo methylation (rendering non-coding sequences "canceled" upon perusal). Growth is fueled by information - and since we all have a finite amount of it, we must eventually run out. And, indeed, one can tell the average expected lifespan from the genome fairly precisely. (Yes, you can, too.) Ever wondered why a mouse, with a 98% identical set of genes, only lives for 2-4 years, while we can last over twenty times longer? Just look at a genome: genes are nearly identical for species from lowly worms to homo sapiens - but the regulatory mechanism (the amount of information, as well as the "clock speed" of recursion) can be rather different. The mosquito fish lives about 2 years, a catfish about 60. Look for recursive genome function in the regulatory (non-coding) part of the DNA. One cannot exclude "genes" - but they are unlikely to be the clue. Your "Junk DNA" can be the treasure chest of your longevity. - Pellionisz_at_JunkDNA.com]

^ back to top


IBM DNA Decoding Meets Roche for Personalized Medicine
eWeek.com
Brian T. Horowitz
2010-07-01

[IBM-Roche DNA Transistor - AJP]

IBM and Roche are working together to decode DNA more quickly and cheaply, potentially allowing patients to receive customized prescription drugs. In the future, this kind of health care IT could also allow patients to purchase their own DNA code information for as little as $100.

IBM and Roche, a pharmaceutical and diagnostics company based in Basel, Switzerland, are working together to fine-tune a DNA decoding process that could lead to faster and more affordable sequencing and personalized medication.

As part of the July 1 agreement, Roche's subsidiary, 454 Life Sciences, will market and distribute future products based on IBM's DNA Transistor technology. In addition, IBM will license the technology and continue to provide expertise and resources.

Roche, which describes itself as the largest biotech company in the world, holds "expertise in medical diagnostics and genome sequencing," IBM announced.

"Sequencing is an increasingly critical tool for personalized health care," Manfred Baier, head of applied science at Roche, said in a statement. "It can provide the individual genetic information necessary for the effective diagnosis and targeted treatment of diseases. We are confident that this powerful technology, plus the combined strengths of IBM and Roche, will make low-cost whole genome sequencing and its benefits available to the marketplace faster than previously thought possible."

Ajay Royyuru, senior manager of the IBM Research Computational Biology Center, explained that DNA sequencing has come a long way in 10 years, as originally genome sequencing was not yet possible. Now the technology is available but costly, he said.

"The next step we need to take is to make it faster and better in quality of readout and scale of operation. Once we reach that point, which could be [in] the next five years or 10 years, then I think we have the potential of being able to apply that routinely to the practice of medicine," Royyuru said in an interview with eWEEK.

The goal of the project is to read DNA quickly and efficiently at a low cost. If successful, this process would allow doctors to more effectively match medication to patients.

IBM's DNA Transistor technology, comprising a combination of metal and silicon insulation, uses electrodes to thread DNA molecules through a nanopore, a hole the size of a nanometer, or one-billionth of a meter.

Royyuru compared the creation of the nanopore to punching a small hole through a piece of paper with a pencil.

"We're able to drill a hole small enough and operate it electrically to put the DNA through the pore. All of that we have shown is workable," he said.

In the next phase of development, IBM and Roche will work on moving the DNA through the nanopore.

"Then we will have shown at that point that we can control the passage of DNA," Royyuru said.

Slowing down the DNA as it travels through the nanopore makes genetic data readable, he said.

Personalized medication can eliminate some adverse side effects of current drugs. Some preliminary cancer drugs based on the DNA Transistor technology have already reached the market, Royyuru noted.

"Today what everybody does with medicine is trial and error," Royyuru said. "They give a certain treatment because it worked on most people before. But they have no way of knowing if it will work on you or not. They have side effects that are worse than what you're trying to treat."

Ultimately, the technology has the potential to improve throughput and reduce costs, so human genome sequencing could be purchased for $100 to $1,000.

In addition to the announcement, IBM has posted a video of how the DNA Transistor technology works. [See above - AJP. This announcement, though it may appear "earth-shaking" for the sheer size and dominance of Big IT by IBM and Big Pharma by Roche, has been expected for a long time, and both the announcement and the video actually fall short of expectations. "Nanopore sequencing" is not led by IBM: Pacific Biosciences (with a $100 M Intel Capital investment) is about to ship its equipment to R&D labs ($695,000 apiece), and Oxford Nanopore (UK, backed by Illumina) is also ahead in the technology development. Last but not least, Jonathan M. Rothberg, Ph.D., mastermind and developer of the original 454 Roche sequencer (now largely obsolete), founded and is CEO of Ion Torrent - with a desktop-printer-size nanosequencer priced at about $50k. The announcement is particularly disappointing since it erroneously refers to "decoding DNA" - while there is not a word anywhere that IBM would focus on the NEXT STEP (beyond the "sequencing" that the DNA Transistor will do who knows when) and target "recursive genome function". By far the most positive aspect, truly making history, is that "doing genomics" is now branded by the world's two largest monsters as "thinking physical". Now the question is: in addition to the physics of the sequencing device, where is the theoretical physics that Schrödinger started in 1943 with "What is Life?", predicting an aperiodic crystal in which covalent bondings of H ions encode life? We can now proceed to substitute "aperiodic" with the more precise "nonlinear dynamical, fractal & chaotic arrangement (and re-arrangement ... through normal and derailed recursion) of the bonding sequence". That is the true "decoding" task. - Pellionisz_at_JunkDNA.com]

^ back to top


How to Build a Better DNA Search Engine

The techniques for indexing Chinese language websites could dramatically improve the speed of bioinformatic searches, according to research by SOSO, the third-largest Chinese search engine.

If there's one thing that Google has taught the current generation of web-savvy surfers, it is that internet searches are quick. The small print at the top of every results page it delivers stamps this idea into the culture of search.

Type the word "physics" into the search engine, for example, and it delivers 102,000,000 results in 0.21 seconds. That's mind-blowingly fast.

That might sound like good news for researchers combing bioinformatic databases. These databases are huge and growing exponentially. They contain, for example, the rapidly increasing number of genomes from different species around the planet as well as the genomes of different individuals within the same species.

Given our experience with web search, it's easy to imagine that finding a gene that is common to more than one organism or individual ought to be as quick as searching Google. But it isn't.

The reason, according to Wang Liang, a computer scientist at SOSO.com, one of the big three search engines in China, is that bioinformatics has failed to exploit the basic search techniques that have made search engines like Google so quick.

Most bioinformatic searches use either the BLAST or FASTA algorithms. These essentially compare the data from one genome with the data from another, then with another and so on. That's satisfactory when there are a relatively small number of genomes but it quickly becomes unmanageable as the number of genomes increases exponentially.

Search engines faced exactly this problem 20 years ago with the growth of the world wide web. Search engines initially indexed the web by recording the words that each document contained. Searching for a specific word then meant looking for it in one web page, then in another and another and so on. This approach becomes increasingly slow as the number of documents grows.

So the engines took another approach: they turned the indexing process on its head, creating what is known as an inverted index. "The idea of an inverted index is very simple," says Liang.

Instead of creating a list of web pages and the words on each page, the indexing process records, for each word, a list of webpages where it appears.

So a search now looks only through the list of words that a search engine has indexed. When it finds the word, that entry also records the webpages where it appears. In other words, instead of searching an index of webpages to find a specific word, you search through an index of words to find the webpages where it appears.

That dramatically simplifies things, but there are various complexities that make the indexing process tricky. For example, in English, the spaces between words show clearly where each word starts and finishes. That isn't the case in genetic data. So one important question is what constitutes a word. [This venerable question was posed in the classic approach of Stanley et al., 1994, which was to "arbitrarily" assume that a "word" is a random 3-8 nucleotide string - AJP]

Liang says that an important clue comes from the way search engines index languages like Chinese where there are no spaces between words either. One way to index a Chinese document is to segment the text into n-grams, words that are n-letters long. So you start by segmenting it into 1-grams, one letter words, then 2-grams, two letter words. A search for a 3 letter word, such as ABC, can then be done by searching for the 2-grams AB and BC.

In fact, some Chinese search engines work in exactly this way, by indexing all the 2-grams.

But how many letters are there in a genetic word, and what n-grams should a search engine index? A 1-gram segmentation gives only four words, the base nucleotides A, T, C and G. But that's no good because the combined searches needed for longer words are then unmanageable.

The answer comes from the statistical distribution of words in DNA sequences, which Liang says follows Zipf's law. This essentially states that in any long document, 50 per cent of the words appear only once. This can be used to find a kind of average length of DNA words.

In Chinese, for example, the percentage of 1-gram words that appear only once is less than 50 per cent, the percentage of 2-gram words that appear only once is about 50 per cent, and the percentage of 3-gram words that appear only once is more than 50 per cent. So 2-gram words are a good average.

Liang applies the same criterion to find the average length of words in the genomes of Arabidopsis, Aspergillus, the fruit fly and the mouse. And he finds that a good average word length is about 12 letters. So the best way to index genome data is to look for 12-grams, he says.

None of this needs any new technology. Liang says that the open source search engine Lucene is the perfect platform in which to do the work and, impressively, he has even used it to build a rudimentary bioinformatics search engine himself.

It makes sense that the huge improvements in search that have been made by commercial search engines ought to find application in the bioinformatics world. Perhaps there's even a decent business model in such a plan, for example by serving ads targeted at the kind of people who do bioinformatics search.

The only question is who will lead the way in this area. And if this work is anything to go by, it looks as if the Chinese search engine SOSO has the lead.

[The idea that the DNA is a "language" (certainly not English, and not even a "Russian novel", as Esther Dyson likes to allude to; and, while wishing the Chinese good luck, the FractoGene approach suggests that it is no human language at all) goes back at least to Eugene Stanley et al., 1994, as elaborated by AJP in Simons and Pellionisz, 2006. However, as presented by Pellionisz at Cold Spring Harbor in 2009, the early approach was tainted by (totally arbitrarily, as the authors admit) assigning a "DNA word" randomly, as a 3-8 nucleotide string. The seminal work by Rigoutsos (2006) is referenced there as a method of identifying "words" of DNA - and on that basis Pellionisz' FractoGene shows that a full DNA sequence, when broken up into "repetitive words", follows the Zipf-Mandelbrot Parabolic Fractal Distribution curve. If Dr. Collins is right that "the DNA, like mathematics, is God's language", it is unlikely to be English or Chinese (or any human language) - Nature's Geometry speaks Fractal. This, actually, may be rather fortunate for creating the "universal search engine" - optimized not for either English or Chinese, but for the MEANING of concepts - AJP]

^ back to top


'Jumping genes' make up roughly half of the human genome

Wired
By Brandon Keim
2010-06-25

[Image: Recursion Life Cycle of a Retrotransposon]

Geneticists have revealed that transposons, or 'jumping genes', which create genomic instability and are implicated in cancer and other diseases, make up roughly half of the human genome.

"Now it looks like every person might have a new insertion somewhere," says senior author Scott Devine, associate professor of medicine at the University of Maryland School of Medicine's Institute for Genome Sciences.

Transposons are small, self-replicating sequences that transfer themselves from one generation to another. But the scientists faced the overwhelming problem of finding a new insertion within three billion base pairs.

Their study indicated transposons are jumping in tumours and are generating a new kind of genomic instability. They are already known to interrupt genes and cause human diseases, including neurofibromatosis, hemophilia and breast cancer.

Scientists believe a process called methylation, which silences genes during differentiation, also shuts off transposons' ability to jump. Analysing the patterns of mutations in the lung tumours suggested that during tumour formation, modified methylation patterns may be allowing transposons to re-awaken, Devine says.

The results are published in the June 25, 2010 issue of Cell. (ANI)

Reference:

R.C. Iskow, M.T. McCabe, R.E. Mills, S. Torene, E.G. Van Meir, P.M. Vertino and S.E. Devine. Natural mutagenesis of human genomes by endogenous retrotransposons. Cell (2010).

[The First Decade after The Human Genome Project was a painful transition. The Second Decade will be the Revelation of "Recursive Genome Function". While Barbara McClintock was ridiculed as a "Kook" for her "Jumping Genes" notion (she received the Nobel Prize in 1983, 35 years after her publication), "The Human Genome Project" still (erroneously) assumed that the DNA is "static" (i.e. that the sequence of 6.2 Bn A,C,T,G-s explains everything). In 2002, the FractoGene notion suggested nonlinear dynamics (fractal and chaotic properties) of genome function, but the "double heresy" of running against both Crick's "Central Dogma" and the "Junk DNA" axioms made it possible for the "old school" to hold the line till "The Decade of Recursion" (now "recursive genome function" stands at 69,300 hits on Google). Cancer, as the epitome of regulation-derailment, is likely to become a central target of the now liberated new decade of research, analysis, interpretation - and up to personalized therapy and cure. - AJP]

^ back to top


A coding-independent function of gene and pseudogene mRNAs regulates tumour biology

Laura Poliseno, Leonardo Salmena, Jiangwen Zhang, Brett Carver, William J. Haveman & Pier Paolo Pandolfi

Nature 465, 1033-1038 (24 June 2010) | doi:10.1038/nature09144; Received 21 September 2009; Accepted 22 April 2010

Cancer Genetics Program, Beth Israel Deaconess Cancer Center, Departments of Medicine and Pathology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts 02215, USA

FAS Research Computing & FAS Center for Systems Biology, Harvard University, Cambridge, Massachusetts 02138, USA

Human Oncology and Pathogenesis Program, Department of Surgery, Memorial Sloan-Kettering Cancer Center, 1275 York Avenue, New York, New York 10021, USA

Abstract

The canonical role of messenger RNA (mRNA) is to deliver protein-coding information to sites of protein synthesis. However, given that microRNAs bind to RNAs, we hypothesized that RNAs could possess a regulatory role that relies on their ability to compete for microRNA binding, independently of their protein-coding function. As a model for the protein-coding-independent role of RNAs, we describe the functional relationship between the mRNAs produced by the PTEN tumour suppressor gene and its pseudogene PTENP1 and the critical consequences of this interaction. We find that PTENP1 is biologically active as it can regulate cellular levels of PTEN and exert a growth-suppressive role. We also show that the PTENP1 locus is selectively lost in human cancer. We extended our analysis to other cancer-related genes that possess pseudogenes, such as oncogenic KRAS. We also demonstrate that the transcripts of protein-coding genes such as PTEN are biologically active. These findings attribute a novel biological role to expressed pseudogenes, as they can regulate coding gene expression, and reveal a non-coding function for mRNAs.

[Bye Junk DNA - one more time... - AJP]

^ back to top


Business Models for the Coming Decade of Genome-Based Economy - the past and transition

June 26, 2010
Andras J. Pellionisz,
Ph.D. in Computer Engineering, Ph.D. in Biology, Ph.D. in Physics
Former Research Professor of New York University,
Founder of International Hologenomics Society
Founder of HolGenTech, Inc.
Founder of HelixoMetry
Silicon Valley, California, USA

Genomics is a Science of Heredity, right? Nuclear Physics is a Science of the Fission and Fusion of Atoms, is that correct? Yes -- both Nuclear- and Genome-Experimentation and Theory started with Pure Science, before Technology Applications cut in with full force. Both "pure science approaches" quickly became mostly government-supported R&D projects of significant size and potential. After the initial paradigm-shifts (Atoms splitting, or Genomics becoming integrated with Epigenomics and transformed into Informatics, i.e. HoloGenomics), a Nuclear Industry and, likewise, a new phase of Genome-Based Economy resulted. It may not grab the attention of headline-makers, but the true success or failure of the (untrue) "Decoding of the Human Genome" really lies in the soundness of the underlying business models.

With the 10th Anniversary of the "Completion" of the "Human Genome Project" (HGP) upon us on the 26th of June, 2010, there is an increasing, and increasingly controversial, debate over its origin and its aftermath through its First Decade. The controversial origins, amply documented elsewhere, will not be belabored here. Let's take a look at the future from the perspective of the sheer force of business models.

In the aftermath, the First Decade was a turbulent transition. The coming Second Decade is re-shaping the World, and some picture the US on the brink of a potential decline.

Juan Enriquez, now a director of Excel Venture Management (Boston), sums it up this way (also in his book "The Untied States of America"):

“Countries that fail to commercialize their research discoveries remain diminished,” Enriquez points out. “Take the U.K., for example: They discovered penicillin and DNA, but preferred to let the knowledge sit in a college lab somewhere, rather than let the professor, god forbid, benefit from the discovery. The moment we start adopting those same attitudes in the United States, we will begin to decay.”

This perilous inflection point, leading us into the Second Decade after HGP, did not come without warning: Enriquez predicted the assessment of the past decade almost ten years ago, in his epoch-making 2001 bestseller "As the Future Catches You", written while he directed economic research at Harvard. There he established the historical comparison of "Genomics" and "Digital" for their role in Empires rising or falling.

His prediction was based on the core Business Model of Genome-Based Economy, Round One, which played out in the 1970s - although it is not much talked about these days. Dr. Enriquez refers to Norman Borlaug's "Green Revolution", which elevated the yields of crops such as rice tenfold, to a large extent by means of genetic improvements, thereby saving at least 2 Billion people from starvation. The lives saved were mostly Chinese and Indian - hence the appreciation of about 3 Billion of the presently 7 Billion people of the World. Borlaug's oeuvre, selling the seeds of high-yield crops, rested on a rock-solid, simple business model for the World - feeding the starving - which made him not only the Nobel Peace Prize Laureate of 1970 but also one of the most decorated persons of all time, second only to Mother Teresa.

The Science of Genomics was "consolidated" in 1972 (Ohno) into an oversimplified view of the structure of the Genome, derailing it from underlying business models. The genome was reduced to the "all-important Genes" (fairly short sequences of DNA producing RNA and then proteins), while those parts not obviously coding for proteins (as it turned out later, 98.7% of human DNA) were to be "safely ignored" as "Junk DNA", in line with Crick's "Central Dogma" (originating in 1956, holding that protein information never recurses back into DNA - so why bother with the "Junk"?).

Research and technology in recombinant DNA therefore focused on genes; its industrialization was based on the work of Boyer and Cohen (1973). Genentech generated a business model for Genomics, entering Medicine through Big Pharma: "one gene, one disease, one billion dollar pill". Almost at once the FDA, regulating the US business model, was chartered in 1976 on the (false) premise that "since we all have the same genes, criteria of clear benefits overriding acceptable risks should apply to everyone; 'one size fits all'". The first and most significant gene-business, Genentech, expressing a human gene in bacteria, produced the hormone somatostatin in 1977 and synthetic human insulin in 1978.

In a parallel line of developments, in 1975 Frederick Sanger (the only living double Nobel Laureate) announced that he had developed an efficient method to determine the order of base pairs of DNA. Alan Maxam and Walter Gilbert (Harvard) independently developed a completely different method. These methods were announced to molecular geneticists at scientific conferences in the summer of 1975 and circulated as recipes until formal publication in 1977. Half a decade later, many groups in North America, Europe and Japan began successfully to automate the process. The first practical prototype was produced at the California Institute of Technology in 1986, under the direction of Lloyd Smith, as part of a large team under Leroy Hood. This prototype was quickly converted into a commercial instrument by Applied Biosystems, Inc., and reached the market in 1987.

Independent of any business model inherent in genome sequencing, the remote possibility of sequencing the entire human genome caught the fancy of Big Science in the US Government. With the Office of Management and Budget's approval, the DOE committed its first funds for human genome research in October 1986. After the NIH started its own genome effort the next year, a coordinated project was formally launched. In 1988, the DOE and the NIH signed a Memorandum of Understanding that committed the two agencies to work together by coordinating activities and leveraging their respective strengths. The official "clock" on the project was started on October 1, 1990, with Dr. Watson behind the wheel - presumably with the major (publicly known) motivation that his son, Rufus, is affected by schizophrenia, and that a complete catalogue of all human DNA would cast a net big enough to catch "the missing gene" for this and other feared diseases. Thus the NIH-DOE led, multi-agency and multi-national (multi-governmental) HGP became perhaps overly "one gene, one disease" oriented. Since governments don't tend to think in terms of "business model(s)", HGP was not directed by business models, nor did it proceed with the diligence needed for speedy accomplishment. (Government-sponsored R&D projects are usually wished to be kept alive forever, understandably with well-calculated cost-overruns: enough to justify an ever-increasing budget from year to year, yet not so big as to raise eyebrows and get the project shut down.)

Oh yes, the huge business opportunity in DNA sequencing did not escape the attention of those who were not so much medical research- as business-oriented. The head of the US NIH, Bernadine Healy, wished the US to go down the pathway of patenting human genes - only to collide "head-to-head" with Jim Watson who, as an epitome of the "pure research scientist", wanted Genomics to remain "free for all". As a result, Watson resigned in 1992, and Dr. Francis Collins, M.D., Ph.D., already "with a claim to fame to his name" (having identified the gene of Cystic Fibrosis), took over governance of The Human Genome Project. With Francis Collins at the helm, the US government (with a number of countries in tow) proceeded at a snail's pace, "but free for all", and even more "gene-focused", with Dr. Collins, as a formerly practicing M.D., focusing on the possibility of eventually transforming medicine.

Enter Craig Venter, Ph.D., the maverick doer - turned off both by "medicine" as he saw it applied to some of the 70,000 Vietnam victims, hundreds of whom died in his arms, and (as someone who experienced how the NIH worked, by working for it) disenchanted when, for example, one of his NIH proposals was not only rejected but pontificated to be "un-doable" (whereupon he did the multi-year project in a few months - with the taxpayers' money meanwhile spent on who knows what).

Craig Venter threw his hat into the ring with the much-ridiculed idea that he would sequence the human DNA by applying "shotgun sequencing" ("even a monkey can do that…"), would patent the found genes ("he wants to own the genome like Hitler wanted to own the World"), and would do it all on private enterprise money ("he can't raise that kind of money, anyway"). Craig did raise the money and deployed his "super fast monkeys" (computers) to assemble the shotgunned fragments; while officially it was a tie exactly ten years ago, Craig Venter clearly won the race for sequencing - but lost the opportunity to patent any of it, since the business model he was banking on was invalidated. (Thus he later turned to the business model of patenting modified DNA, to be synthesized for the energy and new-materials business.)

However, as the Human Genome Project neared its "first draft" in 2000 and the 140,000 expected genes were nowhere to be found, the Business Model of Big Pharma ("one gene, one disease, one billion dollar pill") was dead on arrival on the news of "finishing" the Human Genome Project. As a long-shot result, as of 2009 the Swiss pharmaceutical conglomerate Hoffmann-La Roche completely owns the US pioneer business based on gene-technology (Genentech).

The Sequencing Business, however, took off with a third generation of private enterprise-built sequencers - based on the dubious business model that Big Science US government projects (e.g. ENCODE) and foreign governments (e.g. China) did, and would, go ahead and sequence DNA "no matter what" (both China and Russia claim "national security" in the integrity of the Chinese and Russian genome…; Koreans are sequencing the Korean genome, Arabs the Arab sequence, Jews the Jewish sequence) - and that the general public would just get themselves sequenced once the price dips below the magic number of $1,000 per "affordable full human DNA sequence". (For what? Detroit must have been more worried about providing the network of gas stations for its assembly-line affordable Ford Model T vehicles - useless without that other half of the business model…)

There is a lot to read these days about how the First Decade spectacularly fell short of the declared expectation that it would "revolutionize medicine" - that just did not happen as expected. Why don't we talk about the Business Models?

Genomics is Science and Technology - while Medicine is a Business (in the US; in other civilized countries it is a government service). One cannot turn apples into oranges. Hence the "surprise of the first decade": rather than finding cures for diseases (the much hoped-for and hyped-up "medical revolution"), postmodern genomics found Business Models for Prevention faster than Business Models for Cures, while "Personalized Medicine" has yet to develop verified Business Models that work.

"Medical Genomics" is set to collide with the Medical Establishment; it will be the strictly regulated, meticulous but high-yield "slow track". "Consumer Genomics" will be the "fast track", since it empowers, through user-friendly automation, those daily decisions based on consumers' freedom of choice that could otherwise be made "the old-fashioned way" (see the much-belabored YouTube "Shop for your Life!" business model by HolGenTech, Inc.).

[Other emerging business models will be reviewed as this series continues - AJP]

^ back to top


Business Models for the Coming Decade of Genome-Based Economy - the transition and future

Representative samples (beyond the Consumer Genetics Business Model shown above) are as follows, all based on the key question of postmodern genomics - (holo)genome regulation:

INDUSTRIALIZATION OF GENOME REGULATION RESEARCH

At first glance, this appears to be a contradiction. How can "research" be advanced by "industrialization"? A highly visible precedent shows that it is not an oxymoron. Nuclear Physics faced (and still faces) several scientific challenges - and the most notable, how energy can be harvested from nuclear fission and fusion, was (and is) driven by the massive force of "industrialization" (including the defense industry) rather than by the relatively meager resources available for "pure research".

Likewise, as acknowledged by the new generation (over 127,000 views of the YouTube "Regulatin' Genes"), "regulating genomes is crucial for development". However, although a peer-reviewed science article that opened the way by surpassing the two obsolete axioms which together blocked genome informatics for over half a Century (Crick's "Central Dogma" and Ohno's "Junk DNA") appeared in 2008 ("The Principle of Recursive Genome Function"), and Google yields 69,100 hits for "Recursive Genome Function", there seems to be no program by any US government Agency that would invite a spearheading, focused effort in advanced fractal & neural network theory to resolve the crucial problem of genome regulation. At least two Business Models, however, will fuel and assure progress, since both "Synthetic Genomics" and DTC testing of "Structural Variants" require an algorithmic (software-enabling) understanding of (holo)genome regulation.

INDUSTRIALIZATION OF SYNTHETIC GENOMICS

As mentioned in the overview of the past, the Business Model of Craig Venter (patenting industrially useful genomic applications) was invalidated by the government decree of 2000. While patenting "genes" is therefore not viable, the Venter Institute (with Craig Venter, Hamilton O. Smith, Clyde Hutchison, John Glass et al.) invented the Business Model of modifying the gene-sets of organisms with DNA small enough to synthesize - and since a modified gene-set is not an existing product of Nature, it is patentable. However, until and unless the regulatory mechanism is understood (even in Mycoplasma genitalium, with its 7% non-coding DNA), any modification of the gene-set must be "stealth" to the largely retained regulatory sequences. Hence the 15 years it took to find the combination of modifications small enough to "boot the synthetic DNA". Since the enormous potential of Synthetic Genomics was never questioned, and an "existence proof" is now delivered, this Business Model will be a major industrial driver towards understanding genome regulation.

INDUSTRIALIZATION OF DTC STRUCTURAL VARIANTS

Never believing in either Crick's Central Dogma or Ohno's Junk DNA mistaken axioms, in 2005 I established the International PostGenetics Society, which became the first international organization to officially abandon the "Junk DNA" misnomer at its European Inaugural on October 16, 2006 (8 months before the US government's ENCODE program concluded the same). At the establishment of the Society (which in 2008 became the International HoloGenomics Society), I posted (and have since updated) the site of "Junk DNA diseases" - since, with the notion that 98.7% of the human DNA is not there (as Ohno stated, p. 367) for "the importance of doing nothing", it followed that structural variants, even in the "non-coding DNA", are associated with identifiable diseases. (The HAPMAP project set out to systematically describe the common patterns of human genetic variation; its original span was 2002-2005 at a cost of $138 Million, extended twice, in 2007 and 2009, with the overall cost exceeding $500 M.) The existence of all kinds of "structural variants" raises both a scientific and a practical dilemma. Scientifically, it seems obvious that "structural variants" could either represent harmless "human diversity", or "genomic defects" with an impact on hologenome function. The brute-force approach would have to plough through all variants and parse the variant genotypes according to whether they are associated with harmful phenotypes or are innocent to function. In contrast, any algorithmic (software-enabling) understanding of genome function would separate parametric variants from defects that affect the required syntax. (For instance, in the Mandelbrot set Z = Z^2 + C, different values of the constant C all yield "parametric variants", while glitches in squaring the complex number Z through some of the recursions result in syntax errors: fractal defects.) Thus I proposed the FractoGene (2002) algorithmic approach (its cost was shouldered by, and it is thus now unencumbered by, the IP-developer, AJP); FractoSoft is now acquired by HolGenTech, Inc., with the IP held under HelixoMetry.

Since the practical tool for interrogating DNA has hitherto been the Affymetrix/Illumina microarray, yielding Single Nucleotide Polymorphisms (SNPs), the "Industrialization of Structural Variants" was started by DTC genome testing companies (DeCodeMe, 23andMe, Navigenics, etc.). As reported in this column, 23andMe started a veritable revolution to correlate, in a "data-driven model", patterns of SNP genotypes with (patterns of) phenotypes. Lately, however, a great assortment of "structural variants" has emerged, e.g. "copy number variations" (different numbers of repeats of sequences containing many thousands of nucleotides). Structural variants with complexities beyond SNPs require tools beyond microarrays - and call for "affordable full DNA sequences".

The Business Model of mining for, and establishing correlations between, complex structural variants and pathological phenotypes will be a major driver of the algorithmic (software-enabling) understanding of (holo)genome regulation. An extremely complex and socio-economically towering example is cancer research, therapy and cure, where the massive rearrangement of the genome (which a layperson might describe with the colloquial meaning of "chaotic") is indeed, in a mathematical sense, a fractal and fractured alteration of the DNA sequence, widely acknowledged to be a disease (derailment) of genome regulation (with fractal defects, the "jumping genes" of retrotransposons, copy number variations, and fractures of entire chromosomes). Successes of algorithmic approaches will put enormous economic pressure both on privately held genome informatics (genome computing) companies and on Big Pharma to build the required Business Model.

INDUSTRIALIZATION OF GENOME REGULATION IN BIG PHARMA

Perhaps the biggest "disappointment" of the First Decade after The Human Genome Project was that it did not develop "cures" for some of the most dreaded hereditary diseases. In my opinion, it was (and is) entirely unrealistic to expect cures e.g. for "regulatory diseases" (like cancers) without a hard-core understanding of genome regulation. While this topic was touched upon above, another class of medications, with a historical precedent, can also illuminate the same point. Alexander Fleming did not quite understand in 1928 how the fungus Penicillium notatum stopped the growth of bacteria - yet he could still create the revolution of antibiotics. There is already a modern precedent (Merck's acquisition of Sirna for $1.1 Bn) to show how small interfering RNAs could be applied - but it is also obvious that the "Big Pharma Business Models" (e.g. for the use of microRNAs) will be fiercely proprietary. - AJP

^ back to top


The Genome and the Economy

June 18, 2010
Genomeweb

Mike Mandel, from the Mandel on Innovation and Growth blog, says the most "significant economic event of the past decade" is the "failure of the Human Genome Project to deliver medically significant results."

Ballooning healthcare spending, low job growth and a big trade deficit are strangling the US economy, he says, and "the Human Genome Project had … the potential to be a powerful antidote to all three of these problems." But Mandel says there's reason to be optimistic. The US has invested heavily in biotech, new technology and health, and "the research has gone great." All we need to do to capitalize on this investment is to bridge the gap between research and commercialization, which has been done before, Mandel adds. "So I'd say that the odds are good that the Human Genome Project will have a significant economic impact over the next 5 to 10 years," he says....

[I agree with Mandel - but see some need to spell out how the Industrialization of Genomics is to be implemented; see the news on 23andMe below, and my ensuing essays - AJP]

^ back to top


23andMe Publishes Web-Based GWAS Using Self-Reported Trait Data

June 25, 2010
By a GenomeWeb staff reporter

NEW YORK (GenomeWeb News) - In a paper appearing online last night in PLoS Genetics, "Web-Based, Participant-Driven Studies Yield Novel Genetic Associations for Common Traits," researchers from 23andMe and Columbia University reported the first genome-wide association findings stemming from 23andMe's web-based, participant-driven research program.

The study, which involved more than 9,000 individuals genotyped through 23andMe's direct-to-consumer testing service, looked for genetic associations related to nearly two dozen common traits. Using this web-based survey approach, the researchers verified previously reported associations for five traits and identified new SNPs linked to four of the traits, including hair curl, freckling, sneezing in response to light, and the ability to detect asparagus metabolite odors in urine.

"[W]e confirm that self-reported data from our customers has the potential to yield data of comparable quality as data gathered using traditional research methods," co-author Anne Wojcicki, president and co-founder of 23andMe, said in a statement.

For the study, the team drew from 23andMe's direct-to-consumer genetic testing community, bringing together SNP and survey data for thousands of customers. As such, they explained, the researchers were able to put together "a single, continually expanding cohort, containing a self-selected set of individuals who participate in multiple studies in parallel."

"Our ability to contact individuals multiple times and ask follow-up questions puts us in a position to zero in on associations that could be the building blocks for future research aimed at prevention, better treatments, and potentially cures for a multitude of diseases and conditions," lead author Nicholas Eriksson, a statistical geneticist at 23andMe, said in a statement.

Each of the 9,126 participants provided information on at least one of the 22 common traits. These traits were selected, in part, based on heritability and the feasibility of collecting related phenotype data easily in a web-based setting. The traits included everything from hair and eye color to "photic sneezing," a predisposition for sneezing when looking at the sun or other bright light.

The team then integrated this data with genotype data for 535,076 SNPs assessed using the Illumina HumanHap550+ BeadChip. At least 1,500 unrelated individuals of northern European ancestry were evaluated for each of the 22 traits.

Using this approach, the researchers found associations for eight of the 22 traits. Among them: previously reported associations for traits such as eye color, hair color, and freckling and new associations for four of the traits.

For instance, a new freckling-associated SNP turned up in an intron of the zinc finger gene BNC2, while hair curl was associated with SNPs near the TCHH, LCE3E, WNT10A, and OFCC1 genes.

In another new association, the team found that the tendency to sneeze when looking into light was linked to two SNPs: one near the ZEB2 gene and the PABPCP2 pseudogene and another near the NR2F2 gene.

The researchers also found that individuals who can smell an asparagus metabolite called methanethiol in urine tend to carry SNPs in a linkage disequilibrium block in and around 10 olfactory receptor genes. Of these, the most significant SNP fell upstream of the olfactory receptor gene OR2M7.

In an editorial appearing in the same issue of PLoS Genetics, the journal's deputy editor-in-chief Gregory Copenhaver and its gene expression profiling and natural variation section editor Greg Gibson, who are affiliated with the University of North Carolina at Chapel Hill and the Georgia Institute of Technology, respectively, addressed ethics, consent, and data access concerns related to the 23andMe study.

The pair noted that the study's publication was delayed because the journal wanted to deal with several such issues — for instance, ensuring that individuals included in the study were not pressured into partaking in the study and understood that they were participating in genetic research. The editorial also addressed institutional review board questions arising from the study.

"After considering all of the evidence, we decided that publication, accompanied by an editorial providing transparent documentation of the process of consideration was the most appropriate course," Gibson and Copenhaver wrote. "[W]e have had extensive discussion with the authors of this study to address our concerns and to update their processes, but we anticipate broad evolution of GWAS consent and review in the near future."

Meanwhile, the 23andMe team is setting its sights on additional web-based GWAS studies. In their current paper, they noted that the approach makes it possible to ask research questions using data from an ever-expanding group of individuals.

"Our research model makes possible studies that might be infeasible otherwise due to the low marginal cost of asking additional questions over the web and the speed of broadcasting recruitment messages in parallel online," the team wrote, adding that "providing participants with well-explained descriptions of their genetic data can substantially benefit genetic research as a whole."

In addition, 23andMe said yesterday that it has now secured IRB approval for its web-based research protocol. According to the company's blog, The Spittoon, 23andMe customers will now be given more leeway over how their genetic information is used and must "explicitly choose to allow their genetic and survey data … to be used in published research."

[What 23andMe started is an Industrial Revolution in how scientific research will be conducted in the future: a Google-type "data-driven" approach that will minimize the "brute force" necessary for handling massive amounts of data. Of course, one may ask why the effort was focused on "frivolous traits" like freckles or smelling asparagus in the urine, when it is obvious that the methodology eminently applies to severe phenotypes with massive medical impact. It is noteworthy that the paper was delayed over a year since, among other things, it alters the way volunteers provide information. This is almost nothing (a technicality) compared to how the methods may revolutionize possibly the entire enterprise of medical research. Since this collides "head on" with "medicine as is", the approach carefully avoided hard-core "medical" questions - still, 23andMe is one of the 5 DTC genome testing companies that will be investigated by the FDA and the Congressional Committee (on Energy and Commerce...) - AJP]

^ back to top


Francis Collins: the extended genome anniversary interview

The Times
06/24/2010

Ten years ago on Saturday, Francis Collins and Craig Venter walked onto the White House lawn with Bill Clinton, to announce the completion of the first draft of the human genome. I had the opportunity to sit down this week with Dr Collins, who's been visiting the UK ahead of the anniversary, and my interview with him appears in the paper today.

My piece has focused on Collins's prediction that most of us, at least in the developed world, will probably have had our complete genetic codes sequenced by the time the reference genome is 20 years old. But we spoke about much more than that, and I've included some of the highlights of our conversation here.

Perhaps of greatest current interest, given the FDA's current clampdown on direct-to-consumer genomics companies, were his thoughts on how this fledgling industry should properly be regulated. He accepts that there is a case for some regulation, which should focus on risk and accurate test results. But he is very wary of over-zealous regulation that might stifle innovation, or unreasonably restrict individuals' access to the contents of their DNA. And he doesn't think such access needs always to be mediated by a doctor.

The direct quotes follow after the jump...

On regulation

I started by asking Collins what he thought of FDA's intervention on DTC genomics. He said:

"The FDA is being fairly reasonable about their approach, in the sense that I think they are sensitive in not wanting to shut down a set of scientific advances that are potentially going to become a valuable commercial enterprise.

"I think the approach they're going to take is to focus on those kinds of test that are associated with risk, and have a risk-based oversight system rather than a knee-jerk system of 'oh, it's a genetic test, we need to review it no matter what'. That's clearly what's needed, and some groups have been arguing for that for 10 years.

"I do think the time has come, when you look at some of things that are out there on the web that are quite unsubstantiated scientifically, to pay some attention to that. Peggy [Hamburg, the FDA Commissioner] and Josh Sharpstein [Hamburg's deputy] are aware of the need to do this on a rational basis and not to slam the door. But I do think the public is increasingly concerned about whether this occurring in a completely unregulated way is going to be of benefit to them.

"I'm not sure exactly where this will go over the next year or two, or what the implications will be for access to genetic information. I am both a strong proponent of the need for quality of what's offered, but I also believe in patient empowerment, and the opportunity to find out something about themselves if they want it seems like something we should be reluctant to get in the way of, so long as the information is scientifically valid.

"Regulators have a tough job. They need to be sensitive to not stifling innovation, but they also need to protect the public against really unscrupulous use of new technologies. And in that regard, by preventing the real misuse of technology, they're certainly protecting the innovators from seeing a complete meltdown and a distrust of technology that might persist for years to come.

"So in many ways the regulators are also the guardians of science, in the sense that they keep it from slipping into snake oil, which ultimately does a lot of damage to science. But there is a tough balance, which is one of the reasons the FDA has had such a hard time figuring out how to regulate genetic technology. I give Peggy and Josh a lot of credit for being willing to take this on and do something."

Collins has himself been tested by several of the DTC genomics companies, and I asked how his experience as a consumer had influenced his thoughts on regulation:

"My own experience with this did inspire me to take some action which I probably should have taken anyway. No-one should generalise from your own personal experience to make federal policy, but there's plenty of other data to show that when individuals are given predictive information about the future, at least some of them do find that useful and do modify their health behaviours, and are able to understand that these are not yes-no black-white answers, but risk factors they might want to pay attention to, just as they pay attention to their cholesterol."

Medical supervision of DTC genetic tests

Next we moved onto the sometimes vexed issue of medical supervision. Should genetic testing always be conducted in concert with a doctor or genetic counsellor? Collins said:

"That of course is one of the hot potatoes. Even within the field of medical genetics there are strong differences. The American Society of Human Genetics thinks that having a medical professional involved is not required. The American College of Medical Genetics thinks that oh yes, there are great dangers if a medical professional is not involved.

"I think my view is that people are in many instances capable of absorbing this educational information without the need for a professional to walk them through, but if they want that they should have access to it quite easily, and should not be hit with information without a medical professional to assist them.

"I've been impressed by the way in which the direct-to-consumer companies have worked pretty hard at this, in terms of providing information and what it does and doesn't mean. But there is probably no substitute for having the opportunity to ask questions of someone who is an expert in the field after you have begun to absorb your own results, and I think people ought to have a chance to do that if they want to.

"I would be very uncomfortable with a system that says no, we know better than you do, you won't understand this information so we're not going to let you have it. There's something that doesn't feel right about that."

Validity and utility

To what extent should test be regulated for scientific validity and clinical utility? Collins said:

"A lot of this debate relates to what you call clinical utility. If you're going to have a test that's marketed to the public, it should have analytical validity, that is the lab should be able to prove that they can do a DNA test and get the sequence right. And it should have clinical validity, that is the test if it says it means something, that should be clearly true. If it says your risk for this SNP goes up for diabetes, then there ought to be evidence to back that up.

"But what about clinical utility? That's so much in the mind of the beholder. I mean you may say knowing your Alzheimer's risk from APOE has no clinical validity because you can do nothing to change your risk by changing your diet or taking drugs or doing Sudoku or whatever. But for somebody who wants to know that information for purposes of planning, and that was shown very clearly by Bob Green's REVEAL study, that is useful to them. So don't tell these people this is not clinically useful. It is clinically useful from their perspective, and we shouldn't be paternalistic about it."

So, I asked, is a light touch what's called for? Collins replied:

"A light touch but a touch that is also focused on risk. For instance, if a lab is offering individuals BRCA1 testing, to individuals with a strong family history of breast cancer, with no pre-test or post-test counselling, that's a problem. If you're talking about highly penetrant conditions where the test result has high medical significance, that's probably not something you should get at Walgreens. For example."

Also important, he said, are the interpretation services that are offered:

"I gather Ozzy Osbourne has had his genome sequenced. Ozzy's probably going to need a little help figuring out what this means."

Cancer genomics

Like many observers, Collins feels that cancer will be a standard-bearer for genomic medicine:

“Cancer is I think going to be right out in front here. And I think we would all expect that within the next decade every cancer identified will have, at least in countries that have the resources, a full enumeration of what is going wrong in that tumour, and then an ability to match that up with the available therapeutics so you are doing the best possible job of throwing smart bombs, instead of the carpet bombing of traditional chemotherapy.”

"The question is will that be helpful because we have a nice menu of therapies, and we'll be able to pick just one or two that are going tobe active against that patient's malignancy. We have a lot of work to do.

"Could I tell you that in ten years most patients will have both a complete genomic analysis and a choice of compounds that should be clinically active? I don't know, I would hope we will be pretty close to that, and certainly well beyond the menu we have now of targeted therapeutics, which is still a pretty short list. It's going to grow."

Pharmacogenomics as a driver of widespread sequencing

Collins was clear that he thought the potential of pharmacogenomics would be the main driver of widespread sequencing.

"I think it's another great payoff over the fairly near future, and certainly over the next decade. 10 per cent of FDA approved drugs have a label saying something about genetics and its ability to predict a side effect, or something about whether that person is going to respond or have a dose adjustment.

"The problem with pharmacogenomics now, in terms of real mainstream application, is in a certain way logistic. We already know enough variations that could be used in dose adjustments, in terms of the CYP genes, but most of the time the physician doesn't know that much about the science anyway, and there's the problem of sending off the drug sample know, and where do I send it to, and I want the sample now. But, how's it going to change?

"Well, when the sequencing of your genome or mine really does drop into the affordable range of less than 1,000 dollars, it will become very compelling for pharmacogenomics in particular to do it just once, to do it right, to get it into the medical record. There would be no need to take more blood samples, it’s just a click of your mouse to know whether that drug dose ought to be adjusted, or whether there’s a risk of a nasty side-effect you want to avoid. That will be a moment when a lot of the barriers to pharmacogenomics go down." [This is hitherto the clearest endorsement of the need to automate, in a user-friendly manner, choices that by hand would require unacceptable time requirement - AJP]

"When you see the cost of sequencing dropping much faster than Moore's law for computers, we're now down to about $10,000 for a reasonably complete sequence. I can't believe it won't drop to less than 1000 within five years. Will that become the moment then, where at least to people in some kinds of health plans, when it becomes compelling to do it? Good heavens, we spend a lot more than that on all sorts of unnecessary scans.

"Why not just make the case for each of us that this is information that will only gain in value over time, and if it's possible to do it accurately it would be cost-effective, both in terms of prevention and pharmacogenomics and so on? I think that will get pretty compelling.

"What will the timetable be from when it's possible to when it actually happens? That's the key question. A lot will depend on reimbursement and who's going to pay for it? Will third parties see it as a key investment? Where's the capacity going to come from to do all these genomes? Will we have that kind of throughput abilities? I don't know.

"But certainly within ten years I will be very surprised and very disappointed if most people in the developed world will not have their genomes sequenced as part of their medical record, and I would hope it will come even sooner."

Complex disease and the missing heritability

How confident, I asked, was Collins that the "missing heritability" of complex, common diseases would be unlocked over the coming years? He said:

"Nobody knew what the structure would be of genetic variation that contributes to heritability until we started to look for it. But there's certainly a lot more there hiding in the dark matter of the genome. Presumably, that will start to come to light as we are able to look for less common variants with sequencing of lots and lots of genomes from lots and lots of studies. I'm not despondent about that at all. It's just that the structure is not what we thought it might be. [This part of high science would need much elaboration - not possible in a popular science article. Clearly, the "structure we thought it might be (Genes and JunkDNA) are officially out. Francis Collins, at the 10th Anniversary is clear about the obsolete notions on gene/junk structure of the genome, but at this time-stamp he is not yet at the (confessed) realization that the DNA is fractal - AJP]

"It doesn't seem we were wrong about heritability and its contribution to diabetes and heart disease, all of those common diseases, it looks as if heritability is about 50 per cent, which means it's somewhere in there. We just need a finer lens to discover it. That probably means that within the next decade, our ability to make finer predictions about disease risk are going to get pretty good. Then it will be I think increasingly compelling to use programmes for prevention. It's something you can do right now, so long as you're aware you're not dealing with all the information that's lurking in the genome."

Sequencing children

In April, I broke the story that the children of the Solexa entrepreneur John West, Anne and Paul, had become the first minors to be sequenced for non medical reasons. I asked Collins how he felt about that:

"Frankly, that one did make me a little bit uneasy, because here were two kids who were promoting the idea that they were glad to have this, but you have to wonder how could they say otherwise when their dad was the company founder. There are statements out there in the literature, that genetic data on minors should not be obtained unless the data needs to be known, but those may start to look a little dated.

"This leads us to the question about newborn screening, because we do after all already determine a fair amount of genetic information about every newborn. Newborn screening is recommended in the US for 29 diseases. The time will come when it's cheaper to do that by sequencing than by 29 different tests, and at that point wouldn't it make more sense just to do it, and then to have a plan to release that information in a graded fashion as it becomes actionable. I do think we need to preserve the right not to know. That's a substantial component of our resonsibilities. If every newborn is being sequenced, those newborns should have the opportunity to say at 18: 'the rest of that sequence, forget about it.'"

[This is a shining example of popular science reporting (obviously proofed by Dr. Collins) - thus devoid of the usual journalistic mistakes, such as the claim a decade ago that "the Genome was deciphered", etc. Also, the interview is extremely reassuring, especially in view of the near-inevitability that Dr. Collins will testify before the Congressional Committee investigating legislative aspects of DTC regulation. While in the joint NEJM article of Drs. Hamburg (FDA) and Collins (NIH) it appeared that a multi-agency advisorship might be needed to propose new legislation updating the 1976 mandate of the FDA, presently Dr. Collins speaks of YEARS, and also points out a turf-war within the Medical Establishment (over the civil-rights question of whether the American people should have all the information about their bodies, including DNA). Thus the prediction that (a) the DTC regulatory issues will likely be parsed into "risk/no risk" subsets, and (b) the FDA might not have the legal mandate to "shut down" DTC - and thus (especially if Dr. Collins leans towards letting the private industry figure it out) an FDA "moratorium" on existing California legislation on DTC may be the best solution - AJP]

^ back to top


The Big Surprise of the First Decade - The Genome Affects You to Prevent Diseases, Before it Cures Diseases

[Cover of GEO International - with Specials on Genomics/EpiGenomics - AJP]

The "Human Genome Project" will be a Decade Old in two days. Here, in a series of articles, I present the public acceptance of the significance of HoloGenomics (YouTube, 2008, today with 8,171 views) what may become the quickest route to build a business model on results of a Big Science Government Research Project ("The Human Genome Project" - that had fiercely debated a possible "business model" - patenting "140,000 genes"; but ended up finding only about 24,500 genes, patenting none and ending with no business model at all). Francis Collins (see his YouTube at lower left, today with 1,529 views) outlines what the Genome Era may practically mean to us - but as a government administrator not in terms of business models (though his haunch of mobile devices catapulting into lead-role is visually obvious). Genome Computing applications for both genome-testing and for commercial applications were debated (YouTube [not shown], today with 2,093 views) - but the "killer app" of postmodern genomics may well become the user-friendly automation by your smart phone how to shop for goods befitting your genome (see YouTube at lower right, today with 2,100 views). At times when "Home Computers" became available, many asked the question "what it means to me to have a computer in my home?" The trump-card was provided by the software app "Visicalc" (predecessor to Excel), developed in Sunnyvale, CA; that did the same that anybody could do by hand (re-calculating values of cells in a huge spreadsheet) - but did it in an automated, user-friendly manner; blazingly fast. HolGenTech, Inc. "Your Genome; there's an app for that" does very much the same. A huge number of ingredients in myriads of goods can be cross-referenced by hand (in theory...) with known dietary- and environmental impacts of "genomic glitches". It is just not very practical if you have difficulties in memorizing e.g. the 60,000 pages of numbers (listed e.g. in your "23andMe Raw SNP file") and hold in your head that exploding knowledge-base how their dietary/environmental impacts affect your genome. HolGenTech' "killer app" is to automate it for you - AJP


Sergey Brin’s Search for a Parkinson’s Cure

By Thomas Goetz

June 22, 2010 |
Wired July 2010

Several evenings a week, after a day’s work at Google headquarters in Mountain View, California, Sergey Brin drives up the road to a local pool. There, he changes into swim trunks, steps out on a 3-meter springboard, looks at the water below, and dives.

Brin is competent at all four types of springboard diving—forward, back, reverse, and inward. Recently, he’s been working on his twists, which have been something of a struggle. But overall, he’s not bad; in 2006 he competed in the master’s division world championships. (He’s quick to point out he placed sixth out of six in his event.)

The diving is the sort of challenge that Brin, who has also dabbled in yoga, gymnastics, and acrobatics, is drawn to: equal parts physical and mental exertion. “The dive itself is brief but intense,” he says. “You push off really hard and then have to twist right away. It does get your heart rate going.”

There’s another benefit as well: With every dive, Brin gains a little bit of leverage—leverage against a risk, looming somewhere out there, that someday he may develop the neurodegenerative disorder Parkinson’s disease. Buried deep within each cell in Brin’s body—in a gene called LRRK2, which sits on the 12th chromosome—is a genetic mutation that has been associated with higher rates of Parkinson’s.

Not everyone with Parkinson’s has an LRRK2 mutation; nor will everyone with the mutation get the disease. But it does increase the chance that Parkinson’s will emerge sometime in the carrier’s life to between 30 and 75 percent. (By comparison, the risk for an average American is about 1 percent.) Brin himself splits the difference and figures his DNA gives him about 50-50 odds.

That’s where exercise comes in. Parkinson’s is a poorly understood disease, but research has associated a handful of behaviors with lower rates of disease, starting with exercise. One study found that young men who work out have a 60 percent lower risk. Coffee, likewise, has been linked to a reduced risk. For a time, Brin drank a cup or two a day, but he can’t stand the taste of the stuff, so he switched to green tea. (“Most researchers think it’s the caffeine, though they don’t know for sure,” he says.) [In my Google Tech Talk YouTube of 2008, at 7:55 I referred to Sergey's public blog, and at 39:36 I showed a 2003 science paper giving modern evidence for the ancient Chinese medicinal claim that ingredients in green tea might help mitigate a predilection to Parkinson's - AJP] Cigarette smokers also seem to have a lower chance of developing Parkinson’s, but Brin has not opted to take up the habit. With every pool workout and every cup of tea, he hopes to diminish his odds, to adjust his algorithm by counteracting his DNA with environmental factors.

“This is all off the cuff,” he says, “but let’s say that based on diet, exercise, and so forth, I can get my risk down by half, to about 25 percent.” The steady progress of neuroscience, Brin figures, will cut his risk by around another half—bringing his overall chance of getting Parkinson’s to about 13 percent. It’s all guesswork, mind you, but the way he delivers the numbers and explains his rationale, he is utterly convincing.

Brin, of course, is no ordinary 36-year-old. As half of the duo that founded Google, he’s worth about $15 billion. That bounty provides additional leverage: Since learning that he carries an LRRK2 mutation, Brin has contributed some $50 million to Parkinson’s research, enough, he figures, to “really move the needle.” In light of the uptick in research into drug treatments and possible cures, Brin adjusts his overall risk again, down to “somewhere under 10 percent.” That’s still 10 times the average, but it goes a long way to counterbalancing his genetic predisposition.
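The arithmetic behind Brin's running estimate is a simple chain of multiplicative discounts; a few lines make it explicit. These are Brin's own back-of-the-envelope figures as quoted above, not clinical penetrance estimates:

```python
# Back-of-envelope reproduction of the risk arithmetic described above
# (illustrative only; real LRRK2 penetrance estimates are far more uncertain).
baseline = 0.50                          # Brin's 50-50 reading of his LRRK2 status

after_lifestyle = baseline * 0.5         # diet, exercise, caffeine: "down by half"
after_research = after_lifestyle * 0.5   # expected progress of neuroscience

print(f"after lifestyle: {after_lifestyle:.0%}")   # 25%
print(f"after research:  {after_research:.1%}")    # 12.5%, i.e. "about 13 percent"
# The final adjustment to "somewhere under 10 percent" comes from his $50M
# research funding; the article gives no explicit factor for it.
```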

It sounds so pragmatic, so obvious, that you can almost miss a striking fact: Many philanthropists have funded research into diseases they themselves have been diagnosed with. But Brin is likely the first who, based on a genetic test, began funding scientific research in the hope of escaping a disease in the first place. [Watch for the fireworks, China-style, if he is told by anyone that his software, helping eliminate any chance of him ever developing Parkinson's, has to be censored ... AJP]

His approach is notable for another reason. This isn’t just another variation on venture philanthropy—the voguish application of business school practices to scientific research. Brin is after a different kind of science altogether. Most Parkinson’s research, like much of medical research, relies on the classic scientific method: hypothesis, analysis, peer review, publication. Brin proposes a different approach, one driven by computational muscle and staggeringly large data sets. It’s a method that draws on his algorithmic sensibility—and Google’s storied faith in computing power—with the aim of accelerating the pace and increasing the potential of scientific research. “Generally the pace of medical research is glacial compared to what I’m used to in the Internet,” Brin says. “We could be looking lots of places and collecting lots of information. And if we see a pattern, that could lead somewhere.”

In other words, Brin is proposing to bypass centuries of scientific epistemology in favor of a more Googley kind of science. He wants to collect data first, then hypothesize, and then find the patterns that lead to answers. And he has the money and the algorithms to do it.

[When I presented my FractoGene (fractal algorithmic) approach at the Cold Spring Harbor Personal Genome-2 meeting last September, a leading cancer-research center director elaborated on the newly found facts (now that full DNA sequences of several cancerous humans are available) that their informaticians were finding all kinds of "patterns" - presently hard to interpret. There, I mentioned that in my first paradigm shift (from Artificial Intelligence to Neural Networks), "algorithmic pattern recognition" was at the core of industrializing the new science, since the sound patterns of Soviet submarines had to be discerned from the mostly harmless underwater cacophony - and I suggested redeploying the available technology, this time against cancer. The director, whose forte was not Informatics, asked "are you looking for a job?" Well, my HolGenTech, Inc. is open for business... - AJP]

^ back to top


Data-Driven Discovery Research at 23andMe

Genomeweb
June 23, 2010

In the July 2010 issue of Wired, on newsstands now, Thomas Goetz details Google co-founder Sergey Brin's investments in Parkinson's disease research. Brin, whose wife Anne Wojcicki co-founded the DTC genetic testing firm 23andMe, is pouring money into a data-driven approach to find the causes of — and potential cures for — the neurodegenerative disease that affects his mother, and that he has learned he carries a genetic predisposition for. "Brin is likely the first who, based on a genetic test, began funding scientific research in the hope of escaping a disease in the first place," Goetz reports. Goetz outlines 23andMe's ambitious Parkinson's Disease Genetics Initiative, in which the company plans to mine data from 10,000 individuals "who are willing to pour all sorts of personal information into a database," he writes. So far, Brin has contributed $4 million to the initiative, which has acquired nearly 4,000 participants. "Brin proposes a different approach, one driven by computational muscle and staggeringly large data sets," Goetz writes. "In other words, Brin is proposing to bypass centuries of scientific epistemology in favor of a more Googley kind of science. He wants to collect data first, then hypothesize, then find the patterns that lead to answers. And he has the money and the algorithms to do it."

... Brin calls the effort an "information-rich opportunity," and tells Goetz that 23andMe plans to publish "several new associations that arose out of the main database, which now includes 50,000 individuals, that hint at the power of this new method." Brin also says that he is "in line to have his whole genome sequenced," and that "23andMe is considering offering whole-genome tests" at a yet-undisclosed price.

[The point of this historic announcement (a "leak," rather...) is that speculation had been rampant that 23andMe was nearing closing up shop, on news that "a co-Founder has left" (for personal reasons), that "sales were sagging", or that "regulatory pressures may become unbearable". Back to Mark Twain: "Reports of the death of 23andMe are greatly exaggerated" - AJP]

^ back to top


ACI Personalized Medicine Congress in Silicon Valley postponed from June 23-25 to December 9-10, 2010

The First Silicon Valley Conference on Health Care affected by Personalized Genomics, in view of the impending Congressional Investigation of DTC Genomic Testing and possible new legislation proposed by NIH/FDA to update the 1976 FDA Charter, is postponed from 23-25 June 2010 to 9-10 December. Some of the highlights from the new website:

[The 5-month advantage given away to DTC in Seoul, Korea (backed by SAMSUNG) and to barcode-shopping screening for allergens at Deakin University in Melbourne, Australia (backed by NESTLE) is not yet a strategic failure. But should the US be bogged down in the kind of "legislative renovation" that, for the National Superhighways, took 37 years from the formulation of concepts (1919) to their signing into law (1956) - with the project still not completed 91 years later, and with over a tenfold overrun in 1966 dollars - the picture is not very rosy - AJP]

^ back to top


The Genome, 10 Years Later
EDITORIAL of The New York Times
Published: June 20, 2010

On June 26, 2000, two scientific teams announced at the White House that they had deciphered virtually the entire human genome, a prodigious feat that involved determining the exact sequence of chemical units in human genetic material. An enthusiastic President Clinton predicted a revolution in “the diagnosis, prevention and treatment of most, if not all, human diseases.”

Now, 10 years later, a sobering realization has set in. Decoding the genome has led to stunning advances in scientific knowledge and DNA-processing technologies but it has done relatively little to improve medical treatments or human health.

To be fair, many scientists at the time were warning that it would be a long, slow slog to reap clinical benefits.

And there have been some important advances, such as powerful new drugs for a few cancers and genetic tests that can predict whether people with breast cancer need chemotherapy. But the original hope that close study of the genome would identify mutations or variants that cause diseases like cancer, Alzheimer’s and heart ailments — and generate treatments for them — has given way to realization that the causes of most diseases are enormously complex and not easily traced to a simple mutation or two.

The difficulties were made clear in articles by Nicholas Wade and Andrew Pollack in The Times this month. One recent study found that some 100 genetic variants that had been statistically linked to heart disease had no value in predicting who would get the disease among 19,000 women who had been followed for 12 years. The old-fashioned method of taking a family history was a better guide. Meanwhile, the drug industry has yet to find the cornucopia of new drugs once predicted and is bogged down in a surfeit of information about potential targets for their medicines.

In the long run, it seems likely that the genomic revolution will pay off. But no one can be sure. Even if the genetic roots of some major diseases are identified, there is no guarantee that treatments can be found. The task facing science and industry in coming decades is at least as challenging as the original deciphering of the human genome.

[Some "mainstream" US journals (e.g. Newsweek) excel by consulting actual scientists when attempting to write about science - without some gross errors that characterize e.g. the above "Editorial". There are 6 more days to correct errors... - more on my FaceBook wall as follows:

Two cardinal mistakes in the opening sentence alone: (1) The code of the genome was not "decoded" - only the sequence was (approximately) established (resulting, as Esther Dyson puts it, in "The Big Russian Novel with a 100-word dictionary"...). More importantly, (2) "The Human Genome Project" did deliver "scientific knowledge" (the "first draft" of the sequence...), but two further requirements were skipped (or went unnoticed by some): (a) knowledge of the genome had to be transformed into an algorithmic (not "statistical") understanding of how, through epigenomic channels, the fractal recursive iteration of the hologenome can be interacted with, towards equilibrium (health); (b) even "scientific understanding" is totally inadequate if it does not come with a business model and actual business, since "medical treatments or human health" in the USA are not a science but a business (as Francis Collins puts it, "Sick Care"). - AJP]

^ back to top


The Path to Personalized Medicine

This article (10.1056/NEJMp1006304) was published on June 15, 2010, at NEJM.org

By Margaret A. Hamburg, M.D., and Francis S. Collins, M.D., Ph.D.
Dr. Hamburg is the commissioner of the Food and Drug Administration, Silver Spring, and Dr. Collins is the director of the National Institutes of Health, Bethesda - both in Maryland.

Major investments in basic science have created an opportunity for significant progress in clinical medicine. Researchers have discovered hundreds of genes that harbor variations contributing to human illness, identified genetic variability in patients' responses to dozens of treatments, and begun to target the molecular causes of some diseases. In addition, scientists are developing and using diagnostic tests based on genetics or other molecular mechanisms to better predict patients' responses to targeted therapy.

The challenge is to deliver the benefits of this work to patients. As the leaders of the National Institutes of Health (NIH) and the Food and Drug Administration (FDA), we have a shared vision of personalized medicine and the scientific and regulatory structure needed to support its growth. Together, we have been focusing on the best ways to develop new therapies and optimize prescribing by steering patients to the right drug at the right dose at the right time.

We recognize that myriad obstacles must be overcome to achieve these goals. These include scientific challenges, such as determining which genetic markers have the most clinical significance, limiting the off-target effects of gene-based therapies, and conducting clinical studies to identify genetic variants that are correlated with a drug response. There are also policy challenges, such as finding a level of regulation for genetic tests that both protects patients and encourages innovation. To make progress, the NIH and the FDA will invest in advancing translational and regulatory science, better define regulatory pathways for coordinated approval of codeveloped diagnostics and therapeutics, develop risk-based approaches for appropriate review of diagnostics to more accurately assess their validity and clinical utility, and make information about tests readily available.

Moving from concept to clinical use requires basic, translational, and regulatory science. On the basic-science front, studies are identifying many genetic variations underlying the risks of both rare and common diseases. These newly discovered genes, proteins, and pathways can represent powerful new drug targets, but currently there is insufficient evidence of a downstream market to entice the private sector to explore most of them. To fill that void, the NIH and the FDA will develop a more integrated pathway that connects all the steps between the identification of a potential therapeutic target by academic researchers and the approval of a therapy for clinical use. This pathway will include NIH-supported centers where researchers can screen thousands of chemicals to find potential drug candidates, as well as public–private partnerships to help move candidate compounds into commercial development.

The NIH will implement this strategy through such efforts as the Therapeutics for Rare and Neglected Diseases (TRND) program. With an open environment, permitting the involvement of all the world's top experts on a given disease, the TRND program will enable certain promising compounds to be taken through the preclinical development phase — a time-consuming, high-risk phase that pharmaceutical firms call "the valley of death." Besides accelerating the development of drugs to treat rare and neglected diseases, the TRND program may also help to identify molecularly distinct subtypes of some common diseases, which may lead to new therapeutic possibilities, either through the development of targeted drugs or the salvaging of abandoned or failed drugs by identifying subgroups of patients likely to benefit from them.

Another important step will be expanding efforts to develop tissue banks containing specimens along with information linking them to clinical outcomes. Such a resource will allow for a much broader assessment of the clinical importance of genetic variation across a range of conditions. For example, the NIH is now supporting genome analysis in participants in the Framingham Heart Study, obtaining biologic specimens from babies enrolled in the National Children's Study, and performing detailed genetic analysis of 20 types of tumors to improve our understanding of their molecular basis.

As for translational science, the NIH is harnessing the talents and strengths of its Clinical and Translational Sciences Award program, which currently funds 46 centers and has awardees in 26 states, and its Mark O. Hatfield Clinical Research Center (the country's largest research hospital, in Bethesda, MD) to translate basic research findings into clinical applications. Just as the NIH served as an initial home for human gene therapy, the Hatfield Center can provide specialized diagnostic services for rare and neglected diseases, offer a state-of-the-art manufacturing facility for novel therapies, and pioneer clinical trials of other innovative biologic therapies, such as those using human embryonic stem cells or induced pluripotent stem cells.

As genetics researchers generate enormous amounts of new information, the FDA is developing the regulatory science standards and evidence needed to use genetic information in drug and device development and clinical decision making. The agency's Critical Path Initiative aims to develop better evaluation tools, such as biomarkers and new assays. Under the Voluntary Genomic Data Submission program, companies can discuss genetic information with the FDA in a forum separate from the product-review process. These discussions give the agency and companies a better understanding of the scientific issues involved in applying pharmacogenomic information to drug development and offer an opportunity for early, informal feedback that may assist companies in reaching important strategic decisions. The goal is to help companies integrate genomics into their clinical-development plans.

Today, about 10% of labels for FDA-approved drugs contain pharmacogenomic information — a substantial increase since the 1990s but hardly the limit of the possibilities for this aspect of personalized medicine.1 There has been an explosion in the number of validated markers but relatively little independent analysis of the validity of the tests used to identify them in biologic specimens.

The success of personalized medicine depends on having accurate diagnostic tests that identify patients who can benefit from targeted therapies. For example, clinicians now commonly use diagnostics to determine which breast tumors overexpress the human epidermal growth factor receptor type 2 (HER2), which is associated with a worse prognosis but also predicts a better response to the medication trastuzumab. A test for HER2 was approved along with the drug (as a "companion diagnostic") so that clinicians can better target patients' treatment (see table).

Increasingly, however, the use of therapeutic innovations for a specific patient is contingent on or guided by the results from a diagnostic test that has not been independently reviewed for accuracy and reliability by the FDA. For example, in 2006, the FDA granted approval to rituximab (Rituxan) for use as part of first-line treatment in patients with certain cancers. Since then, a laboratory has marketed a test with the claim that it can distinguish the approximately 20% of patients who will not have a response to the drug from those who will. The FDA has not reviewed the scientific justification for this claim, but health care providers may use the test results to guide therapy. This undermines the approval process that has been established to protect patients, fails to ensure that physicians have accurate information on which to make treatment decisions, and decreases the chances that physicians will adopt a new therapeutic–diagnostic approach. The FDA is coordinating and clarifying the process that manufacturers must follow regarding their claims, including defining the times when a companion diagnostic must be approved or cleared before or concurrently with approval of the therapy. The agency will ensure that claims that a test will improve the care of patients are based on solid evidence, and developers will get straightforward, consistent advice about the standards for review and the best way to demonstrate that the combination works as intended.

Genetic tests are not perfect, in part because most gene mutations do not perfectly predict outcomes. Clinicians will need to understand the specificity and sensitivity of new diagnostics. The agency's goal is an efficient review process that produces diagnostic–therapeutic approaches that clinicians can rely on and allows companies that invest in establishing the validity and usefulness of tests to make specific, FDA-backed claims about benefits.
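A standard textbook calculation shows why the sensitivity and specificity the authors mention matter so much for tests that may be ordered only once: when a variant or condition is rare, even an accurate test returns mostly false positives. The figures below are hypothetical and are not drawn from the article:

```python
# Standard Bayes/positive-predictive-value arithmetic (textbook illustration):
# even an accurate test gives mostly false positives when the condition is rare.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value: P(condition present | positive test)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A hypothetical genetic test that is 99% sensitive and 99% specific:
print(f"{ppv(0.99, 0.99, 0.10):.1%}")   # ~91.7% when 1 in 10 carries the variant
print(f"{ppv(0.99, 0.99, 0.001):.1%}")  # ~9.0% when 1 in 1,000 does
```

The same test goes from being right nine times out of ten to wrong nine times out of ten purely as a function of prevalence, which is why getting the results right the first time, and interpreting them correctly, is crucial.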

Patients should be confident that diagnostic tests reliably give correct results — especially when test results are used in making major medical decisions. The FDA has long taken a risk-based approach to the oversight of diagnostic tests, historically focusing on test kits that are broadly marketed to laboratories or the public (e.g., pregnancy tests or blood glucose tests); such kits are sold only if the FDA has determined that they accurately provide clinically significant information. But recently, many laboratories have begun performing and broadly marketing laboratory-developed tests, including complicated genetic tests. The results of these tests can be quite challenging to interpret. Because clinicians may order a genetic test only once, getting the results right the first time is crucial.

There are reports of problems with laboratory tests that have not had FDA oversight: women were erroneously told they were negative for a mutation conferring a very high risk of breast cancer; an ovarian cancer test, marketed before the completion of an NIH-funded study,2 gave false readings that reportedly led to the unnecessary removal of women's ovaries; and flawed, mishandled data underlying a test for Down's syndrome were discovered only days before the test was to go on the market. Through a process that includes opportunities for public input, the FDA will work to ensure the quality of key diagnostic tests, helping to protect patients and giving clinicians confidence that personalized medicine will lead to real health improvements.

In addition, the NIH will address the fact that there is no single public source of comprehensive information about the more than 2000 genetic tests that are available through clinical laboratories. On the recommendation of a federal advisory committee,3,4 the NIH — with advice from the FDA, other Department of Health and Human Services agencies, and diverse stakeholders — is creating a voluntary genetic testing registry to address key information gaps.5 Readily available information about these tests, including whether they were cleared or approved by the FDA, will help clinicians and consumers make informed decisions about using the tests to optimize health care. The registry will also support scientific discoveries by facilitating the sharing of data about genetic variants.

In February, the NIH and the FDA announced a new collaboration on regulatory and translational science to accelerate the translation of research into medical products and therapies; this effort includes a joint funding opportunity for regulatory science. Working with academic experts, companies, doctors, patients, and the public, we intend to help make personalized medicine a reality. A recent example of this collaboration is an effort to identify new investigational agents to which certain tumors, identified by their genetic signatures, are responsive.

Real progress will come when clinically beneficial new products and approaches are incorporated into clinical practice. As the field advances, we expect to see more efficient clinical trials based on a more thorough understanding of the genetic basis of disease. We also anticipate that some previously failed medications will be recognized as safe and effective and will be approved for subgroups of patients with specific genetic markers.

When the federal government created the national highway system, it did not tell people where to drive — it built the roads and set the standards for safety. Those investments supported a revolution in transportation, commerce, and personal mobility. We are now building a national highway system for personalized medicine, with substantial investments in infrastructure and standards. We look forward to doctors' and patients' navigating these roads to better outcomes and better health.

References

1. Frueh FW, Amur S, Mummaneni P, et al. Pharmacogenomic biomarker information in drug labels approved by the United States Food and Drug Administration: prevalence of related drug use. Pharmacotherapy 2008;28:992-998. [CrossRef][Web of Science][Medline]

2. Ovarian cancer research results from the Prostate, Lung, Colorectal and Ovarian (PLCO) Cancer Screening Trial: fact sheet. Bethesda, MD: National Cancer Institute. (Accessed June 11, 2010, at http://www.cancer.gov/cancertopics/cancertopics/factsheet/detection/plco-ovarian.)

3. Secretary's Advisory Committee on Genetic Testing. Enhancing the oversight of genetic tests: recommendations of the SACGT. Bethesda, MD: National Institutes of Health, 2000. (Accessed June 11, 2010, at http://oba.od.nih.gov/oba/sacgt/reports/oversight_report.pdf.)

4. Secretary's Advisory Committee on Genetics, Health, and Society. U.S. system of oversight of genetic testing: a response to the charge of the Secretary of Health and Human Services. Bethesda, MD: National Institutes of Health, 2008. (Accessed June 11, 2010, at http://oba.od.nih.gov/oba/SACGHS/reports/SACGHS_oversight_report.pdf.)

5. Genetic Testing Registry. Bethesda, MD: National Center for Biotechnology Information, National Library of Medicine; 2010. (Accessed June 11, 2010, at http://www.ncbi.nlm.nih.gov/gtr.)

[The plan by NIH/FDA is likely to be proposed to the Congressional Investigation; as the Gutierrez video referred such questions to the Commissioner, the authorities above him - Drs. Collins and Hamburg - are quite certain to be in the lead roles. For the historical parallel to the National Interstate, see Wikipedia - AJP]

^ back to top


FDA Cracks Down on DTC Genetic Testing

GenomeWeb
June 14, 2010

[Excerpts]

...

Pharmacogenomics Reporter, one of the Daily Scan's sister publications, reported that these letters are what the FDA calls "untitled" letters, meaning the companies have been made aware of the FDA's concerns and have a chance to address them and make whatever changes the FDA deems necessary. Based on their responses, the FDA could upgrade to "warning" letters, which are more severe. PGx Reporter's Turna Ray also reported that several presenters at the Consumer Genetics Conference in Boston in the beginning of June said they thought federal regulation of DTC tests was "imminent."

Genomics Law Report's Dan Vorhaus says the letters may not be as significant to the five companies involved - 23andMe, Navigenics, Decode Genetics, Knome, and Illumina - as everyone thinks, especially since the FDA hasn't demanded that the companies remove the products from the market pending review. "So, at least for the moment, we may see little or no immediate change while these companies weigh their options internally and through discussions with the FDA," Vorhaus writes. He suggests the companies' best option would be to change the tests in such a way that would convince the FDA they no longer qualify as medical devices - "for instance by removing the ability of consumers to purchase the product without the participation of a healthcare provider."

Daniel MacArthur at Genetic Future thinks this turn of events could spell disaster for the personal genomics industry. "Excessive regulation would negatively impact on innovation in the field by increasing the barrier to entry for new products, as well as increasing costs for consumers," he says, adding that this move looks like it's motivated by publicity on the FDA's part - in the wake of the 23andMe test results mix-up - rather than by a genuine drive to protect consumers.

...

Reply (2)
Andras [Pellionisz_at_JunkDNA.com]

FDA Did Not Crack Down on DTC Genetic Testing

Juan Enriquez predicted in 2001 that "genomics" would compare to "digital" in global impact. No doubt; but on the way to the genome revolution, isn't it hard to keep track of who is doing what and what is significant? The US FDA's actions last week represent just the kind of fluctuation that shifts the scene and changes the balance. Whether that is just for now or forever remains to be seen.

The revolution's drivers have long been aware that the many subfields of genomics - consumer genomics, educational genomics, recreational genomics, genomics of ancestry, etcetera - must be distinguished from medical genomics. While each category may be regulated (or not) by government, legal, economic or medical agencies in countries worldwide, I believe there is a way to cut to the chase and get on with the genome revolution. That's why yours truly focused on developing the commercial business model for genome applications, as illustrated in the YouTube "Shop for Your Life!". The commercial business model offers the fastest growth and the most immediate consumer adoption, based on consumers' freedom of choice, with the least interference from lawyers, regulators and governments. ...

[Shop for your Life! - HolGenTech]

But back to the US fracas and what is happening - or, better said, not happening. I admire the legally precise analysis of Dan Vorhaus, who suggests that it may be a misunderstanding to think that "The FDA Cracks Down on DTC Genetic Testing". As he says, the Gutierrez letters (G-5) "may not be as significant". In fact, did the FDA's Alberto Gutierrez "crack down" on DTC at all? There is no "cease and desist" order, no deadlines, no specific documentation to submit, but rather just the suggestion of a long-brewing and largely ongoing dialog with the FDA. This doesn't sound like an enthusiastic endorsement of the genome revolution, but neither does it signal the end of the U.S. version of the genome revolution. As he points out [at the end of] his video, even the agency's director [Commissioner] could re-think and update somewhat blurred definitions; i.e., what "medical device" may mean in the genomic age - a giant leap away from the 1976 mandate of the FDA:

Alberto Gutierrez (FDA)

Is it possible that the G-5 letters are just setting the stage for the widely heralded Congressional Investigation of DTC Genome Testing? Surely when Francis Collins, M.D., Ph.D., Director of NIH, who wrote the book on Personalized Medicine [now, within 6 months, also in paperback, Kindle and audio...], is called to testify before the Congressional Committee on Energy and Commerce, he can be expected to offer his specific recommendations from the NIH Genetic Testing Registry, and could well suggest that registration be made mandatory, pursuant to an endorsement by the Congressional Committee. Note his comments this March initiating the NIH Genetic Testing Registry.

In the same spirit of educating the public through the political stage, the G-5 letters to Knome and Illumina may amount to invitations to two of the world's pre-eminent genome R&D and industrial experts, Harvard Genomics Professor George Church, co-Founder of Knome, and Illumina CEO Jay Flatley to deliver Congressional Testimonies.

With the nation watching, they could guide the Congressional Committee to bring the FDA into the genome revolution, or to find/shape/create the agency or entity that will embrace it to the maximum benefit of the American public. Is it that the FDA has been remarkably passive for 3 years and now, with a Congressional Investigation imminent, feels the need to protect itself from any criticism suggesting "it never flexed its muscles"? Sure, there was some muscle-flexing when the earlier G-3 letters seem to have scared Pathway Genomics away from DTC without resorting to anything that could be labeled "inappropriate regulation". The earlier Gutierrez salvo, G-3, was just a scary demand for a lot of documentation, with a deadline so short there was no time for the Congressional Committee to act. Now, the G-5 may be an entrée to participation in the genome revolution, U.S.-style, with Congress dominant and the FDA retaining a scary innocence. So, let us watch the Congressional Committee conduct its eminently predictable hearings and recommend appropriate legislation, through which "medical genomics" and "off-the-shelf genomics" (commercial and other non-medical utilization of the information) should be clearly distinguished. We can be sure that if the FDA is left to regulate Medical Genomics, it won't be the same FDA with the 1976 mandate that Alberto Gutierrez notes in his video.

This said, it is still possible that forces in the U.S. may be getting ready for a big "crack down", or even planning to kill DTC in the U.S., diminishing the country's status in our Genomic Age. In fact, they could manage a big setback for the U.S. simply by imposing a cumbersome legal agenda that takes so long that the U.S. misses its chance by having to wait - all while DTC in Asia soars. We can hope that Congress will be well aware that there isn't enough money to address escalating health care (sick care) and will listen to "We the People" demanding genome-based prevention. The Congressional Committee would be well advised to keep the DTC business open during legal renovations, through a moratorium on any further regulation of DTC until legislation puts the regulatory houses in order.

If the U.S. does bow out, or just misses the boat, I look to Asia as particularly conducive to the kind of commercial genomics I promote, based on a genome computing architecture that uses smart phones to empower consumers to exercise their freedom of choice through genome-based recommendations. Asia is already advanced in both mobile computing and genomics, and backed by its Big IT. The really good news is that its number of lawyers per capita is a fraction of the U.S.'s - and so is its cost of labor... more for the U.S. Congress to consider, and it had better do so quickly.

[Genomeweb comments cannot be burdened by overly detailed documentation - see many more hyperlinks here - AJP]

^ back to top


Why the FDA Is Cracking Down on Do-It-Yourself Genetic Tests: An Exclusive Q&A

Newsweek, June 11, 2010
Mary Carmichael

Fresh off sending stern letters to five consumer-genomics companies indicating that, as currently marketed, the companies’ tests will require clearance by the FDA, Alberto Gutierrez - the agency’s director of the Office of In Vitro Diagnostics in the Center for Devices and Radiological Health - spoke to NEWSWEEK. Among the revelations: Pathway Genomics, the company that started the controversy by planning to sell its test in drugstores, will be withdrawing from the direct-to-consumer market. Gutierrez also clarified the agency’s reasoning and timing. Excerpts:

Just to clarify, it sounds like these consumer-genomics tests-as they’re marketed now - will require pre-market clearance because they qualify as medical devices under current law. Is that correct?

That’s correct. [That is: the impression of what "it sounds like" is correct - AJP]

Why is this happening now instead of three years ago, when direct-to-consumer genomics tests first came to market?

Well, the claims [made by the companies] have changed constantly. The original claims from three years ago were very, very vague. For example, the claims they’re making now for the different drugs and how they’re metabolized, those weren’t being made previously. Even some of the health claims in terms of risk of chronic disease, those just started coming online about a year ago.

So the problem is that the companies are testing for genetic variants that might affect the way consumers make medical decisions?

That’s correct … If you’re making a claim about [a genetic variant that affects the metabolism of the anticoagulant drug] warfarin, and somebody decides based on the result they get that they want to change their dosing, that is a fairly risky decision. That could affect their health. If they’re not feeling well on their current dose and the drug is expensive, we don’t know what they would do.

Illumina and Knome are different from the other three companies, but they were also sent letters. Can you explain to me the thinking behind the letters to each of them?

Well, some of the other companies are buying the chip that Illumina is making. That chip is sold as for “research use only.” As such, Illumina has a responsibility. If the chip is being used for diagnostic tests instead, Illumina has to follow the law, and they are aware that the chips are not being used for research only.

Would Illumina still have to obtain pre-market clearance if it sold that chip to medical laboratories and doctors only?

Yes, if Illumina is selling their chips to clinical labs that are using those chips to provide results back to either physicians or patients, and if they know that is how the chips are being used. However, they could [sell the chips without pre-market clearance] to academic labs or clinical laboratories which are clearly in the research space - meaning they’re beginning to develop a new test and just looking what they can do with it. That’s different than providing clinical results to patients.

What Knome sells is more of a service than a device. It’s basically a software program that explains genetic data that consumers can have generated elsewhere. Can you explain to me why it requires pre-market clearance?

Software is a medical device, and they’re making medical claims. They’re taking results and making medical claims that come out of those results.

Is there a reason Pathway Genomics was not included in this round of letters?

As you’re aware, we sent Pathway a letter not too long ago. They have responded, and in that response they are noting that they are planning to move away from direct-to-consumer testing at this point. I believe they’re planning to change their business model.

What about other companies that sell their tests to doctors, rather than directly to consumers, such as Counsyl?

Counsyl actually used to be a direct-to-consumer company until we sent Pathway the “it has come to our attention” letter. Then they changed their business model. They’re going through clinics or doctors. In that case, it will depend on whether they fit under the model of a laboratory-developed test. If they don’t, they will have to come in and get their test cleared.

Is there any reason to think this action by the FDA will preempt congressional hearings?

Well, I can’t predict whether Congress will have hearings and whether it will make any difference.

[It is an interesting question whether the above is an informal interview or an ex officio statement - AJP]

^ back to top


Breaking: FDA Likely to Require Pre-Market Clearance for DTC Personal Genomics Tests

Newsweek, June 11, 2010
Mary Carmichael

The FDA has just sent letters to five personal genomics companies outlining its intentions for regulation of direct-to-consumer tests, and if 23andMe thought it was having a bad week before, it's sure not going to be happy now. In the letters, the agency says that its test and four others, as currently marketed, will need pre-market clearance because they qualify as medical devices "intended for use in the diagnosis of disease or other conditions or in the cure, mitigation, treatment, or prevention of disease."

Requiring pre-market clearance is a drastic measure, and it's precisely the one personal-genomics companies hoped to avoid by casting their results as "educational" or recreational instead of medical. That's not all; the letters do "not necessarily address other obligations" the companies may have in selling their tests directly to consumers.

Oddly, the FDA hasn't yet sent a letter to Pathway Genomics, the company that started this whole fracas by making a deal (quickly rescinded when it went public) to sell its genomic test in drugstores. The agency tells NEWSWEEK its discussion with Pathway is still ongoing. Here's what it's telling the other five companies:

23andMe. This is the big, Google-backed player in the industry. The company was initially more focused on "fun" DNA results (do you have wet or dry earwax?), but since its launch in 2007 it's been adding medically relevant genes to the list of variants it identifies, including variants in BRCA1 and BRCA2 (breast-cancer genes) and several pharmacogenomic tests that assess patients' responses to medications such as warfarin and clopidogrel. That seems to be what got the FDA's attention: the letter to 23andMe mentions the pharmacogenomic tests and also points out that "the data generated from the 23andMe Odds Calculator, a feature of the 23andMe Personal Genome Service, includes the contribution of single-nucleotide polymorphisms (SNPs) to disease risk. Consumers may make medical decisions in reliance on this information." In other words, laypeople will think of this as a medical device, and we're going to treat it accordingly, which means you'll need to get it officially approved before selling it directly to the public. It's also not a laboratory-developed test, apparently, "because the 23andMe Personal Genome Service™ is not developed by and used in a single laboratory." That matters, because these tests aren't regulated as strictly as other medical devices and kits.

Navigenics. The letter is almost a direct facsimile of what's been sent to 23andMe - the warfarin and clopidogrel tests are notable, and so is a proprietary feature that "provides patients with genetic predispositions for important health conditions and medication sensitivities."

DeCode Genetics. Same basic deal here: consumers may use the test to make medical decisions, ergo, it's a medical device. Here, it's not pharmacogenomic tests that bother the FDA, but tests for 12 genes linked to breast cancer and a statement on the company's Web site that although "the deCODEme Cancer Scan cannot tell you whether you are going to develop cancer, it can alert you to your possible genetic risk and lead to early detection."

Illumina. This is a different kind of company. Although it has a personal-genomics component (it will sequence your entire genome, more or less, for about $20,000), that's not what has the FDA concerned at the moment. Illumina is also the company that provides 23andMe and DeCode with the genetic arrays used to scan 550,000 variants in the DNA. Those, too, may require pre-market clearance as long as they're used in direct-to-consumer genomic tests, says the FDA: "Although Illumina, Inc. has received FDA clearance or approval for several of its devices, we note that the Illumina Infinium HumanHap550 array is not one of them and is labeled 'For Research Use Only.' Yet Illumina is knowingly providing the HumanHap550 array to 23andMe and deCODE Genetics for clinical diagnostic use without FDA clearance or approval."

Knome. This, too, is a different kind of company. Knome doesn't sequence the genome so much as try to make sense of it for consumers who get their data generated elsewhere. Or, as the FDA puts it, Knome offers "a software program that analyzes genetic test results that are generated by an external laboratory in order to generate a patient specific test report." This apparently qualifies as a medical device as well.
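Stepping back to the "23andMe Odds Calculator" flagged in the first of these letters: for readers wondering what such a calculator might compute, the sketch below shows one approach commonly described in the published literature for multi-SNP risk. Convert a population-average lifetime risk to odds, multiply in a per-SNP odds ratio for each genotyped variant (assuming the SNPs act independently and multiplicatively), and convert back. 23andMe's actual method is proprietary, and all numbers here are invented:

```python
# One commonly published approach to multi-SNP risk estimation (a sketch;
# 23andMe's proprietary Odds Calculator is not public, and the odds ratios
# below are invented for illustration).

def combined_risk(baseline_risk: float, odds_ratios: list) -> float:
    """Combine per-SNP odds ratios with a baseline lifetime risk."""
    odds = baseline_risk / (1 - baseline_risk)   # risk -> odds
    for odds_ratio in odds_ratios:
        odds *= odds_ratio                       # assumes independent, multiplicative SNPs
    return odds / (1 + odds)                     # odds -> risk

# Hypothetical: 8% average lifetime risk, three genotyped SNPs.
print(f"{combined_risk(0.08, [1.3, 0.9, 1.15]):.1%}")  # ~10.5%
```

This also makes the FDA's concern easier to see: the output shifts with every odds ratio chosen from the literature and with the independence assumption itself, which is exactly the kind of report-generating algorithm the letters say has not been independently reviewed.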

There's still a lot more that could happen with regulation of consumer genomics. All five letters offer the companies the chance "to meet with us to discuss whether there are tests you are promoting that do not require review by FDA" and to establish what kind of information the companies need to give the agency. "We're basically telling them they need to come discuss with us whether they're marketing these legally," says FDA spokeswoman Erica Jefferson.

Congressional action may still be in the works, too. [It is not "may still be" but "will certainly be" - AJP] Congress sent letters to 23andMe, DeCode, and Pathway in late May, and the companies were due to provide an enormous amount of documents to the government last weekend. So stay tuned: this debate is far from over.

[Newsweek's perspective brings to the surface some legal definitions that are aptly analyzed by Dan Vorhaus - AJP]

^ back to top


The Gutierrez Letters from FDA to DTC Genome Testing Companies

Letter to Navigenics Concerning the NaviGenics Health Compass (PDF - 85KB)

Letter to Illumina, Inc. Concerning the Illumina Infinium HumanHap550 array (PDF - 88KB)

Letter to 23andMe, Inc. Concerning the 23andMe Personal Genome Service (PDF - 103KB)

Letter to Knome, Inc. Concerning the KnomeCOMPLETE (PDF - 91KB)

Letter to deCODE Genetics Concerning the deCODEme Complete Scan (PDF - 96KB)

Letter to Pathway Genomics Corporation Concerning the Pathway Genomics Genetic Health Report

[For hyperlinks to facsimile-s click on headline of this entry - AJP]

^ back to top


What Five FDA Letters Mean for the Future of DTC Genetic Testing

Posted by Dan Vorhaus on June 11, 2010

[Likely to be the most professional legal analysis of the US-aspects of the FDA-letters - AJP]

^ back to top


Silicon Valley's Genome-Based Personalized Medicine Meeting Postponed to Dec 9-10

[As Silicon Valley flexes its muscles - with two of the leading DNA sequencing companies (Complete Genomics and Pacific Biosciences), two of the leading DTC genome testing companies (23andMe and Navigenics), two of the leading serial computer chip makers (Intel and AMD), two of the leading parallel FPGA chip makers (Xilinx and Altera), several traditional computer integrators (HP, Apple) and the hybrid computer integrator (DRCcomputer), major health-care providers (Stanford just announcing its Genomics and Personalized Medicine Center, El Camino Hospital and the Palo Alto Medical Foundation already running theirs), a provider excelling in the digitalization of health records (Kaiser Permanente), software giants engaged in "health data repositories" (Google Health, Microsoft HealthVault, Oracle), and HolGenTech positioned to become a "Center for Genome Analysis and Interpretation" (with $1 of every $3 of VC investment in the USA commanded from Silicon Valley) - along came the news of the Congressional Investigation of DTC genome testing. Thus, the meeting originally planned for June 23-25 was postponed until the dust clears: December 9-10, 2010 - AJP]

^ back to top


Would Regulation Kill Genetic Testing?

It could—but the FDA and Congress also could make the burgeoning biotech industry stronger.

Newsweek, June 4
Mary Carmichael

At the Consumer Genetics Conference in Boston this week, it was nearly impossible to go an hour without hearing the words “Pathway” or “Walgreens.” That wouldn’t have been the case had the meeting been held before May 11, when Pathway Genomics of San Diego made a deal to sell its genetic testing service in the nationwide drug chain. The product lets consumers spit in a $20 test tube, send the sample to a lab, pay $250 or more, and find out some of what’s lurking in their DNA.

Easy, over-the-counter access to tests that consumers may not fully understand is spurring regulators to action. But as with any emerging industry, there’s concern that stringent or misguided government rules could hamper growth and innovation. Certainly, after going unregulated for three years, direct-to-consumer (DTC) genetic tests are about to face their biggest challenge.

Apparently blindsided by the Pathway-Walgreens news, the Food and Drug Administration signaled that it might, for the first time, regulate DTC genetic tests. Congress quickly got involved, sending letters to Pathway and two similar firms, Google-backed 23andMe and Navigenics, asking for documentation on almost everything the companies do. The deadline for the companies’ response is this weekend, and Capitol Hill hearings are probably on the horizon.

DTC genomics companies are already regulated in New York and California, and most experts agree that some federal oversight is needed. If done right, FDA rules could be good for the industry, consumers, and pretty much everyone, except perhaps the random firm promoting the genetic equivalent of snake oil. But if regulation is done wrong or overdone, it could harm the industry or send genomics startups packing for countries with less stringent laws.

“The complexity of some of these tests is such that it is really hard to come to consensus” on what to do about them, says Joann Boughman, executive vice president of the American Society of Human Genetics. “But it’s time we just had to grapple with this and understand that no matter what happens, not everybody is going to be happy.”

The FDA’s reluctance to regulate health-related DTC genomic tests so far has frustrated some in the industry—and critics outside of it—who would have preferred more guidance from the beginning. But the agency has not been ignoring the tests since 2007, when they first appeared. It has been gathering information on them all along. Even in 2000 the FDA was aware of some of the concerns it now has to confront; they were invoked in a report from the Secretary’s Advisory Committee on Genetic Testing that year.

Some of the report’s conclusions were reflected in the FDA’s draft guidelines for regulating in vitro diagnostic multivariate index assays that were released in 2006 and again, in a second draft, in 2007. The industry reacted badly to those two proposals. “They were vague, and it would have been expensive for independent labs to comply with them,” says Dan Vorhaus, an attorney at Robinson Bradshaw & Hinson who focuses on genomics. “There was a lot of ambiguity in what the FDA was proposing. It wasn’t clear how it would be applied.”

Recently, FDA officials have said they hope to revamp those guidelines yet again. The current rules do not cover DTC genetic tests, but theoretically, they could be expanded to do so. That might mean that DTC companies would need premarket clearance from the FDA to sell their tests—a rule that would raise the barriers to entry, possibly discouraging startups or driving them overseas. China, in particular, might look appealing: institutes there have lately been buying genetic sequencing machines in droves.

But many observers think the FDA won’t go so far as to require premarket clearance for the tests because the agency itself may not want to deal with the “insane and unsupportable burden” that would present, says Paul Kim, an attorney at Foley Hoag who briefed the Consumer Genetics Conference audience on the issue. “Does the FDA really want to have an obligation to clear thousands of new tests every year?” he says. “It’s unsustainable, and I can’t imagine the FDA would welcome or look for that kind of responsibility.”

Perhaps an easier solution would be to piggyback onto the genetic-test registry that the National Institutes of Health is already planning to build so that consumers will have a way to compare different testing services side by side. So far, the NIH’s plan is for the registry to be voluntary, but Vorhaus argues it could be mandatory instead, with companies required to explain what genes they test for, how they do it, and how they interpret and aggregate the results for consumers. “The companies may already show you the [basic science] studies they’re using,” says Vorhaus, “but all those algorithms that go into producing their reports—those are the kind of things that are going to concern the FDA.”

At the very least, says Boughman, under new regulations, DTC genetics companies should have to start proving some basic cred: that their labs are CLIA-certified (most already are) and that they can correctly identify the variants for which they’re testing. Boughman would like to see the labs regularly checked by an outside agency such as the College of American Pathologists. “We think there should be a way to confirm a result in another laboratory, either with people flipping samples or sharing a [sample with a disguised identity] once in a while,” she says.

The FDA has not yet revealed its intentions; the agency told NEWSWEEK on Thursday that it “continues to look at tests being marketed directly to consumers, and will take appropriate steps as necessary to make sure that public health needs are met in a safe and effective manner.”

But Kim says there’s at least one concrete indication of what may be in store—at least from the Hill. Last week, Reps. Patrick Kennedy (D-R.I.) and Anna Eshoo (D-Calif.) reintroduced a personalized medicine bill that first surfaced three years ago under the sponsorship of a certain then-senator from Illinois. (Yes, that one.) The new, Kennedy-Eshoo version of the bill has several tweaks, among them the creation of a new office focused on personalized medicine; a proposal for a registry like the NIH’s; and a call for the FDA, the Federal Trade Commission, and the Centers for Disease Control to evaluate DTC genomic tests. The FDA is not involved with the legislation, but the bill may be “a good bellwether” for future regulation, says Kim, who has advised Kennedy’s office.

“What people like most of all with regulation is certainty and clarity,” Kim adds. “If you have a pathway laid out, even if it’s stringent, you know what you’re dealing with.”

[Theragen of Seoul, Korea - backed by SAMSUNG - would be a primary beneficiary, should US regulation decide to throw out the baby with the bathwater - AJP]

^ back to top


Stanford School of Medicine Launches Center for Genomics and Personalized Medicine

June 04, 2010

By a GenomeWeb staff reporter

[Michael Snyder, Stanford]

NEW YORK (GenomeWeb News) — Stanford University's School of Medicine this week announced the creation of a new Center for Genomics and Personalized Medicine designed to integrate genomics information with every aspect of medicine, as well as draw on collaborations between Stanford's basic scientists and clinical researchers, and on technologies developed in Silicon Valley.

Stanford says the center will promote personalized medicine by building on research from the sequencing of the genome of Stephen Quake, the Lee Otterson Professor of Bioengineering and co-chair of Stanford's bioengineering department. Quake made news last August by using a technology he helped invent — Helicos BioSciences' Heliscope single molecule sequencer — to sequence and publish his own genome for less than $50,000. Researchers published results from their study of Quake's genome in the May 1 issue of the Lancet.

"The center blends highly efficient, rapid sequencing technology with the research and clinical efforts of experts in genomics, bioinformatics, molecular genetic pathology and even ethics and genetic counseling to bring advances from the laboratory to the patient," Stanford said in its announcement.

The center's director is Michael Snyder, chair of the medical school's Department of Genetics. In the statement, Snyder said the center's sequencing facility is already operating with new equipment estimated to increase its sequencing capacity by about fivefold while also "significantly" reducing the cost.

Earlier this year, Snyder led a team of researchers in sequencing the transcriptomes of human embryonic stem cells in various stages of their differentiation into neural cells, using short- and paired-end reads generated with Illumina sequencing and long reads generated with the Roche 454 FLX and Titanium platforms. They identified both known and previously unannotated transcripts as well as spliced isoforms specific to the differentiation steps.

The center's equipment also includes a Single Molecule Real Time, or SMRT, DNA sequencing system purchased from Pacific Biosciences. Stanford was one of 10 institutions that purchased the system as part of PacBio's early access program in North America. The company has said it expects to launch commercial sales of the system in the second half of this year.

[This was only a matter of time! "The second half of this year" starts in less than 4 weeks... Pellionisz_at_JunkDNA.com]

^ back to top


Your Genome Is Coming [to where? - AJP]

Forbes
June 3, 2010 - 5:32 pm
Matthew Herper, senior editor at Forbes

[Your Affordable Genome is coming to WHERE? - AJP]

Just keep waiting, and soon you'll be able to afford that genome sequence you've always wanted. Makers of DNA sequencers are dropping costs and increasing speed at a rate that would make microchip manufacturers blush.

Illumina, the maker of DNA decoders, today lowered the cost of its consumer gene-sequencing product from $48,000 to $19,500, with a cost of $14,500 for people in groups of five or more that use the same physician. It also introduced a new category of customer: for patients who might get actionable medical information, such as cancer patients who could use genomic information to pick medicines, the service will be available for $9,500. The announcement was made at the Consumer Genetics Conference in Boston.

That is a stunning drop in price. Sequencing the first human genome cost $3 billion. Knome, a company that also sells consumers access to their genome and analysis of it, launched its service for $350,000 in December 2007. Now it will sequence your genome for $39,500. Earlier this year, Illumina announced a new DNA sequencer that would decode all the genes in the human genome for just $10,000, and said it expected prices to drop further; the $9,500 price tag for people who might have a medical reason to get sequenced indicates that costs have already dropped below that level.

It's a new notion that sequencing every DNA base pair could be more useful than just doing a targeted read of several genes, a cheaper and older technique. Last October, scientists at Yale diagnosed a baby with a genetic form of diarrhea by sequencing all of its protein-coding genes. Other examples of diagnosis-by-sequencing were published earlier this year in the New England Journal of Medicine.

Cancer patients are likely to be among the first to benefit from these dropping prices. The idea is that knowing a patient's gene sequences will help in picking targeted cancer drugs. At $9,500, the sequencing already costs less than a course of treatment with newer cancer medicines sold by Novartis and Eli Lilly.

Hoping to accelerate this trend, Life Technologies, Illumina's biggest competitor, today announced the creation of an alliance of cancer centers that will use gene sequencing to help patients pick treatments. The founding partners, including Fox Chase Cancer Center, Scripps Genomic Medicine and the Translational Genomics Research Institute (TGen), are also launching a pilot study aimed at determining whether whole genome sequencing can improve the management of hard-to-treat cancers. The announcement was also made at the conference.

Gregory Lucier, Life's chief executive, put up the inset graphic during his presentation. It shows the price drops in gene sequencing technology over the past decade, compared to Moore's Law, the axiom about exponentially increasing computing power coined by Intel co-founder Gordon Moore. What's amazing is that the gene sequencers now seem to be outpacing the microchips.
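
[A back-of-the-envelope check of that claim - a minimal sketch using only the figures quoted in this article, not the data on Lucier's slide:

    # Compare the decade's fall in sequencing cost with an idealized
    # Moore's Law (cost halving every 24 months).
    years = 10                              # first human genome (~2000) to 2010
    seq_fold = 3_000_000_000 / 9_500        # $3 billion down to $9,500
    moore_fold = 2 ** (years / 2)           # five halvings in ten years
    print(f"sequencing:  ~{seq_fold:,.0f}-fold cheaper")    # ~315,789-fold
    print(f"Moore's Law: ~{moore_fold:,.0f}-fold cheaper")  # ~32-fold

Even granting Moore's Law its full decade of compounding, sequencing costs fell roughly four orders of magnitude faster.]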

[The Affordable Personal DNA Is Coming - but "Is IT Ready for the Dreaded DNA Data Deluge?" - I asked and answered in my Google Tech Talk YouTube (2008). By 2009, it was obvious that the "DTC Genome Testing Business Model" (as it was; see my Churchill Club YouTube panel) was not complete without a "Personal Genome Computer" and "Personal Genome Assistant" genome computing architecture (see my PMWC2010 YouTube):

[Need for PGC & PGA - Pellionisz_at_JunkDNA.com]

^ back to top


Illumina Drops Personal Genome Sequencing Price to Below $20,000

BioIT World
June 3, 2010
By Kevin Davies

BOSTON – One year after Illumina introduced its personal genome sequencing service, CEO Jay Flatley announced a significant price drop to below $20,000, and potentially half that if there is clinical relevance.

Illumina's Individual Genome Sequencing service that Flatley debuted at the Consumer Genetics Show last year launched with a price of $48,000 for a whole genome sequence at 30-fold coverage. The service has to be ordered by a physician, and the results are also delivered back to the physician to discuss with the consumer. [This will open up the related questions: a) who pays for the sequencing, and "whose genome is it, anyway"; b) in this construction, there will be a very limited number of physicians able to understand the full DNA (they were never trained in an art that did not exist at the time of their schooling), let alone "discuss" it with the "consumer" (the time of medical doctors is far too expensive for "consumer discussions"). The viable business model is that of HolGenTech (see below), where consumers are empowered by interoperable health and genomic data to make a difference in their daily life: shopping by their genome - AJP]

With the introduction of the HiSeq instrument earlier this year, Illumina said the reagent cost of sequencing a full human genome had dropped to the $10,000 mark, which made the original IGS price tag of $48,000 appear a little steep by comparison. The new pricing that Flatley introduced today reflects the dramatic reduction in sequencing cost enabled by the HiSeq instrument.

The new cost of an individual genome sequence is $19,500. For groups of five people or more, the price drops to $14,500. Flatley also said that for a physician ordering a sequence for genuine clinical relevance, the price falls further to $9,500.

The only catch with the new pricing is that the sequence is no longer delivered on an iMac. “A little less elegant, a little less cool,” Flatley admitted.

Flatley disclosed that the IGS has sequenced at least 14 individuals to date. These include Flatley, venture capitalist Hermann Hauser, Henry "Skip" Gates and his father, Glenn Close, John West (former Solexa CEO) and his family of four, a cancer patient, two centenarians, and a severely ill child.

Goes to 11

Flatley briefly discussed analysis of his own genome sequence, illustrated with a live demo of his genome on an iPad app that had members of the audience drooling. Illumina shelved an earlier genome browser app for the iPhone after concluding the device didn't have sufficient power to run the app. [This we knew from the outset - just a few hours before Illumina's show of its iPhone "business model" at last year's Consumer Genetics Conference, I presented a demo of empowering consumers to shop by their genome using their generic smart phone (the "Personal Genome Assistant"). The architecture does call, however, for a robust enough "Personal Genome Computer" to be synced with; see the photo from YouTube in the article above - AJP]

Flatley said that detailed analysis of his genome, searching for known variants in mutation databases such as HGMD and PharmGKB, revealed 16 candidate homozygous and 48 heterozygous 'disease-causing' variants potentially associated with known genetic diseases. The accuracy of some of these annotations left a lot to be desired -- much to Flatley's relief. In some cases, the mutations were annotated as "… death in early infancy highly likely."

After further review, Illumina eliminated all 16 of the homozygous mutations as disease-related, and 37 of the 48 heterozygous variants. That leaves Flatley as a carrier of 11 confirmed ‘disease-causing’ alleles, for six recessive disorders and five dominant disorders, most of which Flatley admitted he had never heard of. Further analysis is being conducted.
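
[The arithmetic of that curation is simple (all 16 homozygous hits eliminated; 48 - 37 = 11 heterozygous hits left); the hard part is the review itself. A minimal sketch of the kind of database lookup that produces such candidate lists in the first place - every rsID, allele and annotation below is invented for illustration, not taken from HGMD, PharmGKB or Flatley's genome:

    # Toy triage: flag genotypes matching a 'disease-causing' database
    # entry and split them by zygosity; the curation step that removed
    # most of Flatley's hits is left to human review.
    annotations = {                 # hypothetical rsID -> (risk allele, condition)
        "rs0000001": ("T", "example recessive disorder"),
        "rs0000002": ("A", "example dominant disorder"),
    }
    genotypes = {"rs0000001": "TT", "rs0000002": "AG", "rs0000003": "CC"}

    for rsid, gt in genotypes.items():
        if rsid not in annotations:
            continue
        risk, condition = annotations[rsid]
        copies = gt.count(risk)
        if copies == 2:
            print(f"{rsid}: homozygous candidate - {condition}")
        elif copies == 1:
            print(f"{rsid}: heterozygous candidate - {condition}")

As the article shows, most raw hits of this kind do not survive expert review.]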

Flatley presented a live demo of his genome using a custom-built app for the iPad. (The app is not yet publicly available.) The app presented a host of features, including a list of the disorders linked to Flatley's known variants and an "about me" tab to build a family tree and enter health information, which is linked to Microsoft's HealthVault.

A “Favorites” tab provides access to favorite genes (Flatley gave the example of “athletic tendency”) extracting SNPs in real time. There was also a genome browser, providing the ability to drill down from the whole chromosome level to the nucleotide level; a pathways tab; and a sharing tool that could provide access to a physician.

[The significance is NOT that Illumina dropped its full DNA sequencing price by a staggering 80% - since even its lowest price is already undercut by Complete Genomics, and the release of Pacific Biosciences' sequencer promises a full human DNA "for the price of a meal in a restaurant" (in maybe half an hour or so). The significance is that with Illumina dropping the smart phone application, HolGenTech is now spearheading genome computing architecture. The business model, technology and IP are available from HolGenTech - email also to Pellionisz_at_JunkDNA.com]

^ back to top


The Genome Project is 10 Years Old - Where is the Health Care Revolution?

Singularityhub.com
May 25th, 2010
Drew Halley

"It is fair to say that the Human Genome Project has not yet directly affected the health care of most individuals." - Francis Collins, April 2010, Nature.

What's in a genome? Ten years ago, the completion of the Human Genome Project promised to usher in a whole new era of health care. Revolutionary gene therapies would soon conquer everything from cancer and heart disease to diabetes and autoimmunity. A roll-call of our genes would unlock the causes of (and the solutions to) death and disease. But a decade on, most of these hopes have failed to materialize, and most of our lives haven't changed. So where's the revolution?

A recent retrospective in Nature includes some sobering reviews by such genetic gurus as Craig Venter and Francis Collins. Sure, there have been some significant gains. In vitro genetic screening has greatly reduced the risk of many common genetic diseases at the pre-birth stage. Risk factors for a range of adult diseases (including cancer) are coming into focus, and a host of new drugs have been developed. But while scientists expected to find common genetic determinants underlying common diseases, they quickly discovered that the genome was anything but straightforward. Instead, the genes behind disease have been shown to be highly complex and individually variable, even for widespread disorders. There isn't a SNP for cancer.

The problem is that currently, the field of genomics is data-rich and application-poor. Thanks to companies like Complete Genomics, there is a flood of new genetic data and even more on the way - but we still don’t know how it works. So far, the primary focus of interest (and funding) has been the most easily quantifiable advances, such as sequencing speed and costs. Accomplishments in this arena have been impressive, but a complementary push for clinical applications is needed to sort through all of this genomic data that we still don’t understand.

The fate of commercial genetics hangs in the balance. Companies like deCODE and 23andMe were born on the hope that laypeople might be willing to pay for a glimpse at their own DNA. The bankruptcy of deCODE and troubling rumors about 23andMe raise the question of whether personal genomics is an industry born premature. So far, their products feed a curiosity niche, not a utilitarian one. When a genome points to little more than SNP-based correlations, few people can justify spending their recession-hit income on what remains a biotech novelty.

As Collins, Venter and others have suggested, a health care revolution requires bridging the gap between genomic data and its clinical utility. The disappointments of the past decade point to the directions of the next. We're learning that so-called "junk DNA" isn't really junk, but can regulate the expression of other, coding sequences of the genome. Untangling the various networks of gene regulation will illuminate the pathways that result in a given phenotype, pathological or not. Epigenetic processes are also undoubtedly complicating factors that will need to be better understood.

Most approaches over the past decade have used SNP chip analysis to identify mutations associated with particular phenotypes. This type of analysis only looks at small parts of the genome, and has largely failed to identify the genetic determinants of most diseases. The SNP chip approach will be phased out as whole-genome scans become faster and more affordable (costs should drop below $1000 within the next three years). Complete Genomics aims to sequence 1 million human genomes within the next five years, and that’s a lot of data to crunch. Venter is calling for two ways of making better sense of this flood of whole-genome scans: more detailed phenotype analyses, and the development of computational tools that can link them to their genetic counterparts.

It's interesting to note the parallel between difficulties encountered in genomics and neuroscience. Recent years have seen an increasing shift in brain science from localization (areas of the brain that "do" things) towards neural-network approaches. Just as we're unlikely to find a single gene that causes cancer, we're not going to find the "irony zone" of the brain anytime soon. Reconceptualizing both genomics and the brain as complex, interactive networks remains a necessary step to significant advances in either field (e.g. a health care revolution or AI, respectively). And despite these setbacks, we can expect big things on the way.

Genetics has already revolutionized our health care in certain respects. Preimplantation genetic diagnosis (PGD) has already made huge progress towards eradicating genetic disease before birth, a significant but often overlooked accomplishment. But more lies ahead. Coming decades will see the creation of genetic therapies based around the specific molecular details of a given disorder. Diseases such as pediatric cancer are already the target of multi-year genomic research, and more diseases will benefit from genomic research as costs come down. And as the genetic underpinnings of disease come into focus, personal genetics will also undoubtedly enjoy a second life - regardless of whether today’s companies survive to see it.

[This is a remarkable overview of the decade since the "Genome Project" - but it is seriously flawed. It is trivial that the "Decade" was not that of any "Genome Project", but of the "Human Genome Project". More of a nuance is that the Decade since "the completion of the FIRST DRAFT" of a non-existent "human genome" (a composite of five individual donors...) will only be complete on June 25, 2010. Thus, we have some time for an unfolding national and global introspection that is actually correct. Below, we'll draw some "talking points" that have been masked by waaaay too much politics and ego-battles over the years. Perhaps the most significant difference in viewpoints is that, IMHO, one must separate "breakthroughs of science" from "industrialization of scientific achievements" - Pellionisz_at_JunkDNA.com]

^ back to top


Scientist: 'We didn't create life from scratch'

From CNN reports

May 21, 2010 4:45 p.m. EDT

(CNN) -- Genetics pioneer J. Craig Venter announced Thursday that he and his team have created artificial life for the first time.

Using sequences of genetic code created on a computer, the team assembled the complete DNA of a bacterium, then inserted it into another bacterium and initiated synthesis - or, in Venter's words, "booted up" the cell.

In a statement, Venter called the results "the proof of principle that genomes can be designed in the computer, chemically made in the laboratory and transplanted into a recipient cell to produce a new self-replicating cell," controlled only by the synthetic genome.

Venter answered questions Thursday about the achievement.

CNN: What exactly have you done?

J. Craig Venter: We announced the first cell that is totally controlled by a synthetic chromosome, that we designed in a computer based on an existing chromosome.

We built it from four bottles of chemicals... that's over a million base pairs [of DNA]. We assembled that and transplanted it into a recipient cell and that new chromosome started being read by the machinery in the cell, producing new proteins, and totally transformed that cell into a new species coded by the synthetic chromosome.

So it's the first living self-replicating cell that we have on the planet whose DNA was made chemically and designed in the computer.

So it has no genetic ancestors. Its parent is a computer.


CNN: What's its name?

Venter: "It is software.. It's DNA software."

CNN: How big a deal is this?

Venter: It's for others to describe. It's a big deal for us. We've been working on it for 15 years. It gives us tools to work with that haven't existed before. And we have some huge challenges.

We need new tools in science - allowing us, for example, new organisms that can more efficiently capture CO2 and convert it into fuel, so we can get weaned off of oil.

We can create new food substances. ... We can create new ways to create clean water. We are already going to create new vaccines to treat diseases that emerge each year like the flu, so it's a new tool for scientists to work with.

But it's also a change conceptually. This is the first time we had a life form whose genetic code was made chemically. It tells us about the dynamic nature of life... that it changes second to second.

You take away the DNA, we're dead very quickly. You can't have life without the genetic code.

CNN: What does this mean for the average person?

Venter: This is a basic science breakthrough that now takes us from what was a hypothetical possibility -- that we could have synthetic life in these tools -- to rapidly advance to get some breakthroughs.

This is the first baby step that allows us to do that.

But it's a conceptual change... because we know it's possible. It should give people hope that we have new tools to tackle these problems.

CNN: Did you create new life?

Venter: We created a new cell. It's alive. But we didn't create life from scratch.

We created, as all life on this planet is, out of a living cell.

CNN: Some critics suggest you shouldn't make life from a computer.

Venter: People have been discussing this for the past decade since we've made incremental steps trying to get to this point. This is the fourth scientific publication in a series since 2003, so you can find 100,000 blogs out there discussing philosophically what this means, where does it take us, can we build things based on our imagination. So I think this will stimulate a lot of thinking, a lot of discussion.

CNN: Could you build an actual living organism - Frankenstein-like?

Venter: Well these are very small cells. They are living. They are self-replicating. But if you're trying to advance life forms like you and me, I think that's still in the realm of science fiction.

CNN: This is a big deal ... What's next?

Venter: People keep asking me that at various dates. I was asked that 10 years ago after sequencing the human genome - you couldn't possibly top that - so we consider this a more important accomplishment than sequencing the human genome. So, following through on that is what's next for us, and seeing if we can create some of these cures for the planet.

CNN: How excited were you? Did you pop champagne?

Venter: We did, but our initial emotion was more one of relief that it finally worked. You can imagine 99 percent of your experiments fail for one reason or another. This, when it finally worked, we were more relieved than excited.

CNN: What does it mean to you?

Venter: When you work on something for 15 years, it's a great sense of accomplishment. This is a demonstration of what new multidisciplinary team science is about, and I couldn't be prouder of our team.

^ back to top


The Journal Science Interviews J. Craig Venter About the first "Synthetic Cell"


^ back to top


Get Your Genotype Tests Now Before Congress Makes Them Illegal

Ronald Bailey | May 26, 2010

A couple of weeks ago, we saw the sorry saga of the Food and Drug Administration stomping on the effort by the direct-to-consumer (DTC) genotype screening company, Pathway Genomics, to offer its tests through drugstores. Pathway had reached an agreement with Walgreens to sell its test kits over-the-counter in its 6,000 or so stores. The FDA sent a threatening letter asking Pathway to justify the unregulated sale of a "medical device" to the public, and Walgreens backed away from its deal with the company.

Now the Congressional nanny-in-chief and head of the House Energy and Commerce Committee, Rep. Henry Waxman (D-Calif.) is demanding information by June 4 from three DTC companies, Pathway Genomics, Navigenics, and 23andMe. As Bloomberg reports:

The lawmakers gave the companies until June 4 to submit documents on the ability of the tests to identify consumers’ risks for illnesses. The legislators also requested information on the proficiency of the companies’ lab testing, policies on consumer privacy and whether the kits comply with FDA rules.

Some purchasers of the screening tests may be dissatisfied with their experiences, but I would suggest that most are first-adopter types who recognize the current limitations of genetic screening science. The way that consumers learn about the upsides and downsides of new products is to try them out, just as the way companies learn how to improve their products is through customer feedback. As often occurs, the "I'm-from-the-government-and-I'm-here-to-help" types are eager to interfere with this kind of speedy social learning.

If you've been thinking about buying a gene screening test, you might want to go ahead now before Congress and the FDA make it illegal for you to get this kind of information. Just saying.

Disclosure: I [Ronald Bailey] am a happy customer of 23andMe (though I really wish their test had screened for the APOE4 allele associated with a much higher risk of Alzheimer's disease). Given this news, I am going to order a new test from another company today. I own no stocks in any gene screening companies. Finally, my article on the joys of DTC gene screening and the exaggerated concerns over genetic privacy has at last been submitted to my editors at Reason, who are now busy making improvements to it.

[Just as when 23andMe offered a stunning discount ($99 instead of the usual $499) for their DTC service on DNA Day, the looming Congressional investigation may trigger another avalanche of consumers who wish to assert their inalienable right to know their genome. I also encourage people to do so, with the specific note that leading DTC Genome Testing companies (like 23andMe and Navigenics) include in their price the ability for the consumer to download, from the secure site, their own "raw SNP data file" - which is theirs not only because it is a characterization of their own individual diversity, but also because they have already paid for it. It is important to know that NOT all DTC Genome Testing companies provide you with the raw SNP data file - perhaps because, according to some unofficial statements from companies that do provide this capability, very few customers actually download it. While to the average customer it looks like just a bunch of numbers, it is a treasure: the "Shop for your life" YouTube, for example, was made after Ms. Boonsri Dickinson kindly downloaded her file from a DTC Genome Testing company and - under a confidentiality agreement that HolGenTech would only use those results that she herself had made public - sent us her "raw SNP data file" to make it eminently usable in daily life. - Pellionisz_at_JunkDNA.com]
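
[For readers wondering what that "bunch of numbers" looks like: a minimal sketch, assuming a 23andMe-style raw data file - tab-separated text whose comment lines start with '#' and whose data lines carry rsid, chromosome, position and genotype. The file name below is hypothetical:

    # Read a 23andMe-style raw data file: '#' lines are comments, data
    # lines are tab-separated rsid, chromosome, position, genotype.
    def read_raw_snps(path):
        snps = {}
        with open(path) as fh:
            for line in fh:
                if line.startswith("#") or not line.strip():
                    continue
                rsid, chrom, pos, genotype = line.rstrip("\n").split("\t")
                snps[rsid] = (chrom, int(pos), genotype)
        return snps

    snps = read_raw_snps("genome_raw_data.txt")        # hypothetical file name
    print(len(snps), "markers;", snps.get("rs4680"))   # rs4680: a commonly reported SNP

Once parsed, those few hundred thousand markers are exactly what applications such as genome-driven shopping can act on.]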

^ back to top


Who Should Control Knowledge of Your Genome?

SmartPlanet
Dana Blankenhorn | May 26

A Congressional investigation has brought into sharp relief some of the political implications of genetic testing.

Rep. Henry Waxman and his House Committee on Energy and Commerce have sent letters to the leading makers of genetic testing kits — Pathway Genomics, Navigenics, and 23andMe — as the first step in an investigation of the direct to consumer (DTC) genetic testing industry.

This followed the FDA's decision to send Pathway a letter demanding it show either FDA approval of its tests or its reasons why such approval is not necessary.

This was prompted by Walgreens' and CVS' plans to offer the Pathway test in their stores. Pathway already sells its test online.

The Pathway test costs just $30 and involves collecting a bit of saliva, then sending it to the company’s lab.

(And to think just last month our Boonsri Dickinson was writing about $99 tests and her own experience with the $999 version, which said she was at risk for macular degeneration later in life.)

The investigations have sparked a hair-on-fire moment for the industry, with people like Andras Pellionisz (above) of HolGenTech ...

“Consumers must ask, ‘whose genome is it anyway?’” Pellionisz says.

His concern is that any restriction on Direct To Consumer (DTC) or Over The Counter (OTC) genetic testing will give foreign competitors like deCODEme of Iceland and Korea's DTC Genome Testing institution (backed by Samsung) a leg-up on a lucrative market.

That may be true. But there are also some serious questions to be asked, questions that have not been asked yet, before we make genetic tests as ubiquitous as home pregnancy kits:

How useful are they, really – All of us have DNA which shows how we might die. [Not true, see e.g. George Church - AJP]. Are these kits just creating needless panic?

How accurate are they – There are reports of cancer-free women ordering mastectomies because there is breast cancer in their family history already.

Are we ready yet – Scientists don't yet know what a complete genetic test means. Given that reality, most of what a test delivers will be as useful as a palm reading.

It’s true that you can sell anything you want to people if you don’t claim it’s medicine. But genetic tests are medicine. [Not true, see below - AJP]

Even if the terms of service for 23andMe prohibit sharing the data with your doctor, people will share the data with their doctors, as Steve Murphy of GeneSherpas recently noted. What other reason is there for getting the test?

Pellionisz is right about one thing. It may be way too late to put this one back in the box. Al Gore helped launch Navigenics. And 23andMe co-founder Anne Wojcicki is also Mrs. Sergey Brin, as in Google co-founder Sergey Brin.

The industry has also been getting on TV, with celebrities like Larry David getting their DNA tests read out on the George Lopez Show.

It’s not the “consumer wanting to know what’s going to kill me” market that should be the political issue in any case. It’s the identification market.

This month the full U.S. House approved legislation that will pay states to collect DNA samples on all those people arrested for any crime, as a crime-fighting measure.

That’s where the money is. That’s where the politics is.

So where do you stand on the issue? Want your DNA tested? Want to be able to resist having it tested? It's your DNA — who should know about it?

---Comment (1) by Pellionisz

05/26/10

RE: Who should control knowledge of your genome?

While I largely agree with Dana, there are some fine points to make. Perhaps the most significant disagreement is that in his opinion genome testing can be boxed into "Medical". IMHO there is plenty to think "outside the box". Indeed, establishing some A,C,T,G letters - in DTC up to 1.6 million of them, or in full DNA sequencing all the 12.4 gigabits of information of the diploid DNA - is just a mapping out of individual human diversity.
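
[The arithmetic behind that figure, with a minimal 2-bit packing sketch (the ~6.2 billion bases of a diploid genome and this particular encoding table are the usual textbook choices, not anything vendor-specific):

    # The "2 bits per base" arithmetic behind the 12.4 gigabit figure,
    # plus a minimal packing routine.
    CODE = {"A": 0b00, "C": 0b01, "T": 0b10, "G": 0b11}

    def pack(seq):
        """Pack an A/C/T/G string into an integer, 2 bits per base."""
        value = 0
        for base in seq:
            value = (value << 2) | CODE[base]
        return value

    print(f"{pack('GATTACA'):014b}")              # 7 bases -> 14 bits
    bits = 6.2e9 * 2                              # ~6.2 billion diploid bases
    print(f"{bits / 1e9:.1f} Gbit = {bits / 8 / 2**30:.2f} GiB uncompressed")

Two bits per base is also why FPGAs, built for low-bit-width parallel logic, suit genome processing so well (see the Convey article below).]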

Humans differ from one another in only a fraction of one percent of their A,C,T,G bases (more if structural variants are counted). Such differences can be just individual traits - your eye color different from mine, or many Chinese friends of mine unable to metabolize lactose once they grow out of childhood, while most Northern Europeans continue to thrive on it. My YouTube-s ("Pellionisz"), especially "Shop for your Life!", put an emphasis on the immediate impact on consumerism of using genomic information that is not medical at all. Likewise, a genomic test that reveals your ancestry tree is not medical.

I congratulate - and support with my tax dollars - those governments that serve their taxpayers well. Governments wasting resources, or using them to try to prevent people from gaining knowledge of their own bodies, make attempts that I disagree with. The US government has just poured an enormous amount of money into genomics (also because of an interest in bioenergy - nothing to do with "medical"...) and has also re-vamped the health-care system.

It goes unmentioned in the posting that US health care will simply be unsustainable if genome-based prevention does not save trillions by preventing or delaying some of the most expensive and (not only individually, but also socially) devastating diseases (neurological disorders, cancers).

[This blog (and reply) was prompted by Dr. Pellionisz' posting in the HolGenTech Blog, "Justifying DTC Genome Testing with Consumerism", which (tries to...) make it clear that the fulcrum of the wild media coverage of the "DTC debate" truly is that DTC Genome Testing is much bigger than "The Box of US Medicine" (and thus is a global issue, not even limited to the USA). The debate is already sizable, and escalates rapidly. Thus far (as the above blog by Dana Blankenhorn also illustrates) the debate is extremely diffuse, with all kinds of issues mentioned (and unmentioned...). Some of them, like "All of us have DNA which shows how we might die", are fairly common remnants of an obsolete, gloomy (mis)understanding of genomics - already superseded by new knowledge supplied by epigenomics, giving us hope that "The Genome is NOT Your Destiny". - Pellionisz_at_JunkDNA.com]

^ back to top


'Junk' DNA behind cancer growth

ANI, May 21, 2010, 12.00am IST

Scientists have discovered a new driving force behind cancer growth.

Researchers from the University of Leeds, UK, the Charite University Medical School and the Max Delbruck Centre for Molecular Medicine (MDC) in Berlin, Germany, have identified how 'junk' DNA promotes the growth of cancer cells in patients with Hodgkin's lymphoma.

Professor Constanze Bonifer (University of Leeds) and Dr Stephan Mathas (Charite, MDC) who co-led the study suspect that these pieces of 'junk' DNA, called 'long terminal repeats', can play a role in other forms of cancer as well.

The researchers uncovered the process by which this 'junk DNA' is made active, promoting cancer growth.

"We have shown this is the case in Hodgkin's lymphoma, but the exact same mechanism could be involved in the development of other forms of blood cancer. This would have implications for diagnosis, prognosis, and therapy of these diseases," said Bonifer.

'Long terminal repeats' (LTRs) are a form of 'junk DNA' - genetic material that has accumulated in the human genome over millions of years.

Although LTRs originate from viruses and are potentially harmful, they are usually made inactive when embryos are developing in the womb.

If this process of inactivation doesn't work, then the LTRs could activate cancer genes, a possibility that was suggested in previous animal studies.

This latest study has now demonstrated for the first time that these 'rogue' active LTRs can drive the growth of cancer in humans.

The work focused on cancerous cells of Hodgkin's lymphoma that originate from white blood cells (antibody-producing B cells).

Unusually, this type of lymphoma cell does not contain a so-called 'growth factor receptor' that normally controls the growth of other B-cells.

They found that the lymphoma cells' growth was dependent on a receptor that normally regulates the growth of other immune cells, but it is not usually found in B-cells.

However, in this case the Hodgkin/Reed-Sternberg cells 'hijacked' this receptor for their own purposes by activating some of the 'junk DNA'.

In fact the lymphoma cells activated hundreds, if not thousands, of LTRs all over the genome, not just one.

Hodgkin/Reed-Sternberg cells may not be the only cells that use this method to subvert normal controls of cell growth.

The researchers found evidence of the same LTRs activating the same growth receptor in anaplastic large cell lymphoma, another blood cancer.

The consequences of such widespread LTR activation are currently still unclear, according to the study's authors.

Such processes could potentially activate other genes involved in tumour development. It could also affect the stability of chromosomes of lymphoma cells, a factor that may explain why Hodgkin/Reed-Sternberg cells gain many chromosomal abnormalities over time and become more and more malignant.

The study has been published in Nature Medicine.

[Excerpt from Nature Medicine article]

During evolution, mammalian genomes have accumulated many LTRs derived from ancient retroviral infections [1]. LTRs in the human genome, of which several thousand copies belong to 'mammalian apparent LTR retrotransposon' (MaLR)-like sequences [2], contain functional promoter and enhancer elements [1]. As the insertion of an active LTR can interfere with gene regulation, the mammalian organism has devised a number of surveillance mechanisms to silence these elements early in development, usually by DNA methylation [3]. Despite this, genome-wide analysis of the human transcriptome has revealed that an unexpectedly high proportion of transcripts initiate within repetitive elements [4]. However, the initiation of gene transcription from repeat elements has been documented in detail for only a few human genes, where LTRs function, for example, as alternative promoters regulating cell type-specific gene expression [1]. Although it has long been speculated that the aberrant activation of repeat elements could contribute to the development of human diseases and malignancies [1,5], the pathogenetic relevance of repeat activation is unclear [1,6]. In mice, deletion of the lymphoid-specific helicase (Lsh) gene or a hypomorphic DNA methyltransferase-1 (Dnmt1) allele causes activation and transposition of endogenous retroviral elements with concomitant chromosomal instability and induction of erythroleukemias or T cell lymphomas, respectively [7-10].

["Repeats" have long been considered "the Junkiest parts of "Junk DNA". Indeed, so-called "repeat maskers" got rid of them, BEFORE most DNA sequences were analyzed. This article is an experimental support that genome regulation to a large extent is based on correct (in FractoGene's terms, "free of fractal defects") action of genome regulation, based on the Principle of Recursive Genome Function (an algorithmic, thus software-enabling approach. With the article below, demonstrating that such software can be made extremely effective (fast and green) by hybrids of parallel and serial processors, we have entered the new era when the oncoming full DNA sequences will be "syntax-checked" for a slew of "structural variants" both based by the brute force of serendipituous seach, and targeted ultra-fast search of fractal defects, directed by the new axiomatic science of genome informatics. - Pellionisz_at_JunkDNA.com]

^ back to top


Convey Computer Hails Genomics Search Record

May 24, 2010

[Traditional serial processing - Convey is great at conveying ideas ... - AJP]

[Parallel processing increases throughput - AJP]

[Hybrid architecture with quad serial and one parallel processor - AJP]

Richardson, Texas-based Convey Computer, a developer of hybrid-core computers that use hardware to accelerate software processing, reported today that it has demonstrated an implementation of a genomics analysis algorithm, the Smith-Waterman algorithm, that is 172 times faster than conventional software. Smith-Waterman is an algorithm used to analyze DNA and protein sequences for similarity matching. Convey's hardware uses Intel Xeon processors and Xilinx FPGAs (Field Programmable Gate Arrays) to implement and speed up software operations. The firm said its hardware has started to be adopted by the life science research industry, and is being used at the University of South Carolina and the Virginia Bioinformatics Institute (VBI) at Virginia Tech. Convey is backed by Braemar Energy Ventures, CenterPoint Ventures, Intel Capital, InterWest Partners, Rho Ventures, and Xilinx.
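
[For reference, the recurrence being accelerated - a minimal Smith-Waterman sketch with textbook scores (match +2, mismatch -1, gap -1; these are illustrative defaults, not Convey's parameters):

    # The Smith-Waterman local-alignment recurrence with a linear gap
    # penalty; returns the score of the best local alignment.
    def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
        H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        best = 0
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
                best = max(best, H[i][j])
        return best

    print(smith_waterman("ACACACTA", "AGCACACA"))   # -> 12

Hardware such as FPGAs gains its speed by computing many cells of this dynamic-programming matrix in parallel - exactly the low-bit, data-parallel work the comment below refers to.]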

[Convey is not the first to demonstrate (the obvious) that the genome (essentially a parallel processor with 2-bit bases of A,C,T,G-s) is ideally suited for FPGA (parallel, low-bit) manipulations. Both SGI and Mitrionics have done so earlier (Danish CLCbio also made an attempt), and Dr. Pellionisz, as Director of Genome Informatics of Mitrionics, lent some of his energies in the early part of 2008 - before the major recession hit - to see if a small European (Swedish) company was ambitious and resourceful enough to break into the Genomics market. In today's rather different ecosystem, with a tsunami of full DNA sequences hitting the flabbergasted genomics community, it is almost certain that for Convey, operating from the heart of "Think Big" Texas, the time has come to turn early pioneering into a major lucrative business - Pellionisz_at_JunkDNA.com]

^ back to top


Transparency First: A Proposal for DTC Genetic Testing Regulation

Posted by Dan Vorhaus on May 24, 2010

These are hectic days for the field of direct-to-consumer (DTC) genetic testing. Every week, and sometimes every day, seems to bring a new development. Two weeks ago it was pharmacy giants Walgreens and CVS unveiling agreements with Pathway Genomics to offer Pathway’s genetic testing kits in drugstores nationwide, to which the FDA responded first by declaring such a strategy illegal and, shortly thereafter, launching an investigation. Last week, on the same day that the University of California, Berkeley announced it would be offering genetic tests to all incoming freshmen, a House of Representatives committee announced it was launching its own investigation into three prominent DTC genetic testing companies.

These developments reflect an uncertainty about the regulatory status of DTC genetic testing that is dramatic, although it is not new. In the summer of 2008, public health officials in New York and California sent warning letters to a number of DTC companies, including 23andMe and Navigenics (both targets of the current Congressional investigation). These state regulatory activities prompted concern that other states might follow suit, potentially subjecting DTC companies to the nightmare scenario of inconsistent state-by-state regulation. Nearly two years later, those particular concerns appear to be unfounded.

An Inevitable Regulatory Response. But as the DTC genetic testing industry expanded, state and federal regulators grew increasingly conspicuous by their silence. The possibility of regulatory activity has been the elephant in the DTC room for some time now (at the Genomics Law Report we have been writing about the possibility of DTC regulation since our inception) and, indeed, many DTC companies have long indicated that they would welcome more definitive federal regulation.

The specific trigger for this recent flurry of activity by the FDA and Congress is something of a puzzle - the distinction between DTC genetic tests offered on Walgreens' shelves as opposed to online at Amazon.com is difficult to parse, and the FDA's initial comments, delivered through the media by a variety of spokesmen, have frequently confused rather than clarified. But the simple fact is that a regulatory response to DTC genetic testing was overdue. That it happened to be Pathway's attempt at creative product placement will prove to be, ultimately, nothing more than a footnote to a larger ongoing discussion about the proper place of DTC genetic testing in this country. [Dan's brilliant and law-expert treatise falls short on the global issue - DTC is already pursued (in fact, was started) off-shore from the US, in Iceland, and just recently a DTC genome testing institution started in Korea with the backing of SAMSUNG - AJP]

For the remainder of this post, rather than speculate about what manner of regulatory response will be forthcoming from the FDA, Congress and elsewhere, we ask (and answer) the 550,000 SNP question instead: if a regulatory response to DTC genetic testing is inevitable, what should it look like?

A Transparent Solution. More than anything else, what the DTC genetic testing industry needs right now is enhanced transparency, and not necessarily in the form of traditional direct regulation by the FDA. Rather than driving the regulation of DTC genetic tests through traditional channels, such as the FDA’s premarket review and approval regime for medical devices, regulators should focus instead on shining a bright light on DTC genetic testing, improving their own and the public’s understanding of what information is available to consumers and how that information is actually used.

As it happens, creating greater DTC transparency can be most efficiently accomplished without the application of regulations that would be onerous for early-stage DTC companies and their investors, restrictive for consumers interested in the broadest access to their genetic information and expensive and time-consuming for regulators to enforce.

Over the next 6-9 months, the DTC genetic testing industry and regulators, working together, should take three key steps to enhance transparency industry-wide, ensure that customers, regulators and healthcare professionals are better able to understand and evaluate the products offered, and encourage the DTC industry to grow responsibly without more traditional regulation.

Step 1: Make Participation in the NIH’s Genetic Testing Registry Mandatory. The most promising development for improving transparency with respect to specific DTC genetic testing companies and products is the recently announced and NIH-backed Genetic Testing Registry (GTR). The GTR is a direct outgrowth of a 2008 report prepared by the Secretary’s Advisory Committee on Genetics, Health, and Society (SACGHS) on the “U.S. System of Oversight of Genetic Testing” (pdf). The SACGHS report recommended the following:

To enhance the transparency of genetic testing and assist efforts in reviewing the clinical validity of laboratory tests, HHS should appoint and fund a lead agency to develop and maintain a mandatory, publicly available, Web-based registry for laboratory tests. (emphasis added)

Although announced two months ago as a voluntary initiative, many of the GTR's supporters have long argued that such a registry should be mandatory, and its voluntary character is unquestionably the GTR's most significant departure from the original SACGHS recommendation. At its current early stage of development, however, there is plenty of time for that feature to change.

The upside of the GTR is clear. It would provide a single, comprehensive source of information about DTC genetic tests for regulators, purchasers and other end users (including healthcare professionals), enabling side-by-side comparison of tests and allowing regulators or neutral third parties to evaluate the accuracy of their data and claims. It would also likely standardize (or at least clarify) test offerings, spur healthy competition between providers and enable consumers to make purchasing decisions on the basis of meaningful criteria (e.g., price, information content, insurance coverage, etc.) instead of marketing campaigns.

All of these benefits, however, depend on widespread participation in the GTR by DTC genetic testing companies. At the time of the GTR's announcement, current NIH chief of staff (and long-time GTR proponent) Kathy Hudson conceded that, while she would have preferred a mandatory registry, it was unclear whether the NIH had the authority to enforce such a requirement. While that is likely the case, (arguably) the FDA and (certainly) Congress have the authority to render participation in the GTR mandatory for DTC genetic testing companies [1]. DTC companies should be eager to embrace a mandatory GTR (as at least one already has) as a relatively painless way to demonstrate to the public and to regulators their willingness to cooperate and their commitment to providing high-quality and transparent genetic testing services.

Step 2: Continue to Improve FDA Regulatory Transparency. An editorial appearing in last week’s New England Journal of Medicine by two senior FDA officials describes the agency’s recent and substantial efforts to improve transparency. The FDA’s Transparency Task Force is currently entering its third and final phase, seeking ways to improve transparency to regulated industries.

As part of this initiative, the FDA is seeking comment on 21 proposals (pdf) designed to enhance transparency at the agency. The FDA’s recommendations, particularly recommendations 10 and 11, could significantly improve public understanding of how and when the FDA evaluates regulated medical devices. The recommendations are an important step in improving transparency at an agency that has not had enough of it in recent years. Unfortunately, none of the proposed recommendations would improve the transparency of the process by which the FDA determines which products to regulate in the first place, including DTC genetic tests.

Here is where the FDA, specifically the agency’s Office of In-Vitro Diagnostic Device Evaluation and Safety (OIVD), headed by Director Alberto Gutierrez, can help appropriately regulate DTC genetic tests without actually increasing its already substantial regulatory burden. Not only should the FDA encourage transparency across the DTC genetic testing industry by supporting the NIH in its development of the GTR, and strongly encouraging or even requiring DTC genetic testing companies to participate, it should also work closely with the NIH, industry and other key stakeholders to clarify exactly what information it would like to see included in the GTR.

One of the difficulties for the DTC genetic testing industry, at least at present, is that the FDA has been less than clear in describing the elements of DTC genetic tests that most concern the agency. Is it the list of conditions or genetic variants tested? The nature of the claims (informational vs. medical) made by the company or the product? The physical locations at which a test is sold? Or is it some other consideration entirely or, more likely, a combination of all of the above?

The FDA is clearly still refining its policy with respect to DTC genetic tests, and there is plenty of time for it to continue to do so. In the meantime, it should involve DTC companies and customers, medical professionals, policymakers and other key stakeholders in determining the relevant information to collect and review with respect to DTC genetic testing. In doing so it can use the already-in-development GTR as a public tool for transparently refining its policy and collecting relevant information. This approach would go a long way toward eliminating the case-by-case review of DTC genetic testing companies and products that appears to have characterized the FDA's approach to date.

Step 3: Involve the Federal Trade Commission. As we wrote last week, there is another regulatory agency that could play an important role in the development of DTC genetic testing: the Federal Trade Commission (FTC).

In 2006, the FTC worked with the FDA and the CDC to publish a guidance document for consumers entitled At-Home Genetic Tests: A Healthy Dose of Skepticism May Be the Best Prescription. Four years in the area of DTC genetic testing is an eternity – the FTC’s guidance was issued before any of 23andMe, Navigenics and Pathway Genomics, the three companies currently the focus of the Congressional investigation, existed – but it indicates that the agency has at least some familiarity with the industry. More importantly, the guidance reminds consumers of the FTC’s mission, which has not changed: “to work[] for the consumer to prevent fraudulent, deceptive, and unfair business practices in the marketplace and to provide information to help consumers spot, stop, and avoid them.”

By making the GTR mandatory (Step 1) and working with the FDA to carefully specify the relevant information to be included in the registry (Step 2), the FTC would be well positioned to monitor the DTC genetic testing industry for companies unwilling to subject their products or claims to the public scrutiny afforded by the GTR (Step 3). While the FDA and Congress have launched investigations into well known DTC genetic testing companies, there are a plethora of other companies (see, for example, this list at DNA Test Index or this list at AccessDNA) that appear, for the moment, to have escaped the attention of regulators. Rather than require the FDA or Congress to investigate each new DTC genetic testing company that sprouts up, why not require those companies to register with the GTR?

This would provide the FTC, along with the rest of us, with a single point of entry to collect and evaluate registered DTC genetic testing companies, while those companies that refuse to participate in the GTR will likely be quickly ferreted out and referred to the FTC by an active community of DTC genetic testing companies and consumers with a vested interest in maintaining order industry-wide.

Reports of DTC’s Death Greatly Exaggerated? Using a community- and transparency-driven approach would make it easy to separate the DTC wheat from the chaff, enabling legitimate DTC companies to continue to provide consumers with the genetic information they desire, while minimizing the risk that consumers will be presented with false or misleading genetic testing products or services.

More importantly, focusing on transparency and sustained information gathering is an appropriate, measured response to the developing DTC genetic testing industry - one that will bring companies, consumers and regulators into closer collaboration, without imposing a regulatory regime that would risk stifling the creativity and growth of the industry or depriving consumers of the ability to directly access their genetic information. While it has long been inevitable that regulatory agencies would play a significant role in shaping the future of the genetic testing industry, there is absolutely no reason why, with that day apparently upon us, that development need spell the death of DTC.

__________________________

[1] As discussed in our earlier article, there is some disagreement over whether the FDA has such authority. What is not debatable, however, is that Congress, should it desire to do so, could take action that would remove all doubt as to the authority of the FDA (or another agency of its choosing, such as CMS) to regulate DTC genetic tests.

Note, also, that a GTR that was mandatory for DTC genetic testing companies would not need to be mandatory for all providers of genetic tests. A majority of genetic tests are not provided directly to consumers, and this would be a relatively clear distinguishing characteristic upon which to evaluate whether a test was required to be included in the GTR, or simply permitted to be included at the provider’s discretion.

[Dan's continuing analysis of the "DTC debate" is by and large perhaps the best, exuding his legal expertise and widespread knowledge. Thus, any notion here is not meant to take anything away from his brilliance, but to try, in a constructive and collective way, to improve even further upon it.

While "Transparency" is, indeed, essential, perhaps it should be made a priority that the first tasks of such transparency are 1) Definition of the subjects (e.g. this entire debate started from the confusion of a "Gene testing kit" that is indeed just a container for saliva-sample, to be sent with the consumer's separate direct order to a Genome testing company) 2) Identification of problems, if any, 3) Ownership of problems. Further, IMHO Dan's second and third Steps are to a large extent contradictory, that he is not even trying to hide.

Most importantly, while yours truly absolutely agrees with the conclusion that "DTC is not dead", the US might wish to weigh how long the entanglements of Steps 2 and 3 will take to resolve - Dan himself talks about 6-9 months; having worked with the government, my guess is more like 6-9 years. To prevent the US from simply falling out of the global competition, an absolute first imperative is an ultra-quick Congressional Moratorium, valid under Federal US jurisdiction, holding that until the maze of regulatory agencies puts its houses in order, DTC Genome Testing stays limited by those requirements that the State of California (a known trend-setter in Federal matters) already worked out two years ago (and with which all three targeted CA-based companies already absolutely comply):

1) DTC genome testing must be conducted by certified laboratories
2) DTC genome testing must be prescribed by a physician

I would add my blog-reply argument here: should US DTC be handicapped by red tape for an uncertain amount of time, prevention (increasingly genome-based as it is) will not be able to come to the rescue of an unsustainably rising US health-care system.

Pellionisz_at_JunkDNA.com]

^ back to top


Why The Debate Over Personal Genomics Is a False One

KQED
May 21st, 2010 Thomas Goetz

[Listen to KQED debate on the website - AJP]

I appeared on KQED’s Forum show this morning to discuss this whole Walgreen’s/Pathway Genomics fallout. Here’s a link to the show:

And here are some quick thoughts:

The controversy seems to have stirred the FDA to assert its authority - and that of physicians - over any and all medical metrics. As readers of The Decision Tree know, I have little patience for the argument that we need doctors as gatekeepers of our genetic information. This isn’t a drug, and this isn’t a device - it’s information about ourselves, as ordinary as our hair color or our waist size or our blood pressure - all things that we can measure and consider without a doctor’s permission.

I’m amazed, in many ways, that this discussion continues to be perpetuated in terms of “can people handle the truth?” - because that line of argument is flawed in so many ways. I’ll offer a few: 1) People are more capable of handling genetic information (and other health information) than they’re given credit for. 2) Most doctors aren’t experts in genetics anyways. 3) If you wait for doctors to give us this information, we’ll be waiting for something like 17 years. 4) This is our information, about us, and we own it as much as we own our thoughts and our values. 5) We may want to ask doctors or genetic counselors about what our DNA means - I’m not saying it’s easy to understand - but that’s entirely our choice.

I'm sincerely fearful, now that Congress has decided it wants to inspect this stuff, that the FDA will feel obligated to regulate and shut us off from what is rightfully ours. To me, getting access to this information is a civil rights issue. It's our data.

Some in the government see things clearly here. Donald Berwick, President Obama’s nominee to run CMS - the agency that oversees Medicare and Medicaid - has defended the rights of patients to own their information. The FDA is now run by the well-regarded Peggy Hamburg, who I have only heard great things about; in a brief conversation with her last year, I was struck by her fair-mindedness and belief in the ideals of transparency and greater consumer empowerment. My hope is that she sees the light here. She’s written about how the FDA is a public-health agency, particularly in terms of “risk communication”; well, one of the reasons we communicate risks is to allow people to take responsibility and act in ways to minimize our risks. It’s the basis of preventive health. That’s precisely the potential of personal genomics, and to squash that would have a net effect of undermining the public’s health.

The FDA doesn't have to use regulation like a hammer to squash innovation and the opportunity for people to use genetics to take control of their health. It can help foster innovation and issue some basic guidelines that recognize information is a powerful tool, and that reject intermediation and paternalism.

I’m crossing my fingers.

[The horizon outlined by Tom Goetz in Wired, of DTC Genome Testing becoming a "civil rights issue" or "discrimination against human diversity" class-action suits is outright frightening. - Pellionisz_at_JunkDNA.com]

^ back to top


Existence Genetics is Pioneering the Field of Predictive Medicine - Nexus Technologies Critical in Understanding and Preventing Deadly Disease

Through Existence’s cutting-edge technological advancements, prediction and prevention of a wide range of both common and rare diseases are possible before they ever manifest. The future of medicine starts today.

Los Angeles, CA (PRWEB) May 20, 2010 -- Welcome to your new Existence! Existence Genetics LLC, the world’s first predictive medicine company, is working with healthcare providers to PREDICT, PREVENT, and PREVAIL over disease.

Founded in 2005 by a team of physicians, Existence Genetics is pioneering the field of predictive medicine by providing Genetically Tailored Technology to the health care industry. Existence provides health care professionals with access to groundbreaking predictive medicine services and delivers understandable, useful, and personalized solutions to protect and preserve their patients’ health. Rather than simply handing bewildering results to an untrained patient, the goal of Existence is to empower health care professionals with genetic technology so that they can efficiently integrate genetic screening into their practice and provide cutting-edge predictive medicine services to their patients. Because of this, Existence provides its services only through health care professionals and not direct to the consumer (DTC) or over the counter (OTC).

Existence has developed patent-pending Nexus technologies to enable comprehensive genetic testing and analysis that focuses on the personalized prevention of disease. The Nexus technologies include a proprietary, highly advanced gene chip known as the Nexus Gene Chip. This first-of-its-kind, highly accurate gene chip allows cost-effective, time-efficient screening for hundreds of potential disease predispositions as well as for genes that are involved in the prevention and treatment of disease, thereby enabling the advent of genetically tailored prevention. The results provided by the Nexus Gene Chip are analyzed by the Nexus IT system, which uses patent-pending technologies such as the disease matrix, reflex analysis, panel screening, and next-generation genetic reports to provide health care professionals with straightforward, genetically-tailored solutions for their patients.

Existence’s Nexus technologies make predictive medicine straightforward, understandable, useful, and personalized. Existence’s One Genome philosophy means that we focus on all diseases regardless of their prevalence and screen simultaneously for hundreds of rare and common diseases. And because its Nexus technologies were developed with this philosophy in mind, Existence will be the first company that is able to provide full genome clinical analysis of full genome sequencing data.

In addition, Existence’s newest technology, the Pythia Approach, which is now in its final stages of development, combines the genetic analysis of two potential parents to determine the disease predispositions and traits their yet-to-be-conceived offspring are likely to inherit.

Through Existence’s cutting-edge technological advancements, prediction and prevention of a wide range of both common and rare diseases are possible before they ever manifest. The future of medicine starts today.

Existence Genetics has offices in California and New York, with headquarters in Los Angeles. To learn more about Existence Genetics and the Nexus technologies, visit www.existencegenetics.com.

[It is a moving target. DTC Genome Testing of today is based on microarray technology, interrogating by microarrays up to 1.6 million DNA bases. Not only in the 6-9 months that is predicted to figure out how to oversee this technology, an entirely new and much more complex technology is already here. "Genome Clinical Analysis" sounds "Medical"... what to do with this one?? Pellionisz_at_JunkDNA.com]

^ back to top


Where to next for personal genomics?

Daniel MacArthur
May 21, 2010

The brief Golden Age of direct-to-consumer genetic testing - in which people could freely gain access to their own genetic information without a doctor's permission - seems to be about to draw to a close. In a dramatic week, announcements of investigations into direct-to-consumer genetic testing companies by both the FDA and the US Congress have sent the personal genomics industry into a spin, and it is still impossible to say exactly which way it will be pointing once the confusion passes.

I've been frustratingly unable to find the time to cover the developments as they happened due to other commitments - but fortunately they have been extremely ably covered elsewhere, notably by Dan Vorhaus over at Genomics Law Report and Kirell Lakhman at GenomeWeb.

Here's a high-level summary for those who haven't been following closely:

1. On May 11, direct-to-consumer genetic testing company Pathway Genomics announced that it would be partnering with drugstore chain Walgreens to offer its genetic testing kits on the shelves of Walgreens' 7,500 stores.

2. The same day, the director of the FDA's Office of In Vitro Diagnostic Device Evaluation and Safety, Alberto Gutierrez, was quoted as follows in the Washington Post:

"We think this would be an illegally marketed device if they proceed [...] They are making medical claims. We don't know whether the test works and whether patients are taking actions that could put them in jeopardy based on the test."

3. Two days after the Pathway announcement, and following a letter from the FDA to Pathway, both Walgreens and rival CVS Caremark, who had also apparently been planning to stock the kits, decided to drop the idea of offering Pathway's product in their stores.

4. Yesterday Dan Vorhaus broke the news of a newly launched Congressional investigation into direct-to-consumer genetic testing, sparked by the Pathway controversy (the announcement cites "recent reports that at least one of the companies is seeking to sell personal genetic testing kits in retail locations, despite concern from the scientific community regarding the accuracy of test results").

As always, the best place to go for detailed legal analysis of this ongoing furore is the Genomics Law Report, and in particular Dan's lengthy and incisive first response to Pathway's announcement, and his subsequent analysis of the FDA crackdown.

I have a few overall points to make here.

The end of direct-to-consumer disease genomics?

Nothing is certain yet, but it's entirely possible that these events mark the beginning of the end of DTC genetic testing for health-relevant traits.

The DTC personal genomics industry has so far enjoyed a bizarrely prolonged period of respite from the stifling regulatory embrace of the FDA and other regulatory bodies (while the technical validity of the assays run by all of the major personal genomics companies is governed by the Clinical Laboratory Improvement Amendments of 1988, there's currently no regulation regarding the interpretation of the raw data).

It has always seemed inevitable that this period would end with a regulatory crackdown, although the precise nature of the eventual regulation - and the events that would trigger the regulatory hammer to come down - were impossible to foresee. Now the hammer is dropping, and although its aim seems capricious (see below), there's little doubt that its long-term impact will be massive. It's certainly not beyond the realms of possibility that companies will be forced to entirely discontinue DTC provision of information for any health-relevant trait.

Personal genomics companies are to some extent prepared for this eventuality. For instance, several of the major companies (e.g. Pathway and 23andMe) have split their disease risk predictions into a separate product from their more "recreational" offerings (such as ancestry, genealogy and non-disease traits), potentially allowing them to maintain a DTC revenue stream even if the DTC disease genomics angle was blocked. (Kudos to Dan Vorhaus for spotting the motives for this behaviour back in July last year.)

Today, GenomeWeb reports that at least one DTC company has gone even further and dropped its direct-to-consumer offering entirely. Navigenics could probably also drop its DTC offering without much harm to its sales, since the company has by all accounts been spectacularly unsuccessful in tapping the DTC market. We may well see the same approach taken by other personal genomics companies in an attempt to stave off the regulatory claws of the FDA.

This outcome would be an absolute tragedy for those of us interested in thoroughly exploring our own genomes. Anyone who has ever tried to get the raw data from their own medical tests from doctors will know how ludicrously difficult this is, due to a combination of bureaucratic incompetence and litigation-shy clinicians. Now imagine that difficulty, multiplied by the sheer scale of genome-level data and the near-complete ignorance of the vast majority of doctors about genetic information.

Comedic blunders from the FDA

While confusion reigns in the DTC genetic testing industry, this whole episode has revealed one thing very clearly indeed: absolute incompetence on the part of the FDA. One cannot help but shudder at the fact that such a transparently clueless agency wields so much power over so many industries.

In a great article over at GenomeWeb, Kirell Lakhman points to a series of contradictions in public statements made by the FDA over the last week (in what he refers to as a "seemingly uncoordinated and contradictory investigation").

Here is an agency that has sat back and watched the industry (albeit lazily, given that it was apparently unaware of Pathway's existence until the Walgreens announcement) for two years, giving no clear guidance regarding its regulatory intentions, and then suddenly announces that retail genetic testing is probably illegal via a quote in the Washington Post.

In addition, the motive for stomping specifically on Pathway seems entirely arbitrary. Gutierrez said in an interview with Pharmacogenomics Reporter (subscription only) that "The fact is that Pathway's bold move to make themselves noticed achieved its end and brought them to our attention", suggesting that the agency would have been happy to let DTC companies continue to operate if they'd done so more quietly.

It's a doubly bizarre statement given that the industry in general (and 23andMe in particular) has been conducting aggressive marketing campaigns to the wider public for a long time. Why did the Walgreen's campaign overstep the mark any further than, say, 23andMe's appearance on Oprah or its zeppelin campaign? It's impossible to know, especially in the complete absence of any substantive guidance from the FDA on what is or isn't acceptable behaviour.

Of course the power of the FDA is so massive, and so arbitrarily wielded (the technical term is "enforcement discretion"), that you won't be hearing many public complaints from personal genomics companies out of fear of retaliation. Instead, the industry is lining up to vow full compliance with the investigations from the FDA and Congress, like shop-keepers telling everyone who will listen how great a job the local gangsters are doing even while they fork out their protection money.

Do we need FDA regulation?

Regular readers will know that I think - for all its faults - the personal genomics industry provides a net benefit to society. Sure, the information provided by personal genomics tests currently has limited utility in terms of health prediction, at least for most of us; but by allowing people to engage with their own data, and generally doing a pretty good job of conveying the complexity and uncertainty of modern genetics, personal genomics companies are non-trivially increasing genetic awareness and literacy, an important public service as we enter the genomic era.

It's worth emphasising that the major personal genomics companies have done a fairly respectable job of self-regulation so far: 23andMe et al. generally present genetic risk information in a way that is far more accurate and accessible than anything we have seen (or are likely to see in the near future) from the medical profession.

It's also important to note that precisely zero evidence exists for the notion that genetic test results are likely to cause serious harm to consumers. So while Arthur Caplan may fret over the idea that customers assigned a low risk of heart disease might "go off and drink milkshakes all day", the existing sociological evidence suggests that genetic risk data alone has relatively little negative effect on consumer behaviour or mental health.

There certainly is room for regulation that filters out the bottom-feeders in the industry - but that is not necessarily a job best done by the FDA. As Dan Vorhaus points out, the Federal Trade Commission, as an agency focused on consumer protection, might be well-placed to step in. In addition, the recently-announced NIH genetic test registry - particularly if it is made mandatory - will hopefully serve as a valuable resource for consumers, allowing them to make informed decisions without requiring them to have an ill-informed clinician hold their hand while they do it.

By all means prosecute companies that make false claims of fact or provide poor-quality assays. By all means provide consumers with additional resources to allow them to make informed decisions. But don't create regulation that makes it hard for new companies to enter the space or introduce new technologies; and if people decide that they don't need their doctor to peer into their own DNA, let them make that choice.

Final thoughts

After years of speculation, the long-awaited crackdown has come. Exactly what type of industry will emerge from the other side is still completely unclear, and we can only hope that regulators restrain themselves from the heavy-handedness they have inflicted on other industries.

Personal genomics is a young field, but it's also a crucible for the future of personalised medicine. Excessive regulation at this stage will cripple innovation in the industry by raising the cost of starting new businesses and developing novel approaches. If the FDA is given free rein to stifle the field with formidable regulatory requirements this will do long-term damage to the development of personalised medicine.

I'd encourage US readers to make their thoughts on this known - write to your politicians, tell them what you've learned from your own genome, and inform them about what a terrible idea excessive regulation would be for the future of medicine.

[This is a very wise assessment from the UK by the highly respected blogger Daniel MacArthur - Pellionisz_at_JunkDNA.com]

^ back to top


How Bad Can a House Investigation be for DTC Genomics?

GenomeSherpa Blog
May 20, 2010

Ok, so you've been summoned to Congress to testify

It won't be that bad if you know what you are in for. So let's review.

1. A chart listing the conditions, diseases, consumer drug responses, and adverse reactions for which you test; [no problem - they are on the websites - AJP]

2. All policy documents, training materials, or written guidance materials regarding genetic counseling and physician consultations, including documents regarding what conditions, diseases, drug responses, or adverse reactions trigger the need for genetic counseling or physician consultation, and documents governing communications with consumers regarding individual genetic testing results [Both a nightmare and substantial give-away of Proprietary Information - AJP]

3. All documents relating to the ability of your genetic testing products to accurately identify consumer risk, including:

a. internal and external communications regarding the accuracy of your testing; [Both a nightmare and substantial give-away of Intellectual Property - AJP]
b. documents describing how your analysis of individual test results controls for scientific factors such as age, race, gender, and geographic location;
c. third party communications validating the association between the scientific data your company uses for analyzing test results and the consumer's risk for each condition, disease, drug response, or adverse reaction as identified by the results of an individual test; and
[an absolute nightmare since "third parties" include solid business partners, fierce competitors and external agents serving adverse purposes - AJP]
d. documents relating to proficiency testing conducted by your clinical laboratories.
[Relatively easy since microarray interrogation of partial DNA is an established technology - AJP]

4. All documents regarding your policies for processing and use of individual DNA samples collected from consumers, including:

a. policy documents and protocols regarding collection, storage, and processing of individual DNA samples;
b. policy documents and protocols relating to protection of consumer privacy; and
[a,b are relatively easy - to a large extent already found on their websites - AJP]
c. documents regarding collected DNA sample uses other than to provide individual genetic counseling to a consumer, including documents relating to third-party use of collected DNA samples. [This is extraordinarily tricky - it will be a major headache - AJP]

5. All documents regarding compliance with the Federal Food, Drug, and Cosmetic Act and U.S. Food and Drug Administration (FDA) regulations. [This is either trivial, since all companies comply with prevailing FDA regulations, or said agencies can always pull pseudo-irrelevant red tape that is just crazy to associate with the mission of DTC Genome Companies - AJP]

And you should have that to them in about 2 weeks. [Totally arbitrary, and frankly ridiculous imposition of resources on struggling DTC companies - resorting to legal remedy is rather likely - AJP]

What could be so harmful? [A lot - AJP]

If you know anything about the history of such investigations, they are mostly a dog and pony show that ends up in one of a few options.

1. Public Pillorying that leads to a slap on the wrist and a consumer base who doesn't trust you anymore (See Toyota) [This is extremely unlikely - the Medical Establishment would like to kill DTC - AJP]

2. A massive class action lawsuit from some enterprising attorneys who review the publicly available documents that the House requests via Freedom of Information. [An extremely dangerous proposition for the National Security of the USA - if the Congressional Investigation amounts to a "core dump" to the non-US entities poised to steal US business, the US might be doomed - AJP]

3. The Congress forces you to behave like normal society rather than a bunch of radicals trying to take over the world. [In theory, this is a very noble proposition. However, the rest of the world will not be civilized by the US Congress - and the end result could easily be a ripoff of the Grand Old USA - AJP]

4. Some clone company sees all your internal documents via a Freedom of Information Act request, copies the good, removes the bad and launches in like 6 months...... [Extreme danger - with supreme likelihood, as judged from aggressive Asian players - AJP].

5. Someone goes to jail, perp-walk style. [Slim but not the most threatening scenario - unless some far-Eastern (or even Russian?) colleagues are caught in outright stealing US Intellectual Property. Probability is not zero, as it has been amply documented in the past - AJP]

Since 5 is not realistic, I think we can expect some combination of 1-4 for these companies.

The worst outcome is probably Number One here.

The consumer base already doesn't trust Google/23andSerge

Navigenics already has a distribution network, but if the physicians don't trust the test, they won't order it.

Pathway will have a bump in the road and no retail launch.

Number 2 could hurt too, especially Ms. Wojcicki, who could get personally named in the suit, as well as investors like Dyson.

If I were the lawyer, those are the deep pockets I would go after. Navigenics is owned by P&G now, and their corporate counsel will likely shield them.

Pathway probably has the fewest customers exposed to such a lawsuit, unlike 23andSerge's 30k.

Number 3 stinks for the "Research Revolution, Che Style" but probably won't hurt Navigenics or Pathway.

Number 4 is a definite reality. I have already heard that scuttlebutt on the street.

So, I ask: is getting companies to behave responsibly and acknowledge that some of what they are doing is medical testing so bad? Ryan made the move. Very smartly, Ms. Phelan. I knew she would.

[Actually, the outcome may be a whole lot more significant - a veritable watershed for history (explanation to follow publicly/by appointment). It depends on what the key players will collectively do, in an organized manner, from the global perspective outlined below by Juan Enriquez - Pellionisz_at_JunkDNA.Com]

^ back to top


CVS Follows Walgreens Down Pathway of Least Resistance

May 18, 2010

GenomeWeb

By Kirell Lakhman

CVS Caremark has joined Walgreens in postponing plans to sell Pathway Genomics' consumer genetic-testing kits in its stores until Pathway squares its regulatory issues with the FDA.

CVS, which had initially planned to offer the tests in its stores in August, "will follow the FDA discussions closely and make a final decision on whether to carry this product after their questions about the test kits are resolved," CVS spokesman Mike DeAngelis told Dow Jones Newswires today.

The move comes five days after CVS rival Walgreens chose "not to move forward with offering the Pathway product to our customers until we have further clarity on this matter."

The decision by the two pharmacy giants - and especially CVS Caremark, a sophisticated supporter of genetic testing - underscores the need for regulatory transparency and fairness, and highlights how far the FDA has slipped behind the rapidly evolving DTC genetic-testing landscape.

^ back to top


Company plans to sell genetic testing kit at drugstores

By Rob Stein

Washington Post Staff Writer

Tuesday, May 11, 2010; A01

Beginning Friday, shoppers in search of toothpaste, deodorant and laxatives at more than 6,000 drugstores across the nation will be able to pick up something new: a test to scan their genes for a propensity for Alzheimer's disease, breast cancer, diabetes and other ailments.

The test also claims to offer a window into the chances of becoming obese, developing psoriasis and going blind. For those thinking of starting a family, it could alert them to their risk of having a baby with cystic fibrosis, Tay-Sachs and other genetic disorders. The test also promises users insights into how caffeine, cholesterol-lowering drugs and blood thinners might affect them.

The over-the-counter test marks the first foray of personalized genomic medicine into the corner drugstore. The move is being welcomed by those who hope that deciphering the genetic code will launch a new era in biomedical science.

But it's being feared by those who worry it will open a Pandora's box of confusion, privacy violations, genetic discrimination and other issues.

The new test comes as federal regulators, bioethicists, geneticists, doctors and patients have been increasingly struggling with how to use, interpret, regulate and guard against abuse from the flood of genetic information, tests and technologies being developed because of the massive, government-sponsored Human Genome Project.

For years, companies have been hawking tests on the Internet that can analyze genes for a person's risk of some diseases, and genetic tests for paternity and ancestry have been widely available in stores.

But the plan being announced Tuesday by Pathway Genomics of San Diego to sell its Insight test at about 6,000 of Walgreens' 7,500 stores represents the boldest move yet to bring the power of modern molecular medicine to the mass market.

"It's the first widespread retail availability of genetic tests that are directed specifically at health issues," said Joan A. Scott, director of the Genetics and Public Policy Center at Johns Hopkins University.

The Food and Drug Administration questioned Monday whether the test will be sold legally because it does not have the agency's approval. Critics have said that results will be too vague to provide much useful guidance because so little is known about how to interpret genetic markers.

"It doesn't seem like a good use of resources or something people should be spending their money on yet," said Sharon F. Terry, who heads the Genetic Alliance, a Washington-based coalition of patient groups, researchers, private companies, government agencies and public policy organizations.

Others have said that the test is irresponsible and could give many buyers a dangerous false sense of security or, conversely, needlessly alarm them.

"It is reckless," said Hank Greely, director of Stanford University's Center for Law and the Biosciences. "Information is powerful, but misunderstood information can be powerfully bad."

The breast cancer test, for example, will screen for only a few of the genetic mutations associated with the malignancy, so it won't exclude the possibility of getting the disease because of other mutations or nongenetic reasons. Misunderstanding this, women whose results sound reassuring might forgo mammograms. On the other hand, a result suggesting an increased risk could prompt some to seek unnecessary follow-up tests and treatments.

The pregnancy planning test could prompt couples to decide not to get married or have children when their risk of having a baby with a disorder could be small. Or it could lead those who decide to proceed to seek genetic testing of the fetus, which could lead to more abortions.

"There's a cascade effect, potentially," said Colleen McBride, chief of the social and behavioral research branch at the National Human Genome Research Institute. "Some of these we may be able to anticipate, and some we may not."

In response to a query from The Washington Post, an FDA official said that the agency planned to investigate the test.

"We think this would be an illegally marketed device if they proceed," said Alberto Gutierrez, director of the FDA's office of in-vitro diagnostics. "They are making medical claims. We don't know whether the test works and whether patients are taking actions that could put them in jeopardy based on the test."

Company officials said the test does not require the agency's approval because the analysis will be done at the company's lab.

"Our understanding under the current regulation is, this test does not have to have FDA approval per se," said David Becker, Pathway's chief science officer. "And we do not claim that is does."

Gutierrez said the fact that the test involves sending kits to consumers for them to collect their own DNA samples raises questions about whether it requires FDA validation. He said the agency was evaluating similar tests.

Pathway officials said the test would help more people get access to potentially invaluable genetic information.

"We believe the market is ready for this," said Jim Woodman, vice president of corporate strategy. "We think there's more awareness of genetics these days."

With a $20 to $30 kit, customers will spit into a plastic vial to provide a DNA sample for analysis and ship the package to the company.

For $79, customers can get their DNA tested for how their bodies are likely to respond to 10 substances, including caffeine, cholesterol-lowering drugs called statins, the blood thinner warfarin and the breast cancer drug tamoxifen.

For $179, prospective parents can be tested to see whether they carry 23 genetic conditions, including the blood disorder beta thalassemia, diabetes and polycystic kidney disease. For the same price, they can be tested for their own risk for 23 conditions, including heart attack, high blood pressure, leukemia, lung cancer and multiple sclerosis.

For $249, they can get all of the tests.

The results, although not definitive, could encourage people to adopt more healthful lifestyles if they find they might be at increased risk for heart attack or certain forms of cancer, company officials said.

Officials said that the company has strict procedures to protect confidentiality and that it offers genetic counseling by phone, both before and after getting the test results, to make sure customers interpret them properly.

"I think there is some underestimation of the ability of the American public to understand this kind of information," Becker said. "They may not understand the exact specifics, but they do understand that these are propensities."

[This was the article that triggered the "DTC Genome Testing Debate"- Pellionisz_at_JunkDNA.com]

^ back to top


Joining The Genomics Revolution Early

May 7, 2010 - 11:39 am
Forbes

Addison Wiggin

Addison Wiggin is the executive publisher of Agora Financial, LLC, a fiercely independent economic forecasting and financial research firm. The executive producer of the acclaimed documentary film I.O.U.S.A. and a three-time New York Times best-selling author, Addison is also the editorial director of Agora Financial’s daily 5 Min. Forecast and The Daily Reckoning.

On Tuesday, as part of a new documentary project we’re working on, we traveled to Boston to meet Juan Enriquez in his office in the Prudential Center. Enriquez is a managing director of Excel Venture Management and the best-selling author of two books: As The Future Catches You and The Untied States of America.

Specifically, we asked him about the algae harvesting project of Synthetic Genomics Inc., a company he co-founded with J. Craig Venter and Hamilton Smith. Venter, you may know, led the team that sequenced the human genome back in 2001, which was the genesis of Human Genome Sciences, Inc.

This new project uses genetic engineering to coax algae into using energy from sunlight “to convert carbon dioxide into cellular oils and even some types of long-chain hydrocarbons that can be further processed into fuels and chemicals.”

In other words, they’re using an inconspicuous little lichen to convert those dreaded “greenhouse gases” into a renewable energy source at a remarkable pace. The project officially began last July 14, when Exxon Mobil agreed to throw $300 million into the pot.

Excel Venture Management has strategically positioned itself to help bring new game-changing technologies out of academia and introduce them to the marketplace.

“Countries that fail to commercialize their research discoveries remain diminished,” Enriquez points out. “Take the U.K., for example: They discovered penicillin and DNA, but preferred to let the knowledge sit in a college lab somewhere, rather than let the professor, god forbid, benefit from the discovery. The moment we start adopting those same attitudes in the United States, we will begin to decay.”

[It would be enormously important to ask Dr. Juan Enriquez - with an outstanding global perspective and towering understanding of the drivers of economies and civilizations - what he thinks a suppression/delay of US DTC Genome Testing would mean to the US-versus-global equation - Pellionisz_at_JunkDNA.com]

^ back to top


DTC Genomics Targeted by Congressional Investigation

May 20, 2010
GenomeWeb

By Turna Ray

NEW YORK (GenomeWeb News) – Pathway Genomics' attempt to market genetic tests at retail stores Walgreens and CVS/Caremark has not only invited regulatory action from the US Food and Drug Administration, but now it has raised eyebrows in the US Congress.

Henry Waxman, chairman of the House Committee on Energy and Commerce, announced today that the committee is investigating direct-to-consumer genomic companies. As part of that investigation, the committee has sent letters to three firms — Pathway Genomics, 23andMe, and Navigenics.

Decode Genetics, the Icelandic molecular diagnostics firm that runs a DTC genomics service in many US states, called DecodeMe, was not sent a letter.

According to a statement posted on the committee website, the letters were prompted by "recent reports that at least one of the companies is seeking to sell personal genetic testing kits in retail locations, despite concern from the scientific community regarding the accuracy of test results."

Congressional action comes after FDA last week sent a letter to Pathway concerning its announcement that pharmacy chain Walgreens would offer the firm's saliva sample collection kits related to its personal genomics service. That service tests people's predisposition for disease, gauges their pharmacogenomic drug responses, and conducts pre-pregnancy genetic testing.

In its letter to these companies, the House committee is requesting information on several aspects of the tests they sell directly to the consumer, including the specific diseases and drugs for which the services provide genomic risk data; policy documents and materials on genetic counseling or physician consultation; data showing the accuracy of the risk predictions delivered by these services; details on policies regarding handling of DNA samples; as well as documents relating to the services' compliance with FDA regulation.

The requested information, which includes documentation from Jan. 1, 2007 to the present, must be submitted to the committee by June 4, according to the letters.

Although the FDA has not issued any formal guidelines specific to DTC genomics firms, it has always said it has the authority to regulate these services. In comments to GenomeWeb Daily News sister publication Pharmacogenomics Reporter last week, Alberto Gutierrez, director of the FDA's Office of In Vitro Diagnostic Device Evaluation and Safety, said that long before Pathway decided to migrate its service from the internet to store shelves, the agency had been in discussions with players in the DTC genomics industry.

After receiving FDA's letter, Walgreens backed out of its plans with Pathway to sell the kits at its stores. Simultaneously, CVS/Caremark said it had intended to sell Pathway's saliva sample collection kits in a similar deal beginning in August, but now has put those plans on hold until regulatory issues are resolved.

In a blog post last week, Navigenics said that from the inception of the company it has sought "to work closely with regulators." The company pointed out that it has participated with others in the industry and the Personalized Medicine Coalition to set industry standards; agreed to not market its test to consumers but go through doctors in New York, which bans DTC marketing of genetic tests; and has obtained a laboratory license to operate as a DTC firm in California.

After receiving the letter from the House Committee on Energy and Commerce, Navigenics told Pharmacogenomics Reporter that Navigenics "has always followed [its] own policy of transparency and open communication with state and federal governments.

"In fact, we have already proactively engaged in conversations with key stakeholders in Washington this week, including with senior staff at the House Committee on Energy and Commerce a day before this letter was issued," said Amy DuRoss, Navigenics VP of policy and business affairs. "We will be glad to respond to the Committee's requests in a timely fashion, and we look forward to further cooperation with committee members in the future."

Additionally, 23andMe issued a statement after receiving the letter from the House committee, saying, "We will comply with the Committee on Energy and Commerce's request for information. We look forward to sharing information detailing what individuals can learn about their own bodies through personal genetic testing and how our company is facilitating important scientific research in the field."

^ back to top


Effects of Alu elements on global nucleosome positioning in the human genome

BMC Genomics 2010, 11:309
Yoshiaki Tanaka, Riu Yamashita, Yutaka Suzuki, Kenta Nakai

Understanding the genome sequence-specific positioning of nucleosomes is essential to understand various cellular processes, such as transcriptional regulation and replication. As a typical example, the 10-bp periodicity of AA/TT and GC dinucleotides has been reported in several species, but it is still unclear whether this feature can be observed in the whole genomes of all eukaryotes.

Results: With Fourier analysis, we found that this is not the case: 84-bp and 167-bp periodicities are prevalent in primates.

The 167-bp periodicity is intriguing because it is almost equal to the sum of the lengths of a nucleosomal unit and its linker region. After masking Alu elements, these periodicities were greatly diminished.

Next, using two independent large-scale sets of recently published nucleosome mapping data, we analyzed the distribution of nucleosomes in the vicinity of Alu elements and showed that (1) there are one or two fixed slot(s) for nucleosome positioning within the Alu element and (2) the positioning of neighboring nucleosomes seems to be in phase, more or less, with the presence of Alu elements. Furthermore, (3) these effects of Alu elements on nucleosome positioning are consistent with inactivation of promoter activity in Alu elements.

Conclusions: Our discoveries suggest that the principle governing nucleosome positioning differs greatly across species and that the Alu family is an important factor in primate genomes.
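
To make the kind of analysis reported above concrete, here is a minimal Python sketch (not the authors' code; function and parameter names are illustrative) that computes a periodogram of AA/TT dinucleotide occurrences along a sequence. Peaks near periods of 10, 84 or 167 bp would correspond to the periodicities the paper describes.

```python
import numpy as np

def dinucleotide_periodogram(seq, dinucs=("AA", "TT")):
    """Power spectrum of dinucleotide occurrence along a DNA sequence."""
    seq = seq.upper()
    # Indicator signal: 1.0 wherever one of the listed dinucleotides starts
    hits = np.array([1.0 if seq[i:i + 2] in dinucs else 0.0
                     for i in range(len(seq) - 1)])
    hits -= hits.mean()                      # remove the DC offset
    power = np.abs(np.fft.rfft(hits)) ** 2   # periodogram
    freqs = np.fft.rfftfreq(hits.size)       # cycles per bp
    periods = 1.0 / freqs[1:]                # skip the zero-frequency bin
    return periods, power[1:]

# Usage (illustrative): run on a primate genomic region with and without
# Alu elements masked, then compare the power near 10, 84 and 167 bp.
```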

^ back to top


Potential of genomic medicine could be lost, say science think-tanks

From The Times

May 17, 2010

The NHS needs a new body for evaluating diagnostic tests if it is to make the most of advances in genomic medicine, says a report to be published on Tuesday.

The PHG Foundation and the Centre for Science and Policy at the University of Cambridge say that the potential of genetic testing to deliver better treatments at lower cost will be delayed unless there is a system to establish the benefits.

The absence of such a system means that hospitals and GPs may waste money on new tests that do not have clear benefits for patients, while ignoring others that can lead to better clinical outcomes. Just as the National Institute for Health and Clinical Excellence (NICE) currently recommends which treatments are cost-effective for NHS use, new genetic diagnostic techniques must be examined in a methodical way.

“The Department of Health should establish an evaluation and decision-making body, as a matter of urgency, to direct research funding towards important strategic questions and ensure evidence-based implementation of both new diagnostic techniques and informatics systems within the NHS,” the report says.

The recommendation comes in response to a House of Lords inquiry into genomic medicine published last summer, which found that the NHS was not ready to take advantage of genetic advances in healthcare.

The falling costs of reading DNA mean that it is likely to be possible to sequence any person’s entire genome for less than £1,000 within a year or two. Scientists have also started to identify how variations in DNA affect responses to drugs or susceptibility to disease, raising the prospect of personalised medicine based on individuals’ genetic profiles.

Doctors could potentially use genetic information to select the best drugs for treating particular patients, or to calibrate doses of medicines with potentially dangerous side-effects, such as the blood-thinning drug warfarin. Companies such as 23andMe and Pathway Genomics are already selling genetic tests directly to consumers that provide some of this information.

Little research, however, has so far shown that knowing details of a patient’s genome is helpful to doctors, and leads to better medical outcomes when it is used in prescribing drugs.

The new report, which was compiled following four seminars attended by more than 50 doctors, scientists, ethicists and patient representatives, says that this needs to be addressed as new genetic tests are offered to the NHS.

NICE recently established a diagnostics assessment programme to start this, and is currently conducting a pilot project, but a more comprehensive system is needed. New commissioning structures are also required to ensure that validated tests are accessible everywhere.

Caroline Wright, head of science at the PHG Foundation, said: “The heart of the problem is that we do not have enough data on whether these tests actually help patient care. We desperately need the equivalent of clinical trials for diagnostics.

“There’s an implicit assumption that testing is good, that knowledge is power, but the key question is does a test result helpfully change the management of a patient? If not, it is a waste of money.

“When public money is being spent, it must be spent sensibly to get better care outcomes. It’s really important that anything funded by health systems has evidence behind it.”

^ back to top

BGI Expands Into Denmark with Plans for $10M Headquarters, Staff of 150

May 19, 2010

By a GenomeWeb staff reporter

NEW YORK (GenomeWeb News) — BGI will create a $10 million European headquarters in Copenhagen, where it plans to eventually hire up to 150 scientists and support employees.

BGI will recruit between 20 and 50 people during the first year of the Copenhagen HQ — to be called BGI Europe — then establish a sequencing platform allowing for the hiring of between 50 and 100 people. The project will be China's largest investment in Denmark.

The "strong and still growing research community within biotech in Denmark has attracted our attention," Songgang Li, associate director of BGI, said in a statement released yesterday. "We see some interesting prospects for partnership, and I feel confident that we have acted wisely in selecting Denmark as our European base."

BGI expects to generate DKK 5 billion ($829 million) in first-year revenue from the Copenhagen HQ, according to a statement released by Denmark's Ministry of Foreign Affairs.

Chinese and Danish officials signed an agreement creating BGI's European HQ on May 17, during the Denmark-China Economic & Trade Cooperation Forum held in Copenhagen. The signing ceremony was attended by Denmark's Minister for Economic and Business Affairs Brian Mikkelsen and China's Commerce Minister Chen Deming.

According to the official Chinese government press agency Xinhua, Deming led the largest-ever Chinese trade delegation to visit Denmark, which is China's second largest source of foreign investment. The delegation consisted of more than 100 Chinese entrepreneurs focused on promoting trade and investment between the two countries.

The trade forum was timed to coincide with the 60th anniversary of the establishment of diplomatic relations between China and Denmark, one of the first Western countries to recognize the People's Republic government.

BGI actually announced the creation of BGI Europe six days before the trade forum on May 11, through a pair of one-sentence statements on its website. One statement said the Copenhagen HQ "will offer scientific and technological collaboration and services to the whole European countries, providing R&D in technology and products, [and] seeking out opportunities of cooperative projects in the fields of sequencing and bioinformatics."

The other statement said BGI Europe's priorities include "jointly establishing key laboratories with universities and other research institutes" on the continent.

Founded in 1999 as the Beijing Genomics Institute, BGI is now based in Shenzhen. Facilities at BGI Shenzhen include the Sino-Danish Cancer Research Center, opened last year by the institute in cooperation with the University of Copenhagen, Aarhus University, the University of Southern Denmark and other research institutions. According to BGI, the center uses next-generation sequencing technology for the identification, development, and clinical validation of new biomarkers for early diagnosis of breast cancer.

Denmark is home to more than 130 biotech companies and more than 270 providers of services for the biotech industry, with some 25,000 people employed in the life sciences, Ole Frijs-Madsen, the director of the Danish trade promotion agency Invest in Denmark, said in a statement released by the Danish law firm MAQS, which facilitated the BGI-Danish agreement.

^ back to top


Rapid Rise of Russia

[The Rapid Rise of Russia is illustrated here by the weblog statistics (with half-month data for May, 2010) featuring the paradigm-shift from "Junk DNA" and "Central Dogma" to the fractal approach to DNA (the New Science created by AJP in Hungary, October 2006), leading to "HoloGenomics" and "The Principle of Recursive Genome Function". Russia shows downloads more than twice those of China and India COMBINED - AJP]

Skryabin et al. "Combining two platforms for Full Genome Sequencing of Human." Acta Naturae. 2009 Dec; 3(3):50-55

Letter from the Editor

The last months of the past year were marked by the growing interest of various government officials and parliamentarians, as well as the media, in the problems of developing the biological information market and biotechnology. Even a simple review of dates and facts shows that our country’s leaders have decided to take a serious approach to reviving Russia’s technological potential in this area. The Presidential Advisory Board for the Questions of Modernization and Technological Development held its meeting in Pokrov, where President D.A. Medvedev inspected the technological grounds for producing new medical compounds. Prime Minister V.V. Putin visited the pharmaceutical company in Zelenograd, where medicines based on recombinant-protein technology are produced. On October 15, 2009, debates were held in the State Duma of the Russian Federation on “the perfectibility of the legislative support for the biotechnological industries”; the Industrial Committee of the State Duma initiated that meeting. Another session, with the participation of the Ministry of Science and Education and the Ministry of Industry and Trade of the Russian Federation, was held to discuss science and technology issues, and finally, a Presidential message to the Federal Assembly of Russia contained a significant section on biopharmaceuticals.

It is common knowledge that before Perestroika the Soviet Union held one of the leading positions in the world in the biotechnological industries. A lot was achieved thanks to the fast development of both academic and practical “bioscience,” which managed to bounce back rapidly in the “post-Lysenko” period. Unfortunately, that period of positive growth in biology, as in many other areas of scientific and technological progress, was abruptly stopped by the revolutionary developments of the early 90s.

Nowadays, sharp questions are being posed about the revival of the ‘life sciences’ in combination with new biotechnologies and the biopharmaceutical industry. The concerns relate both to national security and to the participation of our country in the international division of labor in the XXI century. Discussions are mainly focused on accelerating the development of the biological sciences and biotechnologies, as well as on methods of increasing research effectiveness to meet international standards (publications in leading scientific journals, a high citation index, intensive patenting).

The resources for the development of biotechnology - be they governmental corporations’ or private companies’ funds - are still undefined. Perhaps it is possible to use both sources in reasonable proportions, but the mechanisms and details of the investment process are vague, and its possible pace is unclear. Another great concern is the expertise behind the projects. After the political barriers were taken down, Russian scientists had no formal restrictions on migration to the West, and the turmoil of the 90s led to mass emigration of scientists from Russia, hurting the biotechnological sphere severely. We have to wonder now whether we could aim for “re-immigration,” the way China focused its efforts on the return of its former citizens to raise its industry.

Along with the global challenges facing our leaders, scientists and businessmen, there are other problems that need immediate attention. Many of them are tied to the customs control of international transportation of biological compounds, including those necessary for culture media for cell lines and for animals. Nowadays the process of their delivery to the Russian Federation is rather complicated, and in certain cases (for example, chemicals kept at low temperatures) practically impossible. We also have to note that laboratory instruments, materials, and technological equipment imported into the Russian Federation are substantially more expensive for local consumers than for their colleagues in European countries or the U.S. All this makes the growth of a biological industry in Russia significantly more difficult than in the West.

All of the above led the editorial board of this journal to dedicate its Forum section to the problems of Bio-pharma. We selected several articles from authors in the science world, in business, and in the media: we sought to show the spectrum of expert opinions on the problems connected with the development of the biological industry in Russia.

[Kevin Davies features an article on the same topic in his Bio-IT World - AJP]

^ back to top


Genomics goes beyond DNA sequence

Published online 10 May 2010 | Nature 465, 145 (2010) | doi:10.1038/465145a

A technology that simultaneously reads a DNA sequence and its crucial modifications makes its debut.

Alla Katsnelson

[Proteins selectively reading DNA sequence information - here you go, Dr. Crick, with your Central Dogma "joke" - AJP]

What makes two individuals different? Biologists now know that the genome sequence holds only a small part of the answer, and that key elements of development and disease are controlled by the epigenome — a set of chemical modifications, not encoded in DNA, that orchestrate how and when genes are expressed. But whereas faster, cheaper and more accurate sequencing technologies have developed rapidly, techniques to map the epigenome have lagged behind.

Sequencing company Pacific Biosciences, based in Menlo Park, California, has now developed an integrated system that simultaneously reads a genome sequence and detects an important epigenetic marker called DNA methylation. "I think it's an important step forward, although I think it is a baby step," says Joseph Ecker, a plant geneticist at the Salk Institute for Biological Studies in La Jolla, California, who was not involved in the work.

DNA methylation — the addition of methyl groups to individual bases — is just one of many epigenetic markers of DNA and its associated proteins. Others include modification of the histone proteins that DNA winds around to form chromatin — the tightly packed cluster that makes up chromosomes — and the activation of small non-coding RNA molecules.

DNA methylation, which reduces gene expression, is linked to key developmental events, as well as many types of cancer. It is the best-studied epigenetic modification, mainly because tools have existed to study it, says Susan Clark, an epigeneticist at the Garvan Institute of Medical Research in Sydney, Australia.

The gold-standard method for detecting DNA methylation, which Clark's group developed more than 15 years ago, is bisulphite sequencing, in which unmethylated versions of the base cytosine are chemically converted into another base, uracil. Sequencing the converted DNA allows scientists to reconstruct a genome-wide methylation map. But the technique has several drawbacks. Not only is it expensive and time consuming, it also damages DNA, reducing the map's accuracy. And it doesn't detect methylation at adenine bases, which are very prevalent in organisms such as bacteria.
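
The conversion logic itself is simple enough to illustrate. Below is a toy Python sketch (a deliberate simplification, not a production caller) that compares a reference sequence with a bisulphite-converted read: a reference C read as T implies the cytosine was unmethylated (converted to uracil and sequenced as T), while a surviving C implies methylation. Alignment, strand handling and error modelling are all ignored here.

```python
def call_methylation(reference, bisulfite_read):
    """Toy per-position methylation calls from one gaplessly aligned read."""
    calls = {}
    for pos, (ref, obs) in enumerate(zip(reference.upper(),
                                         bisulfite_read.upper())):
        if ref != "C":
            continue                      # only cytosines are informative
        if obs == "C":
            calls[pos] = "methylated"     # protected from conversion
        elif obs == "T":
            calls[pos] = "unmethylated"   # C -> U, sequenced as T
        else:
            calls[pos] = "ambiguous"      # likely a SNP or sequencing error
    return calls

print(call_methylation("ACGTCCG", "ACGTTCG"))
# {1: 'methylated', 4: 'unmethylated', 5: 'methylated'}
```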

Pacific Biosciences' approach for detecting DNA methylation, published this month in Nature Methods1, builds on the company's sequencing technology. The system uses an enzyme called DNA polymerase to read a strand of DNA and build a complementary strand out of nucleotides labelled with fluorescent molecules. As each component is added to the growing strand, it produces a flash of light — the colour of the light corresponds to the identity of the base, and thus reveals the sequence of the template DNA.

Analysing the pulses of light, and the time between them, can also show whether methylation is affecting polymerase activity. This has now been exploited to detect methyladenine, methylcytosine and a poorly understood modification called 5-hydroxymethylcytosine. "We foresee with this technology that in the future there will be a unification of the fields of epigenomics and genomics," says Stephen Turner, the company's founder and chief technology officer.
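
As a rough illustration of the kinetic idea (not Pacific Biosciences' actual algorithm), the Python sketch below flags template positions where the native inter-pulse duration (IPD) is markedly slower than in an unmodified control; the ratio threshold of 2.0 is an invented illustrative value, not a published cutoff.

```python
import numpy as np

def flag_modified_positions(ipd_native, ipd_control, min_ratio=2.0):
    """Flag positions whose polymerase inter-pulse duration (IPD) is
    slowed relative to an unmodified control template."""
    native = np.asarray(ipd_native, dtype=float)
    control = np.asarray(ipd_control, dtype=float)
    ratio = native / np.maximum(control, 1e-9)   # guard against zero IPDs
    return np.flatnonzero(ratio >= min_ratio)    # candidate modified bases

# Usage (illustrative): a slow position stands out against the control.
# flag_modified_positions([0.1, 0.5, 0.1], [0.1, 0.1, 0.1]) -> array([1])
```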

Game changer?

Although the data are promising, obstacles remain. "There are distinct advantages, but we're not rushing out tomorrow to apply this because it's not prime time for human methylome mapping," says Ecker.

One problem is that although the technique is great at distinguishing adenine from methyladenine, it doesn't quite reach single-base resolution for cytosine and methylcytosine. It also lacks one of the key promised benefits of Pacific Biosciences' sequencing technology: its ability to read long sequences of DNA, up to 8,000–10,000 base pairs, which makes it easier to assemble the data into complete genomes. Instead, the reported methylation read-length is only about 1,000 base pairs. [Which is already many times longer than the extremely short reads of "shotgun sequencing" - AJP].

Turner says that the company is working to solve these problems. It will ship the first sequencers that use fluorescent labelling this year, and plans to add the methylation mapping capability next year.

"What needs to be done now is to make it robust and accurate," says Clark, a steering-committee member of the International Human Epigenome Consortium, a bid launched in January to map the epigenome in multiple cell types2. "There's a lot of trouble shooting that needs to be done to get it to be accurate enough to be able to compete with bisulphite sequencing."

Several companies are working on similar technologies. UK-based Oxford Nanopore Technologies published a report last year showing that it could detect methylated DNA at a single-molecule level3. But that system and others are still at an earlier stage of development.

Some say that the promise for such a technique is huge. Bisulphite sequencing for a single human genome can cost up to US$100,000, says Robert Martienssen, a geneticist at Cold Spring Harbor Laboratory in New York. With the latest technique, the cost of a full-genome methylation map would drop to $100–1,000, he says. "That will change everything."

There is no shortage of epigenetic questions ripe for probing. One example is in tumour biology, where different cancer cells are likely to have different methylation patterns. Another is how cells in a single organism take on different functions despite having identical genomes. "This is exactly the technology you could use to look for epigenomic changes in specific cell types," says Martienssen, who is also on the International Human Epigenome Consortium's steering committee.

Ecker says researchers still haven't pinned down the significance of, say, having a methylation mark in one position and not another, and what's really needed is more studies that unify genomic and epigenomic information. "As you get more genomes to compare, then of course the differences take on some meaning," he says. "We're just lacking numbers at this point."

[Unification of the Genomic and Epigenomic halves of the Information was already done by HoloGenomics. Now The Principle of Recursive Genome Function is eminently one of the questions (of which "there is no shortage") that can be experimentally verified or rejected. While some would still like to get away with tweaking Francis Crick's "joke" of the infamous "Central Dogma" - that proteins, via epigenomic channels, cannot alter the "DNA sequence information" - it is clearly untrue that DNA sequence information remains unaltered when methylation makes it unreadable. To say that the information of a book is not affected by tearing crucial pages out of its chapters amounts to not understanding what information is. After half a Century of setback by the mistaken axioms of the Central Dogma and Junk DNA, it is time to move on; this brilliant technology, both fast and cheap, compels the field to put the issue on the agenda - Pellionisz_at_JunkDNA.com]

^ back to top


Walgreens To Sell Genetic Test Kits For Predisposition To Diseases, Drug Response

Huffington Post

05/11/10 10:06 PM

NEW YORK — The largest U.S. drugstore chain, Walgreen Co., will start selling genetic testing kits at many of its stores later this month, according to Pathway Genomics, which makes the kits.

Pathway said Tuesday that Walgreen will sell saliva swab kits that are used to determine predisposition for chronic diseases, and response to common drugs like Plavix, Tamoxifen and Coumadin.

They can also be used to determine if a person carries a gene for diseases like Alzheimer's, cystic fibrosis, and Tay-Sachs disease.

The tests will be available at Walgreen stores starting in mid-May, it said. [This Friday - AJP]

The company said the testing kits will cost $20 to $30 each and will include a saliva collection kit and a postage-paid envelope that customers can use to send their saliva sample to the Pathway laboratory.

Customers can then go to Pathway's Web site and order tests. Pathway says the tests – for drug response, "pre-pregnancy planning" and "health conditions" – start at $79 and run up to $249 for all three.

Pathway is based in San Diego. Walgreen, which is headquartered in Deerfield, Ill., runs about 7,500 stores in all 50 states as well as Washington, D.C., and Puerto Rico.

Pathway said the test will not be available in New York due to state law.

Other companies, including Navigenics Inc. and 23andMe Inc., also offer direct-to-consumer genetic testing, spawned by recent genetic discoveries.

State and federal public health officials, however, have urged consumers to be skeptical, noting that related research is in its earliest stages and doctors have little training in interpreting the results.

Comment (1) by Andras Pellionisz

"State and federal public health officials, however, have urged consumers to be skeptical, noting that related research is in its earliest stages and doctors have little training in interpreting the results"

Francis Collins, M.D., Ph.D., Head of NIH, explains even more clearly in his bestseller book on Personalized Medicine that doctors simply cannot have the wherewithal to interpret genomic testing results: at the time of their training not only did their teachers not profess what it takes, but the genome revolution had not yet happened – yet "The Future Has Already Happened" (title of Chapter One). As the YouTube "Shop for your Life" demonstrates, we should not even set the "mission impossible" goal of educating everyone to M.D., Ph.D. levels, eligible for a Nobel Prize. The task is for the genome informatics industry to automate, in a user-friendly manner, the utilization of genomic data, interoperably with health records and personal preferences. Ordinary people should be empowered to "shop by their genome", using their mobile computers disguised as cell phones in the same Walgreens and CVS, for nutrients, supplements, cosmetics, drugs (etc.) that fix or fit their genome. "Ask what you can do for your Genome!"

^ back to top


Bio-informatics Springs Up to Place Genome in Neverland

CHUN GO-EUN
Friday, May 7th, 2010

[SAMSUNG Genomic Medicine coming together with IT - AJP]

A modern person's great expectation of convergence technology is the subject of being "forever young" (living well and beautifully at any age). With the development of information technology and biotechnology, should we really accept getting old and ill - hair falling out, organs running down, and knees wearing out? Imagine a brand new ultra-definition television outshining its viewer. This is where the decade-spanning research of genomics and bio-informatics is proving its real worth. IBM, Google, Samsung SDS, and recent scientists' investments and discoveries prove that there is a thirst for the bio-informatics industries around the world, which can secure two words for people today and in the future: Forever Young.

Google's Strategic Investment to Biotechnology Start-up

In 2007, Google invested 3.6 million dollars in 23andMe, a privately held personal genomics and biotechnology company based in California. With a round of funding from Google, New Enterprise Associates, Mohr Davidow Ventures, and Genentech, 23andMe has been developing new technology and methods that will enable its consumers to understand their own genetic information at an affordable cost....

...The Latest Discovery from Canada

Then on May 7, researchers at the University of Toronto made the headlines of the Vancouver Sun for cracking a hidden splicing code in genes. This signifies the possibility of a small number of genes generating a large number of genetic messages. According to Allison Cross' interview with Brendan Frey, his team has produced a web tool for researchers that shows how different genes are being spliced. "If you go in there and peer at that DNA, you can compare it to mutations that have been observed by medical doctors," he said. Doctors sometimes sequence the genomes of their patients to see why they have a disease. If they find mutations, they can see if they match up with the code words discovered by Frey's team. "And you can see if those mutations match up with these code words we've discovered . . . then, there's an implication that that mutation is interfering with the production of these genetic messages," he said. "So now there's an explanation for why the disease might occur."

Samsung SDS Buckles with Companions to Set Drive

Bio-informatics diggers have also been spotted in Korea. Samsung SDS has made several strategic moves in the recent past to join the gene sequencing race. Considering that at least 17 start-ups and established companies have been the roadrunners of the sequencing race since 2009, Samsung SDS is best advised to start with extra gears. Before organizing genetic information, Samsung SDS organized the best experts as its partners across a wide range of sectors, including bio-informatics cloud systems, medical expertise, and sequencers.

Samsung SDS entered a technical cooperation with Cloudera, an American company, to secure expertise in Hadoop (an innovative platform for consolidating, combining and understanding mass data). Samsung then made further cooperation agreements with the Korean Bioinformation Center and the Lee Kil-yeo Cancer & Diabetes Research Center. On March 24, Samsung SDS took both its global and home networks to the next step by signing an MOU with Life Technologies and Samsung Medical Center. Gregory T. Lucier, Chairman and Chief Executive Officer of Life Technologies, said, "Samsung Medical Center's medical expertise and SDS' bio-informatics experience will perfectly accord with our highly accurate next generation sequencing technology." Samsung SDS is currently running a pilot test with two families totaling 8 people to sequence and analyze their genetic information. Samsung SDS plans to continue developing the technology in full global collaboration. It also envisions providing an affordable gene sequencing solution within three years.

Kim In, CEO of Samsung SDS, said, "By working closely together with Medical, IT, and BT leaders worldwide, we believe that our collaboration will contribute greatly to a better understanding of the patient's genetic information, which will help find cures and advance personalized medicine."

... Samsung SDS plans to throw a coming-out party for new bio-informatics and mobile services this year. The company seeks the 200 trillion won market in 2013 with the convergence of BT and IT.

Perfect Recipe for Treatment

"I'd wait until the cost of genome sequencing hits around $100. I will be okay as long as I collect all my genetic information by the time I am forty, cause that is when I would actually start having problems here and there," Kate (25) from Texas commented on her blog. Would an upcoming era of personalized medicine be an extension of living in Neverland or the high-class silver town? Further discoveries and developments carries on with future patients and customers' full of curiosity, expectation, and imagination.

^ back to top


Hood Wins $100k Kistler Prize

LeRoy Hood, awarded the Lasker Prize (2009) and Kistler Prize (2010)

Leroy Hood, the pioneer of high-speed gene sequencing technologies that made the Human Genome Project possible, has been awarded the 2010 Kistler Prize. The $100,000 award is named after Walter Kistler, the inventor and president of the Bellevue, WA-based Foundation for the Future. Past winners include famous scientists J. Craig Venter, Richard Dawkins, and Edward O. Wilson. In a statement, the foundation said, "Hood's discoveries have permanently changed the course of biology and revolutionized the understanding of genetics, life, and human health."

[The Nobel Committee must be thinking very hard, since those instrumental in the Human Genome Project are certainly eligible, but arriving at "the magic trio" is by no means trivial. The Lasker Prize ("the American version of the Nobel Prize") has already been awarded to Lee Hood, and now he is decorated with the Kistler Prize - which has also been awarded to Craig Venter. Both gentlemen excelled both in the Human Genome Project and in its transformation into particular outstanding practical utilizations. In the case of Venter, the utility is alternative energy - while in the case of Hood it is his trademark, P4 (Genome-Based Predictive, Participatory and Personalized Medicine). There are several contenders who were instrumental in the HGP, and some are now well on their way toward the next wave of awards, for deciphering the function of Genome Regulation. The choice of the third person must be an agonizing decision for those eligible to nominate Nobel laureates. - AJP]

^ back to top


Crisis in the National Cancer Institute

CHICAGO, IL, May 3, 2010 --/WORLD-WIRE/-- Cancer Prevention Coalition Chairman Samuel S. Epstein, M.D. says avoidable causes of cancer are being ignored by U.S. federal agencies while nearly one in two men and more than one in three women in this country now develop cancer in their lifetimes.

He cites an April 25 New York Times editorial on the National Academy of Sciences' new report that National Cancer Institute's (NCI) system for judging the clinical effectiveness of cancer treatments is approaching "a state of crisis."

As critical, says Dr. Epstein, "the NCI's system for publicizing avoidable causes of cancer remains virtually non-existent, even though nearly one in two men and more than one in three women now develop cancer in their lifetimes."

On April 24, Nobel Laureate Dr. Francis Collins, director of the National Institutes of Health (NIH), appointed by President Barack Obama on August 2009, delivered the Francis Collins Lecture in Chicago. Dr. Collins' lecture focused on his landmark discoveries of the genetic basis of disease, including cancer. In reference to a question on the role of genetics in avoidable causes of cancer, Dr. Collins responded, "I am unaware of any avoidable causes of cancer."

Dr. Epstein says it is not surprising that President Obama still remains unaware of a wide range of avoidable causes of a wide range of cancers as summarized in the following press release issued by the Cancer Prevention Coalition nearly 18 months ago.

January 23, 2009 --/WORLD-WIRE/-- President Barack Obama is the first President to develop a comprehensive cancer plan. While the plan reflects strong emphasis on oncology, the diagnosis and treatment of cancer, no reference is made to prevention.

President Obama's cancer plan should emphasize the many avoidable causes of cancer.

The plan defines and coordinates the responsibilities of four federal agencies: the National Cancer Institute (NCI), for research and clinical trials; the Centers for Disease Control and Prevention, for epidemiological follow up and support of cancer survivors; the Centers for Medicare & Medicaid Services, for funding cancer related care; and the FDA, for regulating cancer drugs.

In 1971, Congress passed the National Cancer Act which authorized the National Cancer Program, calling for an expanded and intensified research program for the prevention of cancer caused by occupational or environmental exposures to carcinogens. Shortly afterwards, President Richard Nixon announced his "War Against Cancer," and authorized a $200 million budget for the NCI. Since then, its budget has escalated by nearly 30-fold, to $5.3 billion this year.

Meanwhile, the incidence of a wide range of cancers, other than those due to smoking, has escalated sharply from 1975 to 2005, when the latest NCI statistics were published. These include malignant melanoma (172%), Non-Hodgkin's lymphoma (79%), thyroid (116%), testes (60%), and childhood cancers (38%).

In November 2008, the NCI claimed that the incidence of new cancers had been falling from 1999 to 2005. However, this is contrary to its own latest statistics. These show increases of 45% for thyroid cancer, 18% for malignant melanoma, 18% for kidney cancer, 10% for childhood cancers, and 4% for testes cancer.

Disturbingly, the NCI has still failed to develop, let alone publicize, any listing or registry of avoidable exposures to a wide range of carcinogens. These include: some pharmaceuticals; high dose diagnostic radiation; occupational; environmental; and ingredients in consumer products - food, household products, and cosmetics and personal care products.

The NCI has also failed to respond, other than misleadingly or dismissively, to prior Congressional requests for such information.

In March 1998, in a series of questions to then NCI Director Dr. Richard Klausner, Congressman David Obey requested information on NCI's policies and priorities. He asked, "Should the NCI develop a registry of avoidable carcinogens and make this information widely available to the public?" The answer was, and remains, no.

Klausner's responses made it clear that NCI persisted in indifference to cancer prevention, coupled with imbalanced emphasis on damage control - screening, diagnosis, treatment, and clinical trials.

Moreover, NCI's claims for the success of "innovative treatment" have been sharply criticized by distinguished oncologists. In 2004, Nobelist Leland Hartwell, President of the Fred Hutchinson Cancer Control Center, warned that "Congress and the public are not paying NCI $4.7 billion a year," most of which is spent on "promoting ineffective drugs" for terminal disease.

It should be further emphasized that the costs of new biotech cancer drugs have increased more than 100-fold over the last decade. Furthermore, the U.S. spends five times more than the U.K. on chemotherapy per patient, although their survival rates are similar.

The Obama Cancer Plan is subject to Congressional authorization, and funding approval by the House and Senate Appropriations Committees. These committees will be in a position to require that major priority should be directed to cancer prevention rather than to oncology. Clearly, the more cancer is prevented, the less there is to treat. This will also be of major help in achieving Obama's goal "to lower health care costs," said the Cancer Prevention Coalition in the above January 23, 2009 news release.

Today, Dr. Epstein says, "An encouraging move in support of directing priority to prevention is the April 2010 appointment of a 'National Cancer Advisory Board (NCAB) Ad Hoc Working Group' to review NCI's opportunities for 'diagnosing, treating, and preventing cancer.'"

Samuel S. Epstein, M.D. is professor emeritus of Environmental and Occupational Medicine at the University of Illinois at Chicago School of Public Health; Chairman of the Cancer Prevention Coalition; The Albert Schweitzer Golden Grand Medalist for International Contributions to Cancer Prevention; and author of over 200 scientific articles and 15 books on the causes and prevention of cancer, including the groundbreaking The Politics of Cancer (1979), and Toxic Beauty (2009 Benbella Press).

To read Dr. Epstein's columns in the Huffington Post, go to: http://www.huffingtonpost.com/samuel-s-epstein

CONTACT:

Samuel S. Epstein, M.D.
Chairman, Cancer Prevention Coalition
Professor emeritus Environmental & Occupational Medicine
University of Illinois at Chicago School of Public Health

[Readers may find something very unusual with this Press Release - Pellionisz_at_JunkDNA.com]


Stanford bioengineer [Quake et al.] explores own genome

By Lisa M. Krieger
San Jose Mercury News

A Stanford bioengineer has become the first scientist in the world to decode his own DNA with a machine he invented, allowing him to peer into his genetic blueprint to see his risk for disease — and expanding the frontier of medicine.

"I was curious about what was written in me," said Stephen R. Quake, 41, who has devoted the past decade to building the technologies behind his Heliscope Single Molecule Sequencer. "I'm following the great tradition of scientists who experiment on themselves."

A decade ago, sequencing of the first-ever whole genome by the federal government took many years and cost $400 million to $500 million. Quake's machine, the size of a freezer, sequenced his human genome in only four weeks, for $50,000. The procedure is expected to cost $10,000 by the end of this year.

Only a handful of people have seen their complete genomes and contemplated the consequences — physical and emotional. All were done by large genome sequencing centers, equipped with huge staff and hundreds of machines.

Now Quake has become one of the first 10 people to enjoy such an extraordinary, and unnerving, look into his own mortality. The findings were published in the current issue of the British medical journal Lancet.

"It sent a shiver down my spine," said Quake.

It's a vision that valley biotech startups like 23andMe and Navigenics have been selling for several years, but those companies are providing mere snapshots of sections of code — whereas this is a deep and long look into one man's entire 2.6 [6.2, rather - AJP]-billion-letter-long genetic story.

For the young professor, the news was reassuring: No gene variants were found that showed high odds of developing a horrible condition with no treatment.

That Quake is a healthy subject made this experiment more fascinating. He eats well, exercises and doesn't smoke, but he has a family history of vascular disease. And he lost a first cousin at only 19 to presumed cardiac failure.

Quake wondered about his risk of an inherited disease — hypertrophic cardiomyopathy — which causes enlarged hearts that don't beat efficiently and risk heart attacks.

Alarmed by this family history, cardiologist Euan Ashley, who runs Stanford's Hypertrophic Cardiomyopathy Center, joined Quake in using the Heliscope to analyze a broad spectrum of Quake's genome. Ashley and his team of Stanford researchers designed an algorithm to overlay the genetic data upon what was already known about Quake's inherent risk based on his age and gender. They analyzed the genome for 55 conditions, including heart disease, obesity, diabetes, schizophrenia and gum disease.

The impediment to the medical use of genomes, said Ashley, is no longer the technology — but the ability to understand and interpret what the technology reveals.

Most genes linked to disease simply nudge the odds of developing the illness up or down a bit. Some diseases have complex roots, caused not by a few common variants but a vast number of rare variants, offering for the most part no clear target for drugs or diagnosis. And the significance of many genes remains a mystery.

Using sophisticated software, experts must weigh the contribution of each gene variant according to the number, and sample size, of published studies linking it to disease. And these studies are ongoing, so the software will need constant updating.
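
For illustration only, here is a minimal sketch of this kind of risk integration: a pre-test probability (from age and gender) is converted to odds, one likelihood ratio per independently inherited variant is multiplied in, and the result is converted back to a probability. The rsids and likelihood-ratio values below are invented, not taken from the Lancet study.

    # Illustrative only: combine a pre-test disease probability with one
    # likelihood ratio (LR) per genetic variant, assuming the variants
    # contribute independently. Names and LR values are invented.

    def posttest_risk(pretest_prob, likelihood_ratios):
        odds = pretest_prob / (1.0 - pretest_prob)   # probability -> odds
        for lr in likelihood_ratios:
            odds *= lr                               # fold in each variant
        return odds / (1.0 + odds)                   # odds -> probability

    # Two hypothetical risk variants and one protective variant applied
    # to a 10% pre-test risk.
    variant_lrs = {"rs0000001": 1.3, "rs0000002": 1.1, "rs0000003": 0.7}
    risk = posttest_risk(0.10, variant_lrs.values())
    print("post-test risk: %.1f%%" % (100 * risk))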

Also needed will be more geneticists and counselors to advise patients, Ashley said.

"The technology is advancing at an incredible rate, but in order to make this help for patients and doctors, we'll need to continue to invest in our understanding of how to interpret this information," said Elizabeth Kearney, president of the National Society of Genetic Counselors. There are 3,000 genetic counselors in the United States, although this number is expected to grow, she said.

Daniel MacArthur, a United Kingdom geneticist who writes the popular blog Genetic Future, said: "This study provides a glimpse of things to come as we begin to move into the era of personal genomics. The authors have done a fantastic job of integrating information from many different sources to make sense of genetic data."

But he added that it reveals how much remains to be studied: "We still don't have anything close to a full catalogue of the genetic variants that influence disease risk - and it's likely that Quake's predicted risk for common diseases will change considerably as more variants are uncovered over the next few years."

Quake arrived at Stanford as an undergraduate, where he raced through a bachelor's in physics and a master's in math in only four years. The son of an early software pioneer in Connecticut, he grew up immersed in computers, writing his first program using a stack of punch cards at age 11.

After earning a doctorate in physics from Oxford University, he apprenticed in the Stanford lab of Nobel laureate Steven Chu — now the U.S. Secretary of Energy — before landing at the California Institute of Technology at only 27.

Further analysis of Quake's genetics found rare variants in three genes that are associated with sudden cardiac death. Two are little-studied; there is no meaningful data about their significance. The third was worrisome enough that Ashley recommended two heart imaging tests, an echocardiogram and ultrasound. A variant in a fourth gene is linked to his family history of coronary artery disease.

But his genes also show that he'll respond well to heart medications.

Other genes nudged his odds of developing various illnesses up and down. His odds are slightly higher for developing prostate cancer, obesity, type 2 diabetes and coronary artery disease. The opposite is true for his risk of Alzheimer's disease; due to several protective variants, his risk is only 1.4 percent.

The toughest part of the entire process, said Quake, was donating the blood needed to obtain his DNA.

"I hate needles," he said. "I guess I'm a bit of a wimp."

Gene testing getting off the ground

An infant industry is already capitalizing on gene testing.

Some companies test for risk for a single health condition, such as celiac disease. Others are focused on telling customers something about their ancestry. Their services typically cost $100 to $500. Other companies, such as 23andMe and Navigenics, test customers for gene variants called single nucleotide polymorphisms (SNPs) that are known to be more common among people who develop diseases such as breast cancer. These companies, which charge $450 to $1,000, do not read all 3 billion base pairs of the entire [3.1 Bn for haploid - AJP] genome, like Quake's test.

Of the Stanford study, 23andMe co-founder Anne Wojcicki said, "We look forward to the day when full genome sequencing, which provides additional detail in the form of rare variants, is as affordable as a SNP test is today."

^ back to top


James Watson Just Can't Stop Talking at GET

April 27, 2010

The GET conference is underway in Cambridge and James Watson, one of the "personal genome pioneers" at the meeting, is letting his thoughts be known during the discussions. At this time in his life, Watson says he wasn't worried about what he might have learned from getting his genome sequenced. "If I were 20 I wouldn't want to know because you just worry. At 80 I don't worry if I'm going to get cancer," Watson said. (He did, however, have his ApoE status redacted from his genome sequence.) To encourage more people to put their genomic profiles online, Watson suggested to "give them a pie or something." Then when asked about the ethics of making whole-genome sequences available, Watson responded: "You hear this ethical stuff, and it's just crap."

One of the fastest-growing players in the sequencing field is BGI, but Watson isn't worried about the competition. "Whether they'll produce anything we'll see. I'm not afraid of the Chinese. I still think we're better," he said. "They haven't developed any technologies, and technology is now leading the game."

Finally, in discussing how to advance scientific discovery, Watson said, "The limiting factor now is the intelligence of the scientist, not money."

-- Comment (1)

Estimating Asia

At the present point of his life, no wonder Jim Watson is not overly wary about the early onset of a disease threatening him prematurely.

Nor need he worry whether the march towards informatics and the industrialization of genomics ("genome-based economy") will fully blossom on his watch, though it was predicted to be a global game changer as far back as 2001 by Juan Enriquez in his bestseller "As the Future Catches You".

Some of us are a bit wary, however, when estimating Asia versus the USA. Industrialization does not necessarily depend only on "intelligence" – but rather on the civilization of empires. Juan Enriquez quoted the Chinese Civilization: "Of the fourteen dynasties ten lasted longer than the entire history of the United States" (p. 21).

To illuminate the dangers for the USA inherent in underestimating the pace of industrialization of genomics, let me quote in addition here (see a fuller accounting in my column) the cases of Korea, Japan, India and Singapore. DTC genome testing has just become global, via a Seoul-based DTC and full-genome analysis institute with declared ambitions for the Chinese and Indian markets. No wonder, since they have the backing of SAMSUNG. Presently, not a single US-based DTC has such a strong declared alliance with a US "Big IT" company; does this raise any eyebrows on this side of the Pacific Rim?

Or consider that Silicon Valley is getting ready to provide "affordable mass production of full DNA sequences" via both Complete Genomics and Pacific Biosciences. In some contrast to earlier goals, both companies will stop short of "DNA analysis and interpretation". Yesterday, we learned that Complete Genomics teamed up with Japan's RIKEN for that. Also, Silicon Valley based Affymetrix released its latest microarray optimized for diseases e.g. of the Chinese Han populace (the largest homogeneous ethnic block of the World). One of the plants of Affymetrix in Singapore welcomed the news wholeheartedly... Meanwhile, as I report in my column, a well-known Silicon Valley CTO has packed up and moved to China...

Time to say again “Hello Mr. Watson, can you hear me?” while trying to create a new industry for the Great Land of USA?

Pellionisz_at_junkDNA.com

^ back to top


Joint research begins on individual-level mechanisms of gene expression

Posted In: Life Sciences

The RIKEN Omics Science Center (OSC) has partnered with American company Complete Genomics to develop next-generation technology for the rapid analysis of individual-level human gene expression.

The partnership combines the OSC’s expertise in omics science with technology for the rapid sequencing of complete human genomes developed by Complete Genomics to explore new possibilities in the study and application of cutting-edge gene expression analysis.

Collaborative research in the new project will seek to clarify the underlying mechanisms of complex gene expression in humans. To do so, complete genomes of a number of individuals, sequenced by Complete Genomics, will be analyzed for differences in genetic information and RNA expression using RIKEN’s CAGE (cap analysis of gene expression) technology. While technology for the analysis of complete human genomes is today increasingly widespread, the current collaboration takes such sequencing one step further, focusing on RNA levels (transcriptome) representative of detailed differences in gene arrangement, with important applications to genetic profiling.
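
In spirit, the transcriptome side of such an analysis can be as simple as comparing normalized CAGE tag counts per transcription start site between sequenced individuals. A minimal sketch follows, with invented promoter names and counts; it illustrates the idea of tags-per-million normalization, not RIKEN's actual pipeline.

    # Conceptual sketch: CAGE yields counts of sequenced 5'-end tags per
    # transcription start site (TSS). Normalizing to tags per million (TPM)
    # makes promoter activity comparable between two individuals.

    def tags_per_million(tag_counts):
        total = float(sum(tag_counts.values()))
        return {tss: 1e6 * n / total for tss, n in tag_counts.items()}

    # Hypothetical tag counts at three promoters in two individuals.
    person_a = {"TSS_chr1_1000": 150, "TSS_chr2_5000": 30, "TSS_chr7_800": 20}
    person_b = {"TSS_chr1_1000": 40, "TSS_chr2_5000": 35, "TSS_chr7_800": 25}

    tpm_a, tpm_b = tags_per_million(person_a), tags_per_million(person_b)
    for tss in sorted(person_a):
        print(tss, "fold change: %.2f" % (tpm_a[tss] / tpm_b[tss]))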

In the future, through collaborative research using technology developed independently by the two organizations, RIKEN and Complete Genomics envision further cooperation toward the creation of a commissioned genome analysis service. The first stage of the new partnership was launched on 7 January 2010.

^ back to top


New Algorithmic Method Helps Elucidate Molecular Causes of Inherited Genetic Diseases

GenomeWeb
April 2010

By Matthew Dublin

Researchers at the Buck Institute for Age Research have developed a bioinformatics-based method to predict the molecular causes behind inherited genetic diseases. Using a combination of specially designed algorithms, the investigators interrogated available databases listing known sites of protein function to find other possible protein function sites. They then used the algorithms to look at proteins that are associated with disease-causing mutations and searched for statistical co-occurrences of mutations that were close to those functional sites. When they were finished, the team had analyzed some 40,000 amino acid substitutions, making their effort one of the most comprehensive mutation studies to date.
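
As a toy illustration of such a co-occurrence test (not the Buck Institute's actual pipeline), a 2x2 Fisher's exact test can ask whether disease-causing substitutions land near predicted functional sites more often than neutral substitutions do. The counts below are invented.

    # Toy co-occurrence test: are disease-causing substitutions enriched
    # near predicted functional sites relative to neutral substitutions?
    from scipy.stats import fisher_exact

    #                    [near functional site, elsewhere]
    disease_mutations = [220, 780]
    neutral_mutations = [90, 910]

    odds_ratio, p_value = fisher_exact([disease_mutations, neutral_mutations])
    print("odds ratio %.2f, p = %.2e" % (odds_ratio, p_value))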

"There are a few reasons why this is difficult. Genetic data is hard to collect together, so it's a lot of effort to go out [and combine all] of these resources, pulling in all this mutation data, and doing this annotation analysis on them," says Sean Mooney from the Buck. "The second reason this is hard is that tools that predict functional sites or structural sites in proteins are usually developed by individual groups. They aren't inter-operable in any way, so we spent an enormous amount of effort pulling these tools in and making this annotation pipeline."

Mooney and his team have ported their algorithm-based approach into a Web tool, which is now freely available on his lab's website for anyone in the community to use.

The mutation profiling Web-based tool could have other applications, such as helping researchers who manage databases of clinically observed mutations develop hypotheses about what those mutations are doing on a molecular level. It could also help cancer researchers who are re-sequencing tumors to locate mutations that drive tumor growth.

"The functions that we looked at in the paper are relatively narrow in terms of the universe of molecular functions that proteins and genes participate in, and we would like to include those [other functions] in this approach," Mooney says. "Protein mutations make up a small percentage of the genetic variance. We want to expand this and look at other mutations out there."

[The Buck Institute for Age Research (in the larger Silicon Valley) is a pioneer in investigating the function of (formerly) "Junk DNA" - especially in the aging process. DNA-binding proteins are a prominent example of "epigenomic" channels, which together with the other side of the coin, "genomics", constitute the new research & development field of HoloGenomics. The results featured above are also significant since they are based on a predictive, algorithmic approach (as opposed to "brute force" approaches using e.g. statistics, similarity measures, correlation analysis, etc.). In all fairness, other "algorithmic approaches" have also been developed recently, e.g. predicting microRNAs in silico, which in turn can be verified or rejected by in vivo experimentation. Algorithmic approaches will be prominently featured at the upcoming "Personalized Medicine in P4 Health Care" meeting in Silicon Valley (June 23-25, see below), since the expense of "brute force" approaches alone is unlikely to be economically sustainable. By securing the algorithms as IP, the predictive approaches have already proven a lucrative reimbursement model for "Big IT" (since a single 19-23-base oligo as a microRNA is worth over $150,000) - with Merck and the Max Planck Institute in the lead. - Pellionisz_at_JunkDNA.com]


Affymetrix Launches Axiom Genome-Wide ASI Array For Maximized Coverage of East Asian Populations

April 26, 2010

SANTA CLARA, Calif., Apr 26, 2010 (BUSINESS WIRE) -- Affymetrix, Inc. (AFFX) today announced the launch of the Axiom(TM) Genome-Wide ASI Array, the first commercial product to provide maximum power for genome-wide association studies (GWAS) in East Asian populations. This array is the second catalog release for the Axiom(TM) Genotyping Solution, which includes a suite of population-optimized arrays for genomic studies that will be delivered in the next year.

The Axiom Genome-Wide ASI Array enables researchers to characterize the genetic basis of disease in Asian populations and also offers high genomic coverage for Caucasian populations, making it a powerful, high-throughput tool for ancestry, population, and personal genomic studies.

"There is evidence that differences in disease frequencies and intensity within different populations may have a genetic basis," said Dr. Edison Liu, Executive Director of the Genome Institute of Singapore (GIS), which is using the Axiom Genome-Wide ASI Array in ongoing studies to explore the genetic basis of disease in East Asian populations.

"The increase in knowledge of population-specific human genetic variation is paying off with the creation of GWAS arrays optimized for specific populations," said Dr. Liu, adding that he is pleased with this advanced approach to genome-wide array design.

"This offers researchers increased power to detect population-specific allele associations. With the dramatically increased intensity of research in population genetics in Asia, the Axiom Genome-Wide ASI Array will offer tailored precision for the identification of disease alleles."

The Genome-Wide ASI Array optimizes genomic coverage with known disease association markers, chromosomes X and Y, mitochondrial genomes, and ADME genes. Content was drawn from the 1000 Genomes Project as well as Han Chinese and Tokyo Japanese data made available through the International HapMap Project.

Dr. Jianjun Liu, Group Leader and Associate Director of the Human Genetics group at GIS, is pleased with the performance of the Axiom Genome-Wide ASI Array. "We have run more than 1,000 samples with average call rates above 99.7 percent and HapMap concordance in excess of 99.8 percent," said Dr. Liu.

The complete Axiom Genotyping Solution consists of Axiom ASI Array Plates, complete Axiom Reagent Kits, an automated target preparation station developed jointly by Affymetrix and Beckman Coulter, and the GeneTitan(R) Instrument. The GeneTitan Instrument integrates a hybridization oven, fluidics processing, CCD imaging device, and analysis software for maximum data reproducibility, user productivity, and scalability. Scientists can run more than 760 samples per week with minimal user intervention, a significant advantage over competing products.

"The ASI Array offers a complete solution to researchers looking for novel genetic variations associated with complex disease in East Asian populations," said Kevin King, president and CEO. "This solution is ideally suited to customers interested in exploring the genetic complexities underlying disease in Asian populations. We look forward to further expanding the Axiom family of genotyping products in the future."

For more information about Axiom Genotyping Solution, please visit www.affymetrix.com/axiom

Beckman Coulter(R) and Biomek(R) are registered trademarks of Beckman Coulter, Inc. Beckman Coulter Biomek Systems are for Laboratory Use Only; not for use in diagnostic procedures.

About Affymetrix

Affymetrix technology is used by the world's top pharmaceutical, diagnostic, and biotechnology companies, as well as leading academic, government, and nonprofit research institutes. More than 1,900 systems have been shipped around the world and more than 21,000 peer-reviewed papers have been published using the technology.

Affymetrix is headquartered in Santa Clara, Calif., and has manufacturing facilities in Cleveland, Ohio, and Singapore. The company has about 1,000 employees worldwide and maintains sales and distribution operations across Europe and Asia. For more information about Affymetrix, please visit www.affymetrix.com.

About the Genome Institute of Singapore

The Genome Institute of Singapore (GIS) is a member of the Agency for Science, Technology and Research (A*STAR). It is a national initiative with a global vision that seeks to use genomic sciences to improve public health and public prosperity. Established in 2001 as a centre for genomic discovery, the GIS will pursue the integration of technology, genetics and biology towards the goal of individualized medicine. The key research areas at the GIS include Systems Biology, Stem Cell & Developmental Biology, Cancer Biology & Pharmacology, Human Genetics, Infectious Diseases, Genomic Technologies, and Computational & Mathematical Biology. The genomics infrastructure at the GIS is utilized to train new scientific talent, to function as a bridge for academic and industrial research, and to explore scientific questions of high impact. For more information, visit the GIS website: www.gis.a-star.edu.sg.

[We reported that DTC became global through Korea's very ambitious plans, reaching far beyond their own population. Look for DTC emerging in Singapore, whose population of 72% Chinese ancestry makes it a "Petri dish" for China. Also, look for DTC employing Affy arrays specialized for specific populations to reach these markets - AJP]


Digitization Slashing Health IT Vendor Dominance

Dell says traditional health IT vendors will lose market share as the medical industry converts from film to digital medical imaging. [Digital is the "ultimate equalizer" - this article could also be written for Genome Industry in Health Care 2.0 - AJP]

By Nicole Lewis
InformationWeek
April 26, 2010 01:37 PM

As the push to digitize medical imaging at hospitals and other medical facilities becomes more prevalent, many large healthcare IT vendors will see their dominant market share dwindle, predicts Jamie Coffin, VP of Dell's healthcare and life sciences division.

In an interview with InformationWeek, Coffin said big health IT players are losing ground in medical imaging, where conventional radiological film is being replaced by picture archiving and communication system technology. Moving from a film-based clinical environment to PACS allows hospitals to depend less on the hardware and software of original equipment manufacturers like Siemens and Phillips and utilize other technology vendors to store, archive, retrieve, distribute, and present digitized images in different formats.

"Historically there has been no incentive for companies like GE, Phillips, and Siemens who build these MRI or CAT scanners to make it easy for you to move data between a Phillips environment and a Siemens environment," Coffin said. "They really have been against this whole integration of healthcare because it made it easier for them to keep their footprint around the imaging machine," Coffin added. [Partial and full DNA interrogation, to be made interoperable with health-data and personal preferences will hit the health care system with a similar challenge. Affiymetrix, Illumina, 454, SOLiD, Complete Genomics, Pacific Biosciences, Ion Torrents (etc, etc) will pour digitalized (microarray) and genuinely digital (full DNA sequence) data, where the original equipment manufacturers are not motivated to provide their output in a standardized format. Standards will be set by the dominant IT company that makes such data interoperable and processes them at the point of care - either in hospitals or through digital telemedicine - AJP]

By breaking down the barriers associated with traditional film-based image retrieval, PACS gives doctors quicker access to these images electronically and allows companies like Dell to capitalize on the changes by creating new technological solutions that more easily distribute medical images at the point of care.

For instance, Dell will be urging its hospital customers to purchase its new technology which allows hospitals to perform vendor-neutral archiving of patients' images. [Processing partial or full DNA-data will likewise call for vendor-neutral technology both in hardware and software - AJP]

"Five years ago I met with one of the big players in this space, and the president of the company said 'you know healthcare IT is 15 years behind every other industry and we are pretty happy that way because it locks our customers into our technology,'" Coffin recalls being told. "A lot of these companies are used to making 70- and 80-point margins on their software and hardware deals, and the more hardware they can sell, the more money they make," Coffin adds.

With the dynamics of health IT changing, companies like Dell are looking at new opportunities and in some cases staying away from others. For example, Coffin said his company doesn't currently see a good reimbursement model for telehealth technology, but has partnered with the American Medical Association to develop a platform that will make it easier for doctors and other healthcare providers to adopt IT associated with electronic medical records, ePrescribing, and laboratory services.

Dell's acquisition of Perot Systems for $3.9 billion pushed the company further into providing healthcare IT to large hospital systems, but Coffin says Dell still sees a sweet spot at the small to medium-sized medical practice where storage solutions are in demand. On the storage side, Dell has partnerships with companies like EMC and VMware and offers storage and virtualization solutions. The company also provides cloud computing to small and medium-sized medical offices as part of Dell's affiliated physician model.

"We don't think it makes sense for a 10-physician practice to worry about maintaining security and privacy of data, so we charge them a monthly fee. It's essentially a per-physician charge," Coffin said. [The question for a practice of any number of physicians is already solved for blood tests - where to send the specimens (to Labs, for service per sample). With affordable full DNA sequencing upon us, full DNA will likewise be sent to basement Digital Labs equipped with special hardware and software, for routine processing service - AJP]

Coffin, who has spent two decades in healthcare IT, also said that most hospitals operate in the 1% to 3% profit margin, and notes that IT vendors have to understand healthcare economics as much as healthcare IT.

"When the financial crisis hit there was terror in every IT managers' eyes because they were worried about all these financial constraints that were hitting them all at once. They've delayed spending on IT for the last two or three years," Coffin said.

However, the government's push to assign billions of dollars specifically for the development of healthcare IT under the American Recovery and Reinvestment Act is the kind of bailout the healthcare sector needed.

"Now we are starting to see the market move in a very different way. We see a pickup and significant spending around critical technology [related to] storage, mobile point of care [India has about 300 million people with access to running water and sewage, yet about 500 million people carry mobile computers (disguished as cell phones), with hospitals often quite far apart - AJP], and compliance," Coffin said.


When Reading DNA Becomes Cheaper Than Storing the Data [Not "Disposable Genome" - AJP]

By Rachel Lehmann-Haupt | Apr 23, 2010
BNET

The fascinating thing about the emerging field of commercial [industrialized - AJP] genomics is how it ties together so many different areas of research — from biology to computer science, and just about everything in between [and beyond; e.g. engineering of supply chain management, see below - AJP]. Progress in all these disciplines is deeply interconnected, which is why data storage turns out to be as vital as lower priced gene mapping to the commercial success of genomics technology [better said, "industry" - AJP].

As a result of this marriage, genome sequencing is getting much cheaper, and will soon reach the point where it can be used in a clinical setting and covered by health insurance. [better said, "demanded by health insurance" - e.g. health insurance will cover certain medication only with prior genomic test that the given expensive medicine is actually effective for the personal case - AJP]. A chart created by Eric Lander of the Broad Institute shows that sequencing costs have dropped by a factor of 14,000 over the past decade, roughly 100 times faster than Moore’s Law in semiconductors.
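
Taken as the ratio of total fold-improvements over the decade, and assuming the common 18-month doubling period for Moore's Law, the arithmetic roughly checks out:

    # A 14,000-fold sequencing cost drop over ten years, against Moore's
    # Law at one doubling every 18 months.
    moore_fold = 2 ** (10 / 1.5)                 # about 100x over a decade
    sequencing_fold = 14000.0
    print("Moore's Law over a decade: ~%.0fx" % moore_fold)
    print("sequencing improved ~%.0f times more" % (sequencing_fold / moore_fold))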

One obvious consequence is a likely explosion of stored genomic data, and a whole new raft of associated costs. At a recent panel discussion I attended — Exploring Personal Genetics: The Brave New World — Michael Goldberg, a partner with Mohr Davidow Ventures, made an interesting comment: “Moore’s Law has now been married to biology in ways that were only conceptualized in the 1990s.”

In response, David Magnus, the director of Stanford’s Center for Biomedical Ethics, noted that data-storage costs are another key economic factor for the future of the genomics business. I recently caught up with Magnus on the phone and asked him to elaborate. If storage becomes economically burdensome, he said, mapped genomes may just become disposable. ”If you assume the rate of acceleration for another ten to fifteen years, the cost is going to be nothing,” he said. “At some point it will be easier to re-sequence rather than store the data on a chip or a server.”

In an April 2009 report published in the journal Biotechniques, the authors write:

The cost of storing the gigabytes of raw data produced by each run of the Illumina GAII or AB SOLiD has been estimated to be greater than the cost of generating the data in the first place. It is now common practice to delete the raw image files once they have been processed to produce the relatively small text sequence and quality data files. While the long-term storage of the text sequence files is feasible using current tape and disc technology, maintaining the data in a readily usable form where it may readily be interrogated by users is more of a challenge.

There are currently projects to re-sequence 1000 human genomes, as well as multiple plant and animal varieties to identify genetic variation within species associated with phenotypic variation. The submission of complete re-sequence data to the international repositories would result in the storage of highly redundant data sets, bloating the size of the database and reducing the efficiency of queries. As an increasing number of reference genome sequences become available and the cost of re-sequencing continues to decline, the problem of data redundancy will increase to a point where storage within the primary data repositories becomes impractical.

The theory that sequence repositories will constantly increase in size is likely to be challenged with the increasing availability of reference genome sequences. Once a reference genome sequence has been produced, users are predominantly interested in variation from this reference. [Where the huge challenge is "the definition of reference sequence" - see below, AJP]
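
The reference-based storage idea in the quoted passage is easy to sketch: keep the reference once, store each individual genome only as its differences, and reconstruct on demand. A minimal sketch follows, handling substitutions only; real pipelines must also represent indels and structural variants.

    # Store an individual genome as substitutions against a shared
    # reference, and reconstruct it on demand.

    def diff_against_reference(reference, genome):
        return [(i, r, g) for i, (r, g) in enumerate(zip(reference, genome))
                if r != g]

    def reconstruct(reference, variants):
        seq = list(reference)
        for pos, _ref, alt in variants:
            seq[pos] = alt
        return "".join(seq)

    reference  = "ACGTACGTACGT"
    individual = "ACGAACGTACTT"
    variants = diff_against_reference(reference, individual)
    assert reconstruct(reference, variants) == individual
    print(variants)   # [(3, 'T', 'A'), (10, 'G', 'T')]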

Jay Flatley, the CEO of Illumina, recently told me that storing a single genome takes only 40-50 megabytes, but that bloat will become a problem in storage facilities holding hundreds of thousands of genomes.

“The biggest value to companies is to have the genomes of more than 5000 people because that is where scientists are going to learn the most about genetic variation,” Flatley said. It’s also where there’s the greatest cost demand on the computing infrastructure in terms of storage, tracking software, aggregation, and privacy protection.

So the near future may not lie in more efficient space for redundant data, but in more efficient reference genome sequences and better access to them [this statement is highly debatable, see below - AJP]. That way doctors can sequence genomes, check the reference data, and then dispose of the genome knowing it can always be re-sequenced. This would also help protect the privacy of patients [again, it is highly debatable how privacy would be secured by means of comparison to a "reference sequence", if at all - AJP]. As genomes increasingly lend themselves to radical file compression, however, even storage costs in large facilities may eventually be surmountable relative to the cost of making genomes disposable. [It is well known that genome files cannot be "compressed" significantly by algorithms customary in computer science - however, the basic (and now proven) tenet of FractoGene is that the DNA is fractal. "Fractal Compression" typically yields a 30,000 to 1 compression - AJP]

Rachel Lehmann-Haupt (www.lehmannhaupt.com), a journalist and editor, is the author of In Her Own Sweet Time: Unexpected Adventures in Finding Love, Commitment and Motherhood (Basic Books, 2009). She is currently working on a book on the impact of genetics technologies and products on our lives.

[As HoloGenomics (Genomics together with Epigenomics, expressed in Informatics) is "industrialized", not just "biology meets computer science" but hard-core engineering concepts and techniques need utterly serious attention. The article above (mostly from the very indirect angle of "ethics") is tangential to a fundamental aspect of any industry, called "supply chain management". To illuminate the topic: no engineer would set up an oil/gas well in a desert that pours oil faster than the capacity of the pipes delivering the product to its target (otherwise oil/gas is either wasted, or an ever-expanding storage facility is called for, among other things making production not only much more expensive but also very dangerous). We can quote an example closer to a critical R&D effort turning into "industrialization". When it became not just an idea but a plan to industrially harvest enormous amounts of energy from nuclear fission/fusion, upon acceptance of the "Manhattan Project" the first request was the pittance of $4,000 to purchase graphite, which - as was known from decades of research - can regulate the nuclear chain reaction and prevent a runaway hyper-escalation. This was an absolute "safety priority" - before anything more was even planned. It is quite frightening, therefore, that the fledgling "Genome Industry" is taking off before (with few exceptions) focusing on "Understanding Genome Regulation". From the key industrial engineering (and investment) viewpoint of "supply chain management", no DNA sequencing makes any economic sense that provides supply faster than there is demand for it. The fundamental question is how Industrialized Genomics will strike the critical balance of "brute force" versus "algorithmic" utilization of genomes. Presently, many believe that vast amounts of full DNA sequences need to be amassed, such that "comparative genomics" sorts out, by means of mainly statistical "brute force" computation, e.g. how different cancerous genomes look "similar" (viewed by special pattern recognition techniques), and/or how DNA sequences of different species (and of individuals) show natural divergence, as opposed to pathological "structural variants". This is a rather horrendously expensive proposition; perhaps comparable to building "atom smashers" (with price tags in the billions of dollars) before quantum mechanics and sophisticated computer models were available to predict the mind-boggling trajectories along which nuclear particles blow up. (Nuclear physics develops by the interplay between theoretical physics predicting the expected trajectories and the actual trajectories found - further improving the underlying theory.) A "Reference Sequence" as a scientific concept may provide an algorithmic approach (wherein only the differences of a given sequence from the "reference" would be stored) - but it is highly questionable whether, e.g. for "homo sapiens", it is possible to establish a single "reference sequence". For "FractoGene", asserting (based on evidence) that the DNA is a multifractal, the scientific task becomes the establishment of "fractal templates and their parameters" as a highly reduced set of information. This would account both for the "parametric" (diversity) differences and for "structural variants" (including Fractal Defects).

Sounds easy? In theory, yes. In practice, it is quite a project - but it is well within the wherewithal of the resources around - should they request an implementation. - Pellionisz_at_JunkDNA.com]

^ back to top


23andMe Special Sale on DNA Day (Apr 23 only) - full service for $99

Your DNA on Sale

[23andMe, in a stunning marketing move, took $400 off its full service package - reducing the price of testing for 148 genomic conditions to a mere $99, that is, 67 cents per condition; some of them may help save your life! The service includes downloading the raw data file of your results. To see how HolGenTech is gearing up to use your SNP raw data file for daily "genome based product recommendation" empowered by your cell phone, view the YouTube "Shop for your Life!" - AJP]
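
For readers curious what using such a raw data file involves at the file level, here is a hypothetical sketch (not HolGenTech's actual engine): parse a 23andMe-style raw file (tab-separated columns rsid, chromosome, position, genotype; '#' lines are comments) and match genotypes against a rule table. The rsids, genotypes and recommendations are invented.

    # Hypothetical "shop by your genome" sketch: parse a 23andMe-style raw
    # data file and match genotypes against an invented rule table.

    def load_genotypes(path):
        genotypes = {}
        with open(path) as fh:
            for line in fh:
                if line.startswith("#") or not line.strip():
                    continue   # skip comments and blank lines
                rsid, _chrom, _pos, genotype = line.rstrip("\n").split("\t")
                genotypes[rsid] = genotype
        return genotypes

    RULES = {  # (rsid, genotype) -> invented shopping advice
        ("rs9999901", "AA"): "prefer lactose-free dairy products",
        ("rs9999902", "CT"): "check labels for gluten",
    }

    def recommend(genotypes):
        return [advice for (rsid, gt), advice in RULES.items()
                if genotypes.get(rsid) == gt]

    # usage: print(recommend(load_genotypes("genome_raw_data.txt")))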

^ back to top


Predictive, Participatory, Personalized Prevention (P4) Health Care
[Chaired by International HoloGenomics Society Founder, Dr. Pellionisz]


PREDICTIVE, PARTICIPATORY, PERSONALIZED PREVENTION:
PERSONALIZED MEDICINE IN P4 HEALTH CARE

DISTINGUISHED SPEAKING FACULTY INCLUDES:

Andras Pellionisz, Founder International HoloGenomics Society, Sunnyvale, CA
David Williams III, Chief Marketing Officer PatientsLikeMe, Cambridge, MA
Jason Coloma, Head of Strategic Partnering Roche Molecular Systems, Pleasanton, CA
Gary Marchant, Professor of Law Arizona State University, Tempe, AZ
Ryan Phelan, Founder & President DNA Direct, San Francisco, CA
Michael Christman, President and CEO Coriell Institute for Medical Research, Camden, NJ
Jim Gudgeon, Senior Policy Analyst Intermountain Healthcare, Salt Lake City, UT
Manohar Furtado, Distinguished Scientific Fellow Molecular Biology Systems R&D, Life Technologies San Francisco, CA
Vance Vanier, CEO & President Navigenics, Foster City, CA
David Speechly, VP Corporate Affairs Celera, Alameda, CA
Risa Stack, Partner Kleiner Perkins Caufield & Byers, Menlo Park, CA
Alex de Winter, Partner Mohr Davidow Ventures, Menlo Park, CA

AND MANY MORE!

FEATURING A DYNAMIC NOT TO MISS KEYNOTE PRESENTATION DELIVERED BY OUR DISTINGUISHED CONFERENCE CHAIR:

Dr. Andras J. Pellionisz, Founder of the International HoloGenomics Society; Genome Informatics as a Key to Interpretation of Personal Genomes in Predictive, Participatory, Personalized Prevention (P4) Health Care

^ back to top


BioMerieux, Knome Team on Sequencing-Based MDx

April 21, 2010
By a GenomeWeb staff reporter

NEW YORK (GenomeWeb News) – BioMérieux and Knome announced today that they will collaborate on developing next-generation sequencing-based molecular diagnostics.

The collaboration includes the French diagnostics firm taking a $5 million equity stake in Knome. Though the firms did not disclose the details of the investment or further financial terms, they said that BioMérieux has the right to designate one director for election to the board of directors of privately held Knome.

Under terms of the agreement, BioMérieux will have exclusive rights to license Knome's genome analysis platform for use in the in vitro diagnostics market. In return, Knome gains access to BioMérieux's intellectual property in DNA extraction and sample preparation.

BioMérieux said that developing multiplex, DNA sequencing-based diagnostics is part of its 2015 strategic roadmap. The firm said that it intends to develop next-generation cancer and infectious disease diagnostics using Knome's sequence analysis technology and bioinformatic tools.

Cambridge, Mass.-based Knome, which launched in late 2007 as a personal genomics firm, last year inked a deal with SeqWright, under which Knome's personal genome sequencing and analysis service is offered through SeqWright's CLIA-certified laboratory.

^ back to top


Eric Lander's Secrets of the Genome ["Mr. President, the Genome is Fractal!" - AJP]

GenomeWeb
April 20, 2010

At the urging of his daughter, a senior at Princeton, Eric Lander came down to New Jersey to talk about maps yesterday evening. ("I like maps!" Lander said in a line that became his motto for the next 90 minutes.) The crowded room in McCosh Hall included Princeton faculty and undergraduates as well as community members and local high school students (they received class credit). The audience paid close attention, despite the hard wooden chairs and some level of technicality (don't worry if you don't get this slide, there's another after it, he assured the audience), as Lander enthusiastically discussed the importance of maps in aiding people to better understand everything from geography to chemistry to biology.

Genome maps, he went on, evolved from early chromosomal walking-based linkage maps to today's sequencing and epigenomic maps. (In one of his many jokes from the evening, Lander said in describing the technique, "It was called chromosomal walking. I'm from Brooklyn, I call it chromosomal schlepping.") These maps are now used to study rare and common diseases as well as evolutionary relationships in finer and finer detail. Finally, Lander mentioned a new kind of map — keeping up with the popularity of 3D these days — that suggests that the genome folds into a fractal globule.

[Congrats to Dr. Lander (et al., Science, Oct 9, 2009) for holding the flag high: "Mr. President, the Genome is Fractal!" The concept of a fractal geometry of DNA folding dovetails ideally with the FractoGene concept (2002; widely accepted at Personal Genomes, Cold Spring Harbor, Sept. 2009) that the fractal organization of the DNA governs the fractal growth (function) of organelles, organs and organisms. While waiting for Eric's webcast to appear, the reader might wish to view the three YouTube videos under "Pellionisz", which elaborate the FractoGene concept over 58 minutes in a Google Tech Talk; see also the peer-reviewed science paper The Principle of Recursive Genome Function, both from 2008 - Pellionisz_at_JunkDNA.com]
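
The "fractal globule" is a quantitative claim: in the Hi-C study behind Lander's slide (Lieberman-Aiden et al., Science, Oct 9, 2009), the probability that two loci are in contact falls off with their genomic separation s roughly as s^-1 over the ~500 kb to 7 Mb range, whereas an equilibrium globule predicts s^-3/2. A minimal Python sketch of how such an exponent is estimated with a log-log fit, using hypothetical toy counts:

    import numpy as np

    def scaling_exponent(separations, contact_counts):
        """Fit log P(s) = a + b*log(s) and return the slope b."""
        logs = np.log(separations)
        logp = np.log(contact_counts / contact_counts.sum())
        slope, _intercept = np.polyfit(logs, logp, 1)
        return slope

    # Hypothetical contact counts over the ~500 kb to 7 Mb range where
    # the s^-1 scaling was reported (toy data, for illustration only).
    rng = np.random.default_rng(0)
    s = np.logspace(5.7, 6.85, 24)               # genomic separation, in bp
    counts = 1e7 * s ** -1.05 * rng.lognormal(0.0, 0.05, s.size)

    print("fitted exponent: %.2f" % scaling_exponent(s, counts))
    # A slope near -1 matches the fractal-globule model; an equilibrium
    # globule would give roughly -1.5.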

^ back to top


Malaysian Genomics Resource Centre Berhad Launches US$4000 Human Genome Bioinformatics Service

Synamatix Press Release
2nd April 2010

[Nice. So now you understand Genome Regulation? - AJP]

KUALA LUMPUR, 2 April 2010 – Malaysian Genomics Resource Centre Berhad (MGRC) today announced the release of its US $4000 Human Genome Bioinformatics service. This is a comprehensive end-to-end bioinformatics analysis service for human genome sequencing projects. The special offer price includes the pre-processing of 30X sequence data from Illumina's Genome Analyzer or Complete Genomics' DNA sequencing platform, followed by the mapping and reporting of CNVs, SNPs and Indels. Users may also opt for the identification of structural variations and for comparative genomics as additional services. MGRC is also releasing a new genome browser for viewing the genome. This 'built-from-scratch' browser is optimised to handle the volume and complexity of next-generation sequencing data.

This service is part of MGRC's SynaWorks programme, an extensive suite of bioinformatics solutions specifically tailored to manage and leverage data generated from next-generation sequencers.

MGRC Managing Director, Robert Hercus, said, "Since launching SynaWorks in 2008, we have successfully completed a large number of human and cancer genome projects for our customers. This has led to the continual improvement of existing pipelines, which in turn has enabled MGRC to provide a comprehensive and rapid service for whole human genomes in a short turnaround time and at groundbreaking low costs."

The Human Genome Bioinformatics service would be most beneficial for small and medium research facilities that may have hardware or manpower limitations. It enables researchers to outsource the analysis of their data at a low price, giving them more time to focus on their core business and research.
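
MGRC's pipeline itself is proprietary, but the generic steps behind any "30X human genome" analysis of this kind (map short reads to the reference, then call SNPs and indels) can be sketched with the standard open-source tools bwa, samtools and bcftools. A minimal Python driver, with hypothetical file names:

    import subprocess

    REF = "hg19.fa"                                      # reference FASTA (hypothetical path)
    R1, R2 = "sample_R1.fastq.gz", "sample_R2.fastq.gz"  # paired-end 30X reads

    def run(cmd):
        """Echo and execute one pipeline stage, stopping on any failure."""
        print("+ " + cmd)
        subprocess.run(cmd, shell=True, check=True)

    # 1. Map the reads to the reference and coordinate-sort the alignments.
    run(f"bwa mem -t 8 {REF} {R1} {R2} | samtools sort -o sample.sorted.bam -")
    run("samtools index sample.sorted.bam")

    # 2. Pile up the aligned evidence and call SNPs and indels.
    run(f"bcftools mpileup -f {REF} sample.sorted.bam"
        " | bcftools call -mv -Oz -o sample.variants.vcf.gz")

CNV and structural-variation detection, which MGRC offers as add-ons, typically rely on separate read-depth and split-read tools layered on top of the same alignments.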

Praveen Gupta, Vice President of Business Development at PREMAS, stated, “India’s involvement in genomics research is growing at an incredible rate. Using next-generation sequencing platforms, enormous amounts of data will be generated and there will be a strong need to analyse and process this data. We anticipate that this partnership with Synamatix [Malaysia] will strengthen and expedite genomics research as well as help research institutes achieve results more efficiently. With their vast experience in developing cutting-edge technologies, we feel confident in successfully undertaking large-scale genome sequencing projects that are being initiated in India.”

About MGRC

Malaysian Genomics Resource Centre Berhad (MGRC) provides cutting-edge bioinformatics services and applications to users throughout the globe. The key components that form the backbone of MGRC's operations are its Contract Genomics Services (SynaWorks™), Sequencing Services, and Proprietary Data and Access Services. The company also conducts training and education programmes as part of its Corporate Social Responsibility Programme, which includes bioinformatics workshops, wet-lab workshops, the Eminent Speaker Series lectures and free access to online applications on the bioinformatics portal, www.mgrc.com.my.

^ back to top


Barcode app tracks allergies

Monday, 05 April 2010
Deakin University

[Image: barcode reading of Nestlé products for allergy information]

Allergy sufferers could soon be able to use their iPhone to scan a food’s barcode at the supermarket to determine whether it’s safe to eat.

The application, being developed by Deakin University, GS1 Australia and Nestlé, will allow consumers to instantly access detailed product information including allergens such as wheat, egg, peanuts and shellfish directly from their iPhone.

Deakin University Associate Professor Caroline Chan said the application would help consumers make quick yet informed choices about their health.

“When you read a label, the product information is often so small you can barely read it, let alone understand it,” she said. “In Australia all packaged food products carry a barcode, but its use is limited to inventory control and to settling purchases at the cash register.”

Associate Professor Chan, an information systems expert, said the barcoding system administered by the not-for-profit organisation GS1 Australia had ‘unlimited potential’ because it could be associated with other valuable product data such as serving size, nutrient information and environment-related information.

“We wanted to really harness all this information on the bar-coding system and team it up with detailed product information provided by Nestlé to give consumers a tool that had the potential to improve their health and raise public awareness,” she said.

Associate Professor Chan said initial testing of the application had been encouraging and the next step was to seek funding for a consumer trial. She was confident the application would be expanded to appeal to people on special diets or those with specific nutritional needs.

GS1 Australia Chief Executive Officer Maria Palazzolo said the exploration of mobile technology using the ubiquitous barcode is the next frontier for GS1 Australia. “There is a tremendous opportunity for GS1 to go beyond business-to-business applications and engage consumers with business-to-consumer tools.”

[This project follows the example introduced by HolGenTech, Inc. at the First Consumer Genetics Conference (Boston, June 2009) and demonstrated at PMWC2010, seen on YouTube since January 16, 2010. It is apparent even from the provisional presentation (see the YouTube video of October 30, 2008) that the system and method had already been filed with the USPTO and is Patent Pending. An advantage for "consumer giants" such as Procter & Gamble and Nestlé in bringing this to the US market faster is that the UPC barcode system in the US is not limited to "inventory control": the USDA, for example, already maintains a huge database of ingredients attributed to UPCs. - AJP]
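
The core logic described above is a lookup followed by a set intersection: resolve the scanned barcode to the product's ingredient list, then match that list against the user's allergen profile. A minimal Python sketch; the in-memory dictionary is a hypothetical stand-in for a GS1- or USDA-style ingredient database:

    # Hypothetical stand-in for a GS1/USDA-style ingredient registry,
    # keyed by barcode; real data would come from a web service.
    PRODUCT_DB = {
        "9300605069385": {"name": "Muesli bar",
                          "ingredients": {"oats", "wheat", "peanuts", "honey"}},
        "9300605012347": {"name": "Rice crackers",
                          "ingredients": {"rice", "salt", "sunflower oil"}},
    }

    def check_product(barcode, user_allergens):
        """Return (verdict, offending ingredients) for a scanned barcode."""
        product = PRODUCT_DB.get(barcode)
        if product is None:
            return "unknown product", set()
        hits = product["ingredients"] & set(user_allergens)
        return ("contains allergens" if hits else "no listed allergens"), hits

    verdict, hits = check_product("9300605069385", {"peanuts", "shellfish"})
    print(verdict + (": " + ", ".join(sorted(hits)) if hits else ""))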

^ back to top


Human Genome Mapping’s Payoff Disappoints Scientists

Bloomberg
March 31, 2010, 6:06 PM EDT
By Ellen Gibson

March 31 (Bloomberg) -- The time and money invested in genomic cancer studies have yielded only “modest” advances and are diverting funds from time-tested approaches to understanding disease, a leading gene scientist said.

Ten years after the first survey of the human genome, the payoff is getting mixed reviews. Francis Collins, former head of the government program that mapped the genome’s sequence, praises the wealth of genetic data emerging. J. Craig Venter, who led a private push for the sequence, and Robert Weinberg, who found the first human cancer-causing gene, said information gained from genomic research so far doesn’t justify the cost. Their editorials are published today in the journal Nature.

Genome projects, mostly designed to generate data, are taking funds from those designed to test ideas about the cause of cancer and its treatment, Weinberg said. It is unclear whether the onslaught of genetic data has helped illuminate the biology underlying cancers or just added more complexity, he said. [How about spending on a third component - a fraction of the costs assigned to GENERATING "ideas" (software-enabling algorithms for how genome regulation is derailed)? - AJP]

“There has not been an adequate critical examination of how useful some of these massive data-generating projects and technologies are,” Weinberg said in a phone interview yesterday. “The question is how much bang we’ve gotten for the buck, and from certain perspectives it’s been modest.”

Citing advances in molecular and cellular biology, immunology and neurobiology, Weinberg argued in his editorial that hypothesis-driven science -- the process of coming up with a theory and testing it in experiments -- has served scientists well over the past half-century.

No ‘Major Breakthroughs’

Genomic data has yet to yield “major breakthroughs” in our understanding of how a tumor develops or how many mutations are needed to cause one, said Weinberg, co-founder of the Whitehead Institute for Biomedical Research in Cambridge, Massachusetts.

“These projects consume an enormous amount of resources and researchers’ energy,” said Weinberg. “The repercussions of major agencies shifting their funding allocations will be felt for a generation.”

Collins contends that the amount of money funneled into large-scale genomic projects is probably about 1 percent of total biomedical research funding -- a “tiny” portion, he said in a phone interview today.

The National Human Genome Research Institute in Bethesda, Maryland, receives about $500 million in annual funding, according to public records. Using sequencers from companies like San Diego-based Illumina and Life Technologies in Carlsbad, California, deciphering a person's full genetic code takes about a day and costs $4,500 to $10,000.

Rapid Sequencing

Venter said he is impressed by the rapid improvements in sequencing tools, though he feels that the technology is outpacing scientists' ability to interpret the data for the benefit of patients.

“Spending lots of money to generate huge data sets without any real effort to getting to new knowledge or understanding has been a huge frustration,” Venter said in a phone interview today. “It’s now easy with the new technology to generate a lot of different data, but there are very few groups or scientists generating knowledge out of this data. We’re at a frighteningly unsophisticated level of genome interpretation.” [In all frankness, there are some fairly sophisticated algorithmic (software-enabling) interpretations around - about genome regulation by fractal recursive iteration. Fractal algorithms can compress enormous "complexity" - and even in the fractal appearance of tumors it is quite visible that a derailment of recursion, causing uncontrolled growth, is the prime suspect - AJP]

As the current head of the National Institutes of Health, Collins, who stood alongside then-president Bill Clinton in 2000 to announce the first draft of the human genome, is in a position to influence funding. In his editorial, he praises ambitious ventures like the Cancer Genome Atlas, which is analyzing tumors and blood samples from 20 types of cancer.

Data-Harvesting Advantages

“As the cost falls and evidence grows, there will be increasing merit in obtaining complete-genome sequences for each of us,” he wrote.

As head of the publicly funded Human Genome Project, Collins raced with Venter and his for-profit company Celera Genomics to map the first human genome, a race that ended in a tie, as announced at Clinton’s White House ceremony in 2000.

Todd Golub, director of the cancer research program at the Broad Institute of MIT and Harvard in Cambridge, Massachusetts, disagrees with Weinberg’s take. “This large-scale, data-harvesting approach to biological research has significant advantages over conventional, experimental methods,” he said in a separate Nature editorial.

Genome-based screening technologies are “providing a powerful new source of leads” about how cancer develops, he wrote.

He offered the example of Novartis AG’s Gleevec, now the standard treatment for chronic myeloid leukemia. The key discovery about what drives this form of cancer came from comparing the genomes of tumor cells to normal cells, he said.
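
The tumour-versus-normal comparison Golub credits reduces, at its simplest, to a set difference: variants present in the tumour but absent from the same patient's healthy cells are the somatic candidates. A minimal Python sketch with hypothetical variants (real pipelines must also model sequencing error and tumour impurity):

    # All variants below are hypothetical, keyed as (chrom, pos, ref, alt).
    normal_variants = {
        ("chr9", 133748283, "A", "G"),   # inherited (germline) variant
    }
    tumour_variants = {
        ("chr9", 133748283, "A", "G"),   # germline variant, also in tumour
        ("chr22", 23523148, "T", "C"),   # tumour-only: a somatic candidate
    }

    # Somatic candidates: present in the tumour, absent from the normal.
    for chrom, pos, ref, alt in sorted(tumour_variants - normal_variants):
        print(f"somatic candidate: {chrom}:{pos} {ref}>{alt}")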

Power of Genomics

“The power of the genomic approach is you don’t have to be limited by what you already know,” Collins said. “You can survey all the DNA in a cancer cell and find out everything that made that good cell go bad.”

Another genome-driven triumph, according to Golub, was the discovery of a new class of drugs for treating skin cancer. In 2002, DNA sequencing revealed that melanoma patients have frequent mutations in the BRAF gene. It was a “smoking gun,” but prior to that discovery, there had been no reason to suspect it, Golub said. Now Basel, Switzerland-based Roche AG and Berkeley-based Plexxikon Inc. have developed a BRAF-inhibiting drug that is in the last stage of testing needed for U.S. approval.

These successes “didn’t come from our deep dissection of cancer biology pathways,” Golub said in a phone interview yesterday. “They came from unbiased surveys of the cancer genome. If you let the genetics speak for themselves, that gives you a very direct path to drug discovery.”

Revealing Abnormalities

Eventually genomic analysis will reveal the complete set of genetic abnormalities involved in cancer, Golub said.

“That’s a great place to start, but to really have impact, we need to be able to manipulate those abnormal mechanisms,” Golub said. “At the moment, the conventional drug-discovery approach is not fully up to the task.”

Collins agrees that the biggest challenge facing scientists is translating genetic insights into approved drugs -- a long, failure-prone process, he said.

“But it’s hardly fair to say that the fact that we haven’t cured cancer means it’s all a flop,” he said.

^ back to top


Big science: The cancer genome challenge

Published online 14 April 2010 | Nature 464, 972-974 (2010) | doi:10.1038/464972a

[22,910 point mutations in non-coding (regulatory) DNA, 134 in genes - AJP]

Databases could soon be flooded with genome sequences from 25,000 tumours. Heidi Ledford looks at the obstacles researchers face as they search for meaning in the data.

When it was first discovered, in 2006, in a study of 35 colorectal cancers [1], the mutation in the gene IDH1 seemed to have little consequence. It appeared in only one of the tumours sampled, and later analyses of some 300 more have revealed no additional mutations in the gene. The mutation changed only one letter of IDH1, which encodes isocitrate dehydrogenase, a lowly housekeeping enzyme involved in metabolism. And there were plenty of other mutations to study in the 13,000 genes sequenced from each sample. "Nobody would have expected IDH1 to be important in cancer," says Victor Velculescu, a researcher at the Sidney Kimmel Comprehensive Cancer Center at Johns Hopkins University in Baltimore, Maryland, who had contributed to the study.

But as efforts to sequence tumour DNA expanded, the IDH1 mutation surfaced again: in 12% of samples of a type of brain cancer called glioblastoma multiforme [2], then in 8% of acute myeloid leukaemia samples [3]. Structural studies showed that the mutation changed the activity of isocitrate dehydrogenase, causing a cancer-promoting metabolite to accumulate in cells [4]. And at least one pharmaceutical company — Agios Pharmaceuticals in Cambridge, Massachusetts — is already hunting for a drug to stop the process.

Four years after the initial discovery, ask a researcher in the field why cancer genome projects are worthwhile, and many will probably bring up the IDH1 mutation, the inconspicuous needle pulled from a veritable haystack of cancer-associated mutations thanks to high-powered genome sequencing. In the past two years, labs around the world have teamed up to sequence the DNA from thousands of tumours along with healthy cells from the same individuals. Roughly 75 cancer genomes have been sequenced to some extent and published; researchers expect to have several hundred completed sequences by the end of the year.

The efforts are certainly creating bigger haystacks. Comparing the gene sequence of any tumour to that of a normal cell reveals dozens of single-letter changes, or point mutations, along with repeated, deleted, swapped or inverted sequences (see 'Genomes at a glance'). "The difficulty," says Bert Vogelstein, a cancer researcher at the Ludwig Center for Cancer Genetics and Therapeutics at Johns Hopkins, "is going to be figuring out how to use the information to help people rather than to just catalogue lots and lots of mutations". No matter how similar they might look clinically, most tumours seem to differ genetically. This stymies efforts to distinguish the mutations that cause and accelerate cancers — the drivers — from the accidental by-products of a cancer's growth and thwarted DNA-repair mechanisms — the passengers. Researchers can look for mutations that pop up again and again, or they can identify key pathways that are mutated at different points. But the projects are providing more questions than answers. "Once you take the few obvious mutations at the top of the list, how do you make sense of the rest of them?" asks Will Parsons, a paediatric oncologist at Baylor College of Medicine in Houston, Texas. "How do you decide which are worthy of follow-up and functional analysis? That's going to be the hard part."
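
The "pop up again and again" strategy in the paragraph above can be made concrete: a gene becomes a driver candidate when it is mutated in more tumours than a background passenger rate would predict. Production methods model context-specific mutation rates; the binomial test below, sketched in Python with hypothetical numbers, is a toy version of the same idea:

    from scipy.stats import binom

    def recurrence_pvalue(mutated, cohort, background_rate):
        """P(a gene is mutated in >= `mutated` of `cohort` tumours by chance)."""
        return binom.sf(mutated - 1, cohort, background_rate)

    # Hypothetical cohort of 300 tumours; assume a ~1% chance that any
    # given gene picks up a passenger mutation in any one tumour.
    for gene, hits in [("IDH1-like", 24), ("GENE_X", 4)]:
        p = recurrence_pvalue(hits, 300, 0.01)
        print("%-9s mutated in %2d/300 tumours, p = %.1e" % (gene, hits, p))
    # Recurrence at the IDH1-like level is essentially impossible by chance;
    # 4/300 is fully compatible with a passenger mutation.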

Drivers wanted

Because cancer is a