2009 ends
(Dec 29) Top scientists share their future predictions [for next Decade]
(Dec 28) Project on genetics of cancer expands
(Dec 28) The 2010 Paradigm-Shift in DTC Genome Testing - Expect Less thus Get More
(Dec 27) deCODE Discovers a Major Risk Factor for Type 2 Diabetes Dependent on Parent of Origin
(Dec 26) deCODEme's embarrassing data processing glitches - lessons for companies and customers
(Dec 21) Genome Theory - Seven Years In The Doldrums, Taking Off In The Last 2 Years
(Dec 16) The Genome Generation [of Church] - The case for having your genes sequenced.
(Dec 15) Dyson Sees Opportunities in Personalization
(Dec 13) Australia boots up GPU supercomputer [NVIDIA Tesla cluster for genomics - AJP]
(Dec 12) Singapore team completes genetic map of Han Chinese
(Dec 10) Genome Patri - [First Indian Genome Sequenced]
(Dec 08) PricewaterhouseCoopers Projects 11 Percent Annual Growth in Genomically Guided Personalized Medicine
(Dec 07) Chromosomal Deletions Found in Severely Obese Kids
(Dec 05) China to develop third-generation genome sequencing instrument
(Dec 03) Complete Genomics And GATC Biotech Collaborate On Human Genome Sequencing Projects [Sustainability of Genomics -AJP]
(Nov 25) China: Nine nations in one? - [Global economic engine of Personalized Medicine]
(Nov 24) Singapore achieves breakthrough in study of 3D Whole Genome Mapping [Structurally recursive DNA]
(Nov 18) Knome Launches First Platform-Agnostic Human Genome Sequencing and Analysis Service for Researchers
(Nov 18) NSF Funds Petascale Algorithms for Genomic Relatedness Research
(Nov 18) How a medical revolution may transform Northern Virginia
(Nov 17) Finding new ways to grow in Silicon Valley
(Nov 12) Why Can't Chimps Speak? Key Differences In How Human And Chimp Versions Of FOXP2 Gene Work
(Nov 06) Back-up by Half a Century - Genome Regulation (1961) becomes parallel (2008)
(Nov 05) Getting Results Back
(Nov 04) Complete Genomics cracks the door open to JunkDNA analysis in mass-production
(Nov 02) TedMed Explores Future of Health Care
(Nov 02) What do they know that we don't know? - Celebs for Prevention
(Oct 28) 23andMe has 30,000 “active” genomes, launching “Relative Finder” soon
(Oct 23) ParaHaplo: A program package for haplotype-based whole-genome association study using parallel computing
(Oct 22) Personal Genome Sequencing Identifies Mendelian Mutations
(Oct 19) 'Personalized nutrition' is goal of Nutrigenomics initiative
(Oct 16) The rise of epigenomics - Methylated spirits
(Oct 15) Salk-Led Team Generates First Map of Human Methylome
(Oct 14) Personalized Medicine - New Leroy Hood startup raises $30 million in financing
(Oct 09) Science adviser to Prez. Obama; Eric Lander et al. say The DNA is Fractal
(Oct 08) Fractal globule architecture packs two meters of DNA into each human cell, avoids knots
(Oct 08) Knome Personal Genomics Service Expands to Include 93,000 Rare Mutations
(Oct 07) Beyond the Genome [Does Fractal Iterative Recursion make sense to you? - AJP]
(Oct 06) Beckman Prepares for Genomics Services Gold Rush
(Oct 06) IBM Looks to Make DNA Analysis Cheap and Easy
(Oct 06) IBM CEO also wants to resequence the health-care system
(Oct 05) I.B.M. Joins Pursuit of $1,000 Personal Genome
(Oct 01) $100 million in grants thrill Medical Center [Houston]
(Sep 30) Obama, Collins Laud $5B in NIH Stimulus Funds, Much for Genomics
(Sep 29) NIH Grants $45M for Genome Science Centers [Bye-bye Crick's "Central Dogma"- AJP]
(Sep 28) David Duncan Has a Prescription for American Health Care [Prevention]
(Sep 27) New Survey Finds Alzheimer's Disease a 'National Priority,' with Voter Support Across Party Lines
(Sep 27) Habits to Help Prevent Alzheimer's - How to lower your risk.
(Sep 24) Genetic testing company 23andme may offer GPs a chance to try service
(Sep 23) Point of Inflection at the Cold Spring Harbor Laboratory's "Personal Genomes" Conference
(Sep 22) Kuberre: Think Outside the Box [Kevin Davies]
(Sep 22) SMRT Software Braces for the Pacific Biosciences Tsunami [Kevin Davies]
(Sep 22) Brown & Oxford Nanopore [Kevin Davies]
(Sep 22) David Dooling: Gangbusters at the Genome Center [Kevin Davies]
(Sep 20) Venture Investor Uncovers DNA of Economy [Juan Enriquez]
(Sep 19) Collins, Venter among recipients of White House science and tech medals [the oncoming Nobel Prize for Sequencing of the Human DNA - AJP]
(Sep 18) Personal Genomes Get Very Personal
(Sep 17) Personal Genome Conference in Cold Spring Harbor, 14-17 September, 2009
(Sep 14) CSHL gears up for 2nd annual Personal Genomes meeting
(Sep 12) Apple sheds light on Illumina’s genome app
(Sep 09) REPEAT-Scientists unlock secrets of Irish potato famine genome
(Sep 09) Irish Researchers Sleuth Out Unique Human Genes Originating from Non-Coding DNA
(Sep 08) Complete Genomics Delivers 14 Human Genomes to Pfizer and Others
(Sep 08) Helicos Sells Multiple HeliScope Sequencers to RIKEN Institute
(Sep 04) 23andMe Co-Founder Linda Avey Leaves Personal Genetics Start-Up to Focus on Alzheimer’s Research
(Aug 31) Miami Institute for Human Genomics Receives $20M Gift for Research
(Aug 31) Illumina Announces Delivery of the First Genome Through Its Individual Genome Sequencing Service
(Aug 30) Are we ready for personal genomics?
(Aug 29) Your Chance to get Seriously Wealthy from the Next Wave of “Computational Biology”
(Aug 27) CSHL scientists develop new method to detect copy number variants using DNA sequencing technologies
(Aug 24) Complete Genomics Nets $45M in Financing [Caskey on Board]
(Aug 21) Study: Chinese herbs could treat heart disease
(Aug 20) The Single Life - Stephen Quake Q&A
(Aug 19) The Impact of the Genome
(Aug 19) Collins sets out his vision for the NIH
(Aug 18) Knome - Offering consumers whole-genome sequencing--and software to interpret it
(Aug 16) Harris: Silicon Valley may be getting its mojo back
(Aug 14) The DNA discount
(Aug 13) Health 2.0 could shock the system
(Aug 11) Nancy Brinker: Finding a national cancer strategy - and an Open Letter
(Aug 08) "To Fight Cancer, Know the Enemy" “The War on Cancer is a Bunch of Sh@t.” [Both by Nobelist James Watson]
(Aug 08) Not 'Genomic Junk' After All: LincRNAs Have Global Role In Genome Regulation
(Aug 07) Senate confirms new NIH director [Francis Collins]
(Aug 03) Your Genome - There is an App for that
(July 31) Roche and Google.org start initiative for early discovery of new diseases
(July 30) How To Make A Fortune From The Personalized Medicine Revolution
(July 29) The 15-Minute Genome: Faster, Cheaper Genome Sequencing On The Way
(July 29) Programming cells by multiplex genome engineering and accelerated evolution
(July 26) Double chromosomes equals more plant power
(July 22) Human Genome doubles quarterly revenue
(July 18) SNPs in Non-Cancerous Tissue May Differ From Those In Blood, Study Finds
(July 17) Navigenics Cuts Price of Screening Service 60 Percent
(July 14) Exxon Sinks $600M Into Algae-Based Biofuels in Major Strategy Shift
(July 09) Complete Genomics Raises $14.5M as Part of Series D Financing
(July 08) Collins nominated to head NIH
(July 08) Viva la Revolución de … 23andMe
(July 07) [Excel] VC Investor Launches $125M Life Sciences Fund
(July 04) How Biotech Will Reshape The Global Economy
(June 30) Biology's Odd Couple
(June 29) Beyond the Book of Life
(June 20) See how shopping with your PDA and personal genome can advise you on best choices for you
(June 19) HolGenTech Demonstrates first-ever PDA Combination with High Performance Genome Computing at Boston Consumer Genetics Conference
(June 18) Google invests another $2.6 million into 23andMe, a biotech startup founded by Mrs. Sergey Brin
(June 18) Team homes in on genetic causes of neuroblastoma [the era of SNiP-s is OVER! - AJP]
(June 16) Personalized medicine: An interview with Esther Dyson
(June 12) Consumer Genetics Show, Boston, June 9-11, 2009 - As DNA sequencing melts from $50k to zero $, Genome Based Economy opens wide to PDA-s.
(June 12) MicroRNA Replacement Therapy May Stop Cancer In Its Tracks
(June 10) First Ever: Final Program of Consumer Genetics Conference, Boston 9-11 June, 2009
(June 10) Life After GWAS: For Some Researchers, Focus Shifts to Rare Variants, CNVs [we are in the "beyond SNPs era" - AJP]
(June 06) 'Junk' DNA Proves To Be Highly Valuable [Think FractoGene - AJP]
(June 01) First Direct-to-Customers Genomics Conference - Personal Genome Computers for our Genome Based Economy
(June 01) Will Consumers Sustain Direct-to-Customer Genomics?
(June 01) The Price of Silent Mutations - [Think "The Principle of Recursive Genome Function"]
(June 01) Genome analysis: the global bottleneck
(May 30) [Beyond SNP-s] - Illumina introduces Infinium bead-chips
(May 28) FORBES: The Next Big Thing: Personalized Medicine
(May 27) Genetics-based products stir concerns
(May 23) Francis Collins, Gene-Mapper, Said to Be Choice for U.S. Research Head [of NIH]
(May 21) The Vagaries of Genome [Copy Number] Variation: Do You Copy?
(May 21) 'Junk' DNA Has Important Role, Researchers Find - [any Moron left who would not know it by now?]
(May 19) Quantum dots light up individual DNA binding proteins
(May 18) "Genome Computing for the Masses" - Knome rolls out genome sequencing for $1 per gene
(May 18) MMRF and the Broad Institute to perform whole genome sequencing of multiple myeloma samples
(May 15) Personal Genome Project opens doors to individualized health care and more
(May 14) Thermodynamics and Cancer
(May 13) International team cracks mammalian gene control code
(May 11) Do-it-yourself genetic sleuthing
(May 06) America needs a Human Genome Project for personalized health care
(May 06) A Great Big "Nein!" to DTC Genetic Testing
(May 05) Moving Target [of "Getting beyond SNiP-s"]: CNVs in Disease
(May 05) NABsys Secures $4M First Round to Develop Electronic DNA Sequencing
(May 05) Collins answers the Big Questions on science and faith
(May 04) George Church's "Full DNA Sequencing Experience" sold for $68,000
(Apr 29) Cures Acceleration Network - Hat in Hand, Specter Proposes New Agency for Cure
(Apr 27) California Outpatient Centers Offering 23andMe Service
(Apr 25) Is IT ready for a pandemic after mergers, layoffs?
(Apr 24) CDC confirms 7 cases of swine flu in humans
(Apr 23) Mapping a Human Genome, via an eBay Auction
(Apr 21) Helmsley grant launches Salk Center for Nutritional Genetics
(Apr 17) Getting personal -The promise of cheap genome sequencing (The Economist)
(Apr 16) Genome-based Personalization 2.0 (Dance and Breath Genome Regulation - in 58,000 views, Think Recursive Genome Function...)
(Apr 15) Researchers create novel nanotechnique to sequence human genome
(Apr 14) Gene Network Sciences Announces Brain Cancer Collaboration With M.D. Anderson
(Apr 10) HoloGenomics: your Genome is NOT your Destiny
(Apr 09) Era of personalised medicine
(Apr 08) Lordy Me: Navigenics names Jonathan Lord New CEO
(Apr 07) Genes Take A Back Seat
(Apr 05) Geneticist wants to open research center in N. Va [Personalized Medicine "Ignite Institute" by Dietrich Stephan]
(Apr 05) Doors open at Genomic Medicine Institute [Genome based personalization of Medicine - AJP]
(Apr 02) Beyond the SNP
(Apr 01) Even a lousy mitochondrial DNA is highly fragmented [fractal - AJP]
(Apr 01) Google launches venture capital subsidiary
(Mar 31) 23andMe targeting pregnant women using "mommy bloggers"
(Mar 30) Researchers Identify Genetic Variations That May Increase Risk of Breast Cancer
(Mar 26) Dangled DNA reads genetic code
(Mar 25) DOE Joint Genome Institute going for "Energy & Environment" - NIH Personalized Medicine in the Global Genome Based Economy - "Regulatin' Genes"
(Mar 25) Real Time Genomics launches with $3M for data analysis software
(Mar 22) Officials announce $1 billion Elk Run project [Burrill is to set up shop with Mayo at Rochester - AJP]
(Mar 21) Complete Genomics to Seek CLIA Certification, Offer Sequencing to DTC Genomics Firms
(Mar 20) Synthetic Biology: Transforming Cells Into Microscopic Biological Computers
(Mar 19) Operon e-coli - towards FractOperon theory of regulating recursive genome function
(Mar 18) Toward Synthetic Life: Scientists Create Ribosomes -- Cell Protein Machinery
(Mar 12) 23andMe Launches Parkinson's Disease Genetics Initiative [Save your health for the price of a dinner; $25 - AJP]
(Mar 11) Stem cell policy may aid [California] state research efforts
(Mar 11) Scientists Advance Stem Cell Research [Texas]
(Mar 09) William Haseltine explains the medical, energy, and industrial implications of the genomic revolution
(Mar 05) Inheriting changes in gene regulation [Time for HoloGenomics]
(Mar 05) DNA algorithm predicts human behaviour [Genome Based Economy taking off in New Zealand...]
(Mar 04) Safeway gives $685,000 to TGen for breast cancer research
(Feb 23) From One Genome, Many Types of Cells. But How? ["The Principle of Recursive Genome Function" is a published theory, "How"! - AJP]
(Feb 16) Fractals Reveal Degradation of Behavioral Control with Aging and in Alzheimer’s Disease
(Feb 16) Physicists use fractals to help Parkinson's sufferers
(Feb 15) Ray Kurzweil: “The human brain is simpler than it appears…it’s basically just a recursive probabilistic fractal.”
(Feb 15) New Clues to How Primates Evolved
(Feb 14) Genome Based Economy is Here - within weeks, not years...
(Feb 13) Complete Genomics Releases Proof-of-Concept Data for Its Sequencing Technology
(Feb 13) Eric Lander: Academia Converges with Commercial Industry, Government Health Care
(Feb 13) Procter & Gamble - Private Commercial Industry emerging as a pillar of Genome Based Economy
(Feb 13) There is money - Syndicated risk diversification
(Feb 12) Cure for the common cold? Not yet, but possible
(Feb 04) Massachusetts Governor D. Patrick says the "Genome Based Economy" is a "Wonderful Idea"
(Feb 02) Genome Based Economy - on YouTube
(Feb 01) Chromatin signature reveals over a thousand highly conserved large non-coding RNAs in mammals
(Jan 31) You've Got Email -- A Human Genome
(Jan 30) Francis Collins Addresses State of Personalized Medicine
(Jan 27) Churchill Club: Personal Genome Computing Heralds the Genome-Based-Economy
(Jan 26) Digital Frontlines; Extending Human Life - And Data [Forbes]
(Jan 21) Is DNA Mapping in Your Entrepreneurial DNA?
(Jan 15) Churchill Club "Genome Computing" Event Expanded [January 22, Palo Alto]
(Jan 12) Race for Inexpensive Genomes Goes Electric [Illumina Nanosequencing -"Magic Three"]
(Jan 08) Churchill Club Focuses Event on Genome Computing and the Economy: New Prosperity for Silicon Valley?
(Jan 08) Despite Economic Climate, VC Cash is Available for LS Tool Firms, Say Experts
(Jan 03) 2009 Second Generation Sequencing Stayed in Bioinformatics' Driver Seat in 2008 [But will not in 2009 - AJP]
(Jan 02) 2009 NIST Calls Personalized Medicine a 'Critical National Need,' Seeks Advice for New Funding Programs
(Dec 31, 2008) The Year of Bespoke Medicine
(Dec 26, 2008) The Chaos Inside a Cancer Cell [Think FractoGene - AJP]
(Dec 26) Lander and Varmus on Presidents' Council of Advisors on Science and Technology [suggestions - AJP]
(Dec 26) I am ‘reading’ the human genome, says Sydney Brenner
(Dec 25) Happy Holidays
(Dec 18) Steve Jurvetson on focusing on cleantech during the economic storm
(Dec 12) University of Houston Biotech Spin-off Sold for $20 Million
(Dec 11) National Physician Group MDVIP Partners with Navigenics to Provide Personal Genetic Tests for Preventive Medicine Practice
(Dec 07) Navigenics' and 23andMe's perfect gift for the Holidays. In hard times, a price war to save your life and your loved ones'
(Dec 04) HHS: Personalized medicine will rely on IT [Is the Government IT ready for the Dreaded DNA Data Deluge ? -AJP]
(Nov 30) 94 per cent human genes generate multiple product [Think FractoGene - AJP]
(Nov 29) Hierarchical structure of higher order repeats [The fractality of DNA is now "commonly accepted" - AJP]
(Nov 28) Sergey Brin and Larry Page as "Top Influential People" [Venture Philanthropy - AJP]
(Nov 27) Can DNA Tests Help You Change Your Life? [Genomic testing is an over $1 Bn business '08 -AJP]
(Nov 26) Valley Girls: Linda Avey and Anne Wojcicki [23andMe is "The invention of the year" by Time -AJP]
(Nov 23) Pacific Biosciences Raises $20M in New Funding [rounding to $193 M]
(Nov 21) The genomic frontier: Personalized medicine in action
(Nov 18) 23andMe Poem: Enough to make you spit?
(Nov 17) Curing the Disease, Not the Symptom
(Nov 13) BioTechniques - VIDEO OF THE WEEK: Is IT ready for the DNA data deluge?
(Nov 12) Taking Science Personally [Andy Grove and Sergey Brin know about Informatics. Genomics became Informatics -AJP]
(Nov 11) Now: The Rest of the Genome
(Nov 09) Where to look for regulatory variants [and what they are - AJP]
(Nov 08) Stirring Up the Junk DNA Realm
(Nov 07) Washington University scientists first to sequence genome of cancer patient [the street is lit up, but what to search for?]
(Nov 04) 'Junk' DNA Proves Functional; Helps Explain Human Differences From Other Species
(Nov 01) Is IT ready for the DNA Data Deluge?  - Pellionisz' "Tech Talk" at Google on YouTube
(Oct 26) Waiting for the Hurricane to Hit- Notes on Life in Silicon Valley
(Oct 24) Dallas-Fort Worth stepping up medical technology firms
(Oct 19) Gene regulation - it's all in the DNA
(Oct 18) Venture Capital Investment holds in $7 Billion range in Q3 of 2008 despite turmoil in the Financial Markets
(Oct 17) Ultraconserved Sequences: The Core Code of DNA? ["Lucid heresy" of FractoGene: they are Fractal Templates]
(Oct 17) Study finds value in 'junk' DNA [ALU-s were pseudogenes in PseudoGenetics...]
(Oct 16) Scientists discover 'protector of the genome' [protein protects DNA; "never say never"]
(Oct 15) Navigenics to add gene sequencing to its personal genomic service [HoloGenomics' Direct-to-Customers Business Model is Complete]
(Oct 15) Gene Express, AbaStar, Castle Biosci, U MD Anderson CC, NCI, ISB, Complete Genomics, Johns Hopkins, Rosetta Genomics
(Oct 13) DeCode Shares Surge 40 Percent on Skin Cancer SNP Findings [PostModern Genomics is an Applied Science]
(Oct 13) Francis Collins: “Sen. Obama has provided a detailed plan for his science, technology and innovation agenda”
(Oct 07) HoloGenomics (Principle of Recursive Genome Function) embraced within 3 months by peer-reviewed science journal
(Oct 06) Intel’s Global Research Head, Andrew Chien, Sizes Up the State of West Coast Innovation
(Oct 06) Dawn of Low-Price Mapping Could Broaden DNA Uses [The "Dreaded DNA Data-Deluge" is here]
(Oct 01) Molecular Diagnostics Take Center Stage
(Sep 30) First Francis, Now Elias. NIH, We Hardly Recognize You [The EpiNIH - AJP]
(Sep 25) PacBio Eyes Clinical Diagnostic Market for Third-Gen Sequencing Platform
(Sep 20) How Much Can You Learn From a Home DNA Test?
(Sep 19) Modeling Recursive RNA Interference [Author: Marshall WF, Editor John S. Mattick]
(Sep 18) LRRK2 [Sergey Brin, Google Founder has elevated risk of Parkinson's]
(Sep 04) Billionaire Broad's $600 Million Gene Bet [Venture Philanthropy]
(Sep 11) First Arab genome sequenced
(Sep 03) Bio-Revolution to Emerge Amid Aging Society [Korean Male and Female Genome - Securing $ 52.9 Billion in Pharma]
(Aug 20) California Licenses 2 Companies to Offer Gene Services

Top scientists share their future predictions

The Sunday Times
December 27, 2009

Nothing much is going to happen in the next 10 years. [Thus far, the predictions are similar to my subdued "lowering expectations" - see below, AJP]. Of course, that’s not counting the diesel-excreting bacteria, the sequencing of your entire genome for $1,000, massive banks of frozen human eggs, space tourism, the identification of dark matter, widespread sterilisation of young adults, telepathy, supercomputer models of our brains, the discovery of life’s origins, maybe the disappearance of Bangladesh and certainly the loss of 247m acres of tropical forest.

As I said, just another decade really.

These days, “just another decade” always means 10 years of future shock. Science, technology and the contemporary mania for change combine to stun the imagination. It is the way we live now, in a condition of permanent technological revolution.

In 2000 — remember? — the internet all but died when the dotcom stock market bubble burst [Sure ... tiny Google just became Yahoo's search engine, with Yahoo refusing to buy Google for $1 M - AJP]. You could stand on top of the World Trade Center. And mobile phones were just, er, phones. Today, you still get up and eat breakfast, but, outside, it’s a different world.

Next? Well, as Woody Allen said, if you want to make God laugh, tell him your plans for the future. But, taking a punt, I reckon the brain is the one to watch. Science has been zeroing in on the 2lb 14oz of grey and white custard-like stuff between your ears for some time now. It’s not been easy. In spite of the evidence of The X Factor, the human brain is very complex custard indeed. But some people are getting very excited [Kurzweil and I tend to believe that "the brain is a probabilistic fractal"; looking complex, but "complexity is in the eye of the bewildered" - AJP].

“By 2020, genetics and brain simulation will be giving us personalised prescriptions for marriage, lifestyle and healthcare.” This is Henry Markram, director of the Blue Brain project in Switzerland, an attempt to reverse engineer the brain by building one from the ground up inside a supercomputer.

“We won’t need a psychologist to tell us why we feel unhappy. All we’ll need to do is log into a simulation of our own brain, navigate around in this virtual copy and find out the origins of our quirks ... Computers will look at a virtual copy of our brains and work out exactly what we need to stop our headaches, quiet the voices talking in our heads and climb out of the valley of depression to a world of colour and beauty.”

Gosh. But isn’t there still that pesky problem of other people and their brains? It’s their quirks that tend to get in the way of my happiness. No problem, we can climb inside each other’s brains.

“The big thing for me is being able to link two brains together for communication.” This is Kevin Warwick, a cybernetics scientist at Reading University. “This could have great implications for teaching. Sometimes, no matter how you explain something, it takes forever for the penny to drop. [The one who laughs last is the one who did not get it ...- AJP]...

James Watson, co-discoverer of the structure of the DNA molecule, thinks gene sequencing will be the key to unlock the custard and even stir it. “Disorders like Alzheimer’s disease, epilepsy, Parkinson’s disease, schizophrenia, bipolar disease, unipolar depression, obsessive-compulsive disease, attention deficit disorder and autism will finally have their genetic guts open for all to see.”

Some of the most impenetrable and harrowing mental illnesses known to man will, Watson believes, be understandable and maybe even curable.

“The exact location and biological function of the DNA variants causing many depressive diseases and related disorders cannot be revealed too soon,” he says. ...

“There will be a breakthrough. My hunch is that research on motor neurone disease will provide crucial clues and by 2020 we will know why cells die in some, perhaps many, of these diseases. It could be another decade before we see the impact on health, but by 2020, we must be on the way to this ultimate goal of modern medical science,” says Blakemore. ...

Chris Rapley, director of the Science Museum and professor of climate science at University College London, says we cannot cut emissions fast enough, so we need to suck carbon dioxide out of the atmosphere, perhaps using artificial trees that eat it.

“If it can be achieved, it will allow us to exploit the substantial reserves of oil, gas and coal to sustain society through the inevitably long and hard transition to a low-carbon world, without causing dangerous climate change. If ever there were a technical project that humanity should invest in, this is it.”

Craig Venter, the genetic maverick who first sequenced the human genome, may have one solution. He’s working on making bacteria that excrete diesel, leaving the Saudis wondering what to do with all that oil. “The debate on fuels and energy is blown out of proportion. We are very close to solving the energy needs in a way that will make our children enjoy cheaper and more efficient energy than what we see today,” he says....

It’s just another decade of future shock. So it goes. Of course, the real shock will be what actually happens, which is never the same as what people say will happen. But, anyway, the shocking Noughties are over, happy new ... good grief, we haven’t even predicted a name for it! [This one is easy: "The Decade of Genome Based Economy" - AJP]

Project on genetics of cancer expands

There’s already progress in the study of tumor DNA

Dec. 27, 2009

The tiny strands of DNA coiled within human cells probably don't contain all the answers needed to end cancer as a going concern.

But this genetic information could bring us close to that goal, say scientists participating in a multibillion-dollar project to exhaustively study the genetics of cancer.

The Cancer Genome Atlas program kicked into high gear this year, transforming from a pilot project into a five-year plan to ascertain the genetics of 20,000 tumor samples from 20 different cancers.

“The amount and quality of the data we've gotten is already changing how we think about cancer,” said Dr. Gordon Mills, chairman of the Department of Systems Biology at the University of Texas M.D. Anderson Cancer Center.

After completing the 13-year Human Genome Project in 2003, which sequenced the DNA from a single person, scientists and physicians have sought to use that information to improve human health.

Cancer is a natural fit because it is primarily a disease of DNA, occurring when glitches in our genetic code cause good cells to turn cancerous.

Since the pilot project began in 2006 to study the genetics of various tumors, Baylor College of Medicine has received more than $40 million to complete nearly one-quarter of the work to sequence the DNA of tumor cells.

“Everything is going to change,” said Richard Gibbs, director of the Human Genome Sequencing Center at Baylor College of Medicine. “There will be new drugs, new combinations of drugs, personal matching and early detection. I think it's dramatic.”

‘Personalized medicine'

There has already been some success. Last year scientists released the results of their study of the deadly brain cancer glioblastoma, finding a possible mechanism by which glioblastoma cells become resistant to the standard treatment.

The short-term promise of the cancer genome project is the matching of specific cancers with the right cocktail of drugs. Today a cancer patient might go through two or three rounds of treatment before finding a regimen that works for a particular form of cancer.

Reaching the age of “personalized medicine” in cancer will require scientists to first get a handle on the genetics of each tumor type — the point of the Cancer Genome Atlas project — and then begin to genetically test cancer patients and finally to optimize therapies for each tumor type.

Mills said the last two steps are coming, noting that M.D. Anderson has almost completed the initial phase of a project dubbed “T9” to genetically test the tumors of 10,000 patients and identify the best treatments for them.

Within a few years, it seems likely that cancer patients will routinely have the genetics of their cancer tested. Then will come the need to optimize therapies for each tumor type, but the completion of the cancer genome project is an essential step for that to happen, Mills said.

“What we're going to get from the Cancer Genome Atlas is all the data that is necessary to determine what trials we need to do to change patient management,” he said.

Torrent of data expected [See YouTube - AJP]

One of the biggest challenges of the cancer genome project is how to manage the torrent of data it will produce.

“Torrent is an understatement,” said Dr. John Weinstein, chairman of M.D. Anderson's Department of Bioinformatics and Computational Biology. “I refer to it as a tsunami.”

A single strand of DNA inside a human cell has 3 billion “letters” or chemicals that make up about 20,000 genes that carry hereditary information and signal cells to make proteins or take other actions.

If scientists were to print the 3 billion DNA letters in a single strand — its As, Ts, Cs and Gs — they could fill a stack of books taller than the world's tallest building, Weinstein said. And he and other scientists are dealing with 20,000 such “stacks.”
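Weinstein's book-stack comparison can be sanity-checked with a quick back-of-envelope calculation. The figures below (characters per page, pages per book, book thickness) are illustrative assumptions of mine, not from the article, and the resulting height depends entirely on the print density assumed:

```python
GENOME_LETTERS = 3_000_000_000   # A, T, C, G characters in one strand

# Illustrative print-density assumptions (not from the article):
CHARS_PER_PAGE = 3_000           # dense, paperback-style print
PAGES_PER_BOOK = 300
BOOK_THICKNESS_M = 0.03          # 3 cm per book

pages = GENOME_LETTERS / CHARS_PER_PAGE          # 1,000,000 pages
books = pages / PAGES_PER_BOOK                   # ~3,333 books
stack_height_m = books * BOOK_THICKNESS_M        # ~100 m

print(f"{books:,.0f} books, stack ~{stack_height_m:,.0f} m high")
```

With these dense-print assumptions a single strand fills a stack on the order of 100 m; reproducing the "taller than the world's tallest building" image requires larger-print assumptions. Either way, multiplied by 20,000 tumor samples, the order of magnitude of the data problem is clear.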

Looking for patterns

As part of the cancer genome program, Weinstein has received about $1.5 million this year, with the promise of more funding for four more years, to help scientists and physicians make sense of the data being produced at Baylor and elsewhere.

And they don't just want to catalog it, they want to write software and design algorithms so others can dive into the data and find meaningful patterns and information useful to defeating cancer.

“For some of the diseases I would be surprised if it didn't have an impact within the next five years,” Weinstein said.

[There was much discussion of this challenge at the Cold Spring Harbor "Personal Genomes" conference this September. I brought to the participants' attention that a similar mortal challenge was averted once already, several decades ago. In the Cold War, Soviet submarines equipped with nuclear weapons capable of wiping out civilization were successfully tracked by "pattern recognition" of their - thankfully, rather loud - sonar signatures. An entire line of technology was developed for that challenge: the so-called "Neural Net" algorithmic approach, which is eminently software-enabling. As a pioneer of Neural Net technology - awarded the first international prize in the field, the Alexander von Humboldt Prize for Senior Distinguished American Scientists (Germany, 1990) - I indicated without hesitation at CSH that the proven technology of Neural Nets is available to be deployed for pattern recognition in Cancer Genomics. As Founding Editor of several Neural Net journals - presently Section Editor for Neural Nets of The Cerebellum - one has to be acutely aware that, for example, a single neuron of the Cerebellum, the Purkinje cell, arises as a pattern of at least 90 genes (see current issue) - Pellionisz_at_JunkDNA.com]
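To make the "software-enabling" point concrete, here is a minimal, self-contained sketch of the simplest Neural Net building block - a single perceptron - learning to separate two classes of toy binary "signature" patterns. The data and feature layout are invented purely for illustration; real sonar or genomic classifiers are vastly larger, multi-layer networks:

```python
def train_perceptron(samples, labels, epochs=50, lr=0.1):
    """Classic perceptron learning rule on 0/1-labelled feature vectors."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                       # 0 when correct, +/-1 otherwise
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def classify(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy "signatures": class 1 has energy in the first two channels,
# class 0 in the last two (entirely made-up illustrative data).
samples = [[1, 1, 0, 0], [1, 0, 0, 0], [0, 1, 0, 0],
           [0, 0, 1, 1], [0, 0, 1, 0], [0, 0, 0, 1]]
labels = [1, 1, 1, 0, 0, 0]

w, b = train_perceptron(samples, labels)
print([classify(w, b, x) for x in samples])  # matches labels after training
```

A single perceptron can only draw linear boundaries; practical pattern recognizers - from Cold War sonar classifiers to any genomic deployment - stack many such units into multi-layer networks, but the learning principle is the same.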

The 2010 Paradigm-Shift in DTC Genome Testing - Expect Less thus Get More

The "Decade of DNA Sequencing" is about to come to a close, and the "Decade of Understanding (Holo)Genome Regulation" looms large. Time seems right for a profound re-assessment of expectations and delivery.

The first decade of 21st Century DTC Genome Testing ends on a note that delivery falls short of expectations, as is evident from the two exemplary articles below: a re-interpretation of SNP results yielded by the leading, potentially unstoppable but presently perhaps limping US "Direct-To-Customers" genome testing company (23andMe), cross-examined against results from the "first ever" DTC genome testing company, Kari Stefansson's "DeCode". His company had just declared bankruptcy for reasons largely beyond its control - not only because a brutal recession destabilized "boutique industries", but because Iceland itself went bankrupt in the global financial crisis.

My thesis, slightly differing from the excellent analysis by Daniel MacArthur (see below), is that the temporary setback and re-grouping result from the public expecting more from 21st Century Genomics at large than any real delivery possible at the present early stage.

"Too much expectation" was caused by two main reasons:

(1) The obsolete common attitude needs time to change into the new understanding that "Your Genome Is NOT Your Destiny". Once access to the DNA opened up, many mistakenly believed - before Genomics and EpiGenomics integrated into HoloGenomics - that the A,C,T,G string of 6.2 Bn letters returns an exact and perfect understanding of our genomic future, once and for all. Not so. Even the sequence is liable to change (e.g. due to mutations suffered through air travel, CT scanning, or exposure to mutagenic environments and chemicals). In a larger sense, e.g. the methylation pattern of the genome constantly evolves with growth (age) and with the onset of expression governed by genomic and epigenomic factors. While this should be taken as good news (indeed, the "Circle of Hope": the HoloGenome is not a thermodynamically closed system, and thus one can effectively interact with the given hereditary material), it also introduces what I call a new kind of genomic "Principle of Uncertainty": even full, exact knowledge of a snapshot of even the full DNA will not yield a deterministic prediction for anybody's life. Those who thought that the DNA was omnipotent to forecast life (and, unlike traditional medicine, would instantly provide exact diagnosis, therapy and cure for all hereditary diseases) are therefore disappointed - but those with glitches in their DNA can actually be hopeful.

(2) There was too much initial pressure on Genomics to provide ultimate (and instantaneous :-) "Personalization", e.g. in "Medicine", because of the (mis)perceived "exactness" of digital genomic information. Not so. The public must accept that the most basic attitude toward medicine - always seeking a second opinion, and occasionally not getting full solutions - also applies in every sector of the 21st Century Genome-Based Economy, including branches of Personalized Medicine. Traditional medicine does not find it strange at all that one doctor may interpret lab-test results in a different manner than another doctor's "second opinion". Also, in traditional medicine it is accepted that doctors may be almost clueless in providing diagnosis, therapy and cure for certain diseases (such as cures for cancers, Alzheimer's, or Parkinson's). Further, doctors' conclusions may be tentative recommendations rather than absolutely guaranteed solutions ("take two aspirin and call me in the morning"). Also, "medications" are rather sharply divided into "prescription drugs", available per order of qualified Medical Doctors and (supervised) Physician Assistants (though with online availability even that is rapidly changing), and "over-the-counter" medications; for the latter, and for the myriad food additives and supplements, consumers generally rely on "recommendations" from laypeople - parents, teachers, friends, neighbors, hearsay - and, in a rapidly increasing manner, on automated internet recommendations.
The public may best adjust its expectations of the 21st Century Genome-Based Economy by considering implicit genomic proclivities (known allergies, lactose and/or gluten intolerance etc., even without taking any genome test at all), and initial fractional interrogation of SNPs (known, or vaguely suspected, to be harmful alone or in a pattern with often thousands of other SNPs, CNVs and other structural variants of DNA), as extensions by means of yet another new emerging technology. Some of these technologies are already available to cope with "the dreaded DNA Data Deluge" (see YouTube), at least at a demo level, though naturally needing "second opinions" - while, of course, never giving up on the very real promise that hologenomics based on full (re)-sequencing of human DNA will provide, perhaps much sooner than most think, the kinds of diagnosis, therapy and cure for heretofore unsolved hereditary (and sporadic) hologenomic diseases, such as cancers. The public (in the USA, especially through their Congressional Representatives) must understand, however, that private business will support e.g. "internet recommendations" extended to "genome-based recommendations" - but basic research toward a mathematical understanding of hologenome regulation will remain as distant as government funds are merely trickling (rather than pouring) toward resolving a colossal challenge. (Development of quantum theory was not "Big Science" by any government - but the "Manhattan Project" definitely was, and a "New War on Cancer", based on an algorithmic and thus software-enabling understanding of hologenome misregulation, may also be, the public willing.) Till then, the public is best advised always to use a "second opinion" for any kind of genome-testing results, and to accept the already well-established business models that most internet companies use with very lucrative success - extending "earlier-activities-based" advertisements to "health- and genome-based product placements", mostly as "advice" (with no guarantee), e.g. in the "over-the-counter" segment of "Personalized Medicine".

It is expected that the "Personalized Medicine World Conference 2010" on January 19-20, 2010 in Silicon Valley will validate this projected trend.

[Pellionisz, holgentech_at_gmail.com - December 27, 2009]

deCODE Discovers a Major Risk Factor for Type 2 Diabetes Dependent on Parent of Origin

A Single SNP That Confers Increased Risk if Inherited From the Father, but is Protective if Inherited From the Mother

REYKJAVIK, Iceland, December 16 /PRNewswire-FirstCall/ -- Scientists at deCODE genetics, Inc. (Nasdaq:DCGN) publish in the journal Nature the discovery of a version of a common single-letter variant in the sequence of the human genome (SNP) with a major impact on susceptibility to type 2 diabetes (T2D). The impact of the T2D variant is not only large, but unusual: if an individual inherits it from their father, the variant increases risk of T2D by more than 30% compared to those who inherit the non T2D-linked version; if inherited maternally, the variant lowers risk by more than 10% compared to the non T2D-linked version. Nearly one quarter of those studied have the highest risk combination of the versions of this SNP, putting them at a roughly 50% greater lifetime risk of T2D than the quarter with the protective combination. This is the second largest effect of any genetic variant for T2D apart from SNPs in TCF7L2, discovered by deCODE in 2006.
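The "roughly 50% greater lifetime risk" between the two extreme groups can be checked against the per-allele figures with back-of-envelope arithmetic, assuming a simple multiplicative model of relative risk (my assumption for illustration, not deCODE's actual statistical model):

```python
# Back-of-envelope check of the press release's risk figures
# (illustrative only; the multiplicative model is an assumption).
paternal_risk = 1.30   # >30% increased risk if inherited from the father
maternal_risk = 0.90   # >10% decreased risk if inherited from the mother

# highest-risk combination (risk allele from father) versus the
# protective combination (risk allele from mother):
ratio = paternal_risk / maternal_risk
print(f"relative risk, highest vs protective group: {ratio:.2f}")
```

The ratio comes out near 1.44, consistent with the "roughly 50% greater" figure quoted in the release.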

"We could make this discovery because we are in the unique position of being able to distinguish what is inherited from the mother from what is inherited from the father. This we can do because of the large amount of data we have assembled on the Icelandic population. These data empower us in many ways. For example, using our ability to impute sequence data, we can multiply by 100 times the amount of information generated by sequencing one individual. We can use these tools to discover and integrate rarer variants into our tests and scans, identify drug targets for licensing, and put our know-how at the disposal of our service customers. We believe that this is an important advantage for conducting large-scale whole sequence studies over the next couple of years," said Kari Stefansson, CEO of deCODE.

Because the risk is inherited and varies in this way, the SNP, located on chromosome 11, had never been linked to T2D even though it had been genotyped in large, traditional genome-wide association studies (GWAS). These do not distinguish between paternally and maternally inherited SNPs. But deCODE can track the parental origin of virtually any SNP in the genome of the tens of thousands of Icelandic participants in the company's gene discovery work. In this study, deCODE used its population-wide genealogy database and proprietary statistical tools to determine the parent of origin of a number of SNPs in some 40,000 Icelandic participants in the company's gene discovery programs. Some of these SNPs had previously been associated with different diseases and are located near "imprinted" genes - genes in which only the maternally or paternally inherited copy is "switched on" to encode a protein. Five of these - one each for breast cancer and skin cancer, and three for T2D - showed that the parental origin of the variants affects the risk they confer.

deCODEme's embarrassing data processing glitches - lessons for companies and customers

[From the Genetic Future blog by Daniel MacArthur - AJP]

Late last week I noted an intriguing offer by personal genomics company deCODEme: customers of rival genome scan provider 23andMe can now upload and analyse their 23andMe data through the deCODEme pipeline.

On the face of it that's a fairly surprising offer. As I noted in my previous post, interpretation is what generates the real value for personal genomics companies, so giving it away for free seems a bizarre approach to business - especially for a company living on the edge of a financial precipice. However, I also argued that the intention here is likely to be to generate an opportunity for deCODEme to display its interpretation skills to otherwise entrenched 23andMe customers, in preparation for the upcoming battle for interpretation supremacy in the whole-genome sequencing era.

In digging back through my archives I realised that this isn't actually the first time that this strategy has been employed in the personal genomics game: back in June this year, 23andMe offered its interpretation service free to customers of Illumina's freshly-launched $48,000 whole genome sequencing service (the original source is this subscription-only article in industry publication In Sequence).

It's nonetheless the first time that a personal genomics company has opened itself up to genome scan customers, and it's certainly a disruptive (and potentially game-changing) move. AccessDNA's Jordanna Joaquina even goes as far as to speculate that this may herald a shift in deCODEme's strategy towards pure data interpretation. I personally think this is unlikely for deCODEme itself, but I wouldn't be shocked to see a proliferation of multi-platform interpretation services over the next 12 months (Knome's recently announced discovery service is a step in that direction).

But creating an interpretation service that can deal seamlessly with data provided in a multitude of formats from different providers can be a challenging task, as deCODEme learnt in a particularly embarrassing manner this week:


deCODEme opens its doors to free data upload from 23andMe customers
December 17, 2009

A curious tweet this morning from personal genomics company deCODEme, barely a few weeks after the declaration of formal bankruptcy of parent company deCODE Genetics:

@decodegenetics: Migrate to deCODE this winter! Upload your genetic data for free. http://www.decodeme.com/data-upload

Here's a description of the service from the URL in the tweet:


deCODEme wants to give even more people the chance to enjoy the best in personal genomics. Our bioinformatics team has just launched a simple system to enable existing customers of 23andMe™ to migrate their data into deCODEme and to join our growing community. If you already have a 23andMe genetic scan, just click on the button below to begin the upload process and start to view your genome using deCODEme's many advanced features.

This service is available to existing 23andMe customers and for a limited time only.

Enjoy and spread the word!


Basically, the company seems to be providing its interpretation service free to customers who have already had their genome scanned by competitor 23andMe.

That's a bold and initially rather puzzling move. Those of you who've been following the personal genomics industry will know that the value of genome scans is not in the actual generation of the data (this is a straightforward procedure), but in the breadth and quality of the interpretation service.

Converting a series of half a million or so genetic data points into predictions of ancestry and disease risk is a non-trivial exercise, and requires the creation and constant, painstaking maintenance of a database of genetic associations. Parsing the literature to extract the required information can be a frustrating exercise, made even more difficult by the sheer rate that new associations are being generated.

Why, then, would deCODEme choose to give away its hard-won interpretation to customers of its most successful competitor?


November 24, 2009
Posted by Daniel MacArthur at 8:45 AM

A short but glorious rant

Misha Angrist has a very brief but eloquent rant in response to the genomics nay-sayers in this Nature News piece on the bankruptcy of deCODE Genetics.

Here's a taste:

I agree: GWAS is of limited value and this probably contributed to deCODE's demise. But whatever deCODE's fate, if whole human genomes can be sequenced for < $2000, isn't it about time we stopped kicking GWAS's ever-stiffening corpse? Second, just because something is not a medical necessity, does it follow that it is worthless?

Genome Theory - Seven Years In The Doldrums, Taking Off In The Last 2 Years
Francis Collins at "Future of Life" 2003 [Photo by AJP]- and a tabloid of top eleven looking beyond old axioms

This researcher [AJP] participated in "The Future of Life" conference - a celebration, in February 2003 in Monterey, California, of the 50th Anniversary of the Discovery of the Double Helix. Of "Watson and Crick", only Jim was there, since Dr. Crick was too ill (he passed away in 2004). Dr. Ohno had died in 2000. The joyful 3-day celebration was sobered by the looming Iraq war - and by the general feeling that the "Genes/Junk" system of axioms and the "one gene - one disease - one billion dollar pill" business model of Big Pharma were perhaps mortally ill, while the quite significant concentration of business movers and shakers present did not seem to have a firm grip on how the future of genomics would be shaped.

I rose to speak very briefly (I just handed over the November 21, 2002 SFGate article "Junk DNA revisited"; see it following Ohno's facsimile on "Junk DNA") - to announce that my intellectual portfolio, held by HelixoMetry (established in August 2002), is based on the new concept of FractoGene - holding that "Fractal DNA governs the growth of fractal organelles, organs and organisms". I did not hope for - and did not get - instant acceptance or even much appreciation, at a time when Venture Capitalists, when asked what e.g. "introns" were, came back with the textbook answer "to keep the protein-coding sections (exons) apart"... Certainly, Dr. Watson's determination to find the gene of e.g. schizophrenia was not diminished, let alone diverted, by an initiative "to look into the Junk DNA" for answers...

At the marvellous reception in the Monterey aquarium on the night of Feb. 19, 2003, chatting face-to-face with Francis Collins, I found him dismissive of the radical proposal of elevating "Junk DNA" from the ash-bin of old remnants of evolution to which Susumu Ohno had assigned it in 1972. Francis' central counter-argument was: "look at rice, its genome is several times larger than ours - and the DNA of some amoebae is inflated by a horrendous amount - that cannot be anything but junk". My pinpointing of triplet-run hereditary diseases (such formations are often intronic, and as an M.D.-Ph.D. Francis was very familiar with them) visibly set him into a serious wondering mode.

So much so that, returning to D.C., Francis Collins on May 22, 2003 asked Congress for a massive 4-year project (ENCODE), aimed at a deep analysis of 1% of the human DNA, to see if the gene/junk axioms were correct.

ENCODE gave a 4-year advantage to FractoGene, since FractoGene was developed without ever, for one moment, believing either Crick's Central Dogma or the Junk DNA misnomer. Rejection of these assumptions was based on the author's fractal model of the development of the Purkinje neuron (1989), which implied a "recursion"; see point 3.1.3. Unfortunately, at that time such heresy was not only forbidden in theory but heavily oppressed in practice, resulting in the discontinuation of my NIH grant and the outright denial of an NIH proposal (see the acknowledgement in the above paper) that would have empowered me to follow up on a school of thought regarded as "a double lucid heresy". Likewise, several decades of study of Purkinje neurons (search for some 17 "Purkinje" hits in publications) left no doubt for me that 1.3% of the DNA, with 87.3% of "Junk" supposedly "doing nothing", simply does not leave sufficient information (a mere ~100 million A,C,T,G bases) e.g. to define the development of a human being - when a 2-hour movie requires fifty times more storage on a DVD!
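The DVD comparison above is simple arithmetic; the "fifty times" figure matches if each base is stored as a plain-text byte (the encoding choice here is mine; the base count and DVD capacity are round numbers):

```python
# Storage arithmetic behind the DVD comparison (illustrative only).
coding_bases = 100_000_000    # ~100 million "functional" A,C,T,G bases
dvd_bytes = 4_700_000_000     # single-layer DVD capacity, ~4.7 GB

as_text = coding_bases            # 1 byte per letter, stored as plain text
as_2bit = coding_bases * 2 // 8   # 2 bits per base: information-theoretic minimum

print(f"plain text: {as_text // 10**6} MB, DVD holds ~{dvd_bytes // as_text}x more")
print(f"2-bit code: {as_2bit // 10**6} MB, DVD holds ~{dvd_bytes // as_2bit}x more")
```

At one byte per base the ratio is about 47x ("fifty times"); at the 2-bits-per-base minimum the disparity is even starker.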

As the ENCODE collaboration of a slew of countries progressed, with dozens if not hundreds of researchers meticulously looking at 1% of the human DNA, it increasingly leaked out that “Junk DNA” was anything but Junk - reinforcing the core belief of some pioneers, a select minority that never believed either the Central Dogma or Junk DNA misnomers in the first place. Realizing that millions, if not hundreds of millions of people were dying of “Junk DNA diseases” (syndromes caused by dysregulatory structural variations in the intronic and intergenic regions), this researcher felt responsible and established an International PostGenetics Society of like-minded scientists that became the first international organization to have declared on October 12, 2006 in their European Inaugural in Budapest, Hungary that “Junk DNA was a scientific misnomer”.

This facilitated the release of the ENCODE data, moved up from the planned September to June 14, 2007. With its myriad findings - most importantly that "the DNA is pervasively transcribed" - published in Science, Nature and 28 other journals, it was laid bare that "Junk DNA" is anything but junk, and the architect of ENCODE, Francis Collins, issued the call that "the scientific community has to re-think long-held views" (and resigned from NHGRI, to reappear in 2009 as head of the entire NIH).

In all fairness, both mistaken beliefs had died a thousand deaths at the hands of a slew of pioneers (see below) since their origins in 1956 ("Central Dogma" by Crick) and 1972 ("Junk DNA" by Ohno) - but we all know that the only way to take a toy away from a child in the sandbox without screaming is to offer a "better toy" first.

With ENCODE out (and both Crick and Ohno gone), Collins's call for "re-thinking" was heeded within 6 months (The Principle of Recursive Genome Function, submitted December 7 and accepted December 18, 2007). The main importance attributed to the Principle is that it superseded both obsolete dogmas (duly citing the main pioneers, e.g. McClintock, Jacob and Monod, Baltimore, Mattick, Boussard etc., who dealt the obsolete axioms blow after blow) with the concept of Recursion - new in Genome Theory, though recursion ("feedback") is ubiquitous in science and technology. Further, once the DNA>RNA>PROTEIN>DNA... recursion is undeniable, the question becomes not whether such recursion occurs, but to specify further, as the Principle paper did, that Genome Function is to be explained by a specific form of recursion: Fractal Iterative Recursion. Moreover, the recursion from Proteins to DNA via epigenomic channels united Genomics with Epigenomics in terms of Informatics. Thus, IPGS became the International HoloGenomics Society (2008), and I disseminated the above views widely through Google Tech Talk and Churchill Club YouTubes in Silicon Valley. While in some remote provinces a "scientist"-loner, known as the epitome of intolerance, may still cling to the disarmingly naive notion that most of our DNA is Junk, to my knowledge there is nobody in Silicon Valley still stuck in the 20th Century.
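As a toy illustration of how iterative recursion generates self-similar, fractal-like structure, consider a textbook L-system: one rewriting rule, applied over and over, produces a branching string. (The rule below is a standard plant-growth example from the L-system literature, not anything taken from the Principle paper itself.)

```python
# Toy L-system: repeated application of a single rewriting rule yields
# a self-similar branching string (brackets mark branch points), loosely
# analogous to iterative recursion generating fractal structure.
def iterate(axiom: str, rules: dict, n: int) -> str:
    for _ in range(n):
        axiom = "".join(rules.get(ch, ch) for ch in axiom)
    return axiom

rules = {"F": "F[+F]F[-F]F"}   # classic plant-like branching rule
for n in range(3):
    s = iterate("F", rules, n)
    # each iteration multiplies the number of "F" segments by 5
    print(n, s.count("F"), s[:40])
```

The same tiny rule set, recursively applied, yields ever finer self-similar detail - the defining property of fractal growth.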

Where are we now, two years after the Principle was disseminated as an accepted manuscript?

Independent dismissals of the Central Dogma/Junk DNA misnomers and the "horror vacui" in theory.

The most significant development is the study by Eric Lander (Lieberman-Lander-Dekker et al.), shown in the cover article of Science, Oct. 9, 2009 (see lead story), that "Mr. President, the DNA is fractal" - as the 19 co-authors demonstrated at the level of the structural folding of a 2m-long DNA strand into the nucleus of a cell that is a millionth of its size in diameter (maximal density), with two extremely crucial consequences. First, the folding, following the Hilbert curve, is "knot free"; that is, its pervasive transcription is not blocked by glitches. Second, the 3-D Hilbert curve shows maximal density yet minimal functional distance among its sequence regions; e.g. a section at the end is about as close a neighbor to a section in the middle as to one at 3/4 of the length of the strand. It is noteworthy (and duly quoted) that the concept of Lieberman-Lander-Dekker (et al., 2009) originated from Alexander Grosberg (1993, pictured fifth from left in the top row), and reaches back to Mandelbrot's fractal approach (1983), preceded by Hilbert (1891). The literature of fractal mathematics is well established and vibrant.
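The "maximal density without long jumps" property of the Hilbert curve is easy to demonstrate in code. The sketch below uses the standard 2-D distance-to-coordinate algorithm (the Science paper concerns a 3-D analogue of the same idea): every cell of the grid is visited exactly once, and consecutive points along the curve are always adjacent - the discrete counterpart of dense, "knot-free" folding.

```python
def hilbert_d2xy(order: int, d: int):
    """Map distance d along a 2-D Hilbert curve to (x, y) on a
    2**order x 2**order grid (standard iterative algorithm)."""
    x = y = 0
    t = d
    s = 1
    side = 2 ** order
    while s < side:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                         # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

order = 4
pts = [hilbert_d2xy(order, d) for d in range(4 ** order)]
# every grid cell is visited exactly once...
assert len(set(pts)) == 4 ** order
# ...and every step moves to an adjacent cell (Manhattan distance 1)
assert all(abs(a - c) + abs(b - d) == 1
           for (a, b), (c, d) in zip(pts, pts[1:]))
print("space-filling and knot-free:", len(pts), "cells, all unit steps")
```

Because nearby stretches of the curve stay spatially close at every scale, far-apart sequence regions can still end up as near neighbors in 3-D - the locality property the paper exploits.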

This DNA fractality is established from a quite different and independent viewpoint than FractoGene, which was deduced from the fractal model of the Purkinje brain cell (Pellionisz, 1989) that grows governed by fractal DNA (the DNA sequence-fractality evidence was presented at the "Personal Genomes" meeting at Cold Spring Harbor, September 14, 2009).

The apparent DNA fractality, most interestingly but not without some intrinsic incongruity, is also approached by one of the strongest pioneers dismantling both the Central Dogma and the JunkDNA misnomer, the school of Dr. Mattick (2009).

In the "Natural Genetic Engineering and Natural Genome Editing" issue of the New York Academy, Dr. Mattick, in a single-authored paper, "deconstructs" the Central Dogma and JunkDNA misnomers, and argues against the feasibility that the theory underlying genome function might be "combinatorial geometry":


Since the birth of molecular biology it has been generally assumed that most genetic information is transacted by proteins, and that RNA plays an intermediary role. This led to the subsidiary assumption that the vast tracts of noncoding sequences in the genomes of higher organisms are largely nonfunctional, despite the fact that they are transcribed. These assumptions have since become articles of faith, but they are not necessarily correct. I propose an alternative evolutionary history whereby developmental and cognitive complexity has arisen by constructing sophisticated RNA-based regulatory networks that interact with generic effector complexes to control gene expression patterns and the epigenetic trajectories of differentiation and development. Environmental information can also be conveyed into this regulatory system via RNA editing, especially in the brain. Moreover, the observations that RNA-directed epigenetic changes can be inherited raises the intriguing question: has evolution learnt how to learn?

Excerpts (Mattick, 2009)

The central dogma “DNA makes RNA makes protein” … This has been a recurring problem in the history of science, wherein initially reasonable but unproven generalizations become entrenched and are then resistant to questioning and even harder to overturn. In molecular biology, belief in the proposition that genes are generally synonymous with proteins has created an entire intellectual edifice based on uncritical acceptance of comfortable assertions about the nature of genetic information and the functioning of the system, especially with regard to gene regulation. It has also fostered (as orthodoxies always do) an indifferent and sometimes dismissive response to observations that have the capacity to challenge the status quo, as well as resistance to alternative explanations that may be equally if not more cogent. The latter are all too frequently met with an unfair burden of proof, rather than receptivity and curiosity from open and enquiring minds. Indeed while objective skepticism should be equally applied to orthodox and novel ideas, especially in the face of surprising new facts, it is not. At this point it is necessary to refer to the widely accepted concept of “combinatorial control” by regulatory factors, which has been invoked to explain the “progressively more elaborate regulation of gene expression” that must occur in the more complex eukaryotes on the argument that “given the combinatorial nature of transcription regulation, even a twofold increase in the number of factors could produce a dramatic expansion in regulatory complexity”... 
Implicit within this claim is that this presumed combinatorial explosion of regulatory potential far exceeds that required to account for the difference between nematodes and humans. However, this is simply an assumption that has not been rigorously assessed mathematically, physically, or mechanistically, although it is superficially consistent with some co-fractionation data and the fact that gene expression can be influenced by many factors. The belief in the power of combinatorial expansion of proteomic and regulatory capacity has also been reinforced by the fact that alternative splicing can expand the proteome enormously, and the expectation that this expansion is greater in humans than in nematodes. This data also shows that combinatorial control of gene expression, at least as it is commonly conceived to operate, cannot be used to relieve the accelerating regulatory problem, at least in prokaryotes, as the relationship between numbers of regulators and genes would scale totally differently (with an exponent much less than 1). This leads to a reappraisal of what is meant (and superficially reported) as "combinatorial control," in terms of an expansion in the levels of controls that are, as far as one can tell, binary in nature and which is entirely consistent with a body of evidence from decision theory which has not been hitherto considered. Rather it seems that the eukaryotes have expanded their regulatory options by the introduction of a hierarchical cascade of decisions, from modification of chromatin at various levels (DNA methylation and various types of histone modifications) to the regulation of transcription, splicing, translation, RNA stability, and RNA modification and editing. 
Modulation of any of the regulatory factors involved will affect the outcome and can be interpreted as combinatorial control, but in reality these are likely to operate at different levels within a decisional hierarchy. ... I suggest that the current protein-centric conception of molecular biology is primitive, and that the eukaryotic genome may be better viewed as an RNA machine, rather than simply a suite of protein-coding genes with cis-acting regulatory protein binding sites. Indeed, apart from the fact that some genes encode proteins, it seems that the true situation may be completely the opposite to that assumed in ignorance of the increasingly dominant role that advanced regulatory systems play in the evolution and development of complex organisms. Finally, the phenomenon of paramutation, by which RNA-directed epigenetic changes, including potentially those altered by RNA editing, can be inherited both mitotically and meiotically, and the existence of various classes of RNA-templated DNA synthetic and repair enzymes, raises the intriguing question: has evolution learnt how to learn?

A comment by AJP on the above paper might summarize that the approach of Mattick (2009) corroborates the "Principle of Recursive Genome Function" - by removing both the Central Dogma and JunkDNA "show-stoppers" - but leaves the reader with the conclusion that "combinatorial control" as an underlying theory of genome function "can not be used". Further, in any DNA-RNA-PROTEIN-DNA... recursive system, the question whether the genome is an "RNA machine", a "DNA machine" or a "Protein machine" reminds one of "which came first, the chicken or the egg?", as in a recursion each component plays an equally indispensable role.

Significantly, in the same issue of Natural Genetic Engineering and Natural Genome Editing (2009), Dr. James A. Shapiro also published a paper, "Revisiting the Central Dogma in the 21st Century", in which not only is the "Central Dogma" shredded once again, but Shapiro's "Lesson 2" leaves the readers in a void where even the definition of "gene" vanishes: "Classical atomistic concepts of genome organization are no longer tenable. We cannot any more define a 'gene' as a unitary component of the genome or specify a 'gene product' as the unique result of expressing a particular region of the genome". In a separate chapter Shapiro lays down "What New Informatic Concepts do We Need to Elaborate in a 21st-Century View of the Genome and Evolution?" - but Dr. Shapiro, though he recognizes "the algorithmic nature of genome expression and genome restructuring in evolution", neither offers an (algorithmic) genome theory, nor references one that had already been published at the time of his essay.

In science, however, "horror vacui" rules - i.e. phenomena without scientific explanation cry out for any explanation.

Mattick's solo manuscript has recently been followed up by Mattick, Taft and Faulkner (2009), "A global view of genomic information - moving beyond the gene and the master regulator". The "Central Dogma", "Junk DNA" and "turning genes on and off" misnomers are completely gone (never even mentioned), and even the "master regulator" appears only in the context of the title (as something we have to move beyond):

“such proteins [e.g. Hox-protein, AJP] are only parts of larger networks that influence muscle or hindbrain development in vivo, respectively … and do not fully explain the diversity and fine structure of organs and tissues. Similarly, chromatin-modifying proteins have a profound impact on developmental processes … because they lie at the functional centre of epigenetic regulatory networks, not because they themselves make locus-specific regulatory decisions but because they act on other information that does”.

Apparently, the concept of underlying "combinatoric" regulation (a term used twice in the paper) seems to be adopted from Geoffrey J. Faulkner, who at least since 2006 has advocated a "combinatorial output from the genome". Such an approach may have roots in "Combinatorial Chemistry", which, in turn, has the underlying mathematics of "Combinatorial Geometry".

Therefore, since the above papers all (independently) dispose of the obsolete notions of the "Central Dogma", "Junk DNA", the classical definitions of "genes" and "introns", "master regulators" and "turning genes on and off", it is possible that the approaches are convergent toward the structuro-functional view of Fractal Recursive Iteration of Genome Function. However, the "Combinatorial Geometry" approach (e.g. via the arduous detour to fractals by way of "Fractal Combinatorial Geometry", instead of the fractal approach of Mandelbrot; see also SuperFractals by Barnsley) will have to cope with some of the thorniest mathematical challenges that perhaps even the greatest mathematician of modern times (Paul Erdos) could not resolve (see 70 hits on "Erdos problem" "Combinatorial Geometry", and the book "Unsolved Problems in Geometry").

Genome Theory development (not entirely unlike the development of Quantum Theory, when the axiomatically unsplittable atom split) therefore has a long road ahead. The upcoming new decade will show spectacular development in our understanding of Genome Regulation - and thus in the control of dreadful hereditary diseases - such that the new philosophy will prevail that "Your Genome is NOT Your Destiny".

The Genome Generation - The case for having your genes sequenced

By George M. Church | Newsweek Web Exclusive
Dec 15, 2009

The genomic revolution started in 1964, when Robert Holley and his colleagues at Cornell and the U.S. Department of Agriculture deciphered the first gene sequence, indirectly "reading" the order of the four bases (adenine, thymine, cytosine, guanine) that pair up to make all genes, thereby allowing us to understand the blueprint from which each human is built. At first, the process was slow and costly (77 bases required four researchers and three years), and the prospect of ever figuring out the sequence of every human gene—the 3 billion bases of the human genome—seemed remote. However, aided by robotics, several teams raced to complete the job in the 1990s. By 2003, at a cost of $3 billion, most of one human genome had been sequenced.

The success of the Human Genome Project led people to speculate that someday every person would have his or her genome sequenced. At a cost of billions per genome, of course, it was an impossible dream. But beginning in 2004, a wave of next-generation sequencing technologies emerged, and costs began to drop 10-fold each year. Today we can sequence a million individuals' genomes for what it would have cost to sequence one person's genome five years ago. At a current cost of $5,000, it's become so inexpensive that some business models project that personal genome sequencing could be provided to individuals for free by third parties (insurers, employers, governments) who might be able to use the information from the sequencing as a way to reduce health-care costs. No matter who pays for it, as technology improves, the cost will continue to go down, likely to $100 per genome, and lower.
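The cost trajectory above is simple compounding; the sketch below checks the article's own numbers (the starting cost and the 10-fold annual decline are the article's claims; the projection itself is mine):

```python
import math

# Compounding check of the article's cost figures (illustrative only).
cost_2009 = 5_000        # per-genome cost cited for 2009, in USD
decline_per_year = 10    # the article's "10-fold each year"

# years until the $100-per-genome milestone at this rate:
years = math.log(cost_2009 / 100) / math.log(decline_per_year)
print(f"~{years:.1f} years to $100 per genome at a 10x annual decline")

# and the 2003-to-2009 drop implied by $3 billion -> $5,000:
print(f"2003-2009 cost drop: {3_000_000_000 // 5_000:,}x")
```

At a sustained 10-fold annual decline, $100 per genome is less than two years away from the $5,000 figure; the six-year drop from $3 billion to $5,000 is a 600,000-fold decrease.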

The benefits of genome mapping for some individuals are clear. Nearly every newborn today is screened for up to 40 genetic disorders—that's more than 4 million babies per year in the U.S. (although few of these tests sequence DNA). Before genetics, for example, a baby born with two damaged PKU genes would become mentally retarded. Now, babies that screen positive for this condition go on special protective diets. Carefully reading the genome has saved thousands from this and other painful conditions.

Over 1,500 disease-related genes have been discovered, knowledge that has improved medical diagnosis, treatment, and prognosis. Among the genes routinely sequenced in adults are the BRCA1-2 and neu/HER2 genes for breast cancer, multiple genes for colorectal cancer, the LQT1-12 genes for cardiac arrhythmias, and genes that cause a person to form blood clots more easily (like the factor V Leiden and prothrombin genes).

The message is not "Here's your destiny. Get used to it!" Instead, it's "Here's your destiny, and you can do something about it!" Diseases result from a combination of genetic vulnerability and lifestyle. If you are at high risk of certain diseases, it's in your interest to know that risk and to practice the lifestyle that reduces it - and the younger you start, the better.

Personal genomics also helps doctors choose treatments, by identifying genes that make some medication options clearly superior to others. While "pharmacogenomics" is in its infancy, it is already helping many patients. Genetic tests are used to determine whether certain drugs are prescribed, and in what dose, for HIV-AIDS (the drug abacavir), psychosis (clozapine), blood thinning (warfarin), the heart condition called long QT syndrome (beta blockers) and cancer (imatinib, irinotecan, 5-fluorouracil, mercaptopurine, or tamoxifen). Recently, a gene variant has been identified that powerfully reduces the effectiveness of the popular anticlotting drug, clopidogrel. For the roughly 30 percent of people who carry the gene variant, higher doses of the drug are required: prescribing usual doses exposes the patients to serious, even life-threatening risks.

As low-cost genomics revolutionizes biological research, it promises significant public benefits. When the personal genomes and medical histories from much larger numbers of people become available, we expect much greater progress in identifying rare genetic variations that cause common diseases like cancer and heart disease. A growing number of people are volunteering to help this effort through programs such as PatientsLikeMe, The Personal Genomes Project, and regional biobanks. By sharing their medical histories and genetic information they hope to speed the search for cures and preventatives. These altruists deserve our support.

A common concern about new technologies is that they can broaden the gap between rich and poor. Genomics is no exception, but there is reason to believe that the poor could benefit from advances in the field. Infectious diseases are rampant in the Third World, and are a powerful barrier to people raising their standard of living and getting an education. Low-cost genomics enables the monitoring of new and old disease-causing microbes and the spread of drug resistance. This in turn permits deployment of optimal treatments.

It's a good thing to make genomes available to researchers, but potential problems exist as well. There is no more personal information than the sequences of your genes. Protecting the privacy of that data is essential to the future of genomics. If companies, health-care providers and governments collect and store our genomes and medical data, can they profit by controlling access to our genomes or cells? Do we have the time or know-how to control such access personally? If one's personal genome were known to insurers or employers, it could lead to discrimination. To address this, the Genetic Information Nondiscrimination Act of 2008 (GINA) prohibits health plans and health insurers from charging higher premiums, or making hiring or promotion decisions, based solely on individuals' genetic information. (GINA does not cover long-term care or life insurance, because of the concern that the people who purchased such insurance would be those who had learned from genetic studies that they were at risk for major diseases.)

As "the first genomic generation" we will set the rules that many future generations may follow. Will we treat our genomes like our faces, which we share publicly even though they reveal details about our health, ancestry, and personality? Or will we be forced to hide them from view? Knowing our DNA could make us think of ourselves more mechanically, and yet increase our humanity by embracing our diversity. It could render us less mysterious, yet more awe-inspiring. Our genomes are a vast future resource. How we handle them will define us as a species—not as a fuzzy average, but with our individualism evident in detail.

Church ["the Edison of postmodern Genomics" - AJP] is professor of genetics at Harvard Medical School. He has advised 15 of the 20 companies developing rapid genome sequencing technologies and five DNA diagnostics companies.

Dyson Sees Opportunities in Personalization

Published: 13 Dec 2009 18:24:03 PST

Forbes: How do you think the health care sector is doing? How do you think the sector will do through year-end?

Esther Dyson: I don't actually follow public stocks, but I think there are huge opportunities in companies that offer services for individuals (sometimes via doctors or institutions, and sometimes directly). With computers and all kinds of diagnostic developments, it's now possible to collect a lot of data about each individual and to generate a lot of personalized analysis and advice. Turning that into personalized services is a huge opportunity. Some of the companies I have invested in include 23andMe (personal genome information) and Keas (personal health-and-wellness plans). I'm looking at similar start-ups in immunology and metabolism. I'm also an investor in PatientsLikeMe, which lets people with specific diseases (ALS, Parkinson's, et cetera) share their own personal information with one another--and with pharmaceutical companies, who pay for the privilege; ReliefInsite, an online service that lets users monitor their pain and share the data with care-givers; HealthWorldWeb, a platform for selecting doctors; Organized Wisdom, which delivers medical information in a form comprehensible to normal people; and Voxiva, a mobile (phone) health platform that you'll be hearing more about in January.

You've been part of several start-ups. How can you tell which company will be successful and which won't be?

No one can do that perfectly, and I don't pretend to. I try to pick ones in areas that I want to learn more about, so that even if I don't make money I will at least get an education. Obviously, I try to pick intelligent, dedicated people with great ideas; so does everyone! But mostly I invest in companies doing things that I would like to see done. Since I am short of both money and time, I know I'll have to miss lots of opportunities, so I try to pick only things that wouldn't happen (or wouldn't happen as quickly) otherwise.

I pick things that interest me. But then I try to "sell" the sector to other people. I think health care is a huge opportunity--precisely because most people think it is a huge problem. But where I see the biggest opportunity is "outside" the health care establishment, helping people to stay healthy so they need less health care in the first place.

You mentioned on your Web site that you've been to some emerging markets and are interested in them. How do you advise retail investors to invest in those markets, if at all?

First of all, as in most markets, the insiders usually do better than the outsiders. So I think retail investors should mostly be very careful unless they take specific advice from someone very knowledgeable.

Personally, I have become something of an insider in the Russian IT market--but certainly not in Russian business overall. I am an active investor in several Russian IT companies, starting with Yandex, the Russian analogue of Google, which has over 50% market share compared with Google's 20% share in Russia. I'm also involved with IBS Group, a leading IT services company that owns Luxoft, an outsourcer; TerraLink, a specialty IT outsourcer; Live Journal, the blogging platform that has about 90% of the Russian market; and UCMS, which is "white" rather than "green": It sells legal payroll services and helps companies who may not understand all the regulations even if they want to follow them. And I'm an investor in yet another outsourcer, Epam, via a Hungarian company they acquired, but I am not active there because it competes to some extent with Luxoft.

I am a big fan of India and its still-messy democracy, which is rapidly opening up. I'm also optimistic about China, but I don't know enough to be an active investor there.

How are your start-ups in the air and space areas going? What kind of opportunities do you see there?

The air market right now is a mess and I am staying out of it. It simply requires too much capital to be of interest to an angel investor--other than Coastal Aviation Technology, which sells schedule and route optimization software. Someday the whole general aviation market has to get more efficient, but that may take a while.

As for space, I'm quite optimistic. But it's still not a fluid market; it's mostly a collection of billionaire-funded projects. I'm looking forward to that changing. Meanwhile, I am an enthusiastic investor in Space Adventures, where I am also a customer--for a six-month stint training as a backup cosmonaut in Russia's Star City from October of last year to March of this year. And I am an investor in XCOR Aerospace, which makes rockets and will be offering suborbital space flight soon, and in Icon Aircraft, which has a light-sport aircraft that every red-blooded extreme sports enthusiast will want to fly.

How are your angel investments doing?

Well, I have more money than I started with! And I certainly know a lot more than I did back when I worked for Forbes as a fact-checker and then a reporter from 1974 to 1977. I have to say that the art of fact-checking--being skeptical and asking questions until you understand and can vouch for the answers--is great training for being an investor!

Australia boots up GPU supercomputer [NVIDIA Tesla cluster for genomics - AJP]

By Aharon Etengoff
Wednesday, 09 December 2009 15:58

Tesla for Genomics YouTube [AJP]

Australia's national science agency has fired up a massive GPU supercomputer capable of delivering 256 Teraflops of peak performance.

The CSIRO supercomputer - which is powered by 64 Nvidia Tesla S1070 GPUs - includes 28 Dual Xeon E5462 compute nodes (or 1024 2.8GHz compute cores), 500 GB of SATA storage, a 144 port DDR InfiniBand Switch and an 80 Terabyte Hitachi NAS file system. According to Nvidia spokesperson Andrew Humber, CSIRO scientists have already reported "speedups" of 10-100X in their research applications by deploying the GPU-based system.
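The headline figure is easy to sanity-check. One Tesla S1070 unit houses four GPUs and is commonly quoted at roughly 4 TFLOPS peak single precision - a figure assumed here, not stated in the article:

```python
# Assumption: ~4 TFLOPS peak single-precision per Tesla S1070 unit
# (each S1070 houses four GPUs). 64 units then give the quoted peak.
S1070_PEAK_TFLOPS = 4.0
units = 64
cluster_peak_tflops = units * S1070_PEAK_TFLOPS
print(cluster_peak_tflops)  # -> 256.0, matching the 256 Teraflops claim
```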

The new Tesla GPU cluster will be used to accelerate a number of projects, including genome research, the reconstruction of 3D medical images and modeling ocean nutrients.

Nvidia teams up with Microsoft for HPC

By Andrew Thomas
Monday, 28 September 2009 04:50

Nvidia is to work with Microsoft to use its Tesla GPUs for high performance parallel computing using the Windows HPC Server 2008 operating system.

Nvidia says it has developed several GPU-enabled applications for Windows HPC Server 2008, including a ray tracing application that can be used for advanced photo-realistic modeling of automobiles. The company is also working with Microsoft to install a large Tesla GPU computing cluster and is investigating applications that can be optimized for the GPU.

The two companies say they are looking at applications such as data mining, machine learning and business intelligence, as well as scientific applications like molecular dynamics, financial computing and seismic processing which can take advantage of the massively parallel CUDA architecture on which Nvidia's GPUs are based.

"The coupling of GPUs and CPUs illustrates the enormous power and opportunity of multicore co-processing," said Dan Reed, corporate vice president of Extreme Computing at Microsoft.

"Nvidia's work with Microsoft and the Windows HPC Server platform is helping enable scientists and researchers in many fields achieve supercomputer performance on diverse applications."

"The combination of GPUs and the Windows platform has been a great benefit to our VMD (Visual Molecular Dynamics) user community, bringing advanced molecular visualization and analysis capabilities to thousands of users," said John Stone, senior research programmer at the University of Illinois Urbana-Champaign.

"As we move toward even larger biomolecular structures, GPUs will become increasingly important as they bring even more computational power to bear on what will be highly parallelizable computational problems."

"The scientific community was one of the first to realize the potential of the GPU to transform its work, observing speedups ranging from 20 to 200 times while using a range of compute-intensive applications," said Andy Keane, general manager of Nvidia's Tesla business.

"Researchers are increasingly using Windows on workstations and in data centers due to strong development tools like Microsoft Visual Studio, its ease of system management and its lower total cost of ownership."


Singapore team completes genetic map of Han Chinese

By Judith Tan
Fri. Dec. 11
Straits Times, Singapore

RESEARCHERS in Singapore have unveiled the first genetic map of the Han or southern Chinese, the largest ethnic population in the world.

The study, conducted at the Genome Institute of Singapore (GIS), drew from 8,200 DNA samples from ethnically Han Chinese all over China and in Singapore.

Associate Professor Liu Jianjun, who led the human genetics group, said the genome mapping provides a start to solving mysteries such as why more than 95 per cent of Chinese are lactose intolerant and cannot take dairy products.

It also helps identify genes that make some Chinese more susceptible to diseases such as diabetes and nasopharyngeal cancer and tailor treatment or prevention for them, he said.

Prof Liu said the map was able to show that the people of northern China were genetically different from those in the south - 'a finding that reflected and was very consistent with the migration pattern of the Han Chinese'.

'We could also use it to arrive at a person's ancestral origin, including those born and raised in foreign countries.'

Using Singapore as an example, Prof Liu said that the ancestral roots of ethnic Chinese born and bred here could be traced back to China.

'By looking at the genome-wide DNA variation, we can determine whether an anonymous Singaporean is a Chinese, his ancestral origin, and sometimes, which dialect group of the Han Chinese he belongs to,' he said.

While a majority of the Han Chinese in Singapore are from the Cantonese and Teochew dialect groups, Prof Liu said a third group - the Hakkas - also have 'residual DNAs' showing they could be from north China.

'More importantly, our study provides information for a better design of genetic studies in the search for genes that make Han Chinese susceptible to diseases such as nasopharyngeal cancer and diabetes,' Prof Liu added.

GIS executive director Edison Liu said that while genome studies have provided significant insights into genes involved in common disorders such as diabetes, high cholesterol, allergies and neurological disorders, 'most of this work has been done on Caucasian populations'.

'Prof Liu's work with his Chinese counterparts helped define the genetic causes of some of these diseases in Asians,' he said.

The research, published in the American Journal of Human Genetics last month, is part of a larger ongoing project on the genome-wide association study of diseases among the Chinese population.

The project is a collaboration between GIS and several institutions and universities in China.

As a follow-up, Prof Liu's team is currently studying the incidence of nose cancer among the southern Chinese and will be publishing the results soon.

[It can be safely predicted that the "Petri dish" of Personalized Medicine will be Singapore - for more than one reason. The first is the above study, which will enormously help personalized medicine for the ~72% of Singapore's population of (various) Chinese ancestry. The cardinal economic driver is that Personalized Medicine can focus on prevention - which is accomplished much better in countries with government-centered health care systems, where health care is a social service, since prevention reduces the overall cost of care. In countries where health care is largely a profit-oriented business, prevention is much less in the interest of the health care system - the sicker the population, the higher the profit. Likewise, in countries where Big Pharma has a larger say, personalized medicine will be more discouraged - since about half of all medicines are known to be ineffective for the patients they are given to (because of "one size fits all"), yet Big Pharma's interest dictates selling drugs to anyone, effective or not. Where the government has to pay for the medicines, the entire equation changes. Last - but not least - Singapore, with one of the highest per capita GDPs, can afford to deploy personalized medicine even at an early stage, when it is more expensive than it will be later (once mass production drives costs down). And while Singapore will no doubt extend its model beyond the ethnic groups of Chinese ancestry (e.g. to groups of Indian ancestry), tiny Singapore will be developing a model for the world's largest markets: China and India. Based on the above, "barcode consumer shopping, with genome-based recommendation" (see also the continuation of the linked demo by a press release) might be developed by a spin-off from Silicon Valley to Singapore.

Full R&D and Market analysis of this development is upon request. Pellionisz; HolGenTech_at_gmail.com]

Genome Patri - [First Indian Genome Sequenced]

10 December 2009, 12:00am IST

It took the US Human Genome Project more than a decade and $500 million to sequence genes drawn from several volunteers. A team of Indian scientists at the Institute of Genomics and Integrative Biology of the Council for Scientific and Industrial Research (CSIR) has reported mapping the entire genome of a 52-year-old Indian male in just 10 months, at a cost of $30,000 or Rs 13.5 lakh. With this achievement, India joins the ranks of the few countries - the US, UK, Canada, China and South Korea - which have successfully sequenced a human genome. [It is interesting why and how almost the entire coverage overlooked that The Netherlands sequenced the first-ever female human genome - thus the "club" is now at 7 - AJP] Earlier, CSIR scientists had mapped the genetic diversity of the Indian population and also completed sequencing the genome of the zebra fish, commonly used in laboratories as a model for researching human diseases.

There are greater chances of arriving at a better understanding of the genetic make-up and peculiarities of local populations with countries creating their own DNA sequences, as such knowledge is crucial in comprehending country-specific health trends and genetic traits that would otherwise remain largely a mystery. This will enrich knowledge of the different genetic variations that occur in different population groups as well as enable the identification of genes that predispose some to certain diseases.

The male, whose genome was decoded by CSIR scientists, is predisposed to heart disease and cancer, and this information has been gleaned from the sequencing of his DNA. Therefore, the technology is invaluable in diagnostics and could be useful in medical treatment as well. Drugs designed to target the affected genes could be formulated, though at present to do so would involve high costs and such an option would be out of reach of the average patient. However, as with all sci-tech breakthroughs, costs are bound to come down - as it has in the case of the genome mapping technology and various computer models - and it is only a matter of time before they become affordable.

Law and ethical resolutions tend to lag behind implementation of scientific and technological advances, and the relatively new field of DNA sequencing is no exception. What if an individual's genome patri were accessible to employers and insurance companies who might use the information against the employee or client? Should an individual choose to reveal details of his genetic 'horoscope' to his family and friends or keep it private? Would the knowledge impact the individual's own perspective of life and how he lives it? There's plenty of fodder for debate, especially since predisposition to a disease does not mean it would actually manifest in the person.

[The "Indian" genome, of course, is not singular - just as the "Chinese" population is composed of many ethnically diverse groups. The significance of this gathering momentum cannot be overestimated: it maps the genomic composition of nearly half of the human population of the world, and moreover of population blocks that are much less intertwined than, e.g., European (let alone American) mixes. In India and China, billions of people were saved from starvation by the first wave of the Genome Based Economy (the "Green Revolution" of the seventies) - one reason why China, India and other Asian countries show a globally unique appreciation of genomics. It can safely be predicted that these populations are also too poor to afford non-personalized ("one size fits all") modern medication, which is generally estimated to be effective for probably less than half of the individuals it is applied to.

Full R&D and Market analysis of this development is upon request. Pellionisz; HolGenTech_at_gmail.com]

PricewaterhouseCoopers Projects 11 Percent Annual Growth in Genomically Guided Personalized Medicine

December 08, 2009
By a GenomeWeb staff reporter

NEW YORK (GenomeWeb News) – A new report by PricewaterhouseCoopers estimates the $232 billion personalized medicine market will grow 11 percent annually.

According to the professional services firm, the trend toward tailoring drugs based on clinical factors and genomic variation will create opportunities and challenges for the pharmaceutical and biotech industries. On the opportunity side, however, the market for "a more personalized approach to health and wellness will grow to as much as $452 billion by 2015," PricewaterhouseCoopers gauges.

The estimates in the report, titled "The Science of Personalized Medicine: Translating the Promise into Practice", factor in market opportunities beyond drugs and devices to include demand for data storage and data sharing, as well as increased consumer demand to know their own health risks.

Based on these considerations, PricewaterhouseCoopers expects the $24 billion drug and diagnostic market will grow by 10 percent annually, reaching $42 billion by 2015. "The personalized medical care portion of the market – including telemedicine, health information technology and disease management services offered by traditional health and technology companies – is estimated at $4 billion to $12 billion and could grow tenfold to over $100 billion by 2015 if telemedicine takes off," the report notes.
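The drug-and-diagnostic projection is plain compound growth; a quick check confirms the report's arithmetic (assuming a 2009 base year and six years of compounding, both assumptions on my part):

```python
def compound(value_bn, annual_rate, years):
    """Future market size (in $Bn) under constant annual growth."""
    return value_bn * (1 + annual_rate) ** years

# $24Bn growing 10% per year, 2009 -> 2015
print(round(compound(24, 0.10, 6), 1))  # -> 42.5, matching the ~$42Bn figure
```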

The report sees direct-to-consumer genomics firms, such as the services offered by 23andMe and Navigenics, adding to the growth of the personalized medicine market by empowering individuals with real-time information regarding medical risks.

"Market research analysts estimate the current size of the global market for genetic testing at $730 million, with a 20 percent annual growth rate. Though a relatively small portion of this market, DTC testing is expected to grow rapidly in response to consumer demands and declining prices," the report notes.

The economic downturn has hit many DTC genomics firms hard. 23andMe recently laid off some employees and raised the price of its genomic risk scans [see GenomeWeb Daily News sister publication PGx Reporter 11-04-2009]. Although many industry observers have often doubted whether the information offered by DTC gene testing firms is worth the price, other experts have recognized that the consumer-driven research model is one that empowers individuals and may catch on as people get more comfortable learning about genetic risk.

"Medical science and technological advancement have converged with the growing emphasis on health, wellness and prevention sweeping the country to push personalized medicine to a tipping point," said David Levy, global healthcare leader at PricewaterhouseCoopers, in a statement. "We are now seeing a blurring of the lines between traditional healthcare offerings and consumer-oriented wellness products and services. The market potential is enormous for any company that learns to leverage the science, target individuals and develop products and services that promote health."

One thing all healthcare stakeholders agree on is that the trend toward personalized medicine will be disruptive to the traditional way of doing things. Personalized medicine has certainly altered Big Pharma's traditional drug development model. And although drug companies are still eyeing the largest possible market for their drugs, most heads of research and development at large drug companies have unwittingly admitted that the blockbuster model is dying [see PGx Reporter 12-02-2009].

"We need to replace our current focus on treating disease with a better approach that is personalized, preventive, predictive and participatory, the basic tenets of personalized medicine," Gerald McDougall, principal in charge of personalized medicine and health sciences at PricewaterhouseCoopers, said in a statement. "Greater collaboration around personalized medicine should be a key strategy for health reform."

Within industry, 2009 was a year of Rx/Dx collaborations with several major drug firms including Pfizer, GSK, Bristol-Myers Squibb, Amgen, and AstraZeneca all inking collaborations with diagnostics firms to personalize investigational treatments, mainly for the individualization of cancer treatments using genomic markers.

In the regulatory arena, the US Food and Drug Administration updated the label for the anticoagulant Plavix with gene-response data and effected a class labeling change for EGFR-inhibiting monoclonal antibodies in the treatment of colorectal cancer (i.e. Erbitux and Vectibix). In updating the labels for Erbitux and Vectibix, the FDA considered retrospective clinical trial data from the sponsors, which offers an alternative study model to the long and costly prospective, randomized-controlled designs for companies looking to develop personalized drugs [see PGx Reporter 10-07-2009].

The report also discusses how this projected growth in the personalized medicine market will impact technology companies. Tech firms, "some with little or no health expertise, are capitalizing on emerging opportunities to manage vast quantities of genetic and other health data and build IT infrastructure and connectivity solutions," the report notes.

When it comes to genomically guided personalized care, physician education is mandatory, the report notes. "Universities will have to update their programs," it said.

"Primary care providers may have to build new service lines around prevention and wellness in order to replace revenues lost from traditional medical procedures," according to the report. "When they do, they can expect to face low-cost competition from non-healthcare companies skilled in consumer marketing and consumers armed with knowledge of their options."

Payors will also need to change their reimbursement schemes. Insurance premiums are currently calculated with the general population in mind, but "personalized medicine targets small populations which are far less stable and predictable from an actuarial standpoint," the report notes.

"How payors approach personalized medicine will be critical, as their reimbursement schemes will influence the business models of pharma and diagnostics companies as well as providers who depend on third-party payment," according to the report. "Payors that want to embrace the new science will have to rethink how they define coverage."

[China is growing at 11% next year - and the only sector of the US economy that can match the world's fastest-expanding economy is Personalized Medicine. Personalized Medical Care with Telemedicine (e.g. "Your Genome: There's an App for that") stands at $4-$12 Bn and will explode tenfold to $100 Bn by 2015, PricewaterhouseCoopers predicts.

Full R&D and Market analysis of this development is upon request. Pellionisz; HolGenTech_at_gmail.com]

Chromosomal Deletions Found in Severely Obese Kids

By Kristina Fiore, Staff Writer, MedPage Today
Published: December 07, 2009
Reviewed by Robert Jasmer, MD; Associate Clinical Professor of Medicine, University of California, San Francisco and
Dorothy Caputo, MA, RN, BC-ADM, CDE, Nurse Planner

[Obese kid from Dr. Farooqi's research - AJP]

Children who become severely obese at a young age may be missing a large segment of DNA, including genes that play a role in regulating hunger, researchers say.

Obese youngsters were twice as likely as controls to have large and rare deletions at 16p11.2, a region on chromosome 16 (P<0.001), Sadaf Farooqi, PhD, of the University of Cambridge, and colleagues reported in Nature.

Youngsters with this phenotype rapidly gained weight in the first years of life, and their excess weight was predominantly fat mass, the researchers wrote. They had hyperphagia, or an increased appetite, and their fasting plasma insulin levels were disproportionately elevated compared with controls.

"People with deletions involving this gene had a strong drive to eat and gained weight very easily," Dr. Farooqi said in a statement.

The study is the first to document rare copy number variants (CNVs) -- large chunks of DNA deleted from genes -- associated with severe early-onset obesity.

The rising prevalence of obesity is driven by environmental factors, but there is "considerable evidence" that weight is highly heritable, the researchers said.

So they studied 300 white patients whose severe obesity arose before age 10, and compared them with 7,366 controls. Of the 300 severely obese children, 143 also had developmental delay.

They searched each child's genome for mutations in copy number variants, which are believed to have a role in many diseases, including autism and learning disorders.

Generally, both cases and controls had a similar median number (53 and 55, respectively) of copy number variants, as well as similar sizes of the deletions (22 and 23.5 kilobases, respectively).

But rare deletions and large deletions -- those greater than 500 kilobases -- were significantly more common in patients than in controls -- a two-fold increase, the researchers said (P<0.001).

Obese children with developmental delay were also more likely to have large, rare deletions, they added.

Four patients with the 16p11.2 deletion reported in autism and mental retardation had mild developmental delay that required special educational support, and two also had autistic spectrum behavior.

The 16p11.2 deletions encompass several genes -- some that play a role in neurological diseases and immunity -- but all include SH2B1. This is known to be involved in leptin and insulin signaling, processes that regulate weight and adjust blood sugar levels.

The researchers added that since copy number variants exist in sections outside of SH2B1, there's a role for additional genes in the etiology of severe obesity.

Farooqi and colleagues concluded that studies looking for rare variants near susceptible genetic locations may prove fruitful in other common, complex diseases.

[Affordable full human DNA sequences are flooding R&D as well as Direct-to-Consumer genomic testing. This column is read by many who are at least marginally familiar with computer science. Whenever code is written, the first check is for "syntax" - whether the code conforms to basic structural requirements (if a required instruction is misspelled by even a single letter, the syntax checker does not even attempt to compile the source code). It should be obvious that genome interpretation and genome-regulation analysis will undergo a similar multi-step "triage": first a check for "syntax" (expected structural conformity), and only in deeper analysis a search for more elaborate causes of misregulation. SNPs and CNVs are already used in DTC testing, since, as in the case above, the findings are actionable - appetite suppressants are clearly warranted.

Full R&D and Market analysis of this development is upon request. Pellionisz; HolGenTech_at_gmail.com]
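The staged "triage" described in the note above can be sketched in code. This is a toy illustration only - the stage names, motif, and threshold are invented for the sketch, not an existing pipeline:

```python
import re

def syntax_check(seq: str) -> bool:
    """Stage 1 ("syntax"): structural conformity - only the four
    canonical bases are allowed, just as a compiler rejects a
    keyword misspelled by a single letter."""
    return re.fullmatch(r"[ACGT]+", seq) is not None

def structural_scan(seq: str, motif: str = "CAG", limit: int = 35) -> list:
    """Stage 2 (deeper analysis): an illustrative scan that flags
    long tandem repeats of a motif, one known class of structural
    variant (the threshold here is invented for the toy)."""
    runs = [(m.end() - m.start()) // len(motif)
            for m in re.finditer(f"(?:{motif})+", seq)]
    longest = max(runs, default=0)
    if longest > limit:
        return [f"{motif} repeat expanded to {longest} copies"]
    return []

def triage(seq: str):
    """Run the stages in order, stopping at the first failure."""
    if not syntax_check(seq):
        return "rejected at syntax stage"
    findings = structural_scan(seq)
    return findings if findings else "passed; escalate to regulatory analysis"

print(triage("ACGTNNNACGT"))        # rejected at syntax stage
print(triage("ACGT" + "CAG" * 40))  # ['CAG repeat expanded to 40 copies']
```

The point of the analogy survives the toy: cheap structural checks filter first, so that the expensive interpretation of regulation is spent only on sequences that pass.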

China to develop third-generation genome sequencing instrument

www.chinaview.cn 2009-12-05 15:10:23

BEIJING, Dec. 5 (Xinhua) -- The Chinese Academy of Sciences and Inspur Group have started a joint project to develop the third-generation genome sequencing instrument, which might slash the cost of genome sequencing by 99 percent.

The instrument is expected to sequence a person's genome in an hour at a cost of about 1,000 U.S. dollars, compared with six weeks and 60,000-100,000 dollars for the current second-generation instruments, said Yu Jun, deputy head of the Beijing Institute of Genomics of the Chinese Academy of Sciences.

The academy and the Inspur Group, a leading supplier of computing platforms and IT application solutions in China, announced the project here on Friday, according to a report by Beijing Daily newspaper on Saturday.

"The home-made third-generation genome sequencing instrument is not only conducive to life science research, but also concerns the genetic safety of China," Yu said.

The sequencing instrument is vital for gene science research and the made-in-China third-generation instrument will help the country get a leading edge in the field, he added.

[No USA expert in Fractal Genome Analysis will refuse any offer that he cannot refuse.

Full R&D and Market analysis of this development is upon request. Pellionisz; HolGenTech_at_gmail.com]

Complete Genomics And GATC Biotech Collaborate On Human Genome Sequencing Projects

December 2, 2009

Konstanz, Germany / b3c newswire / — GATC Biotech [Germany] and Complete Genomics, Inc., today announced the execution of a research collaboration agreement to sequence several human genomes from samples provided by GATC. The companies will collaborate to analyze the data obtained from the research project. Complete Genomics will sequence and assemble the genomes and provide GATC with detected variants including single nucleotide polymorphisms (SNPs) and indels. GATC will then perform additional bioinformatics analysis such as comparison of the variant data of different genomes. In addition, the company will refine the data with the aim of providing researchers and clinicians with relevant genomic details to advance their understanding of the genetic causes of disease. The pilot project has started and the first sequencing data will be evaluated by GATC soon.

"GATC aims to continually offer our customers the most innovative, efficient and integrated sequencing technologies and applications for their research projects. To achieve our 100-Human-Genome-Project goal, evaluating Complete Genomics' human genome sequencing technology is a logical step," explains Peter Pohl, CEO of GATC Biotech.

"We are pleased to be working with GATC Biotech on this project which will further demonstrate the value of our large-scale, high-quality, low-cost human genome sequencing service," said Dr. Clifford Reid, chairman, president, and CEO of Complete Genomics.

[A little more and a little less than a year ago, in the YouTube videos "Google Tech Talk" (at 17:21) and "Churchill Club Panel", it was predicted that Silicon Valley might become not only the fulcrum of "molecular" (nano-)sequencing (the "supply side" of Genomics, taking care of "get info"), but that data centers would also emerge here to bring the "demand side" of Genomics into balance, by taking care of the structuro-functional interpretation of genome regulation, based on sequence and methylation information of the whole DNA. Thus far, this has not happened, and Complete Genomics, which has already started mass-producing affordable full DNA sequences, must cut deals with Seattle and Germany to help out. Part of the challenge is that meanwhile Eric Lander (et al.) demonstrated, in the October 9, 2009 Science magazine cover article, that THE DNA IS FRACTAL. The algorithmic, postmodern hologenomics software and HPC genome-hardware challenge (and opportunity) is both enormous and historic. With these needs met, Genomics may progress along a sustainable path. If not, no matter how "affordable" full DNA sequences might be (with George Church already talking about "the zero-dollar full DNA sequence"), the enormous value of understanding the physiological and pathological function of genome regulation will not be realized, steering Genomics onto an unsustainable supply-demand mismatch path. The call was issued a year ago - and this Genome Informatics specialist will not hesitate to mobilize all efforts humanly possible.

Full R&D and Market analysis of this development is upon request. Pellionisz; HolGenTech_at_gmail.com]

China: Nine nations in one?

[Socio-political-economical map - AJP]

Anyone who’s been trawling through the China-related web this week will surely have stumbled across the ‘Nine Nations of China’ map that surfaced on Atlantic Monthly. Patrick Chovanec, from Tsinghua University, posted his map amidst the inescapable excitement of Obama’s visit to China, reminding the US President that China is "a mosaic of several distinct regions, each with its own resources, dynamics, and historical character."

The regions Chovanec feels China could be divided into:

The Frontier, made up of Inner Mongolia, Ningxia, Gansu, Qinghai, Xinjiang and Tibet, represents the mysterious, desert-filled and mountainous bulk of China’s land, inhabited by only 6% of its population.

South of that lies the Shangri-La region of Yunnan, Guizhou, Guangxi, a so-called paradise on earth consisting of kaleidoscopic forests, diverse ethnicities and, sadly, a front-door for illicit drugs, as it borders Burma’s Golden Triangle.

China's Back Door, meanwhile, holds on to Hong Kong, Macau, Guangdong, and Hainan for its lush jungles and economic successes

... whilst the neatly tucked-away Refuge of Sichuan and Chongqing remains an area with little investment but substantial brain drain.

The Crossroads, covering Anhui, Jiangxi, Hubei and Hunan, remains China’s transport and communications hub, neighbored by

The Straits of Fujian and Taiwan.

Up along the eastern coast is the likely Metropolis of Shanghai, Jiangsu, Zhejiang, followed by... The Yellow Land, or China’s political heart (Beijing, Tianjin, Shandong, Hebei, Henan, Shanxi, Shaanxi),

And finally, the elusive northeastern wilderness of Liaoning, Jilin, Heilongjiang. A.k.a. The Rust Belt.

As blogger Jeremiah Jenne pointed out, the idea is hardly earth shattering; not only due to the wonderful Wikipedia age of enlightenment, but also thanks to the efforts of an anthropologist by the name of Skinner, who produced a similar map in 1977, and whom Chovanec failed to cite. Jenne shows here just how similar the ‘Nine Nations’ and ‘Nine Subregions’ of China are. Danwei’s Jeremy Goldkorn and Shanghai Scrap’s Adam Minter also responded with a gentle reminder that Chovanec could have cited his predecessor. Chovanec responded later to note that the regional descriptions were his own and that he had cited Skinner, but the citations were edited out by The Atlantic under space considerations.

Attribution/citation grappling aside, Chovanec's and Jenne's (and even Skinner's) basic argument is that we still tend to view China as one giant power, irrespective of the obvious diversity within its borders. However, Dan of China Law Blog took a slightly different view:

"My problem I see with this map is that it is exactly that. A map. And as a map, it distinguishes among regions geographically and that is not how I view many aspects of China. Just by way of an example, I see Beijing having commonalities with Shanghai just because they are two powerful and relatively sophisticated big cities."

Which leads us to an interesting question - this One China can definitely be carved up into various divisions in order to understand it better, but what divisions could or should be in the final map?

First Chinese genetic map unveiled

It's a very significant day in science for China: the first genetic map of the Han Chinese has been published by the American Journal of Human Genetics. [See title with hyperlink and abstract below - AJP]. The study was conducted at the Genome Institute of Singapore, and draws from 8,200 DNA samples from ethnically Han Chinese all over China and in Singapore.

Through genetic variations, the map draws a historical picture of the migration of the Han from north to south. By assessing the 0.3% variations in genetic structure, scientists are able to conclude whether someone is ethnically Han, where their ancestral place of origin is, and can even tell what dialect group of Han they belong to, as genetic variation follows changes in dialect.

More importantly, the genetic map will help scientists understand how genes can make people more susceptible to disease, and will help to find medical methods to treat and prevent illness. All things considered, a genome map for China is a great step forward for the country. It's also an interesting and novel way of looking at the "Nine Nations of China", but we admit it's not quite as entertaining a map as the ones Chinese people make for themselves.

Genetic Structure of the Han Chinese Population Revealed by Genome-wide SNP Variation
The American Journal of Human Genetics, 25 November 2009

Jieming Chen (1,12), Houfeng Zheng (3,4,5,12), Jin-Xin Bei (6,7), Liangdan Sun (3,4,5), Wei-hua Jia (6,7), Tao Li (8,9), Furen Zhang (10), Mark Seielstad (1,2,11), Yi-Xin Zeng (6,7), Xuejun Zhang (3,4,5) and Jianjun Liu (1,2,3,5)

1 Human Genetics, Genome Institute of Singapore, Singapore 138672, Singapore

2 Centre for Molecular Epidemiology, (Yong Loo Lin) School of Medicine, the National University of Singapore 117597, Singapore

3 Institute of Dermatology and Department of Dermatology at No.1 Hospital, Anhui Medical University, Hefei, Anhui 230032, P.R. China

4 Department of Dermatology and Venereology, Anhui Medical University, Hefei, Anhui 230032, P.R. China

5 The Key Laboratory of Gene Resource Utilization for Severe Diseases, Ministry of Education and Anhui Province, Hefei, 230032, P.R. China

6 State Key Laboratory of Oncology in Southern China, Guangzhou 510060, P.R. China

7 Department of Experimental Research, Sun Yat-sen University Cancer Center, Guangzhou 510060, P.R. China

8 The Department of Psychiatry & Psychiatric laboratory, State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, P.R. China

9 The Department of Psychological Medicine and Psychiatry, Institute of Psychiatry, King's College London, London SE5 8AF, UK

10 Shandong Provincial Institute of Dermatology and Venereology, Shandong Academy of Medical Science, Jinan, Shandong 250022, P.R. China

11 Department of Epidemiology, Harvard School of Public Health, Boston, MA 02115, USA


Population stratification is a potential problem for genome-wide association studies (GWAS), confounding results and causing spurious associations. Hence, understanding how allele frequencies vary across geographic regions or among subpopulations is an important prelude to analyzing GWAS data. Using over 350,000 genome-wide autosomal SNPs in over 6000 Han Chinese samples from ten provinces of China, our study revealed a one-dimensional “north-south” population structure and a close correlation between geography and the genetic structure of the Han Chinese. The north-south population structure is consistent with the historical migration pattern of the Han Chinese population. Metropolitan cities in China were, however, more diffused “outliers,” probably because of the impact of modern migration of peoples. At a very local scale within the Guangdong province, we observed evidence of population structure among dialect groups, probably on account of endogamy within these dialects. Via simulation, we show that empirical levels of population structure observed across modern China can cause spurious associations in GWAS if not properly handled. In the Han Chinese, geographic matching is a good proxy for genetic matching, particularly in validation and candidate-gene studies in which population stratification cannot be directly accessed and accounted for because of the lack of genome-wide data, with the exception of the metropolitan cities, where geographical location is no longer a good indicator of ancestral origin. Our findings are important for designing GWAS in the Chinese population, an activity that is expected to intensify greatly in the near future.

[Note that Singapore played an important role in this landmark study. Singapore is likely to emerge as the first country that can afford to save money by not administering medications to those for whom that medication is ineffective. Personalized Medicine has just gained a Global Economical Engine.

Full R&D and Market analysis of this development is upon request. Pellionisz; HolGenTech_at_gmail.com]

Singapore achieves breakthrough in study of 3D Whole Genome Mapping [Structurally recursive DNA]
Singapore, Nov 5, 2009:

 [Recursive DNA - AJP]

A team of scientists at A*STAR’s Genome Institute of Singapore (GIS), led by Senior Group Leader and Associate Director of Genomic Technologies Dr Yijun Ruan and Senior Research Scientist Dr Edwin Cheung, has made a major technological breakthrough in the study of gene expression and regulation in the genome’s three-dimensional folding and looping state, through the development of a novel technology called ChIA-PET. Their results were published in the November 2009 issue of the prestigious journal Nature under the title “An Oestrogen Receptor α-bound Human Chromatin Interactome”.

Ever since the human genome was found to be organized in a three-dimensional (3D) manner rather than in a two-dimensional linear [There is no such thing as a two-dimensional linear - AJP] fashion, scientists have been challenged to find an effective method to study the regulation of gene activity which took into account the complexities of its 3D structure. Using ChIA-PET technology, the GIS scientists have successfully met the challenge and confirmed the presence of genome-wide long-range chromatin interactions.

Using the oestrogen receptor-α (ERα) as a model, the scientists investigated how the human genome was organized in response to oestrogen signalling to control the expression of genes in breast cancer cells. They discovered that extensive ERα-bound long-range chromatin interactions in the human genome were involved as a primary mechanism for regulating estrogen-mediated gene expression.

First author of the research paper, Dr Melissa Fullwood, a PhD student when she worked on this study at the GIS, said, “Many studies have found that regions of the genome which are not near genes are very important in controlling disease. In thinking about how this can happen, many scientists hypothesized that chromatin interactions – 3-dimensional loops in DNA – might be what allow these regions to remotely talk to genes.

The subsequent discovery of chromatin interactions between specific genes and specific enhancer sites generated a lot of interest to find chromatin interactions throughout the entire genome. Our study is one of the first to be able to address this ‘Holy Grail' of genomics.”

Dr Fullwood was also one of three winners of the inaugural L’Oréal Singapore for Women in Science National Fellowship awards, presented in August 2009 for outstanding work by female researchers with the potential to contribute to science.

Prof Edison Liu, Executive Director of the GIS, said, “Our institute had been working to develop this technology to answer a fundamental question in cancer. These results show us that higher order DNA interactions on a genome scale can explain some of the contradictions in older studies. This work will pave the way for the development of highly specific anti-hormone treatments in breast cancer.”

Prof Edward Rubin, Director of the Genomics Division at the Lawrence Berkeley National Laboratory, University of California, Berkeley, Director of the U.S. Department of Energy Joint Genome Institute, and member of the GIS Scientific Advisory Board added, “The study represents a true scientific tour de force. It shows on a massive genome wide scale the interactions between a specific set of enhancers and the genes they regulate. The approach and results shown here will certainly be well received by the large community studying gene regulation.”

The ChIA-PET methodology and the ERα-bound human chromatin interaction map represent the starting point of an entirely new field for scientists to study how the human genome is folded in order to communicate the codes in regulating gene expression. ...

An oestrogen-receptor-α-bound human chromatin interactome

Nature 462(7269):58 (2009)

Melissa J Fullwood, Mei Hui Liu, You Fu Pan, Jun Liu, Han Xu, Yusoff Bin Mohamed, Yuriy L Orlov, Stoyan Velkov, Andrea Ho, Poh Huay Mei, Elaine G Y Chew, Phillips Yao Hui Huang, Willem-Jan Welboren, Yuyuan Han, Hong Sain Ooi, Pramila N Ariyaratne, Vinsensius B Vega, Yanquan Luo, Peck Yean Tan, Pei Ye Choy, K D Senali Abayratna Wansa, Bing Zhao, Kar Sian Lim, Shi Chi Leow, Jit Sin Yow, Roy Joseph, Haixia Li, Kartiki V Desai, Jane S Thomsen, Yew Kok Lee, R Krishna Murthy Karuturi, Thoreau Herve, Guillaume Bourque, Hendrik G Stunnenberg, Xiaoan Ruan, Valere Cacheux-Rataboul, Wing-Kin Sung, Edison T Liu, Chia-Lin Wei, Edwin Cheung and Yijun Ruan

Genome Institute of Singapore, Agency for Science, Technology and Research, Singapore 138672.

Genomes are organized into high-level three-dimensional structures, and DNA elements separated by long genomic distances can in principle interact functionally. Many transcription factors bind to regulatory DNA elements distant from gene promoters. Although distal binding sites have been shown to regulate transcription by long-range chromatin interactions at a few loci, chromatin interactions and their impact on transcription regulation have not been investigated in a genome-wide manner. Here we describe the development of a new strategy, chromatin interaction analysis by paired-end tag sequencing (ChIA-PET), for the de novo detection of global chromatin interactions, with which we have comprehensively mapped the chromatin interaction network bound by oestrogen receptor α (ERα) in the human genome. We found that most high-confidence remote ERα-binding sites are anchored at gene promoters through long-range chromatin interactions, suggesting that ERα functions by extensive chromatin looping to bring genes together for coordinated transcriptional regulation. We propose that chromatin interactions constitute a primary mechanism for regulating transcription in mammalian genomes. DOI: 10.1038/nature08497

[How many billions of dollars have we spent on "sequencing"? With many hundreds of millions of dollars in investment, an entire industry is being built to provide a "supply" of affordable DNA sequences. It turns out that the "demand" for sequences will be next to nothing until comparable investment occurs in the interpretation of genome expression regulation (which is recursive both in its function and, as we see here, also in its structure). Presently, even the basics of the structural organization are ill-defined. For instance, the strand of a DNA may loosely be called "a curved line" - but it is certainly not two (or one or three) dimensional. FractoGene has long stated (2002) that the dimension is not even an integer - but fractal.

Full R&D and Market analysis of how this development may make the industry of "Genome Revolution" unsustainable is upon request. Pellionisz; HolGenTech_at_gmail.com]
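Since the note above turns on the claim that the dimension of such a folded structure need not be an integer, a minimal box-counting sketch may help readers unfamiliar with fractal dimension. The "chain" here is a seeded 2D random walk standing in for a folded polymer (an illustrative stand-in only, not DNA data); the slope of log N(boxes) against log(1/box size) estimates the dimension:

```python
import math
import random

def random_walk(n: int, seed: int = 42):
    """A seeded 2D lattice walk, standing in for a folded chain."""
    random.seed(seed)
    x = y = 0
    pts = [(0, 0)]
    for _ in range(n):
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
        pts.append((x, y))
    return pts

def box_count_dimension(pts, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting dimension as the least-squares
    slope of log N(s) versus log(1/s)."""
    xs, ys = [], []
    for s in sizes:
        boxes = {(x // s, y // s) for x, y in pts}  # occupied boxes of side s
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(boxes)))
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

line = [(i, 0) for i in range(1024)]   # a smooth 1-D object
walk = random_walk(20000)              # a crinkled chain
print(f"line: dimension ≈ {box_count_dimension(line):.2f}")  # 1.00
print(f"walk: dimension ≈ {box_count_dimension(walk):.2f}")  # a fractional value
```

A straight line comes out at exactly 1; the crinkled walk comes out at a fractional value above 1, which is the sense in which a folded strand is "not two (or one or three) dimensional".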

Knome Launches First Platform-Agnostic Human Genome Sequencing and Analysis Service for Researchers

- New service for scientists enables the next generation of genetic discovery -

CAMBRIDGE, Mass., Nov. 18 /PRNewswire/ -- Knome, Inc., a recognized pioneer in the personal genomics field, today announced the launch of KnomeDISCOVERY, the first fully integrated human genome sequencing and data processing service for researchers. The new offering meets rapidly emerging demand from biomedical researchers for a one-stop service that bundles affordable, research-tailored access to a broad range of next-generation sequencing platforms with discovery-supportive data management and analysis. KnomeDISCOVERY is expected to catalyze important genetic insights into rare and common human diseases, and to accelerate the development of effective treatments.

In 2007, only three complete human genomes were available for research. In 2010, by contrast, Knome expects to see the sequencing of thousands of genomes as government and private foundations invest hundreds of millions of dollars in grants to leverage the empirical power of individual genome sequencing.

As the cost of sequencing drops, and the tide of genome data rises, data management and analysis are emerging as pressure points that require significant computational infrastructure and expertise. Through KnomeDISCOVERY, individual research groups can leverage Knome's unparalleled sequencing platform access and experience in whole genome and exome analysis.

The new service is ideal for two types of researchers:

Researchers with expertise in medical genomics who want to streamline data management and preliminary analysis in forthcoming mass sequencing projects - Leveraging high-volume access to sequencing platforms, Knome handles the logistical hurdles of rapid-turnaround sequencing, and carries out the important but computationally intensive process of "background" genome analysis, freeing researchers to focus on specific question-driven hypothesis testing that can yield novel discoveries in genetic medicine; and

Clinically trained researchers with extensive expertise in specific diseases, for whom mass sequencing approaches are novel and unfamiliar tools - Knome's expertise in analyzing whole genome data can directly help these researchers pinpoint novel alleles that contribute to a disease of interest. Knome takes a "fine-toothed" approach to genomic data analysis, grounded in a thorough understanding of genome structure and function; protein biochemistry; population/evolutionary genetics; statistical analysis; and basic disease etiology, as refined by close consultation with the researcher. This approach can quickly identify potentially disease-relevant candidate alleles for researchers to consider for follow-up empirical assessment.

"This is a pivotal moment for genetic research, as the pace of novel genotype-phenotype discovery accelerates," said Jorge Conde, CEO of Knome. "Scientists understand the value of whole-genome and exome sequencing, but few have requisite access to the full complement of state-of-the-art sequencing machinery, data management systems and bioinformatics expertise. Having sequenced and analyzed the genomes of more individuals than any company in the world, Knome is now offering the research community a fast and cost-efficient way to move from DNA to discovery."

Knome works with researchers to create solutions tailored to their scientific aims, as well as their budget, timing and volume requirements. Pricing for KnomeDISCOVERY starts at less than $12,000 per sample, depending on desired sequence coverage and degree of custom consultation required, with turn-around in as little as eight weeks.

Knome's bioinformaticians use kGAP, the company's proprietary analysis platform, to deliver a thorough accounting of both novel and previously known sequence variants. Knome's analysis includes annotation of published allele-disease associations, as well as sophisticated prediction of potential functional effects of newly discovered variants, helping guide researchers to regions of the genome that are potentially relevant to a disease of interest.

"We are excited to bring the scientific research community the type of genome sequencing and data analysis services that have previously only been available to a small number of individuals at large institutions," said George Church, Professor of Genetics at Harvard Medical School and co-founder of Knome. "It is our hope that by making these types of services broadly accessible to many more scientists, the process of discovery will be greatly accelerated."

[In June, Boston-based Knome Inc. has teamed up with SeqWright, of Houston, Texas -AJP]

[We are almost there! "Genome Computing" is presently provided as a service - and for researchers, based on traditional (serial) computer platforms. The rapid next steps seem clear:

a) Deploy existing hybrid serial/parallel computers (widely used in the defense, financial, encryption, graphics and other markets) into the emerging "genome computing" market.

b) Proceed as PCs developed their markets: both for individual use (as home computers - in our case "Personal Genome Computers") and in racks and grids (e.g. the huge Linux farms of some of the largest service providers, which are also looking into hybrid solutions).

c) Connect "Genome Computing" with the 20+ "molecular sequencer" vendors (with Complete Genomics already starting the flow of reasonably affordable full human DNA sequences, much as Mercedes-Benz did with the automobile - and Pacific Biosciences commencing a year from now in mass-production mode, perhaps comparable to Ford's easily affordable, assembly-line solution to personal transportation).

d) While also building massive data centers from grids of Genome Computers, connect them with the world's largest medical centers, to make sure that the mass-produced genomes are statistically related to precise data on medical conditions.

e) Provide the business model for "Personal Genome Computer" users to be empowered to actually use their results in their consumer activities.

f) Provide the business model by which "Genome Computer Centers" and "Personal Genome Computers" absorb - but not for free - the emerging "intrinsic algorithms" and other extremely valuable means of understanding genome function (above all, genome regulation).

The region(s) that can collaborate to implement the above will lead the "Genome-Based Economy".

Full R&D and Market analysis of this development is upon request. Pellionisz; HolGenTech_at_gmail.com]

NSF Funds Petascale Algorithms for Genomic Relatedness Research

November 17, 2009
By a GenomeWeb staff reporter

[Fractal Frenzy taking off - self-assembly of a fractal hierarchy from bipeptide nanorods. - AJP]

NEW YORK (GenomeWeb News) – Scientists at three universities will use funding from the American Recovery and Reinvestment Act to develop computational biology tools that researchers will use with next-generation computers to study genomic evolution, according to Georgia Tech.

The $1 million grant from the National Science Foundation's PetaApps program, which funds development of computer technologies for petascale machines that can conduct trillions of calculations per second, will include Georgia Tech, the University of South Carolina, and Pennsylvania State University.

These researchers will develop new algorithms in an open-source software framework called Genome Rearrangements Analysis under Parsimony and other Phylogenetic Algorithms (GRAPPA), which will use parallel, petascale computing to study ancestral genomes.

"GRAPPA is currently the most accurate method for determining genome rearrangement, but it has only been applied to small genomes with simple events because of the limitation of the algorithms and the lack of computational power," explained David Bader, a lead investigator on the grant and executive director of high-performance computing at Georgia Tech's College of Computing.

GRAPPA was recently used to determine the evolutionary relationships of a dozen bellflower genomes one billion times faster than a method that did not use parallel processing or optimization.

The researchers in this program will use it to test their algorithms by analyzing a collection of fruit fly genomes. They expect their algorithms will provide "a relatively simple system to understand the mechanisms that underlie gene order diversity, which can later be extended to more complex mammalian genomes, such as primates," according to Georgia Tech.

They think that the algorithms will make genome rearrangement analysis reliable and efficient.

"Ultimately this information can be used to identify microorganisms, develop better vaccines, and help researchers better understand the dynamics of microbial communities and biochemical pathways," Bader said.

[There are essentially two kinds of algorithms. One is "brute force" (calling for parallel computing and up to a billion-times-faster execution of simple steps; this is how computers play chess). The other is "intrinsic algorithms", such as Z=Z^2+C for the Mandelbrot set - or using the natural neural networks of the human brain for pattern recognition of chess strategies. Since NIH research is dominated by massive data production, NSF is caught in the dilemma of either supporting the "brute force" approaches or investing in research towards "intrinsic algorithms". The answer suggested here is to hedge the bets by investing in BOTH.

Full R&D and Market analysis of this development is upon request. Pellionisz; HolGenTech_at_gmail.com]
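The contrast drawn in the note above can be made concrete with the very example it cites. The recursion Z = Z^2 + C is the "intrinsic algorithm" - one line that encodes an infinitely detailed object - while sweeping a grid of candidate points through it is the "brute force", embarrassingly parallel part. A minimal sketch:

```python
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """Intrinsic algorithm: the single recursion Z = Z^2 + C
    decides membership in the Mandelbrot set."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:      # escaped: c is outside the set
            return False
    return True

def render(width: int = 40, height: int = 20) -> str:
    """Brute force: sweep a grid of points through the recursion
    (each point is independent - trivially parallelizable)."""
    rows = []
    for j in range(height):
        im = 1.2 - 2.4 * j / (height - 1)
        rows.append("".join(
            "*" if in_mandelbrot(complex(-2.0 + 3.0 * i / (width - 1), im)) else " "
            for i in range(width)))
    return "\n".join(rows)

print(render())
print(in_mandelbrot(0j))      # True: the origin never escapes
print(in_mandelbrot(2 + 0j))  # False: escapes quickly
```

The dilemma in the note maps onto this split: petascale machines scale the render loop, while discovering formulas of the in_mandelbrot kind is a research problem that raw compute alone does not settle.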

How a medical revolution may transform Northern Virginia

By Steven Pearlstein
Wednesday, November 18, 2009

[Dietrich Stephan - a loss to Silicon Valley, but a gain to Northern Virginia - AJP]

Even as Washington policymakers struggle to reform the country's health-care system, about 20 miles away in Fairfax County, Dietrich Stephan is hatching a plot to revolutionize it.

The current system, as everyone knows, is the world's costliest machine for healing you when you get sick, largely by using drugs and devices and surgical procedures that have proven themselves effective with most other people with the same ailment. But what if there were a way, based on your genetic makeup, to anticipate whether you're likely to come down with cancer, heart disease or Alzheimer's and prevent it with a fix specially designed for you?

It's called personalized medicine. And while people have been anticipating it for a decade, ever since the human genome was mapped out, it's been slow in coming. Now Stephan -- working with the Inova hospital juggernaut, the scientists at George Mason University, and the researchers and health policy experts at George Washington University -- thinks he can foment this health-care revolution and create a new economic engine for Northern Virginia.

Although it's been in the works for months, the official announcement came Monday when Gov. Tim Kaine, with his Republican successor looking on, announced that Virginia had come up with $25 million to finance operations at the new Ignite Institute. If its supervisors approve, Fairfax County will throw in $150 million in financing guarantees to construct a state-of-the-art, 300,000-square-foot research lab somewhere along the Route 28 corridor. At full strength, the institute will have an annual operating budget of $100 million and 400 employees working closely with a new Center for Personalized Medicine at Inova Fairfax Hospital, part of a $1 billion overhaul that Inova has planned for its flagship campus.

You'd be right, of course, to be a bit skeptical. For decades, we've heard how, thanks to the innovation gushing out of the National Institutes of Health and the Johns Hopkins medical complex, the fertile crescent between Rockville and Baltimore was destined to become the Silicon Valley of biotech. Watching that develop has been about as exciting as watching grass grow.

And just as Maryland has plenty of competition from other biotech clusters, Northern Virginia is the latest entry in the individualized-medicine sweepstakes, along with the Translational Genomics Research Institute (a.k.a. T-Gen) in Phoenix, where Stephan himself held the No. 2 position as director of research. Similar efforts are underway at the Institute for Systems Biology at the University of Washington, the Mayo Clinic and Duke University, along with the Broad Institute in Boston, which boasts not only the cachet and intellectual horsepower of Harvard and MIT but also a $400 million gift from Los Angeles real estate billionaire Eli Broad.

Closer to home, genome pioneer J. Craig Venter has his own genomic research empire, with headquarters in both Rockville and San Diego. And from the commercial sector, competition comes not only from every major drug and biotech company, but also from hot start-ups like Navigenics and 23andMe, which for a fee will tell you the diseases to which your genetic makeup is inclined.

Navigenics, in fact, was a spinoff of T-Gen, and Stephan was one of its co-founders. His experiences at both places, and at the genome labs of the National Institutes of Health, convinced him that the best place to launch this revolution is not a pure research lab, or a medical complex or a commercial start-up, but an entity that straddles the divide between nonprofit inquiry and for-profit commercialization and is driven by the everyday collaboration of researchers and clinicians.

Stephan considered locating his new venture in San Francisco or Boston, each of which had the necessary academic, medical and venture capital infrastructure. But in Northern Virginia he found a place where he would not be overshadowed by more-established players, and where he found public and private partners who, like himself, were ambitious and entrepreneurial and eager to break into the next big thing. If he, and they, have any competitive advantage, it is that the shift to individualized medicine will raise a myriad of questions about privacy, medical ethics and financing that will require difficult decisions from policymakers in Washington. Being close will give Stephan and his partners a front-row seat from which to participate in those conversations.

It's way too early to say whether the Ignite Institute will be able to attract superstar talent or big-time funding, or whether its partnerships with Inova and the universities will bear fruit, or whether through new company start-ups it will be able to generate lots of jobs, wealth and tax revenue for the region. But it says a lot about Virginia and Fairfax County that, even in the midst of economic downturn and budget shortfall, they saw the potential, seized the opportunity and invested in the future. Having also won funding for Metrorail's extension out to Dulles and won the headquarters competitions for Hilton, SAIC, CSC and Volkswagen of America, Northern Virginia is now primed to emerge from the economic doldrums and once again lead the region's growth.

[The larger question is not how "Northern Virginia" will be transformed by the Genome Revolution. As shown in Genome Based Economy, Juan Enriquez had long predicted that regions and countries will rise or fall back depending on whether they can take advantage of a new era - just as "Digital" made a historical difference. In the very interesting "line-up" of US regions, there is a conspicuous gap - Houston. The major ingredients are all there: the World's largest hospital system (Baylor with affiliated institutions), high-tech to develop novel computing solutions (Dell, Texas Instruments, HP) - and an enormous wealth and "Texas-style" aggressive attitude to further catapult the existing capabilities, unmentioned above. In the Churchill Club panel this researcher drew attention to the "laid back" attitude of Silicon Valley - the assumption that the "Genome Based Economy" will simply happen to us here - which might result in disappointment. Losing Dietrich Stephan from Navigenics and Silicon Valley to Northern Virginia should be a "wake-up call". Our loss is their gain.

Full R&D and Market analysis of this development is upon request. Pellionisz; HolGenTech_at_gmail.com]

Finding new ways to grow in Silicon Valley

Nov. 16, 2009

While established Silicon Valley companies weather the recession as best they can, newly emerging tech pioneers are showing the way back to economic growth.

Many of yesterday's moneymakers remain mired in hiring freezes and job losses. Two of the high-tech areas that lost jobs in 2008 — semiconductor and computer equipment manufacturing — have seen the pace of job losses intensify in recent months.

However, companies in fields such as biotechnology, nanotechnology and genomics are helping the tech industry in Silicon Valley find ways to flourish anew.

"The stock performance of major life science firms in the region is amazingly strong considering the overall market is down by 20 to 40 percent in the last 12 months," said Matthew Gardner, president and chief executive officer of BayBio, a nonprofit trade association serving the life-science industry in Northern California.

"We also see an enormous amount of investment coming into the industry, especially for the next generation of companies in genomics and related technologies," he said.

Mr. Gardner said these investments will inevitably translate into job growth.

BayBio estimates that the San Francisco Bay Area has the largest cluster of life-science firms in the world with more than 1,300 companies and 30 added every year, directly employing about 100,000 people. In the whole of California, there are about 1.4 million people employed in this sector directly and indirectly.

...Mike Williams, president and chief executive of West Valley Staffing Group, a high-tech staffing company that conducts its own quarterly surveys on hiring trends in Silicon Valley, said several of his clients are beginning to talk about hiring now.

"In the third quarter of 2009, there has been a moderate increment in recruitment needs, especially in computer and Internet-based technology," Mr. Williams said. "We look forward to the fourth quarter and project continued growth and uptick, especially in Silicon Valley. We get a good, optimistic feel from our clients that there is significant amount of pent-up demand in this sector," he said.

Among the fast-growing biotech firms are Gilead Sciences Inc., a research-based biopharmaceutical company; and Genentech Inc., a company that is considered the founder of the biotech industry.

In genomics and the technology around it, firms moving forward despite the current economic meltdown include Complete Genomics, a firm focusing on complete human genome studies and genomic medicine; Pacific Biosciences Inc., a company involved in commercializing DNA sequencing technology and sequencing of individual genomes as part of routine medical care; and Fluidigm Corp., a firm dealing with molecular diagnostics, personalized medicine and wildlife conservation....

The current upswing in confidence reflects the fact that the emerging fields, including biotechnology, scientific research and design, and Internet-based technologies, are becoming established as sources of potentially lucrative jobs.

"The early part of 2009 was encouraging in terms of job growth in the Silicon Valley high-tech industry. Though there are some job losses in certain sectors since late spring this year, companies in biotechnology, scientific research and design, pharmaceutical research and Internet search tend to remain stable," said Mr. Mann.

...Among the high-tech giants avoiding large-scale job cuts is Intel Corp., the world's largest semiconductor chip maker, responsible for the rapid growth of the PC industry through its microprocessors in the '90s.

... Between 2001 and 2008, there was 32 percent job growth in the private sector pharmaceutical industry, and a 26 percent increase in jobs in biotechnology and other life sciences. The report indicates that despite the high cost of doing business here, the Silicon Valley region continues to be the epicenter of innovation and that the high-tech industry with its broadened scope remains the prime source of job growth in future, as well.

In Silicon Valley, the economic downturn has an upside, too. Robert I. Sutton, professor of management science and engineering at Stanford University and author of four books on management science and innovation, says the valley is rich with early-stage start-ups now because talent and real estate have become cheaper and these companies will be the engine of job creation, as in the past.

[Silicon Valley has an enormous potential for the "Genome Revolution" and "Genome Based Economy". Maybe the "wake-up calls" from other regions (see article above) will trigger the needed coalescing of the two parts of Genome Informatics.

Full R&D and Market analysis of this development is upon request. Pellionisz; HolGenTech_at_gmail.com]

Why Can't Chimps Speak? Key Differences In How Human And Chimp Versions Of FOXP2 Gene Work

["Net Talk" - A network of genes governs the development of a Neural Net that produces speech - AJP]

ScienceDaily (Nov. 12, 2009) - If humans are genetically related to chimps, why did our brains develop the innate ability for language and speech while theirs did not?

Scientists suspect that part of the answer to the mystery lies in a gene called FOXP2. When mutated, FOXP2 can disrupt speech and language in humans. Now, a UCLA/Emory study reveals major differences between how the human and chimp versions of FOXP2 work, perhaps explaining why language is unique to humans.

Published Nov. 11 in the online edition of the journal Nature, the findings provide insight into the evolution of the human brain and may point to possible drug targets for human disorders characterized by speech disruption, such as autism and schizophrenia.

"Earlier research suggests that the amino-acid composition of human FOXP2 changed rapidly around the same time that language emerged in modern humans," said Dr. Daniel Geschwind, Gordon and Virginia MacDonald Distinguished Chair in Human Genetics at the David Geffen School of Medicine at UCLA. "Ours is the first study to examine the effect of these amino-acid substitutions in FOXP2 in human cells.

"We showed that the human and chimp versions of FOXP2 not only look different but function differently too," said Geschwind, who is currently a visiting professor at the Institute of Psychiatry at King's College London. "Our findings may shed light on why human brains are born with the circuitry for speech and language and chimp brains are not."

FOXP2 switches other genes on and off. Geschwind's lab scoured the genome to determine which genes are targeted by human FOXP2. The team used a combination of human cells, human tissue and post-mortem brain tissue from chimps that died of natural causes.

The chimp brain dissections were performed in the laboratory of coauthor Todd Preuss, associate research professor of neuroscience at Emory University's Yerkes National Primate Research Center.

The scientists focused on gene expression -- the process by which a gene's DNA sequence is converted into cellular proteins.

To their surprise, the researchers discovered that the human and chimp forms of FOXP2 produce different effects on gene targets in the human cell lines.

"We found that a significant number of the newly identified targets are expressed differently in human and chimpanzee brains," Geschwind said. "This suggests that FOXP2 drives these genes to behave differently in the two species."

The research demonstrates that mutations believed to be important to FOXP2's evolution in humans change how the gene functions, resulting in different gene targets being switched on or off in human and chimp brains.

"Genetic changes between the human and chimp species hold the clues for how our brains developed their capacity for language," said first author Genevieve Konopka, a postdoctoral fellow in neurology at the David Geffen School of Medicine at UCLA. "By pinpointing the genes influenced by FOXP2, we have identified a new set of tools for studying how human speech could be regulated at the molecular level."

The discovery will provide insight into the evolution of humans' ability to learn through the use of higher cognitive skills, such as perception, intuition and reasoning.

"This study demonstrates how critical chimps and macaques are for studying humans," noted Preuss. "They open a window into understanding how we evolved into who we are today."

Because speech problems are common to both autism and schizophrenia, the new molecular pathways will also shed light on how these disorders disturb the brain's ability to process language.

The National Institute of Mental Health, the A.P. Giannini Foundation and the National Alliance for Research on Schizophrenia and Depression funded the study.

[It is very clear that the dynamics of genomic networks and of the resulting actual neural nets that generate "NetTalk" (one of the earliest applications, by Sejnowski and Rosenberg, of Werbos' "backpropagation" neural net algorithm) are profoundly similar. "Speech networks" and "cerebellar networks" (both as genomic networks and as the resulting actual neural networks) are perhaps the most suitable platforms to learn how networks of genes, obviously differently regulated, result in neural networks that produce speech - or, in the case of the cerebellar neural nets, spacetime coordination. The Principle of Recursive Genome Function will therefore bring the "Neural Net field" into Genome Informatics, not only because the NN algorithms can be directly used, but also because specific neural nets can be targeted. Cerebellar neural networks of the chimp are virtually identical to those of Homo sapiens (in fact, the spacetime coordination of monkeys for certain tasks can even be superior), while their "NetTalk" capability is different. (Not entirely different, though, since the gorilla Koko was able to convey and comprehend "sign language" - but her actual neural networks did not enable her to reach a similar level of vocalization, or to carry as high a level of abstraction as ours.)
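For readers unfamiliar with it, the backpropagation algorithm mentioned above can be illustrated minimally (a toy XOR network in plain Python, not NetTalk itself; the architecture, learning rate, and all other hyperparameters are arbitrary illustrative choices):

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_xor(epochs=5000, lr=0.5, hidden=3, seed=0):
    """Train a one-hidden-layer net (2 inputs, `hidden` units, 1 output)
    on XOR with plain backpropagation; returns (initial_loss, final_loss,
    predict). All hyperparameters are arbitrary illustrative choices."""
    rng = random.Random(seed)
    # each weight row carries an extra entry for the bias term
    w1 = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(hidden)]
    w2 = [rng.uniform(-1, 1) for _ in range(hidden + 1)]
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

    def forward(x):
        h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w1]
        y = sigmoid(sum(w2[i] * h[i] for i in range(hidden)) + w2[-1])
        return h, y

    def loss():
        return sum((forward(x)[1] - t) ** 2 for x, t in data)

    initial = loss()
    for _ in range(epochs):
        for x, t in data:
            h, y = forward(x)
            dy = (y - t) * y * (1 - y)  # output delta (squared error, sigmoid)
            dh = [dy * w2[i] * h[i] * (1 - h[i]) for i in range(hidden)]
            for i in range(hidden):     # gradient step on output weights
                w2[i] -= lr * dy * h[i]
            w2[-1] -= lr * dy
            for i in range(hidden):     # gradient step on hidden weights
                w1[i][0] -= lr * dh[i] * x[0]
                w1[i][1] -= lr * dh[i] * x[1]
                w1[i][2] -= lr * dh[i]
    return initial, loss(), lambda x: forward(x)[1]
```

The errors are propagated backwards from the output delta through the hidden layer, which is the essence of the algorithm; the same machinery scales to the much larger text-to-phoneme network of NetTalk.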

Full R&D and Market analysis of this "game changer" is upon request. Pellionisz; HolGenTech_at_gmail.com]

Back-up by Half a Century - Genome Regulation (1961) becomes parallel (2008)

Genomeweb - This week in Genome Biology
November 04, 2009

In Genome Biology this week, scientists at the Shanghai Institutes for Biological Sciences have studied how after gene duplication, [exonic] splicing enhancers and silencers can affect the generation of new splicing isoforms and therefore, new gene functions. Using a computational approach, they found that these enhancers and silencers diverge especially fast shortly after gene duplication and that this divergence "results in exon splicing state transitions, and that the proportion of paralogous exon pairs with different splicing states also increases over time, consistent with previous predictions." [The paper by Zhenguo Zhang (zhangzg_at_sibs.ac.cn), Li Zhou (zhouli_at_sibs.ac.cn), Ping Wang (pwang01_at_sibs.ac.cn), Yang Liu (yliu05_at_sibs.ac.cn), Xianfeng Chen (xfchen_at_sibs.ac.cn), Landian Hu (ldhu_at_srcb.ac.cn), Xiangyin Kong (xykong_at_sibs.ac.cn) concludes that the findings suggest that splicing requirement but not protein sequence mostly determines the changes of Exonic Splicing Enhancers and Exonic Splicing Suppressors.- AJP]

In other work, researchers at the RIKEN Yokohama Institute show that knocking down a swath of transcription factors in differentiating human THP-1 cells proves that they're interdependent. Using a matrix RNAi system to knock down the 78 transcription factor genes in monocytic THP-1 cells and then using qPCR to monitor gene expression changes, they identified 876 cases "where knockdown of one transcription factor significantly affected the expression of another." Using expression profiling data from the FANTOM4 study, they could classify these genes into three groups: pro-differentiative (229), anti-differentiative (76), or neither (571). [The paper by Yasuhiro Tomaru (tomaru_at_gsc.riken.jp), Christophe Simon (simon_at_gsc.riken.jp), Alistair RR Forrest (forrest_at_gsc.riken.jp), Hisashi Miura (hisabou_at_gsc.riken.jp), Atsutaka Kubosaki (kubosaki_at_gsc.riken.jp), Yoshihide Hayashizaki (yosihide_at_gsc.riken.jp), Masanori Suzuki (msuzuki_at_gsc.riken.jp) "identified 876 significant edges from 7488 possible combinations in the 78 x 96 matrix enabling us to draw a significant perturbation network. Out of these significant edges, 654 were activating edges and 222 were repressing ones". -AJP]
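The matrix-RNAi bookkeeping described above can be sketched as follows (a toy classifier with an invented log2 fold-change cutoff standing in for the paper's statistical test; the gene names and expression values in the usage example are hypothetical):

```python
from math import log2

def perturbation_edges(matrix, threshold=1.0):
    """Classify TF -> target edges from a matrix RNAi experiment.

    `matrix` maps each knocked-down transcription factor to a dict of
    {target gene: expression fold change (knockdown / control)}.
    A significant drop implies an activating edge (the TF was needed for
    the target's expression); a significant rise implies a repressing
    edge. `threshold` is a toy cutoff in log2 units, not the statistical
    significance test used in the paper.
    """
    activating, repressing = [], []
    for tf, targets in matrix.items():
        for target, fold in targets.items():
            lfc = log2(fold)
            if lfc <= -threshold:
                activating.append((tf, target))
            elif lfc >= threshold:
                repressing.append((tf, target))
    return activating, repressing

# Hypothetical usage: knocking down MYB quarters KIT expression
# (activating edge) and quadruples SPI1 expression (repressing edge).
act, rep = perturbation_edges({"MYB": {"KIT": 0.25, "SPI1": 4.0, "CEBPA": 1.1}})
```

Applied over all 78 x 96 knockdown/readout combinations, this kind of thresholding is what reduces 7488 candidate pairs to the 876 significant edges of the perturbation network.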

Case Western Reserve University's Thomas LaFramboise is senior author on work that reports a "highly sensitive and configurable method" for finding rare CNVs in SNP array data. In the paper, they applied their method to hundreds of samples and were able not only to detect known CNVs, but also previously unreported ones. [The paper states (with proper quotations): "studies have been published associating copy number variation in the genome with a variety of common diseases. Recent examples include Alzheimer disease, Crohn's disease, autism, and schizophrenia". Noteworthy that the authors used the probabilistic "Hidden Markov Model" algorithm - AJP]
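As a hedged sketch of how a Hidden Markov Model segments copy-number states along a chromosome (a toy three-state model with invented transition and emission probabilities over discretized probe intensities - not the configurable method of the LaFramboise paper):

```python
import math

# Toy HMM: hidden copy-number states, observed (discretized) probe intensities.
# All probabilities below are illustrative, not taken from the paper.
STATES = ["deletion", "normal", "duplication"]
START = {"deletion": 0.05, "normal": 0.90, "duplication": 0.05}
TRANS = {
    "deletion":    {"deletion": 0.90, "normal": 0.09, "duplication": 0.01},
    "normal":      {"deletion": 0.05, "normal": 0.90, "duplication": 0.05},
    "duplication": {"deletion": 0.01, "normal": 0.09, "duplication": 0.90},
}
EMIT = {
    "deletion":    {"low": 0.85, "mid": 0.10, "high": 0.05},
    "normal":      {"low": 0.05, "mid": 0.90, "high": 0.05},
    "duplication": {"low": 0.05, "mid": 0.10, "high": 0.85},
}

def viterbi(observations):
    """Most likely copy-number state path for a list of probe categories."""
    v = [{s: math.log(START[s]) + math.log(EMIT[s][observations[0]])
          for s in STATES}]
    back = []
    for obs in observations[1:]:
        scores, ptr = {}, {}
        for s in STATES:
            prev, score = max(
                ((p, v[-1][p] + math.log(TRANS[p][s])) for p in STATES),
                key=lambda pair: pair[1])
            scores[s] = score + math.log(EMIT[s][obs])
            ptr[s] = prev
        v.append(scores)
        back.append(ptr)
    state = max(v[-1], key=v[-1].get)  # best final state, then backtrack
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return list(reversed(path))
```

A run of "low" probes is then called as a deletion segment, e.g. `viterbi(["mid", "mid", "low", "low", "low", "mid", "mid"])` recovers three consecutive "deletion" states flanked by "normal".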

In collaborative work between scientists at the University of California, San Diego, and Virginia Commonwealth University, scientists led by first author Tom Conrad found that when growing E. coli is subjected to lactate minimal media, the bacteria shows a range of genetic adaptations. Whole genome resequencing of 11 adapted strains found that in 7 of these there was an 82 base-pair deletion in the rph-pyrE operon. This mutation, they write in the abstract, "conferred ~15% increase to the growth rate when experimentally introduced to the wild-type background and resulted in a ~30% increase to growth rate when introduced to a background already harbouring two adaptive mutations." [The paper effectively elevated the "Operon regulation" to parallel systems; "genome sequencing of 11 endpoints of Escherichia coli that underwent 60-day laboratory adaptive evolution under growth rate selection pressure in lactate minimal media. Two to eight mutations were identified per endpoint. Generally, each endpoint acquired mutations to different genes. The most notable exception was an 82 base-pair deletion in the rph-pyrE operon that appeared in 7 of the 11 adapted strains." - AJP]


[Belabored in "The Principle of Recursive Genome Function", Genome Regulation by the "Operon" (Jacob and Monod, 1961) was well on its way when the Nobel Prize to Watson, Crick and Wilkins (1962) lent dominance to Crick's flabbergasting "Central Dogma", which excluded information-flow from proteins to DNA and barely prevailed through Crick's 1970 re-assertion of his misconception (fortified by Ohno's false axiom that even if there were such a feedback, it would only find "Junk DNA", 1972). Thus, genomics increasingly became a "gene hunt" (even for genes that, as we know now, will never be found, since the 140,000 genes just don't exist...) - with genome regulation theory and research slipping into a remote back seat.

In last week's Cold Spring Harbor meeting on Genome Informatics, the scare of a month ago (at the 2nd Personal Genomes meeting, when the "data avalanche" of ~50 personal genomes hit hard) escalated into "a sense of urgency about the need for workable solutions to keep data analysis moving for large-scale projects". Michele Clamp, senior computational biologist at the Broad Institute and a conference co-organizer, said: "informatics is the bottleneck".

Indeed, with The Principle of Recursive Genome Function, and now the fractality of DNA on the cover of Science by the Science Adviser to the President, not only fractal iterative recursion but all "neural net algorithms" (by definition, parallel) are "in".

Full R&D and Market analysis of this "game changer" is upon request. Pellionisz; HolGenTech_at_gmail.com]

Getting Results Back
November 05, 2009

As part of Genomics Law Report's series, "What ELSI is New?", Daniel MacArthur considers the issue of whether researchers should return medically relevant, or even just interesting, results to study participants. He says that "in the absence of convincing evidence that disclosure of results causes harm, I would argue that the default position should be that research participants have complete access to their own genetic data if they request it." MacArthur says this is not only an ethical imperative but a move that could improve study recruitment and retention rates by providing a benefit to the participant.

Similarly, 23andMe's Anne Wojcicki says that she is "disappointed" that Kaiser Permanente will not be returning data to the 100,000 participants in its Research Program on Genes, Environment and Health, which will study genetic and environmental factors affecting common diseases. "Kaiser should afford the participants the respect they deserve by allowing them to decide for themselves whether they want to see their own genome," she writes at the Spittoon, expanding on her remarks at the TEDMED meeting.

As the Genomics Law Report points out, Kaiser Permanente's Cathy Schaefer responds at the Robert Wood Johnson Foundation Pioneering Ideas blog that all the participants know when they sign up for the study that their results won't be given back to them. "We also inform participants that if we discover something in their data or samples that may be important to their health, we will contact them to learn if they want to have the information," Schaefer adds. She says they aren't returning full results since the role of some variants in disease isn't fully known and that some results aren't actionable.

[There are two ways to resolve this obvious infringement of the "Freedom of Information Act". This researcher would not belabor the low road: litigation over the fundamental right of individuals to the information derived from their utmost private property, their genomes. (This is not merely a matter of the "respect" research participants deserve, but of their right.)

Rather, in full agreement with 23andMe, the "high road" of business aspects is emphasized here: by not returning full results, Kaiser actually MAKES the results "not actionable" (since one cannot act on information that is not provided).

DTC companies have vital business reasons to return results (downloadable full raw SNP files) - though presently the percentage of downloads is very small. In the opinion of this researcher, the DTC business model will to a large extent depend upon the "actionability" of returned electronic files, for automation of "preventive health care by customers". An already filed USPTO submission (patent pending) to utilize (even partial) personal profiles in a closed business model was fleshed out in a Google Tech Talk YouTube video a year ago - viewed by ~6,500.

Full R&D and Market analysis of this "game changer" is upon request. Pellionisz; HolGenTech_at_gmail.com]

Complete Genomics cracks the door open to JunkDNA analysis in mass-production

Institute for Systems Biology to Work With Complete Genomics to Conduct Large-Scale Huntington's Disease Study

Reuters, Mon Nov 2, 2009

Project to Sequence an Unprecedented 100 Human Genomes in Just Six Months

SEATTLE & MOUNTAIN VIEW, Calif.--(Business Wire)--

[Count a normal range of CAG repeats in exon 1 of the HD gene - AJP]

The Institute for Systems Biology (ISB) and Complete Genomics Inc. announced today that they are embarking on a large-scale human genome sequencing study of Huntington's disease (HD). ISB has engaged Complete Genomics to sequence 100 genomes, the majority of which will be used to investigate this disease, with samples from affected individuals, family members, and matched controls to study modifiers of disease presentation and progression.

This will be the largest complete human genome disease association study conducted to date, and will be the first 100-genome study produced by Complete Genomics' newly expanded sequencing facility. The comparison of healthy and diseased complete human genome sequences will enable genomewide association studies with a focus on rare single nucleotide polymorphisms (SNPs), and insertions and deletions that are incompletely accessible with current genomewide SNP chip technologies. These will include rare variants in protein coding regions of the genome (the "exome") as well as in regulatory regions.

"It is when we start to look at genomics research on this scale that our sequencing technology really comes into its own and we have the potential to make truly revolutionary discoveries," said Dr. Clifford Reid, chairman, president and CEO of Complete Genomics. "I am delighted that we have the opportunity to partner with ISB in this effort to discover the genetic variants responsible for modulating the presentation and progress of Huntington's disease."

ISB President Dr. Leroy Hood said, "We were pleased with the quality of the raw sequencing data and variations reports that Complete Genomics generated for our four-genome pilot project earlier this year. Its sequencing technology has the requisite accuracy, consistency and low price point to enable us to begin conducting this large-scale genomic study in this important patient population."

Huntington's disease is a devastating, hereditary, degenerative brain disorder for which there is, at present, no effective treatment or cure, according to the Huntington's Disease Society of America. The Society adds that HD affects one out of every 10,000 Americans, slowly diminishing the affected individual's ability to walk, think, talk and reason.

For this study, ISB will supply the purified DNA samples and Complete Genomics will sequence and identify variations for each genome. ISB will then do the genetic analysis at the sequence level.

[Huntington's disease is caused by a well-known "CAG" repeat in exon 1 of the huntingtin gene, when in a precisely known location the number of "CAG"s is (sometimes much) more than 40. (Below 28 repetitions the available and inexpensive genetic test declares the customer "normal", that is, free of the disease.) Why, therefore, does an "affordable mass production of full human DNA sequencing" focus on a disease for which only the "exome" (the protein-coding exons, roughly 1.4% of the human genome) would seem to suffice? (Especially since the rather large number of "nucleotide run" diseases are most often caused by a "run" in the intronic, non-coding regions; see e.g. Friedreich's ataxia, a spinocerebellar ataxia.) A most potent recent driver is the possibility that "small interfering RNAs targeting heterozygous single-nucleotide polymorphisms (SNPs) is a promising therapy for human trinucleotide repeat diseases such as Huntington's disease". As shown by a survey, most (if not all...) such nucleotide-run diseases appear to be genome-regulation diseases. Complete Genomics shoots for the eminently deliverable "repeat count" (computationally one of the simplest tasks), deferring analysis based on an algorithmic understanding of genome regulation for later. It is commonly acknowledged not only that software wasn't developed for the analysis of short repetitive sequences, but that "Conventionally they are treated as 'junk' that accumulated during evolution in higher organisms. Many bioinformatics tools are installed by default to filter and remove repeats and low complexity sequences before performing any analyses on the rest of sequences." To gear up with appropriate tools, first "the scientific community will have to re-think long held beliefs" (see The Principle of Recursive Genome Function, resulting in the now recognized fractal properties of the DNA) and then, by means of an appropriate genome computing architecture, run the novel algorithms with software that is suitably quick.
If the "interpretation" of "affordable full genome sequences" lags (as certainly seems to be the case), the sustainability of the entire sequencing industry might suffer.
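The "repeat count" called computationally simple above can indeed be sketched in a few lines (an illustrative toy using the 28/40 repeat thresholds cited in this commentary; the function names are my own, and a real pipeline would also have to handle sequencing errors and the CAA interruptions found in actual HTT alleles):

```python
def longest_cag_run(seq):
    """Return the longest uninterrupted run of 'CAG' triplets in seq."""
    seq = seq.upper()
    best = 0
    i = 0
    while i < len(seq):
        run, j = 0, i
        while seq[j:j + 3] == "CAG":   # extend the run triplet by triplet
            run += 1
            j += 3
        best = max(best, run)
        i = i + 1 if run == 0 else j   # resume scanning past the run
    return best

def classify_htt_allele(cag_count):
    """Toy classification using the repeat thresholds cited above."""
    if cag_count < 28:
        return "normal"
    if cag_count < 40:
        return "intermediate / reduced penetrance"
    return "pathogenic (Huntington's disease range)"
```

For example, `longest_cag_run("AA" + "CAG" * 5 + "TT" + "CAG" * 2)` returns 5, counting only the longest uninterrupted run. The contrast between this trivial computation and algorithmic genome regulation analysis is precisely the point of the commentary.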

Full R&D and Market analysis of this "game changer" is upon request. Pellionisz; HolGenTech_at_gmail.com]

TedMed Explores Future of Health Care

Event Attracts Martha Stewart, Goldie Hawn, Local Innovators Seeking Answers to Global Challenges

Posted date: 11/2/2009
San Diego Business Journal Staff

Amid a glittering, eclectic gathering of technologists, futurists, media personalities, leading health care practitioners, business leaders and scientists, all mixed with a sprinkling of Hollywood stardust, the four-day TedMed conference was held last week at the Hotel del Coronado.

This assemblage of thinkers, inventors, communicators and implementers came together with the intention of harnessing the diverse knowledge of the event’s 50 speakers in search of new solutions to global health care challenges.

Organized by Richard Saul Wurman, who founded the TED conferences in 1984, the conference set not only a high standard for its speakers, but an equally high tariff for attendees, who paid $4,000 apiece to attend. Total attendance, including the speakers, was limited to 450 people.

Speakers included San Diego-based scientific pioneer J. Craig Venter, founder and president of Synthetic Genomics Inc.; Dean Kamen, founder and president of DEKA Research and Development Corp., who holds more than 440 health-care-related patents; Dr. Sanjay Gupta, neurosurgeon and chief medical correspondent for CNN; Martha Stewart, who unveiled her new outpatient facility for geriatric care at Mount Sinai Medical Center in New York; Dr. Scott Parazynski, a veteran astronaut of seven spacewalks; Helena Foulkes, executive vice president and chief marketing officer of CVS Caremark, the largest pharmacy health care provider in the U.S.; and Academy Award-winning actress Goldie Hawn, who represented The Hawn Foundation, which she established in 2003 to help children reach their highest potential.

Transforming Delivery Of Health Care

Innovators from San Diego County were well represented with Qualcomm Inc. and Life Technologies Corp. showcasing inventions designed to transform the future delivery of health care, while cutting costs and reducing hospital stays.

With 4 billion cell phones, or three times the number of mobile devices than land lines in the world today, Qualcomm CEO Paul Jacobs said the timing couldn’t be better for introducing patients to new digital health tools.

“In the developing world, most people will only have a cell phone as their connection into the global telecommunications network,” he said.

While Jacobs said he doesn’t envision developing countries as a market for wireless, sensor-driven Band-Aids designed to monitor health, he said they still may benefit from digital medical cell phone technologies.

Dr. Eric Topol, a cardiologist and chief medical officer of the newly formed West Wireless Health Institute in San Diego, said the devices offer a “way to innovate out of the health care crisis today.”

With a wireless device attached to his chest, Topol demonstrated to the audience that he could monitor his blood pressure, pulse, temperature and other states of health using a discreet device placed under his shirt. [In a store where you make consumer choices "Ask NOT what the genome can do for you - ask what you can do for your genome" - by preferring items that fit or fix your genome - AJP]

With an estimated $37 billion a year spent on heart failure in the United States, Topol said constant monitoring could save on costs and readmittance rates, which run as high as 27 percent in heart failure patients.

[PDAs are turned into a PGA (Personal Genome Assistant) by the HolGenTech computing architecture for the Genome Based Economy. This is a "Double Disruption": preventing, rather than treating, diseases, and having the doctor "see you" through your PGA from anywhere in the world, instead of you having to travel to see the "doctor" (who may not be limited to the knowledge of a single person, but could be an electronic repository of the world's state of the art, providing instantaneous answers to your needs).

Full R&D and Market analysis of this "game changer" is upon request. Pellionisz; HolGenTech_at_gmail.com]

What do they know that we don't know? - Celebs for Prevention

[Watson and Venter]

[Collins before and after]

[Sergey Brin and 23andMe Founder Anne Wojcicki]

Not all of the above are Nobel winners, though Dr. Watson received the prize for work he did over half a century ago. Two are definite candidates, and a third might even get a Nobel Peace award.

The fact is, however, that all of them practiced demonstrable Genome-based Prevention, in some cases proven successful.

Though the first full human sequencing drew on DNA from five donors, for one of them (Craig Venter) the analysis compelled him to start taking "statins" (to control cholesterol), a regimen he did not stop when his own DNA and, separately, Jim Watson's were later fully sequenced and interpreted.

At the recent "Personal Genomes" 2nd Meeting in Cold Spring Harbor, a public question was addressed to Watson: "Did you specifically benefit from your full DNA analysis in your preventive health maintenance program?" He went public with at least two specific lessons that he closely follows. His full DNA sequencing revealed what he did not know before: Dr. Watson is partially lactose-intolerant (the gene needed to produce the enzyme that metabolizes lactose is intact on one chromosome but defective on the other chromosome of the pair). He had been aware of annoying cramps and belly aches throughout his life, but since he switched to a preventive health maintenance program, consuming lactose-free soy milk and other substitutes, his cramps and belly aches are gone. (He could have been spared 80 years of belly aches...) The other example he put on public record was that he had been taking a beta-blocker (to control his blood pressure), but his DNA information revealed that this drug causes him undue sleepiness. Doctors adjusted the medication, and the bothersome side effect is gone.

Dr. Francis Collins (M.D., Ph.D., head of NIH) has not had his full DNA sequenced (yet, to our knowledge). However, at the Consumer Genetics conference in Boston (May, 2009) he gave a speech endorsing DTC genomic testing, backed by his own example: using a pseudonym, he provided saliva specimens to the three leading DTC companies (deCODEme, 23andMe, Navigenics), and made a very useful and largely supportive comparative analysis of their services. A couple of days ago, in a Personalized Medicine colloquium, he disclosed the information and the above pair of his portraits with the note: "Collins discovered that he carries two copies of the most common risk factor of type II diabetes. Collins, whose laboratory investigates the underlying genetic basis of adult-onset diabetes, said he was "surprised" by these findings since his family has no history of the disease. Upon learning the test results, Collins got off his Harley-Davidson and instigated a regular exercise regime. The svelter NIH director said he has now lost 20 pounds." With his vast knowledge and discipline, and given that late-onset type 2 diabetes can be suppressed by diet, exercise and tight control of body fat, it is virtually certain that he will never suffer from diabetes.
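How carrying two copies of a risk allele shifts disease risk can be sketched with the multiplicative (log-additive) model commonly used to report GWAS findings. The per-allele odds ratio (1.4) and the 10% baseline risk below are illustrative assumptions, not figures from the article:

```python
# Hedged sketch: effect of 0, 1 or 2 risk alleles on absolute risk under
# a multiplicative per-allele odds-ratio model. All numbers are invented
# for illustration, not taken from Collins's actual test results.

def risk_with_genotype(baseline_risk, per_allele_or, n_risk_alleles):
    """Convert baseline risk to odds, scale by OR^n, convert back to risk."""
    odds = baseline_risk / (1.0 - baseline_risk)
    odds *= per_allele_or ** n_risk_alleles
    return odds / (1.0 + odds)

for n in (0, 1, 2):
    print(f"{n} risk allele(s): {risk_with_genotype(0.10, 1.4, n):.1%}")
```

The point of the model is that two copies compound the odds, which is why a homozygous result like Collins's is notable even without any family history.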

The third "celeb" case is Google founder Sergey Brin (and his wife Anne Wojcicki, founder of 23andMe). As Sergey disclosed in his blog over a year ago, and as Anne Wojcicki has now said publicly at TED: "Wojcicki said her husband recently found out that he was genetically predisposed to becoming a Parkinson's patient thanks to a 23andMe analysis. Wojcicki said that just knowing about the increased likelihood has helped him to stay motivated to stay in shape, eat right and take better care of his health overall."

[If 23andMe accomplished nothing else (beyond being named "The Invention of the Year"), Anne convincing Sergey to take the test with the company they created would alone be worth every penny. Sergey wrote in his blog that although family history showed Parkinson's, he had sided with the school of thought that Parkinson's is mostly "sporadic", caused by environmental factors, and that hereditary factors could be safely neglected. Fortunately, Anne thought it was worth the (current list price of) $399 to check for markers of the currently 116 conditions. The fact that Sergey "eats right" cannot go wrong anyway - though high-tech celebs may wonder how to navigate, in a more automated way, the vast sea of nutritional, nutraceutical, cosmetic (etc.) consumer goods.

One does not even have to be a celeb - just a clever Mom - to be first in 21st Century-style empowerment of kids (the ultimate consumers...): "eat your veggies - preferably organic - since a penny of Prevention is worth a pound of Cure".

Full R&D and Market analysis of this "game changer" is upon request. Pellionisz; HolGenTech_at_gmail.com]

23andMe has 30,000 “active” genomes, launching “Relative Finder” soon

Wednesday - October 28th, 2009 - 06:39pm EST by Brian Dolan | 23andme | Illumina | iPhone | medical apps | personal genetics | preventive medicine | TEDMED |

At the TEDMED event here in San Diego this week, personal genomics company 23andMe co-founder Anne Wojcicki announced that the company now had more than 30,000 “active” genomes in its database and that it would soon launch a “Relative Finder” service for its users.

As part of the new service, users can explore connections to other users of the site to determine how related they are to each other. 23andMe is offering free genotyping for TEDMED attendees, so Wojcicki joked that this time next year we can all find out how related we are to each other.
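The "Relative Finder" idea rests on a simple genetic fact: each additional meiosis separating two relatives roughly halves the expected fraction of autosomal DNA they share. A minimal sketch of that logic follows (this is not 23andMe's actual algorithm; the sharing table is just the standard textbook expectation):

```python
# Hedged sketch of relatedness estimation: map an observed fraction of
# shared autosomal DNA onto the nearest textbook expectation.

EXPECTED_SHARING = {
    "parent/child":   0.50,
    "full siblings":  0.50,
    "half siblings":  0.25,
    "first cousins":  0.125,
    "second cousins": 0.03125,
}

def closest_relationship(observed_fraction):
    """Pick the relationship whose expected sharing is nearest the observation."""
    return min(EXPECTED_SHARING,
               key=lambda rel: abs(EXPECTED_SHARING[rel] - observed_fraction))

print(closest_relationship(0.13))   # near the first-cousin expectation
```

Real services refine this with the number and length of identical-by-descent segments, but the halving-per-meiosis expectation is the core of the calculation.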

Wojcicki also announced that 23andMe now has more than 30,000 “active” genomes in its database — a figure the company had played close to the chest since its founding in 2006. About 70 percent of the site’s users have filled out at least one survey that 23andMe uses to enrich its own research. “In less than two years, we have created one of the largest genomic databases in the world,” Wojcicki said.

Another impressive metric that 23andMe mentioned at TEDMED: in a partnership with the Michael J. Fox Foundation dating from May of last year, 23andMe enrolled over 3,000 Parkinson’s patients within one month. In December, 23andMe plans to announce some of the results from analyzing those Parkinson’s patients’ genomes. Wojcicki noted that this timeframe makes it one of the quickest turnarounds for a clinical study.

Wojcicki said her husband recently found out that he was genetically predisposed to becoming a Parkinson’s patient thanks to a 23andMe analysis. Wojcicki said that just knowing about the increased likelihood has helped him to stay motivated to stay in shape, eat right and take better care of his health overall.

[The world's largest genomic databank is impressive in its own right. Finding out who is a relative and who is a stranger will be automated. Should "how to eat right" with certain hereditary predilections not also be automated? Less than a year ago this was laid out in the YouTube "Google Tech Talk" at 34:43 ...

Full R&D and Market analysis of this "game changer" is upon request. Pellionisz; HolGenTech_at_gmail.com]

ParaHaplo: A program package for haplotype-based whole-genome association study using parallel computing

Kazuharu Misawa and Naoyuki Kamatani (RIKEN Center for Genomic Medicine, Tokyo, Japan)

Since more than one million single nucleotide polymorphisms (SNPs) are analyzed in a genome-wide association study (GWAS), multiple comparisons are problematic. To cope with multiple-comparison problems in GWAS, haplotype-based algorithms were developed to correct for the multiple comparisons at multiple SNP loci in linkage disequilibrium.
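The scale of the multiple-comparison problem can be made concrete with a quick back-of-the-envelope calculation (a generic illustration, not taken from the paper):

```python
# Hedged illustration: with ~1,000,000 SNPs tested, a naive p < 0.05
# threshold yields tens of thousands of false positives under the null,
# so the Bonferroni-corrected threshold is 0.05 / 1e6 = 5e-8 (the
# familiar genome-wide significance level).

n_snps = 1_000_000
alpha = 0.05

expected_false_positives = n_snps * alpha      # chance hits per scan, under the null
bonferroni_threshold = alpha / n_snps

print(f"expected chance hits at p<0.05: {expected_false_positives:,.0f}")
print(f"Bonferroni threshold:           {bonferroni_threshold:.0e}")
```

Haplotype-based methods such as the one described aim to be less conservative than this blunt correction by exploiting linkage disequilibrium between nearby SNPs.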

The permutation test can also control problems inherent in multiple testing; however, the calculation of exact probability and the execution of permutation tests are both time-consuming. Faster methods for calculating exact probabilities and executing permutation tests are required.
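The permutation test the authors set out to accelerate can be sketched in a few lines (a generic toy version, not ParaHaplo's implementation; the allele counts are invented):

```python
# Hedged sketch of a permutation test: shuffle case/control labels many
# times and ask how often the shuffled allele-frequency difference is at
# least as large as the observed one. This brute-force loop is exactly
# what makes the test slow and worth parallelizing.
import random

def permutation_p(cases, controls, n_perm=10_000, seed=1):
    """Empirical two-sided p-value for the difference in mean allele counts."""
    rng = random.Random(seed)
    observed = abs(sum(cases)/len(cases) - sum(controls)/len(controls))
    pooled = cases + controls
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        c1, c2 = pooled[:len(cases)], pooled[len(cases):]
        if abs(sum(c1)/len(c1) - sum(c2)/len(c2)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)       # add-one to avoid p = 0

cases    = [2, 1, 2, 2, 1, 2, 1, 2]        # minor-allele counts per person
controls = [0, 1, 0, 1, 0, 0, 1, 0]
print(f"permutation p = {permutation_p(cases, controls):.4f}")
```

Since each permutation is independent, the loop parallelizes trivially across workers, which is the opening ParaHaplo exploits with MPI.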

Methods: We developed a set of computer programs for the parallel computation of accurate P values in haplotype-based GWAS.

Our program, ParaHaplo, targets workstation clusters using the Intel Message Passing Interface (MPI). We compared the performance of our algorithm with that of the regular permutation test on the CHB and JPT panels of HapMap.

Results: ParaHaplo can detect smaller differences between two populations than SNP-based GWAS.

We also found that the parallel-computing technique made ParaHaplo 100-fold faster than the non-parallel version of the program.

Conclusion: ParaHaplo is a useful tool for haplotype-based GWAS. The executable binaries and program sources of ParaHaplo are available at http://sourceforge.jp/projects/parallelgwas/

Credits/Source: Source Code for Biology and Medicine 2009, 4:7

[While a veritable crowd of 4,600 registered attendees is hard at work in Hawaii (If It Makes You Feel Any Better, the Chairs Are Really Uncomfortable), one risks the statement that advances in our understanding of (Holo)Genome regulation will have to come at the algorithmic level - such that (with fast enough parallel/serial hybrids) the hyperescalating amount of data can be effectively processed (see the YouTube talk from almost a year ago).

Full R&D and Market analysis of this "game changer" is upon request. Pellionisz; HolGenTech_at_gmail.com]


Personal Genome Sequencing Identifies Mendelian Mutations

Bio-IT World
By Kevin Davies

October 22, 2009 | HONOLULU – The only downside of holding a major scientific conference in Hawaii is that, at some point, one has to step inside and attend a talk or two. But those among the more than 4600 registered attendees at the 2009 American Society of Human Genetics (ASHG) convention who ventured indoors were treated to some excellent talks on the opening day. In particular, two groups presented impressive results of next-generation sequencing studies that conclusively show how it is possible to identify previously unknown mutations responsible for Mendelian diseases.

James Lupski (Baylor College of Medicine) is an authority in the area of structural variants underlying genetic disorders (and has been for two decades). In the early 1990s, his team characterized the novel sub-chromosomal duplication that gave rise to a common form of peripheral neuropathy called Charcot-Marie-Tooth (CMT) disease. Since then, mutations in some 40 genes have been shown to give rise to CMT-like diseases. But none of them accounted for one particularly interesting patient: Lupski himself.

Earlier this year, Richard Gibbs, director of the Baylor Genome Center, offered to sequence Lupski’s entire genome in the hopes of finally identifying the mystery mutant gene. (Gibbs and Lupski were part of the team that interpreted the first personal genome delivered by next-gen sequencing - James Watson’s - in 2007.) Using the Life Technologies/Applied Biosystems SOLiD platform, Gibbs and colleagues sequenced Lupski’s DNA to 30-fold coverage.

Not surprisingly, the sequencing produced thousands of single-nucleotide polymorphisms (SNPs) considered putative disease-causing mutations. Gibbs applied a series of filters, removing SNPs already catalogued in public databases (and thus considered too common to be the basis of a rare genetic disorder) as well as those found in HapMap samples. Lupski detailed how 6 SNPs in his genome were correlated with known behavioral disorders, 32 were cancer associated (Lupski is a cancer survivor), and 47 were implicated in common diseases.
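The filtering strategy described above reduces to set subtraction: start from all of a patient's variants and successively discard those too common to cause a rare disease. A minimal sketch (not the Baylor pipeline; the variant IDs are invented):

```python
# Hedged sketch of rare-variant filtering: subtract variants already
# catalogued in reference databases, leaving candidates for a rare
# Mendelian disorder. All identifiers below are illustrative.

personal_variants = {"var1", "var2", "var3", "var4", "var5"}
in_dbsnp          = {"var1", "var2"}   # catalogued, hence too common
in_hapmap         = {"var3"}           # seen in healthy reference samples

candidates = personal_variants - in_dbsnp - in_hapmap
print(sorted(candidates))
```

In a real genome the starting set holds thousands of variants, but the logic, successive subtraction of "known common" sets, is the same.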

In the end, Lupski and colleagues found different deleterious mutations in his two inherited copies of a gene called SH3TC2. The gene encodes a protein expressed in the membrane of Schwann cells that could have a role in the myelination of nerve fibers.

Miller Time

Recently, University of Washington geneticist Jay Shendure and colleagues published a report in Nature showing that exome sequencing of genes from patients with a known genetic disorder (Freeman-Sheldon Syndrome) could indeed separate the mutation signal from the noise of other variants. Exome sequencing has the advantage of sequencing just 1 percent of the DNA in the whole genome, but the analysis is limited to mutations that affect protein-coding regions.

Next, Shendure and colleagues set out to replicate their success with a Mendelian disorder where the root cause has not been found. The Shendure team used Agilent arrays to enrich the exome sequences and Illumina GA II for sequencing.

The investigators selected Miller Syndrome, a developmental disorder characterized by limb defects, first described by Marvin Miller and colleagues from the same university. Shendure’s team sequenced the exomes from four individuals with the disorder including a pair of siblings. When two filters were applied – removing variants observed in dbSNP and sequenced HapMap samples – the Shendure team was left with putative mutations in just one gene: DHODH (dehydroorotate dehydrogenase). Mutations in the same gene were subsequently found in six other children with the syndrome.
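The exome-filtering logic described above can be expressed as a set intersection: the causal gene should carry candidate mutations in every affected individual. A toy sketch follows (not the Shendure lab's code; the non-DHODH gene names are chosen arbitrarily for illustration):

```python
# Hedged sketch of cross-patient exome filtering: after removing common
# variants, intersect the per-patient candidate-gene sets. Only a gene
# mutated in every affected exome survives.

patients = [
    {"DHODH", "TTN", "MUC16"},     # candidate genes, patient 1
    {"DHODH", "TTN", "OBSCN"},     # patient 2 (sibling of 1)
    {"DHODH", "MUC16"},            # patient 3
    {"DHODH", "OBSCN"},            # patient 4
]

shared = set.intersection(*patients)
print(shared)   # only DHODH remains
```

This is why even four affected exomes sufficed: each additional patient multiplies the filtering power of the intersection.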

The two affected siblings studied in the original study also had respiratory infections reminiscent of cystic fibrosis. When the researchers applied a computational program to predict deleterious changes, mutations in another gene were implicated: DNAH5. Interestingly, this gene is associated with Kartagener’s syndrome, a disorder that has cystic fibrosis-like characteristics. Shendure’s conclusion is that the siblings in this unfortunate family inherited not one but two recessive Mendelian traits.

Shendure would not be drawn on the cost of exome sequencing per individual, but noted that an exome could be sequenced on two lanes of the current Illumina flow cell, which could soon be reduced to a single lane.

In subsequent discussion, David Valle (Johns Hopkins) pointed out that interpretation of these results was greatly facilitated by the wealth of medical expertise brought to bear on evaluating the biological significance of specific candidate genes. Nevertheless, as genome sequencing costs continue to drop, these studies (and others recently published in the area of cancer) strongly suggest that whole- or even partial genome sequencing can identify causal alleles associated with rare genetic diseases. Doubtless this is only the beginning.

[These landmark results (previewed at the Personal Genomes conference in Cold Spring Harbor this September) represent a highly visible, personalized (Dr. Jim Lupski) "proof of concept" that full-exome and ultimately full-DNA sequencing can identify causes by means of "structural variants", at least for "Mendelian diseases" (where the structural variants lie within the exons, the directly coding parts of the genes). Undoubtedly the search will be extended to intergenic and intronic ("non-coding") regions, and will certainly not be limited to Single Nucleotide Polymorphisms (SNPs; single missense or nonsense mutations of one of the A, C, T, G letters, in which the codon codes for a different amino acid than intended, or a coding codon is turned into a premature stop codon). Genome Centers such as those at Baylor (Houston) and U. Washington (Seattle) will now have to cope with the extraordinary challenge of both "brute force" and targeted, algorithmic search, and the compute-walls they present.

Full R&D and Market analysis of this "game changer" is upon request. Pellionisz; HolGenTech_at_gmail.com]


'Personalized nutrition' is goal of Nutrigenomics initiative

High Planes Journal

Imagine a physician or dietitian handing you a set of individualized nutritional guidelines based on your unique genetic makeup - one that could help you ward off such diseases as cancer, diabetes and Alzheimer's. [A consensus of the Cold Spring Harbor "Personal Genomes" meeting, Sept. 14-17, 2009 was that practicing physicians (let alone dietitians), for the basic reason of lack of time, will simply not be able to keep up with the hyperescalating science of [holo]genomics. A distinguished panelist emphasized that to deal with the already "out of hand" complexity, the conclusions must be automated. Thus, rather than expecting a dietitian to interpret your personal genome and handhold you through your daily life of what foods, food additives, cosmetics, etc. you should opt for (or against), the handheld "Personal Genome Assistant" that you may already own as your "smart phone" will do this handholding for you. The technology was introduced at the first ever Consumer Genetics conference, see here - AJP]
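The automated "handholding" envisioned above could, in principle, be as simple as screening a product's ingredient list against genome-derived flags. A purely hypothetical toy sketch (not HolGenTech's actual technology; the avoid-list entry echoes the Watson lactose example discussed earlier on this page):

```python
# Hedged, hypothetical sketch of a "Personal Genome Assistant" check:
# flag products whose ingredients match a personal avoid-list derived
# from genetic results. Entries and products are invented.

avoid = {"lactose": "lactase gene defective on one chromosome"}

def check_product(name, ingredients):
    """Return a verdict and the genome-based reasons behind it."""
    warnings = [f"{ing}: {avoid[ing]}" for ing in ingredients if ing in avoid]
    verdict = "avoid" if warnings else "ok"
    return verdict, warnings

print(check_product("milk chocolate", ["cocoa", "sugar", "lactose"]))
print(check_product("soy milk",       ["water", "soybeans"]))
```

A production system would of course work from barcode lookups and a far richer genotype-to-ingredient knowledge base; the sketch only shows the matching step.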

That's the ultimate goal of the Nebraska Gateway for Nutrigenomics, a new research initiative at the University of Nebraska-Lincoln. It aims to use genome-based technologies to figure out what makes individuals and some ethnic groups susceptible to certain diseases and develop nutritional strategies to overcome those susceptibilities.

"In the old days," nutrition scientist Tim Carr said, "we used to say 'my grandfather ate bacon and eggs everyday and still lived to 103.' Those of us in the business would say, 'that's just genetics,' and we'd dismiss it.

"We're no longer dismissing it. We're trying to figure out how that works," Carr added.

Individuals' risk for certain diseases depends in part on their genetic makeup. Although those genetic makeups can't be altered, how they behave can be manipulated by diet. That's what nutrigenomics is all about.

"Looking at the genetic makeup of individuals, you can identify certain risk factors and make dietary recommendations," said Janos Zempleni, a molecular nutritionist who heads UNL's nutrigenomics initiative.

In February, a review team comprising scientists from several other universities noted that UNL is well positioned to be a leader in this burgeoning research field because it can integrate its plant-genomics expertise with its nutrition and food-science expertise. As food and nutrition scientists determine how diet interacts with the genome, agricultural scientists will be able to develop crops and livestock to put those findings into action.

"Since Nebraska is where America's diet begins, it is appropriate that UNL would be a leader in the nutrigenomic field," the team said in its report.

UNL food scientist Vicki Schlegel, another member of the research team, put it this way: "You're making agriculture a pharmacy, basically."

Schlegel imagines a day when states might carve out niches for certain kinds of health-boosting crops.

"We might say, 'in Nebraska we grow crops for heart health,'" she said. "Colorado might say, 'we grow crops to fight diabetes.'"

The U.S. Department of Agriculture's well-known diet guidelines are based on nutritional needs, said Schlegel, who specializes in nutraceuticals. "What we're talking about is a step beyond. This is considering foods from a more complex perspective."

"This is a huge shift in thinking," Carr said. "We are going from one-size-fits-all recommendations to a realization that one size doesn't fit all."

[R&D and Market analysis of this "game changer" is upon request. Pellionisz; HolGenTech_at_gmail.com]


The rise of epigenomics - Methylated spirits
The human genome gets more and more complicated

From The Economist print edition
Oct 15th 2009

[This cartoon is perhaps a simple-minded mechanical exaggeration. Reality is not as bad as it seems. Maybe the YouTube "call for Information Theory" a year ago, to gear up Information Technology for the "Dreaded DNA Data Deluge", helped; professionals have already put forward serious theories of hologenome regulation. The one to beat is a software-enabling approach that is now accepted without any objection voiced by the leading community. Thus, we no longer have to rely solely on loose metaphors as above, but can advance to a stage similar to Bohr's atomic model. When he submitted his concept, his colleagues came back with this answer: "We all agree that your theory is crazy. The only thing we cannot tell yet is whether it is crazy enough to be true" - AJP]

IT WAS, James Watson claimed, something even a monkey could do. Sequencing the human genome, that is. In truth, Dr Watson, co-discoverer of the double-helical structure of DNA back in the 1950s [1953 - AJP], had a point. Though a technical tour-de-force, the Human Genome Project was actually the sum of millions of small, repetitive actions by cleverly programmed robots. When it was complete, so the story went, humanity’s genes—the DNA code for all human proteins—would be laid bare and all would be light.

It didn’t quite work out like that. Knowing the protein-coding genes has been useful. It has provided a lexicon of proteins, including many previously unknown ones. What is needed, though, is a proper dictionary - an explanation of what the proteins mean as well as what they are. For that, you need to know how the genes’ activities are regulated in the 220 or so [many more - AJP] different types of cell a human body is made from. And that is the purpose of the American government’s Roadmap Epigenome Programme, results from which are published this week in Nature by Ryan Lister and Mattia Pelizzola of the Salk Institute in California, and their colleagues.

Epigenomics studies the distribution over the genome’s DNA of control molecules called methyl groups. [Therein lies the controversy: the methyl groups arrive through epigenomic channels, but actually sit on the genome. Thus, a separation of genomics from epigenomics cuts methylation in half. Clearly, in HoloGenomics the two sides of the coin should always be considered in their interaction, with methylation as "the edge" of the coin - AJP] These can attach themselves to cytosine, one of the four chemical bases that form the “letters” of the genetic code. In so doing, they help control a process called transcription, in which a copy of a gene is made in the form of a molecule called RNA, the first stage in the translation of a gene into a protein. The presumption is that the pattern of methylation, by controlling which proteins are manufactured, helps determine what type of cell is produced. A cell with its haemoglobin genes switched on to overdrive, for example, will become a red blood cell. One that churns out actin and myosin, which link up to form units that can expand and contract, will become a muscle cell. And so on. Dr Lister and Dr Pelizzola have tested this idea by describing the first two reasonably complete human epigenomes.

Waving or drowning?

The cells they chose to look at were embryonic stem cells, which retain the potential to turn into a variety of other cell types, and fetal lung fibroblasts, which are the end of one line of cell specialisation. They read the methylation patterns of these cells using a chemical trick that turns unmethylated cytosine (letter C) into another base, called uracil, while leaving methylated cytosines unchanged. In nature, this base is found in RNA, rather than DNA, but it is just as susceptible to being recorded by one of Dr Watson’s mechanical monkeys as the others are. Altogether, the researchers were able to read and compare about 90% of the genomes of their two types of cell.
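The "chemical trick" can be simulated in a few lines. (Note on the chemistry: bisulfite treatment converts the unmethylated cytosines to uracil, read as T after sequencing, while methylated cytosines are protected and still read as C; comparing the converted read to the reference therefore reveals which Cs were methylated. The sequence and positions below are invented for illustration.)

```python
# Hedged simulation of bisulfite conversion and methylation calling.
# Unmethylated C -> T (via uracil); methylated C survives as C.

def bisulfite_convert(seq, methylated_positions):
    """Return the sequence as it would read after bisulfite treatment."""
    return "".join(
        base if base != "C" or i in methylated_positions else "T"
        for i, base in enumerate(seq)
    )

reference = "ACGTCCGA"
converted = bisulfite_convert(reference, methylated_positions={1})
print(converted)                       # only position 1 keeps its C

# Calling methylation: any reference C that survived conversion was methylated.
called = {i for i, (r, c) in enumerate(zip(reference, converted))
          if r == "C" and c == "C"}
print(called)
```

Scaling this comparison to billions of short reads is exactly the kind of repetitive work the article assigns to "mechanical monkeys".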

Their first discovery was that the stem cells were more methylated than the lung cells—5.8% of cytosines, compared with 4.3%. Moreover, the difference was largely accounted for by something strange. Previous studies have shown that methylated cytosines are usually next to a letter called guanine (G). It is a common characteristic of the so-called promoter regions of genes, where transcription begins, that they contain long, repetitive sequences of alternating Cs and Gs. If these areas become methylated, it tends to suppress transcription of the gene in question. A quarter of the methylated cytosines in stem cells, however, were not followed by guanines. Nor were they found in the promoter regions of genes, but rather in the transcribed parts of the genes themselves. They also had the opposite effect from methylated cytosines found in promoter regions. The genes they occurred in tended to be transcribed more than usual, not less. In particular, a lot of genes involved in processing RNA were activated in the stem cells in this way.
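The CG versus non-CG distinction reported here is determined simply by the base that follows each methylated cytosine in the reference. A minimal sketch (illustrative sequence and positions, not the study's data):

```python
# Hedged sketch of methylation-context classification: a methylated
# cytosine is in "CG context" when the next reference base is G,
# otherwise in a non-CG context (the kind found unexpectedly often
# in the stem-cell methylome).

def classify_contexts(seq, methylated_positions):
    counts = {"CG": 0, "non-CG": 0}
    for i in methylated_positions:
        nxt = seq[i + 1] if i + 1 < len(seq) else ""
        counts["CG" if nxt == "G" else "non-CG"] += 1
    return counts

genome = "ACGTCATCGC"
print(classify_contexts(genome, methylated_positions=[1, 4, 7]))
```

Applied genome-wide, tallies like these yield the headline figures in the article: the fraction of methylated cytosines, and the share of them that fall outside CG context.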

One unexpected discovery made during the decade since the genome project was finished is that there are thousands of small genes whose RNA copies are not translated into proteins. Instead, the RNA acts in its own right. In plants, for example, it is one of the things that switches other genes on and off at their promoter sites. Whether it does so in mammals has yet to be established. But it might. In any case, unusual patterns of RNA processing in stem cells are something that will require further examination.

The complexities of methylation, then, are myriad - as are the complexities introduced by all these unexpected small genes. Reading the human genome in the first place may, indeed, have been work for mechanical monkeys. [This might be too simplistic an exaggeration - AJP] Interpreting the result will require the finest minds that humanity can muster.

[R&D and Market analysis of this "game changer" is upon request. Pellionisz; HolGenTech_at_gmail.com]


Salk-Led Team Generates First Map of Human Methylome

October 14, 2009

By Andrea Anderson

[Entering the practical era of HoloGenomics (Genomics + Epigenomics, expressed in Informatics). Figure from Ecker et al. Nature paper. Note the extensive methylation of non-coding LINE repeats - a key in Recursive Genome Function - AJP]

NEW YORK (GenomeWeb News) – In a paper appearing online today in Nature, American and Australian researchers reported that they have mapped cytosine methylation across the genomes of two human cell lines.

"This paper documents the first complete mapping of the methylome, a subset of the entire epigenome, of two types of human cells — an embryonic stem cell and a human fibroblast line," Linda Birnbaum, director of the National Institute of Environmental Health Sciences, who was not involved in the study, said in a statement. "This will help us better understand how a diseased cell differs from a normal cell, which will enhance our understanding of the pathways of various diseases."

The work was done as part of a National Institutes of Health Roadmap Project on epigenetics.

The team mapped methylated cytosines to single base resolution across the genomes of an embryonic stem cell line and a differentiated lung fibroblast line, also incorporating information on messenger and small RNA transcripts and chromatin marks in the cells. When they compared the methylomes, the researchers found a slew of differences, including cell specific cytosine methylation patterns.

The study also opens the door for more extensive characterization of human epigenomes, senior author Joseph Ecker, a plant biology researcher at the Salk Institute for Biological Studies and director of the Salk Institute Genomic Analysis Lab, told GenomeWeb Daily News. "The real goal is to compare differentiating cells," he said. [To further refine, a key goal of HoloGenomics as a new field is the algorithmic, thus software-enabling theoretical understanding of hologenome-regulation - AJP]

Last spring, he and his co-workers reported that they had used a combination of sodium bisulfite sequencing and other high-throughput sequencing methods to characterize the methylome, transcriptome and small RNA transcriptome of the model plant Arabidopsis.

For the current study, Ecker and his team applied a similar approach to tackle the human epigenome, using bisulfite sequencing with the Illumina Genome Analyzer II to map cytosine methylation patterns across the genome in two human cell lines — the differentiated fetal lung fibroblast line IMR-90 and the human embryonic stem cell line H1.

Nearly all of the methylation in the differentiated fibroblast cells was "CG methylation," occurring at sites at which cytosine is followed by guanine.

In contrast, roughly a quarter of the cytosine methylation in the stem cell was not at sites where cytosine was followed directly by guanine. This "non-CG methylation" was known to exist in stem cells, Ecker explained, but its prevalence was hard to judge in past studies that looked at only small portions of the genome.

"Non-CG methylation is not completely unheard of — people have seen it in dribs and drabs, even in stem cells. But nobody expected that it would be so extensive," co-lead author Mattia Pelizzola, a post-doctoral researcher in Ecker's lab, said in a statement. "[N]on-CG methylation was often considered a technical artifact."

By looking at the number of methyl-cytosine reads at each site, the researchers were also able to map methylation levels across the genome. For instance, the team found that differentiated fibroblast cells contained partially methylated areas that frequently corresponded with decreased gene expression, Ecker noted.

They also reported that non-CG methylation in stem cells was often depleted at transcription start sites as well as enhancer and transcription factor binding sites. In contrast, the researchers did not see this periodicity in the differentiated cells, Ecker said.

When they targeted some of the non-CG methylation loci with bisulfite sequencing in another human embryonic stem cell line called H9, the researchers found similar non-CG methylation patterns at conserved positions.

"The exclusivity of non-CG methylation in stem cells, probably maintained by continual de novo methyltransferase activity and not observed in differentiated cells, suggests that it may have a role in the origin and maintenance of this pluripotent state," the team concluded. "Essential future studies will need to explore the prevalence of non-CG methylation in diverse cell types, including variation throughout differentiation and its potential re-establishment in induced pluripotent states."

Consistent with the potential link between non-CG methylation and pluripotency, the same sites were non-CG methylated in the team's pilot experiments looking at an induced pluripotent stem cell line created by reprogramming fibroblast cells.

In the future, Ecker said, the team hopes to track changes in the epigenome, including genome-wide chromatin marks, methylation, and more, as they coax embryonic stem cells into a variety of differentiated cell types.

The single-base resolution human methylome data is available online through the Human DNA Methylome web site [see below - AJP].

Human DNA methylomes at base resolution show widespread epigenomic differences

Ryan Lister1,9, Mattia Pelizzola1,9, Robert H. Dowen1, R. David Hawkins2, Gary Hon2, Julian Tonti-Filippini4, Joseph R. Nery1, Leonard Lee2, Zhen Ye2, Que-Minh Ngo2, Lee Edsall2, Jessica Antosiewicz-Bourget5,6, Ron Stewart5,6, Victor Ruotti5,6, A. Harvey Millar4, James A. Thomson5,6,7,8, Bing Ren2,3 & Joseph R. Ecker1

1. Genomic Analysis Laboratory, The Salk Institute for Biological Studies, La Jolla, California 92037, USA
2. Ludwig Institute for Cancer Research
3. Department of Cellular and Molecular Medicine, University of California San Diego, La Jolla, California 92093, USA
4. ARC Centre of Excellence in Plant Energy Biology, The University of Western Australia, Crawley, Western Australia 6009, Australia
5. Morgridge Institute for Research, Madison, Wisconsin 53707, USA
6. Genome Center of Wisconsin, Madison, Wisconsin 53706, USA
7. Wisconsin National Primate Research Center, University of Wisconsin-Madison, Madison, Wisconsin 53715, USA
8. Department of Anatomy, University of Wisconsin-Madison, Madison, Wisconsin 53706, USA


DNA cytosine methylation is a central epigenetic modification that has essential roles in cellular processes including genome regulation, development and disease. Here we present the first genome-wide, single-base-resolution maps of methylated cytosines in a mammalian genome, from both human embryonic stem cells and fetal fibroblasts, along with comparative analysis of messenger RNA and small RNA components of the transcriptome, several histone modifications, and sites of DNA–protein interaction for several key regulatory factors. Widespread differences were identified in the composition and patterning of cytosine methylation between the two genomes. Nearly one-quarter of all methylation identified in embryonic stem cells was in a non-CG context, suggesting that embryonic stem cells may use different methylation mechanisms to affect gene regulation. Methylation in non-CG contexts showed enrichment in gene bodies and depletion in protein binding sites and enhancers. Non-CG methylation disappeared upon induced differentiation of the embryonic stem cells, and was restored in induced pluripotent stem cells. We identified hundreds of differentially methylated regions proximal to genes involved in pluripotency and differentiation, and widespread reduced methylation levels in fibroblasts associated with lower transcriptional activity. These reference epigenomes provide a foundation for future studies exploring this key epigenetic modification in human disease and development.
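The CG versus non-CG distinction that runs through this abstract can be made concrete with a small sketch. It uses the standard methylome nomenclature (CG, CHG, CHH, where H stands for A, C or T); the function names are illustrative, not from the paper's own software.

```python
# Classify the sequence context of every cytosine on one strand,
# using the standard CG / CHG / CHH nomenclature (H = A, C or T).
def cytosine_contexts(seq):
    seq = seq.upper()
    contexts = []
    for i, base in enumerate(seq):
        if base != "C":
            continue
        nxt = seq[i + 1] if i + 1 < len(seq) else ""
        nxt2 = seq[i + 2] if i + 2 < len(seq) else ""
        if nxt == "G":
            ctx = "CG"          # the "classic" mammalian methylation context
        elif nxt2 == "G":
            ctx = "CHG"
        else:
            ctx = "CHH"
        contexts.append((i, ctx))
    return contexts

# Fraction of cytosines in a non-CG context, the quantity the paper
# reports as nearly one-quarter of all methylation in stem cells.
def non_cg_fraction(seq):
    ctx = cytosine_contexts(seq)
    if not ctx:
        return 0.0
    return sum(1 for _, c in ctx if c != "CG") / len(ctx)

print(cytosine_contexts("CGCAG"))   # [(0, 'CG'), (2, 'CHG')]
```

In a real bisulfite-sequencing analysis the same classification is applied to the reference genome at every position where a read reports an unconverted (methylated) cytosine.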

[R&D and Market analysis of this "game changer" is upon request. Pellionisz; HolGenTech_at_gmail.com]


Personalized Medicine - New Leroy Hood startup raises $30 million in financing

October 14, 2009

Last month, we reported that Dr. Leroy Hood's latest startup company, Integrated Diagnostics, had raised $7.5 million of a $30 million round. At the time, representatives for Hood and his Institute for Systems Biology didn't want to talk about the round. But today, the famed biotechnology pioneer is pulling back the covers on the company and officially announcing more than $30 million in funding from InterWest Partners, The Wellcome Trust and dievini Hopp Biotech Holding.

That's an interesting syndicate of investors, combining funds from traditional venture capital with capital from non-profit entities. InterWest is a huge Silicon Valley venture fund, while the Wellcome Trust is the UK's largest charity supporting biomedical research. Meanwhile, dievini Hopp Biotech is investing as part of a collaboration agreement between Hood's ISB and the Grand Duchy of Luxembourg.

Last year, Luxembourg announced a $200 million partnership with three U.S. biomedical research groups to improve health care in the tiny European country and boost economic development in the biotech sector.

"This venture is an important element of our long-term plan to advance Luxembourg’s research and commercial capabilities in systems biology and personalized medicine," Patrizia Luchetta, deputy director, Board of Economic Development for the Grand Duchy of Luxembourg said in a release.

According to my numbers, Integrated Diagnostics' massive funding round would tie it for the third largest venture financing deal of the year in Washington state.

And the investment is a big bet on Hood, a renowned scientist who helped create biotechnology companies such as Amgen, Rosetta Inpharmatics and Applied Biosystems. With Integrated Diagnostics, Hood and his team of researchers are looking to create new diagnostic tools and biomarkers to help usher in a new era of personalized medicine.

Hood explains the company's ambitious plans this way:

“Just as the DNA sequencer allowed us to decode the human genome, the technology behind Integrated Diagnostics will allow us unprecedented insight into preventing and treating diseases like cancer, diabetes and Alzheimer’s by analyzing the proteins that appear in their earliest stages. I have had the good fortune to found several successful biotechnology companies. I believe Integrated Diagnostics will prove to be among the most significant. By taking a systems approach to monitoring an individual’s health we will be able to provide physicians and patients an early warning system for preventing and treating diseases.”

Integrated, which is based in Seattle, was co-founded by Caltech chemistry professor Jim Heath, Battelle Memorial Institute Vice President and ISB professor David Galas and ISB scientific director of special projects Paul Kearney.

More scientific analysis on what Hood and his team are doing from a story in last year's MIT Technology Review.

[R&D and Market analysis of this "game changer" is upon request. Pellionisz; HolGenTech_at_gmail.com]


Science adviser to Prez. Obama; Eric Lander (Dir. of Broad Institute of Harvard and MIT) et al. say The DNA is Fractal


[See Science cover article, popularized below. Roots of the concept, the age-old Hilbert and Peano curve is referenced only through citing Mandelbrot's classic. Application to DNA folding in modern times around 1990 by Alexander Grosberg (Moscow, now New York University) is cited. Early concepts of DNA using fractal recursion and self-similarity are not belabored - Pellionisz; HolGenTech_at_gmail.com]


Fractal globule architecture packs two meters of DNA into each human cell, avoids knots

Scientists decipher the 3-D structure of the human genome

[Fractal is structuro-functional - AJP]

CAMBRIDGE, Mass. -- Scientists have deciphered the three-dimensional structure of the human genome, [further - see Pellionisz' FractoGene since 2002, Post-ENCODE in Pellionisz Principle of Fractal Iterative Recursion, 2008, and as recently as in Cold Spring Harbor, 2009] paving the way for new insights into genomic function and expanding our understanding of how cellular DNA folds at scales that dwarf the double helix.

In a paper featured this week on the cover of the journal Science, they describe a new technology called Hi-C and apply it to answer the thorny question of how each of our cells stows some three billion base pairs of DNA while maintaining access to functionally crucial segments. The paper comes from a team led by scientists at Harvard University, the Broad Institute of Harvard and MIT, University of Massachusetts Medical School, and the Massachusetts Institute of Technology.

"We've long known that on a small scale, DNA is a double helix," says co-first author Erez Lieberman-Aiden, a graduate student in the Harvard-MIT Division of Health Science and Technology and a researcher at Harvard's School of Engineering and Applied Sciences and in the laboratory of Eric Lander at the Broad Institute. "But if the double helix didn't fold further, the genome in each cell would be two meters long. Scientists have not really understood how the double helix folds to fit into the nucleus of a human cell, which is only about a hundredth of a millimeter in diameter. This new approach enabled us to probe exactly that question."
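The "two meters" and "hundredth of a millimeter" in Lieberman-Aiden's quote follow directly from two standard textbook numbers: the ~0.34 nm axial rise per base pair of B-form DNA and the ~6 billion base pairs of a diploid human cell. A quick arithmetic check:

```python
# Back-of-the-envelope check of "two meters of DNA per cell".
BP_RISE_NM = 0.34            # axial rise per base pair, B-form DNA (nm)
DIPLOID_BP = 6.0e9           # ~3 billion bp per haploid set, two sets per cell

length_m = DIPLOID_BP * BP_RISE_NM * 1e-9    # nm -> m
nucleus_diameter_m = 1e-5                    # "about a hundredth of a millimeter"

print(f"Total DNA length per cell: {length_m:.2f} m")
print(f"Length vs. nucleus diameter: {length_m / nucleus_diameter_m:,.0f}x")
```

The strand is roughly 200,000 times longer than the compartment it must fit into, which is exactly the folding problem Hi-C was designed to probe.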

The researchers report two striking findings. First, the human genome is organized into two separate compartments, keeping active genes separate and accessible while sequestering unused DNA in a denser storage compartment. Chromosomes snake in and out of the two compartments repeatedly as their DNA alternates between active, gene-rich and inactive, gene-poor stretches.

"Cells cleverly separate the most active genes into their own special neighborhood, to make it easier for proteins and other regulators to reach them," says Job Dekker, associate professor of biochemistry and molecular pharmacology at UMass Medical School and a senior author of the Science paper.

Second, at a finer scale, the genome adopts an unusual organization known in mathematics as a "fractal." The specific architecture the scientists found, called a "fractal globule," enables the cell to pack DNA incredibly tightly -- the information density in the nucleus is trillions of times higher than on a computer chip -- while avoiding the knots and tangles that might interfere with the cell's ability to read its own genome. Moreover, the DNA can easily unfold and refold during gene activation, gene repression, and cell replication.
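One measurable signature separating the two candidate architectures is how contact probability decays with genomic distance s: polymer theory predicts roughly P(s) ∝ s^-1 for a fractal globule but P(s) ∝ s^-3/2 for an equilibrium globule. A minimal sketch comparing the two theoretical curves (the exponents are the standard theoretical predictions; everything else here is illustrative):

```python
# Theoretical contact-probability scaling for the two polymer states.
SCALING_EXPONENT = {
    "fractal_globule": -1.0,       # P(s) ~ s^-1
    "equilibrium_globule": -1.5,   # P(s) ~ s^-3/2
}

def contact_prob(s, model):
    """Relative contact probability at genomic distance s (arbitrary units)."""
    return s ** SCALING_EXPONENT[model]

# How fast do contacts fall off between 1 and 10 distance units (e.g. Mb)?
for model in ("fractal_globule", "equilibrium_globule"):
    ratio = contact_prob(1, model) / contact_prob(10, model)
    print(f"{model}: contacts fall off {ratio:.1f}x over a 10x distance")
```

Fitting the measured Hi-C decay curve against these two slopes is how the team argued the data favor the fractal globule.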

"Nature's devised a stunningly elegant solution to storing information -- a super-dense, knot-free structure," says senior author Eric Lander, director of the Broad Institute, who is also professor of biology at MIT, and professor of systems biology at Harvard Medical School.

In the past, many scientists had thought that DNA was compressed into a different architecture called an "equilibrium globule," a configuration that is problematic because it can become densely knotted. The fractal globule architecture, while proposed as a theoretical possibility more than 20 years ago, has never previously been observed. [A fractal model of a Purkinje neuron was, in fact, constructed more than 20 years ago, specifically stating: "3.1.3. Neural Growth: Structural Manifestation of Repeated Access to Genetic Code. One of the most basic, but in all likelihood rather remote, implication of the emerging fractal neural modeling is that it corroborates a spatial 'code-repetition' of the growth process with the repetitive access to genetic code." As documented in the acknowledgement, the study was conducted in the framework of the grant application "Neural Geometry" to the NIMH Program "Mathematical/Computational/Theoretical Neuroscience". However, since "recursive genome function" violated both prevailing erroneous axioms, "Crick's Central Dogma" and "Ohno's Junk DNA", the grant was denied and an existing NIH grant discontinued - AJP]

Key to the current work was the development of the new Hi-C technique, which permits genome-wide analysis of the proximity of individual genes. The scientists first used formaldehyde to link together DNA strands that are nearby in the cell's nucleus. They then determined the identity of the neighboring segments by shredding the DNA into many tiny pieces, attaching the linked DNA into small loops, and performing massively parallel DNA sequencing.

"By breaking the genome into millions of pieces, we created a spatial map showing how close different parts are to one another," says co-first author Nynke van Berkum, a postdoctoral researcher at UMass Medical School in Dekker's laboratory. "We made a fantastic three-dimensional jigsaw puzzle and then, with a computer, solved the puzzle."
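Computationally, the wet-lab pipeline just described (crosslink, shred, ligate, sequence the junctions) reduces to binning paired read coordinates into a genome-wide contact matrix. A minimal sketch; the bin size, coordinates and function name are illustrative assumptions, not the paper's actual software:

```python
# Minimal Hi-C contact-matrix builder: each read pair reports two genomic
# loci that were crosslinked, i.e. spatially adjacent in the nucleus.
BIN_SIZE = 1_000_000  # 1 Mb bins (illustrative resolution)

def build_contact_matrix(read_pairs, genome_length):
    n_bins = genome_length // BIN_SIZE + 1
    matrix = [[0] * n_bins for _ in range(n_bins)]
    for pos_a, pos_b in read_pairs:
        i, j = pos_a // BIN_SIZE, pos_b // BIN_SIZE
        matrix[i][j] += 1
        if i != j:
            matrix[j][i] += 1   # spatial contacts are symmetric
    return matrix

# Toy example: three read pairs on a 5 Mb chromosome.
pairs = [(200_000, 1_300_000), (250_000, 1_400_000), (4_100_000, 4_200_000)]
m = build_contact_matrix(pairs, 5_000_000)
print(m[0][1], m[1][0], m[4][4])   # 2 2 1
```

The "jigsaw puzzle" step is then inference over this matrix: bins with high contact counts are modeled as physically close, yielding the two-compartment and fractal-globule conclusions.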

Lieberman-Aiden, van Berkum, Lander, and Dekker’s co-authors are Bryan R. Lajoie of UMass Medical School; Louise Williams, Ido Amit, and Andreas Gnirke of the Broad Institute; Maxim Imakaev and Leonid A. Mirny of MIT; Tobias Ragoczy, Agnes Telling, and Mark Groudine of the Fred Hutchinson Cancer Research Center and the University of Washington; Peter J. Sabo, Michael O. Dorschner, Richard Sandstrom, M.A. Bender, and John Stamatoyannopoulos of the University of Washington; and Bradley Bernstein of the Broad Institute and Harvard Medical School.

This work was supported by the Fannie and John Hertz Foundation, the U.S. Department of Defense, the National Science Foundation, the National Space Biomedical Research Institute, the National Human Genome Research Institute, the American Society of Hematology, the National Heart, Lung and Blood Institute, the National Institute of Diabetes and Digestive and Kidney Diseases, the Keck Foundation, and the National Institutes of Health.

[It was predicted in 1989, and the burden shouldered, that "The road towards developing a full structuro-functional fractal model ... is certainly long and, as most geometrizations in natural sciences, probably uphill. Nevertheless, it opens up several possibilities and thus appears well worth pursuing". Moving from the fractal structure of the DNA strand (instead of a disarmingly naive Euclidean one-dimensional straight line) towards fully developing the Theory of Fractal Iterative Recursion of Genome Function, with all its manifold implications, to make up for the setback by half a century since Crick's Central Dogma in 1956, calls for a little brute force, but for a lot of intelligent support of the creative struggle. - Pellionisz; HolGenTech_at_gmail.com]


Knome Personal Genomics Service Expands to Include 93,000 Rare Mutations from the Human Gene Mutation Database Distributed by BIOBASE.

October 08, 2009 08:00 AM Eastern Daylight Time

CAMBRIDGE, Mass. and WOLFENBÜTTEL, Germany--(BUSINESS WIRE)--Knome, a recognized pioneer in the personal genomics industry, announced today that it is incorporating information from The Human Gene Mutation Database (HGMD® - Cardiff University, UK) distributed by BIOBASE, into its genome interpretation services. With over 93,000 mutations and disease-related polymorphisms in more than 3,500 genes, HGMD is one of the world’s most comprehensive databases of medically relevant genetic variation. Typically used by large research institutions and pharmaceutical companies, this is the first time the genomic data in the HGMD is being made available as part of a consumer offering.

Under the agreement, Knome will include HGMD in its KnomeXplorer™ genome browser software, and in its genome interpretation update service. The data will be continuously updated with the latest findings and distributed to Knome’s clients, giving them direct access to the most up-to-date information on a real-time basis. “Incorporating HGMD into our service will significantly expand the dataset used in our core analysis and enhance the content we provide to clients in our update service,” said Jorge Conde, CEO of Knome.

Unlike entry-level consumer genomic services that limit their focus to variants that are common in the general population, Knome informs its clients of even very rare variants that they carry, many of which are thought to be elusive causes of relatively common diseases, such as colorectal cancer and diabetes, that can dramatically affect health and quality of life. Every person likely carries many such variants, each one of which may -- due to its potentially debilitating effects in carriers or their children -- remain too rare to be included in entry-level SNP-chip based services. For example, the incorporation of HGMD adds an additional 1,698 mutations in 76 genes for colorectal cancer, and 1,435 mutations in 110 genes for diabetes, into Knome’s automated interpretation software.

Michael Tysiak, CEO of BIOBASE said, “Combined with their sequencing capabilities and interpretation services, the addition of our rare mutation database will enable Knome to provide even deeper insight into each client’s genetic profile and deliver greater customer value.”

Financial terms and conditions of the agreement were not disclosed.

[R&D and Market analysis of this "game changer" is upon request. Pellionisz; HolGenTech_at_gmail.com]


Beyond the Genome [Does Fractal Iterative Recursion make sense to you? - AJP]

By Brandon Keim October 7, 2009

[Recursive? - AJP]

When scientists finished sequencing the human genome, the answers to diseases were supposed to follow. Six years later, that promise has gone unfulfilled. Genetics just isn’t that useful for predicting who gets sick, and why. The blueprint of life turned out to be an intriguing parts list.

“It’s much more complex than we had thought. There aren’t going to be easy answers,” said Teri Manolio, director of the National Human Genome Research Institute’s Office of Population Genomics. “The genome is constantly surprising us. There’s so much that we don’t know about it.”

Manolio is the lead author of a Nature article entitled “Finding the missing heritability of complex diseases.” Published Wednesday, it’s part of a major change in how scientists see the genome.

In April, several articles in the New England Journal of Medicine featured researchers arguing over why genome-wide association studies — in which thousands of genomes are compared in a hunt for disease-linked patterns — had found so little. Several months later, a massive hunt for schizophrenia genes was described as the field’s “Pearl Harbor.” At a conference this summer at the Jackson Laboratories, the shortcomings of gene-centered explanations were a starting point for talks by some of the world’s most prominent geneticists.

It’s not that genes are suddenly unimportant. Researchers are just acknowledging their variations as pieces of an extraordinarily complicated puzzle, along with how genes are turned on, how many copies are made of each, the shape of the genome itself, and how all of the genome’s protein products mix and interact.

Wired.com talked to Manolio about the future of genomics research.

Wired.com: What do you mean by “missing heritability”?

Teri Manolio: We know that diseases cluster in families. In some diseases, the risk might be two or three times higher than normal, or 30 times higher, for a relative of someone with a disease. But when we do these genomic studies, we find maybe a 50 percent increase in risk. That gap is what’s missing.

Wired.com: The numbers can get tricky. If you’ve found that someone with a certain genetic variant has double the risk of developing a disease, but the heritable risk is a hundred-fold, then we’ve only connected two percent of the heritability to genetics?

Manolio: That’s a fair way of putting it. The gap varies. In some diseases, we’re describing half of the genetic heritability. But that’s unusual. Only macular degeneration has numbers that high. In many diseases, it’s around five percent.

Wired.com: How much of the gap is caused by our inability to link genetics to conditions, and how much has non-genetic causes?

Manolio: There’s a lot of thought that this might be DNA and environment together. If you’re not exposed to adverse environmental factors, then you may never develop a given disease. With a bad enough environmental exposure, you may get a disease regardless of your genetic makeup.

Wired.com: What about aspects of our DNA that we’re just starting to study, like variations in the number of copies we have of each gene, or how genes are activated or physically arranged inside a cell?

Manolio: All of those have been suggested. At least so far, it doesn’t look like copy numbers explain a huge amount of this. But there are other places to look, and I suspect that the answer is going to be, “all of the above.”

Wired.com: How does all this fit with what the public expected of genomics? It seems we had different expectations than the scientific community.

Manolio: Well, to be honest, I think we were a bit naive about things, too. We’d hoped that when we identified where all the genes are, and all the coding regions and all the variations one could have, then that would explain everything. Those were the hopes, and then reality came crashing in.

Wired.com: What about personalized genomics testing? That’s been the big consumer application of genomics so far.

Manolio: Since we’re not explaining a huge mount of the inherited tendencies between people, then the information you get from a genotyping company may not be very apparently useful for predicting your risk of disease in the future. That’s what emerges from many of these studies: There are likely many other factors that increase your risks, and these factors are known and explain more than genomics does now. Genomics is a promising research tool, but right now it’s really a research tool.

Wired.com: How do we find the missing heritability?

Manolio: We’ll follow multiple avenues of research. We have to be humble about how this works.

Wired.com: Do we have the tools?

Manolio: Our sequencing is in good shape — the costs are coming down, we can get everyone’s base pairs read — but interpreting them is a real challenge. Technologies for epigenetics research are still developing. And there will be other needs coming down the pipeline. [For instance, some algorithmic theories of genome function - AJP].

Wired.com: Want to put a timetable on the research?

Manolio: I don’t think we can. In the next few years, we’ll see lots of variants associated with diseases. Many will be further investigated, and their functions determined. That’s one of the missing links here: what’s the function of all these things? [Seems like a perfectly clear question - AJP]. We have over 400 variants identified in a whole variety of traits, but only in a few do we understand how they change a gene’s function, and how that may change biology. But these are great clues to biology.

Wired.com: Is that a better way of thinking about genetics — not in terms of answers, but clues?

Manolio: Absolutely. And if you're a glass-half-full person, then four years ago we had practically no associations that we could replicate in multiple populations. Now there are hundreds. All of these are clues, and that's wonderful. We just need to be patient in figuring out what they mean. [Well, the "patients" suffering from "Junk DNA diseases" like Alzheimer's do not seem to be patient with billions of dollars poured into "big science projects" coming up with a more and more intriguing "parts list" - yet showing little interest in putting the parts together - AJP]

[R&D and Market analysis of this "game changer" is upon request. Pellionisz; HolGenTech_at_gmail.com]


Beckman Prepares for Genomics Services Gold Rush

Bio-IT World CHI's Network October 7, 2009

By John Russell

October 6, 2009 | Like many others, Beckman Coulter is betting the mainstreaming of genomics-based research and the future growth of personalized medicine will create a large, robust market for genomics services. Last March, the services giant purchased Cogenics and combined it with DNA sequencing specialist Agencourt Bioscience, acquired earlier, to create Beckman Coulter Genomics (BCG). As research and medical communities rush to dig gold from mounting genomics data, BCG hopes to sell the needed picks and shovels.

“If you start with the Cogenics business model, you really go to a biobank and partner with research partners or clinical partners to be able to bring samples in, to process them to do DNA and RNA extraction, to bank the nucleic acids, and then to perform a battery of tests,” says Susan Evans, general manager of Beckman Coulter Genomics. “So although we are still staunch advocates of the value of sequencing and the tremendous future of next-gen and third-gen sequencing, it was very important to develop a more comprehensive offering for our strategy.”

Today, BCG has roughly 250 staff worldwide. The company is transitioning away from the Cogenics name but will keep Agencourt as a product brand “for the time being because it’s used in the name of our nucleic acid products classification.” BCG is headquartered in Beverly, Mass.

“We’ve all watched the trend toward outsourcing versus creating core centers internally,” says Evans, “and certainly we still believe we’re seeing more outsourcing. Also, the sequencing technology is changing so rapidly that companies are looking for an opportunity to come to a service laboratory like ours which has the expertise and experience and are going out and acquiring the new platforms. It allows them to use the technology well before they might choose to acquire a system themselves or actually get to the point of saying ‘I do not want to acquire a system.’”

Beckman, of course, has long been a leading manufacturer of biomedical testing instrument systems, tests and supplies as well as a services provider. The Fullerton, Calif.-based company reported 2007 annual sales of $2.76 billion “with more than 78 percent of this amount generated by recurring revenue from supplies, test kits and services.”

Beckman jumped firmly into genomics services in 2006 with the purchase of Agencourt for $100 million. It quickly spun out the sequencing equipment business, Agencourt Personal Genomics, selling it to Applied Biosystems Inc. (ABI was itself later acquired by Invitrogen in 2008 for $6.5 billion.)

At Beckman, the strong Agencourt brand was retained and turned into a center of excellence for nucleic acid products and services. This year, Beckman acquired Cogenics from Clinical Data. Services currently available through BCG include sequencing, sample preparation, genotyping, gene expression, biological efficiency and safety testing, with support for all levels of regulatory compliance, including Clinical Laboratory Improvement Amendments (CLIA).

Moreover, BCG offers three commercially available next-generation sequencing platforms: Roche 454 Genome Sequencer FLX with Titanium, Applied Biosystems SOLiD and Illumina Genome Analyzer. While pricing for many sequencing platforms is declining, Evans notes the platform cost is only part of the buy-versus-outsource question. She says needed know-how, sample prep expertise, bioinformatics, and other issues increasingly incline companies to outsource. In line with this, she says BCG can collaborate on study design questions with research organizations.

The addition of Cogenics brought new customers and strengthened BCG’s global reach. Beckman Coulter is a global business, but most of Agencourt’s business had been U.S.-based. The acquisition added a facility in the U.K., another in Germany, and a co-marketing partnership in China. “Another important component is that Cogenics’ North Carolina facility has a CLIA-licensed laboratory, and we do primarily work for our pharmaceutical customers in that lab. But it also creates a strategic opportunity for us to think of additional ways we can use that lab and different ways we can leverage Beckman Coulter product lines and opportunities,” says Evans.

It’s also fair to say a large chunk of Agencourt’s customers were academic. “We have a very diverse customer population from academic to biotech and biopharma but probably stronger on the research, academic, and industrial (bioag) side of the business. [The] Cogenics strategy was also to value a broader customer base but to focus on partnerships with pharmaceutical companies and especially to utilize the clinical genotyping and gene expression resources and clinical genotyping for clinical trial support. They also, of course, had an almost 20-year business supporting pharma biological safety testing.”

BCG is now focused on integrating the two businesses and driving revenue. Says Evans, “Our near-term initiatives are a combination of business efficiency and operation consolidation. So like any integration, we’re looking for best practices across each of these laboratories. We’re looking for increased automation. We’re looking for ways to enhance our information flow from lab to lab and be able to integrate support for our customers. We’re now five months into [that effort] and will continue over the next number of months. On the other side it’s a redesigned sales team and establishing the broader product offering and stronger messaging. We are expecting growth and have planned for growth.”

Watching the evolution of the broadly-defined genomics services business will be interesting. It’s long been a patchwork of big and small players. Recently CROs have shown interest in expanding their genomics services offering.

Julie Moore, director of global strategic marketing for BCG, says “We’ve just seen Covance acquire the Rosetta business in Seattle. That’s an interesting expansion for their business. I think there’s increasing overlap between the CROs and the typical genomics services providers such as us.” Consequently, CROs, which constitute an important customer segment for BCG, can sometimes become competitors.

So far, the genomics services market shows few signs of the consolidation striking other biotech markets. “We’ve seen a lot of small mom and pop shops popping up with the Sanger sequencing now that it’s more available and cheaper,” says Moore.

BCG’s customer set, predictably, represents a jumble of opportunistic sales and longer-term relationships. Evans conceded they don’t know how this will evolve over time, although deeper, long-term relationships with big customers are clearly desirable. Moore adds that BCG has strong relationships with some of the top biopharma community, some of those based on Cogenics’ very solid safety testing service for biologicals.

Having broadened its products and services portfolio, BCG is now wading more forcefully into the services market, trumpeting its newly gained global presence, its heritage of next-gen sequencing expertise, and the stability of being part of services industry powerhouse Beckman Coulter.

[R&D and Market analysis of this "game changer" is upon request. Pellionisz; HolGenTech_at_gmail.com]


IBM Looks to Make DNA Analysis Cheap and Easy

October 6, 2009

IBM scientists are developing a technology that would enable physicians and other researchers to quickly and easily read and analyze strands of DNA, an advance that could lead to greater personalized health care. The scientists are looking to create a “DNA transistor” in which a DNA molecule is threaded through a 3-nm “nanopore” and the genetic information is analyzed. The key challenge is finding a way to control the speed at which the DNA runs through the nanopore. Health care providers armed with an individual’s genetic information can more easily determine the diseases to which the patient is predisposed, and which treatments would work best.

IBM scientists are developing a chip that can easily and quickly read strands of DNA, a development that could lead to more personalized health care.

The goal of the research project, announced Oct. 6, is to create the capability to analyze a person’s genome information for as little as $1,000, which could mean better diagnosis and treatment for patients.

“What is the next big thing in biotechnology?” Gustavo Stolovitzky, an IBM researcher on the project, asked in a video posted by IBM. “The answer is kind of simple, if you are in the field: You need to know how to sequence DNA fast and cheap.”

IBM researchers—from such fields as nanofabrication, microelectronics, physics and biology—are looking to do this through a silicon-based “DNA transistor.” The technique sends a DNA molecule through a 3-nanometer-wide hole—or “nanopore”—in a chip. As it’s squeezed through the nanopore, an electrical sensor reads the DNA and analyzes the genetic data within.

A nanometer is about 100,000 times smaller than the width of a human hair.


If physicians know a person’s individual genetic code, they can determine whether the person is predisposed to certain diseases, which treatments will work best and whether a particular patient will have an adverse reaction to medicine, according to IBM. It could also lead to faster discovery and testing of new products.

“Personalized medicine will become a reality,” Stolovitzky said.

The key challenge is finding a way to control the speed at which the DNA molecule travels through the nanopore, according to IBM researchers. They have created a multilayer metal/dielectric nanostructure that contains the nanopore and uses voltage between the metal layers to control the electric field inside it.

The goal is to trap the DNA in the nanopore. Researchers believe that by turning these gate voltages on and off, they can slow the movement of the DNA through the nanopore at a rate—about one nucleotide per cycle—that would make DNA readable.
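The gating scheme described above, cycling trap voltages on and off so the strand advances one nucleotide per cycle, behaves like a simple clocked ratchet. A toy sketch of that idea (purely illustrative; this is not a model of IBM's actual device physics):

```python
# Toy model of the "DNA transistor" ratchet: each on/off gate cycle
# advances the strand through the nanopore by exactly one nucleotide,
# so the sensor reads one base per cycle.
def ratchet_read(strand, n_cycles):
    reads = []
    position = 0
    for _ in range(n_cycles):
        if position >= len(strand):
            break                        # strand has fully translocated
        reads.append(strand[position])   # sensor reads the trapped base
        position += 1                    # gate cycle ratchets the strand forward
    return "".join(reads)

print(ratchet_read("GATTACA", 5))   # GATTA
```

The point of the one-base-per-cycle clocking is that the readout rate, and hence the error profile, is set by the gate voltage cycle rather than by uncontrolled diffusion of the molecule.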

“We want to control the passing of DNA through the nanopore,” Stolovitzky said.

If successful, the project could lead to handheld devices that could easily and cheaply read and analyze DNA.

“The DNA transistor is one of those technologies that will, in the longer run, achieve sequencing very cheap and very fast,” Stolovitzky said, cautioning that there still is a lot of work to be done before IBM can say the project is a success.
[R&D and Market analysis of this "game changer" is upon request. Pellionisz; HolGenTech_at_gmail.com]


IBM CEO also wants to resequence the health-care system

October 6, 2009 by Chris Seper

CLEVELAND, Ohio — IBM’s chief executive officer says electronic medical records need to go open source.

Sam Palmisano told the attendees of the Cleveland Clinic Medical Innovation Summit that health care can barely be called a “system.” “If the health system was a patient we wouldn’t be able to read its vital signs,” he told an audience of about 400 physicians, researchers, medical investors and business leaders.

To become a true system, improve results, cut medical errors and trim costs, Palmisano said the medical industry must agree to universal, open and non-proprietary standards for electronic health records. In essence, he said, electronic records have to be like generic drugs and the data must flow throughout the system in which no provider owns the process.

“When you enforce standards then you get scale,” Palmisano said. “If you could force in your industry standards then costs will drop. They will plummet.”

The open-source argument was part of Palmisano’s four-pronged approach to creating a true system in health care, which also included emphasizing wellness, creating a new code of ethics and enhancing broad collaboration between health-care stakeholders.

Many of the concepts Palmisano proposed in his speech aren’t new. But it’s the first time the IBM chief laid out his company’s specific vision for health care.

Palmisano spoke on the same day IBM announced plans to sequence the personal genome and do it for a rock-bottom price of, ultimately, $100. IBM joins nearly 20 other companies pursuing genome sequencing, and success in the field — at a low cost — would press the fast-forward button on personalized medicine, on clinical testing of new products, and on determining individuals’ predispositions to specific diseases.

“To bring about an era of personalized medicine, it isn’t enough to know the DNA of an average person,” Gustavo Stolovitzky, an I.B.M. biophysicist, told The New York Times. “As a community, it became clear we need to make efforts to sequence in a way that is fast and cheap.”

Palmisano barely touched on genome sequencing in his speech in Cleveland, instead focusing on health reform in a room full of people directly responsible for various aspects of the health of medicine.

Along with the open-source approach, Palmisano said health care also needed to:

Emphasize wellness. IBM, for example, pays for its employees’ doctor visits. It also plans to expand an incentive program for employees to live healthy.

Increase collaboration, which would require sharing of health information and data among patients, health-care providers and insurance companies, as well as redistributing the payments and responsibilities in the health system. Palmisano argued that transforming a health system into a cloud computing model — “think of it as if it worked like Google” — with per-transaction charges would streamline the system.

Install aggressive ethics and public policies that accommodate the invasive nature of modern medicine. “We’re entering a different world ladies and gentlemen,” he said. “The idea of a computer chip in your body, pills you take to monitor your health, sharing data with an insurance company and your employer — I know not everyone is happy with that. Not everyone wants to be a human petri dish.”

Palmisano highlighted health care’s failings — a lack of electronic-record adoption, constant data re-entry and unnecessary testing — to question its status as a true system. He compared electronic health records to the UPC symbol or the ATM of the health industry, allowing patients to receive better care by centralizing data and eliminating opportunities for errors.

“Everyone agrees on its purpose: American health care must be patient-centric,” Palmisano said, but added: “Nothing is connected.”

As a company, IBM is eager to become the system manager. Palmisano noted that IBM manages Malta’s water system and traffic systems in cities from Australia to Sweden.

But asking a company like IBM to take over a government’s health system has its downsides. Dan Pelino, IBM’s general manager for healthcare and life sciences, noted after Palmisano’s speech that IBM also manages the entire health system for Denmark.

When asked what would happen if Denmark ever wanted to switch vendors, Pelino said: “They probably couldn’t switch vendors.”

[R&D and Market analysis of this "game changer" is upon request. Pellionisz; HolGenTech_at_gmail.com]

I.B.M. Joins Pursuit of $1,000 Personal Genome

New York Times

Published: October 5, 2009

One of the oldest names in computing is joining the race to sequence the genome for $1,000. On Tuesday [October 6, 2009 - AJP], I.B.M. plans to give technical details of its effort to reach and surpass that goal, ultimately bringing the cost to as low as $100, making a personal genome cheaper than a ticket to a Broadway play.

The project places I.B.M. squarely in the middle of an international race to drive down the cost of gene sequencing to help move toward an era of personalized medicine. The hope is that tailored genomic medicine would offer significant improvements in diagnosis and treatment.

I.B.M. already has a wide range of scientific and commercial efforts in fields like manufacturing supercomputers designed specifically for modeling biological processes. The company’s researchers and executives hope to use its expertise in semiconductor manufacturing, computing and material science to design an integrated sequencing machine that will offer advances both in accuracy and speed, and will lower the cost.

“More and more of biology is becoming an information science, which is very much a business for I.B.M.,” said Ajay Royyuru, senior manager for I.B.M.’s computational biology center at its Thomas J. Watson Laboratory in Yorktown Heights, N.Y.

DNA sequencing began at academic research centers in the 1970s, and the original Human Genome Project successfully sequenced the first genome in 2001 at a cost of roughly $1 billion [$3 Billion - AJP].

Since then, the field has accelerated. In the last four to five years, the cost of sequencing has been falling at a rate of tenfold annually, according to George M. Church, a Harvard geneticist. In a recent presentation in Los Angeles, Dr. Church said he expected the industry to stay on that curve, or some fraction of that improvement rate, for the foreseeable future.
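The tenfold-per-year decline Dr. Church describes can be turned into a quick back-of-envelope projection. The sketch below is illustrative arithmetic only; the $50,000 starting figure is simply the upper end of the article's quoted price range, not official pricing data:

```python
# Illustrative projection of per-genome sequencing cost under a
# tenfold-per-year decline, as described by George Church.
def projected_cost(start_cost, start_year, year, factor=10.0):
    """Cost after (year - start_year) years of `factor`-fold annual decline."""
    return start_cost / factor ** (year - start_year)

# Roughly $50,000 per genome in 2009 (upper end of the article's range):
for year in range(2009, 2014):
    print(year, projected_cost(50_000, 2009, year))
# 2009 50000.0 ... 2011 500.0 ... 2013 5.0
```

On this (admittedly optimistic) curve, the $1,000 genome falls between 2010 and 2011, and the $100 genome IBM mentions arrives within roughly three years, which is why "some fraction of that improvement rate" is the operative hedge.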

At least 17 startup and existing companies are in the sequencing race, pursuing a range of third-generation technologies ["next" or "n-th" generation is vague. Better terms for the novel approaches are "nano-sequencing" or "single molecule sequencing" - AJP]. Sequencing the human genome now costs $5,000 to $50,000, although Dr. Church emphasized that none of the efforts so far had been completely successful and no research group had yet sequenced the entire genome of a single individual.

The I.B.M. approach is based on what the company describes as a “DNA transistor,” which it hopes will be capable of reading individual nucleotides in a single strand of DNA as it is pulled through an atomic-size hole known as a nanopore. A complete system would consist of two fluid reservoirs separated by a silicon membrane containing an array of up to a million nanopores, making it possible to sequence vast quantities of DNA at once.

The company said the goal of the research was to build a machine that would have the capacity to sequence an individual genome of up to three billion bases, or nucleotides, “in several hours.” A system with this power and speed is essential if progress is to be made toward personalized medicine, I.B.M. researchers said.
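The throughput claim can be sanity-checked from the figures the article itself supplies: three billion bases, an array of up to a million nanopores, and (from later in the article) a dwell of perhaps a millisecond per base. The sketch below is back-of-envelope arithmetic under those assumptions, not an IBM specification:

```python
# Idealized throughput for the nanopore array described in the article.
# All figures (pore count, per-base dwell time, coverage) are illustrative
# assumptions taken from the text, not IBM specifications.
GENOME_BASES = 3_000_000_000   # one human genome
PORES        = 1_000_000       # "an array of up to a million nanopores"
DWELL_S      = 1e-3            # "stop for perhaps a millisecond" per base

def read_time_seconds(coverage=1):
    """Raw pore time to read the genome at the given fold-coverage."""
    total_bases = GENOME_BASES * coverage
    bases_per_second = PORES / DWELL_S  # one base per pore per dwell cycle
    return total_bases / bases_per_second

print(read_time_seconds(1))    # 3.0  — seconds at 1x coverage
print(read_time_seconds(30))   # 90.0 — seconds at a typical 30x redundancy
```

At these idealized rates even 30-fold redundancy takes only minutes of raw pore time, so the quoted "several hours" presumably absorbs sample preparation, imperfect pore utilization, and error correction.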

At the heart of the I.B.M. system is a novel mechanism, something like nanoscale electric tweezers. This mechanism repeatedly pauses a strand of DNA, which is naturally negatively charged, as an electric field pulls the strand through a nanopore, an opening just three nanometers in diameter. A nanometer, one one-billionth of a meter, is approximately 80,000 times smaller than the width of a human hair.

The I.B.M. researchers said they had successfully used a transmission electron microscope to drill a hole through a semiconductor device that was intended to “ratchet” the DNA strand through the opening, stopping for perhaps a millisecond to determine the order of the four nucleotide bases — adenine, guanine, cytosine and thymine — that make up the DNA molecule. The I.B.M. team said that the project, which began in 2007, could now reliably pull DNA strands through nanopore holes but that the sensing technology to control the rate of movement and to read the specific bases had yet to be demonstrated.

Despite the optimism of the I.B.M. researchers, an independent scientist noted that various approaches to nanopore-based sequencing had been tried for years, with only limited success.

“DNA strands seem to have a mind of their own,” said Elaine R. Mardis, co-director of the genome center at Washington University in St. Louis, noting that DNA often takes a number of formations other than a straight rod as it passes through a nanopore.

Dr. Mardis also said previous efforts to create uniform silicon-based nanopore sensors had been disappointing.

One of the crucial advances needed to improve the quality of DNA analysis is the ability to read longer sequences. Current read lengths are generally in the range of 30 to 800 nucleotides, while the goal is to read sequences as long as one million bases, according to Dr. Church, who spoke in July at a forum sponsored by Edge.org, a nonprofit online science forum.
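Read length matters because shorter reads mean vastly more fragments to stitch back together. A minimal sketch, using the read lengths Dr. Church cites and ignoring the overlap and multi-fold coverage that real assembly requires:

```python
# Reads needed to tile one human genome end to end at each read length
# mentioned by Dr. Church. Illustrative only: real assembly needs
# overlapping reads and many-fold coverage on top of this floor.
GENOME = 3_000_000_000

def reads_to_tile(read_len):
    """Minimum non-overlapping reads to span the genome."""
    return GENOME // read_len

for read_len in (30, 800, 1_000_000):
    print(read_len, reads_to_tile(read_len))
# 30        -> 100,000,000 reads
# 800       ->   3,750,000 reads
# 1,000,000 ->       3,000 reads
```

The five-orders-of-magnitude drop in fragment count is what makes million-base reads so attractive for assembly and for resolving repetitive regions.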

Other approaches to faster, cheaper sequencing include a biological nanopore approach being pursued by Oxford Nanopore Technologies, a start-up based in England [and backed by Illumina - AJP], and an electron microscopy-based system being designed by Halcyon Molecular, a low-profile Silicon Valley start-up that has developed a technique for stretching single strands of DNA laid out on a thin carbon film. The company may be able to image strands as long as one million base pairs, said Dr. Church, who is an adviser to the company, and to several others.

“To bring about an era of personalized medicine, it isn’t enough to know the DNA of an average person,” said Gustavo Stolovitzky, an I.B.M. biophysicist, who is one of the researchers who conceived of the I.B.M. project. “As a community, it became clear we need to make efforts to sequence in a way that is fast and cheap.”

[Since the IBM project "started in 2007," one wonders why it is being played up now, and why the emphasis is on sequencing, or even on the easy affordability of full human DNA sequences, when the often-quoted George Church ("the Edison of PostModern Genomics") has been openly talking about "zero dollar sequencing" -- and puts clear emphasis on the huge gap between the cost of sequencing (~$0) and the tremendous value of understanding genome function - in connection with diseases, their prevention and, should they appear, their diagnosis, therapy and cure in the oncoming "personalized medicine", and beyond, in "genome-based personalization in the Genome Based Economy". The answer is probably in the third paragraph (highlighted in purple there - AJP): "The name of the game in the Genome Based Economy is understanding genome function - in terms of Genome Informatics". No wonder, therefore, that 2007 was not only the year of "starting this IBM nano-sequencing project" but also the year INTEL stepped into the same arena with a massive investment in Pacific Biosciences. Thus, a truly sensational announcement would be, e.g., "Intel scoops from IBM its Watson Center genome informatics pioneer Isidore Rigoutsos" (who published in 2006, after much struggle, his breakthrough paper on the "pyknon" architecture of the DNA, putting our understanding of genome function on an entirely new basis) - Pellionisz; HolGenTech_at_gmail.com, Oct. 5, 2009]


$100 million in grants thrill Medical Center [Houston]

Houston Chronicle

Sept. 30, 2009, 11:39PM

Three Texas Medical Center institutions will receive more than $100 million in research grants as part of a $5 billion stimulus package President Barack Obama announced Wednesday to fight disease and create jobs.

The program, which Obama called “the single largest boost to biomedical research in history,” will fund cutting-edge research, the hiring of researchers and other staff, and laboratory and equipment upgrades.

Targets include cancer, heart disease and autism, with an emphasis on genetic causes.

“This is huge,” said Susan Hamilton, senior vice president and dean of research at Baylor College of Medicine. “It couldn't have come at a better time, given the economic uncertainty and the slowdown in government funding the last decade.

“It'll save research jobs and create new ones, provide money for training, allow institutions to improve labs.”

Under the program, some grants of which are still to be awarded, Baylor has received more than $37 million, the most in the state; the University of Texas M.D. Anderson Cancer Center more than $29 million; and the UT Health Science Center at Houston nearly $24 million.

The National Institutes of Health is awarding the grants.

Baylor and M.D. Anderson are waiting on the awarding of grants involving the Cancer Genome Atlas project, an effort to understand the genetic underpinnings of cancer.

The program includes $175 million for the national project, and Baylor and M.D. Anderson are expecting more than $10 million apiece.

Richard Gibbs, director of Baylor's human genome sequencing center, said the cancer genome project is “going fabulously” and should produce diagnostic tests that replace existing ones in a few years.

Gibbs said that when all is said and done, the additional funding will double the Baylor center's $50 million budget.

The biggest individual winner of the stimulus funding was Eric Boerwinkle, a UT-Houston geneticist who has spent much of his career studying why heart disease runs deep in some families but not others.

The awards announced Wednesday included more than $12 million for a research project he's leading into genetic susceptibilities to heart, lung and blood diseases.

$14 million grant

Still to be announced is Boerwinkle's grant for the project's second year — an additional $14 million. It is the biggest single grant in Texas and one of the biggest in all states.

Work on grants typically doesn't begin for months after they're awarded, but Boerwinkle said his “official start date is today.”

“We're hitting the ground running,” said Boerwinkle. “That's a must when you need to spend a $26 million budget in two years.”

Boerwinkle said spending would come on equipment, hiring and the preparation of research material.

Under the program, proposals had to spell out how the project would improve the economy.

Risks and rewards

The program also called for high-risk, high-reward projects, a contrast with the more conservative approach usually favored by the NIH.

Officials at Baylor, UT-Houston and MD Anderson all agreed with Obama that there's never been a single biomedical research boost like this before.

Dr. Peter Davies, UT-Houston's vice president for research, said the institution ultimately will receive $45 million from the program in the next two years, a significant chunk considering it got $91 million in traditional funding from the NIH in 2008.

Others getting money

Other local institutions awarded grant money through the program include the UT Medical Branch at Galveston, with $10.5 million; the Texas A&M Health Science Center, with $6.4 million; the University of Houston, with $3.4 million; and Rice, with $3.2 million.

The program is funded through the $787 billion federal economic stimulus program the president signed into law in February.

Obama said the new grants will support 2,000 projects and create tens of thousands of jobs.

[R&D and Market analysis of this "game changer" is upon request. Pellionisz; HolGenTech_at_gmail.com].


Obama, Collins Laud $5B in NIH Stimulus Funds, Much for Genomics

September 30, 2009

By a GenomeWeb staff reporter

NEW YORK (GenomeWeb News) – The National Institutes of Health has awarded more than 12,000 grants totaling around $5 billion so far under the economic recovery and stimulus package, the White House said today.

President Barack Obama commuted to Bethesda this morning to announce the funding as a milestone, to unveil a $175 million grant for cancer genomics, and to tour the NIH campus.

In late-morning speeches before a crowd of NIH staff, President Obama, Health and Human Services Secretary Kathleen Sebelius, and NIH Director Francis Collins loosely outlined how the $5 billion in grants over two years — nearly half of NIH’s total $10 billion appropriation under the American Recovery and Reinvestment Act — will stimulate research and create jobs.

A number of genomics-focused programs will be funded under the stimulus package, including $175 million over two years for The Cancer Genome Atlas, a joint effort between the National Human Genome Research Institute and the National Cancer Institute, according to a fact sheet released today by the White House.

“This ambitious effort promises to open new windows into the biology of all cancers, transform approaches to cancer research, and raise the curtain on a more personalized era of cancer care,” Collins said in a statement, describing the TCGA funding as “an excellent example of how the Recovery Act is fueling discoveries that will fundamentally change the way we fight disease and improve our lives.”

"We are about to see a quantum leap in our understanding of cancer," Collins said. [This should be taken literally, since the fractal recursive iteration of protein synthesis, because of structural variants in the genome can result in a loss of quantum equilibrium - with the hologenomic entropy hyperescalating - AJP]

NCI and NHGRI will also each commit $50 million in non-Recovery Act funds to the Genome Atlas over this two-year period, according to NCI.

"We know that this kind of investment will also lead to new jobs: tens of thousands of jobs conducting research, manufacturing and supplying medical equipment, and building and modernizing laboratories and research facilities," Obama said in a statement. [Maybe, in addition to maintaining or improving the material infrastructure and amassing data that are already close to impossible to interpret, there will be some small change left for "gray matter". After all, if the center of excellence of intellect of the "Copenhagen group" - which worked out quantum theory - had not existed first, it would have been foolish or outright dangerous to engage in mega-projects to release nuclear energy - AJP]

At the event, Collins told the NIH assembly that the grants "will fund trailblazing research into treating and preventing our most scary diseases."

“Since arriving [at NIH] six weeks ago I’ve spent a lot of time reviewing some of these grants — I wanted to see what was there — and they propose some of the most innovative and creative directions for research that I have ever seen in 16 years at NIH,” the new NIH director told the crowd.

More than $1 billion of the grant funding is dedicated to using technologies developed through the NIH’s genomics programs, specifically through the Human Genome Project, the White House said.

For cancer, heart disease, and many other areas, researchers will use Recovery Act funding for genomics and genetics-based research approaches to pursue knowledge about these diseases.

According to the White House, over the two years of recovery funding NIH stimulus grants will support studies including:

• Seeking to use microRNAs to predict which patients have tumors that will spread throughout the body;

• Conducting genomic sequencing of individuals with autism and their parents in order to find causes for the disease in the genome and in the environment and to develop and test diagnostic screening tools;

• Cataloging genetic [genomic - AJP] changes associated with oral cancer in order to identify and guide treatment of pre-malignant lesions;

• Sequencing the genomes of more than 10,000 individuals with known risk factors for heart disease in order to identify those risk genes; [that are affected by derailed hologenome regulation - AJP]

• Comparing the genomes of individuals with high and low HDL cholesterol levels in order to accelerate development of drugs that reduce the risk of heart attack;

• Examining the genes [genomes - AJP] of more than 7,000 heart failure patients to identify variants that will enable doctors to identify those at risk for heart failure;

• Identifying genetic [genomic - AJP] markers for increased risk of hypertension, obesity, cardiac hypertrophy, and kidney failure in African Americans;

• Finding markers that circulate in the blood that may signal the onset of a plaque rupture or of thrombosis;

• Analyzing biomarker and genetic [genomic - AJP] data from international atrial fibrillation patient pools in search of markers to identify patients that will benefit from statin therapy.

[R&D and Market analysis of this "game changer" is upon request. Pellionisz; HolGenTech_at_gmail.com]

NIH Grants $45M for Genome Science Centers [Bye-bye Crick's "Central Dogma"- AJP]

[Beginning to "re-think long-time beliefs" - AJP in peer-reviewed paper, presented to public, and peer-reviewed acceptance in Cold Spring Harbor]

September 28, 2009

By a GenomeWeb staff reporter

NEW YORK (GenomeWeb News) – The National Institutes of Health has pledged $45 million in grants to establish two new genomics centers at the University of North Carolina at Chapel Hill and at the Medical College of Wisconsin (MCW), as well as to continue funding existing centers at Johns Hopkins University and at the University of Southern California.

The two new Centers of Excellence in Genomic Science at UNC and MCW will pursue genomics studies of mental health and gene regulation, respectively. [Non-Euclidean "Geometrization of Neuro- and HoloGenome sciences" may be helpful - AJP]

Under the new grants, MCW will receive around $8 million over three years and UNC will reap around $8.6 million over five years from the National Human Genome Research Institute and the National Institute of Mental Health.

Johns Hopkins' genomics center will receive around $16.8 million over five years to continue epigenetics of disease studies and USC will use around $12 million over the same period to conduct computational and informatics-based research of genetic variation and disease.

"Our aim is to foster the formation of innovative research teams that will develop genomic tools and technologies that help to advance human health," NHGRI's Acting Director, Alan Guttmacher said in a statement. "Each of these centers is in a position to tackle some of the most challenging questions facing biology today."

The grant to UNC will support the Center for Integrated Systems Genetics (CISGen), where scientists will seek to identify genetic and environmental factors that underlie and contribute to psychiatric disorders.

CISGen will use mouse models and computational biology to study genetic and environmental factors of such disorders, and it will develop new mouse strains specifically to study relevant behavioral traits. These models will serve as a resource of genomic studies screening for genetic variants that are linked to human psychiatric disorders.

"We can use the mouse to narrow the search space from billions of possibilities to only hundreds or even dozens," CISGen co-director and UNC Assistant Professor Fernando Pardo-Manuel de Villena said in a statement. "It's like the PowerBall when you know four or five of the six numbers for sure."

"We chose the hardest problems out there, the ones that have been most resistant to scientific inquiry in humans," explained Patrick Sullivan, CISGen's other co-director and a distinguished professor at the UNC School of Medicine. "We chose to study mouse versions of psychiatric traits potentially relevant to autism, depression and anxiety, and antipsychotic drug side effects and response to treatment."

In Wisconsin, the new center is a collaboration between the Medical College of Wisconsin, the University of Wisconsin, Madison, and Marquette University.

The team at MCW will focus on developing tools for analyzing the proteins that bind to particular DNA regions, in an effort to understand how changes in protein-DNA interactions affect gene regulation.

"What is needed, and what we will develop in this center, is technology that is able to identify all of the proteins that are interacting with the genome, even if we do not know in advance what their function may be," said the center's co-Director, Michael Oliver, a professor at MCW's Biotechnology and Bioengineering Center and the Human and Molecular Genetics Center.

Other NIH-funded Centers of Excellence in Genomic Science include centers at the California Institute of Technology, Harvard Medical School, Stanford University, Arizona State University, Yale University, and the Dana-Farber Cancer Institute.

[When Crick's (false) "Central Dogma of Molecular Biology" (i.e. "information transfer from proteins to DNA 'never happens'" - see Pellionisz, 2009, Fig. 7) was shaken in 1970, Crick upped the ante: "such a finding would shake the whole intellectual basis of molecular biology". Also, Ohno came to the rescue with his "Garbage DNA" (1970) and "Junk DNA" (1972) misnomers, implying that even if such recursion did happen, it would only find "Junk DNA" whose "importance is doing nothing" - see Fig. 3. While "shaking the intellectual basis" was thus fended off a generation ago, Francis Collins started ENCODE in 2003, only to conclude in 2007 that "the scientific community will need to rethink some long-held views" (ref. in Pellionisz, 2008). Some lucky few did not have to "re-think", since they never believed either the "Central Dogma" or the "Junk DNA" false axioms in the first place, and came up with the (recursive!) fractal model of the brain cell as early as 1989 - punished for double heresy by denial of NIH grant continuation - laying down the theory of recursive fractal iteration of (holo)genome function. With Francis Collins now at the helm of the new NIH, research along the lines of "new thinking" is amply rewarded for some. - Pellionisz; HolGenTech_at_gmail.com, Sept. 29, 2009]


David Duncan Has a Prescription for American Health Care

Thursday, 17 September 2009

[David Ewing Duncan - survival of the fiercest in prevention - AJP]

The 13th Annual Scientific Meeting of the Heart Failure Society of America (HFSA) today featured a discussion by David Ewing Duncan titled "One Man's Quest for Personalized Medicine." Over the course of one year, Duncan submitted himself to hundreds of tests that could predict and even prevent future illness.

One test made predictions about the health of Duncan's heart. By gathering data on his cholesterol levels, a heart CT scan, a genetic profile, and more, the test showed personalized predictions that were specific to Duncan's genes and physiology. One scenario showed Duncan's risk of heart attack at 70% over the next twenty years. That forecast changed dramatically if Duncan maintains his current weight instead of gaining the typical pound per year for a man over 40. With a steady weight, the risk fell to about 2 percent and with cholesterol lowering statins, Duncan's risk fell to zero.

"Many of these tests aren't ready for the general public, but they do give us an interesting glimpse into the future of medicine," said Duncan. "People are curious as to where this technology is going and we may never know unless we provide the organized push needed to learn more."

This specific test would cost nearly $1,000, a price that should fall as demand increases, but Duncan points out that a diagnostic cardiac catheterization can cost more than $25,000 and a heart bypass operation runs well over $85,000. The cost also has to be weighed against the 80 million Americans suffering from heart disease and the fact that nearly $450 billion was spent last year in direct and indirect costs for treatment of heart disease.
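The article's cost figures invite a quick side-by-side comparison. The sketch below simply restates its numbers; it is illustrative arithmetic, not a cost-effectiveness study:

```python
# Rough comparison using the article's figures: screening a large at-risk
# population versus the procedures screening might help avoid.
SCREEN_COST  = 1_000           # predictive test, per person
CATH_COST    = 25_000          # diagnostic cardiac catheterization
BYPASS_COST  = 85_000          # heart bypass operation
AT_RISK      = 80_000_000      # Americans with heart disease
ANNUAL_SPEND = 450e9           # direct and indirect heart-disease costs

screen_all = SCREEN_COST * AT_RISK
print(f"Screening all at-risk: ${screen_all / 1e9:.0f}B "
      f"vs ${ANNUAL_SPEND / 1e9:.0f}B spent per year")
# Each avoided bypass offsets this many screening tests:
print(BYPASS_COST // SCREEN_COST)
```

A one-time $80 billion screening bill against $450 billion in annual spending is exactly the "upfront costs" tension Dr. Mann raises in the next paragraph: the arithmetic favors prevention only if screening actually averts enough of the expensive downstream procedures.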

"David Duncan's journey through the American Health care system as a healthy person illustrates one of the major problems facing the American health care system today. That is, as health care professionals we recognize that it is imperative to identify patients who are at risk for developing heart disease, and then implement personalized strategies to reduce this risk," said Dr. Douglas Mann, HFSA President. "However, the upfront costs associated with screening large populations of patients with some of the emerging technologies may not be sustainable in the future. As we move forward in our efforts to prevent heart disease and heart failure, it will be essential to obtain outcomes data that identify the most cost effective and accurate strategies for identifying the patients who are at risk of developing heart disease and heart failure, as well as personalizing our approaches to these patients."

["Survival of the fiercest to prevent" - this might be the slogan in the era of "personalized preventive health care". Dave was threatened by a condition that mobilized all his efforts to prevent it from happening. The article below exemplifies a massive effort to prevent another severe condition. It is clear to most everyone that the health care system as we used to know it has run out of resources and disruptive paradigm-shifts will govern the future outcome. Whichever of the major life-threatening conditions would be able to mobilize the most effective PREVENTION will secure productive and happy longevity for the threatened groups. - Pellionisz; HolGenTech_at_gmail.com, Sept. 28, 2009]


New Survey Finds Alzheimer's Disease a 'National Priority,' with Voter Support Across Party Lines for Congressional Action and Faster FDA Review of New Therapies

- Survey also finds concern over cost of care and strong voter willingness to make Alzheimer's an issue at the polls -

WASHINGTON, Sept. 24 /PRNewswire/ -- A new voter survey sponsored by the ACT-AD Coalition (Accelerate Cure/Treatments for Alzheimer's Disease) finds that three-quarters of Americans nationwide and across party lines say it is personally important to them to find a cure or to prevent Alzheimer's disease, while a similar proportion of the national electorate say they look to Congress to make it "a national priority" to speed up the Food and Drug Administration's (FDA) review process in specific ways for therapies that slow, halt or reverse the disease. Voters in large numbers also said that they would not be able to cover the personal cost of Alzheimer's care and that they were ready to reward or punish elected officials at the polls based on their willingness to act on Alzheimer's now.

The survey was conducted jointly by the bipartisan team of Lake Research Partners (D) and American Viewpoint (R). Findings were presented today at the Rock Stars of Science Capitol Hill Briefing, sponsored by Geoffrey Beene Gives Back((R)), Research!America , Alzheimer Association, Wyeth, and Elan; and made possible with the cooperation of The Congressional Biomedical Research Caucus and the Alzheimer's Caucus. The briefing united leaders in medical research including the Director of the National Institutes of Health, Francis S. Collins, MD, PhD, and rock star Joe Perry from Aerosmith, to rally lawmakers to increase funding for medical research priorities like Alzheimer's, cancer, HIV/AIDS and genomics.

According to Celinda Lake, President of Lake Research Partners, "There is clear voter support for action on Alzheimer's Disease. This survey sends a message to elected officials that Alzheimer's has captured the nation's attention, and that it may prove to be an important electoral issue."

Dan Perry, President of ACT-AD, a coalition of national organizations seeking to accelerate development of potential cures and treatments for Alzheimer's, believes that the survey reflects "the beginning of an Alzheimer's challenge from the American voter. We are on the verge of becoming the next generation of Alzheimer's casualties, and yet we have access to the same number of treatments to slow or stop the disease that our parents and grandparents had - none. Add to this treatment vacuum the fact that the recession leaves Americans with lower personal savings and a near-bankrupt healthcare system that is ill-prepared to manage the coming Alzheimer's explosion. It should come as no surprise that Americans are telling their representatives to find answers to this problem before it is too late."

Survey Findings

According to the survey, voters nationwide and across the political spectrum believe:

Alzheimer's is a personal and national priority.

76 percent of voters nationwide say it is personally important to find a cure for Alzheimer's and 77 percent believe it is personally important to prevent Alzheimer's disease. Sentiment is similar across party lines [Note that Prevention takes precedence over Cure! - AJP].

79 percent want Congress to make it "a national priority" to speed up the FDA's review process for therapies that slow, halt or reverse the disease.

FDA review policy on Alzheimer's should reflect this priority.

In the past, the FDA has accelerated its review programs for life threatening diseases like HIV/AIDS and cancer in order to bring urgently needed therapies to patients without sacrificing safety. The survey suggests that American voters now support the same priorities for Alzheimer's therapies.

47 percent of voters nationwide think the FDA should make all possible Alzheimer's treatments available and allow patients and doctors to decide about risks and benefits, and another 28 percent believe promising drugs for Alzheimer's deserve the same priority status and fast track review by the FDA as promising drugs for other life-threatening diseases.

A minority of 15 percent think the FDA should continue to use current procedures of delaying a therapy until it is determined to be completely safe.

Without treatment breakthroughs, Americans cannot cover the cost of Alzheimer's care.

56 percent of voters nationwide said that they are not confident that they would be able to cover the cost of long-term Alzheimer's care if they or a loved one were diagnosed, with over a third (35 percent) saying they are not at all confident about covering the cost.

Financial assistance will be needed to pay for Alzheimer's.

72 percent strongly favor expanding Medicare coverage to include Alzheimer's therapies and services in non-traditional settings like the patient's home.

71 percent strongly favor tax deductions for long term care insurance.

68 percent strongly support allowing parents under 65 who have been diagnosed with Alzheimer's to be claimed as dependents by their children.

63 percent strongly support tax incentives to caregivers whose parents have Alzheimer's.

American voters across party lines are ready to make Alzheimer's drug review an issue at the polls....

About ACT-AD

ACT-AD is a growing coalition of more than 50 national organizations representing patients, providers, caregivers, consumers, older Americans, researchers and employers seeking to accelerate development of potential cures and treatments for Alzheimer's. The Coalition is directed by an Advisory Council made up of representatives from Alliance for Aging Research (AAR), Alzheimer's Foundation of America (AFA), American Society on Aging (ASA), National Alliance for Caregiving (NAC), National Association of Area Agencies on Aging (n4a), National Consumers League (NCL), Research!America, and the Society for Women's Health Research. The Coalition is supported by educational grants from Wyeth, Elan, Pfizer, Eli Lilly, and Medivation.

[The people have spoken. We all are likely to die - but it is far from indifferent whether that inevitable event is preceded by decades of Alzheimer's, Parkinson's, cancers (etc.), suffering and misery that afflict not only the patient but devastate families and ultimately societies. With a proper focus on preventing the most dreadful ways of passing away, we can have a populace that is active and happy until the inevitable comes upon us. - Pellionisz; HolGenTech_at_gmail.com, Sept. 27, 2009]


Habits to Help Prevent Alzheimer's - How to lower your risk.

Los Angeles Times
September 17, 2009

People may be able to reduce their risk of developing Alzheimer's disease, according to two recently published studies that are the latest in a long line of research. But does that hold for everyone? And by how much can you lower the risk? Here's a look at the facts.

Alzheimer's afflicts 5.3 million Americans and that number is predicted to grow to nearly 8 million in the next 20 years, according to a 2009 report by the Alzheimer's Assn. Because the disease has no cure, medical researchers continue to focus on preventing or delaying the disease.

Two weeks ago, a paper in the journal Dementia and Geriatric Cognitive Disorders reported that people with even moderately elevated cholesterol in their 40s have twice the risk of developing Alzheimer's disease in their 60s, 70s and 80s, adding blood cholesterol to a variety of already-known risk factors for the disorder.

High blood pressure, diabetes, obesity, smoking and high-fat diets have all been associated with increasing one's risk. Last week, a paper in the Journal of the American Medical Assn. reported that people eating a so-called Mediterranean diet and exercising regularly were at lower risk -- by as much as 50%.

And in earlier studies, other lifestyle factors -- such as doing the daily crossword puzzle or other intellectually stimulating activities, maintaining an active social life and getting a college education -- have been associated with lowered Alzheimer's risk.

The recent cholesterol study was large and long -- 9,844 Californians were followed for three decades -- and the data are striking. People with high cholesterol -- 240 or higher -- were 57% more likely to develop Alzheimer's disease. Those with borderline range cholesterol -- 200 to 239 -- were 23% more likely.

Still, this is an association at best. No one can say that high cholesterol causes Alzheimer's disease: Other factors linked to it in some way could be to blame. Also not known is whether lowering cholesterol -- for instance by taking statin drugs -- would be protective.

"An association is hypothesis-generating -- it allows us to begin looking at why that relationship might exist," says Dr. Jeffrey Cummings, director of the Mary S. Easton Center for Alzheimer's Disease Research at UCLA. One possible clue comes from animal studies: neurobiological studies have found that high cholesterol in the blood may trigger more of the brain-clogging substance beta-amyloid protein.

The diet and exercise study reported last week was smaller and shorter. In it, 1,880 elderly New Yorkers were followed for an average of 5 1/2 years. It found exercise alone was linked to as much as a 50% reduced risk, diet alone by as much as 40%.

This is not the first study to suggest that diet and physical activity may be protective. The Mediterranean-type diet "combines several foods and nutrients potentially protective against cognitive dysfunction or dementia, such as fish, monounsaturated fatty acids, vitamins B12 and folate, antioxidants (vitamin E, carotenoids, flavonoids), and moderate amounts of alcohol," the authors wrote.

There have been very few studies that meet the gold standard of human trials, in which people would be randomly assigned to either receive an intervention or not, then followed into their senior years to see if they develop Alzheimer's. Of trials that have been completed, no clear preventive treatment has been identified.


Certain naturally occurring neuroprotective substances are stimulated by physical activity, Cummings says. "So there are direct neurobiological effects of exercise that go beyond just better blood flow."

These effects of lifestyle on Alzheimer's are not yet proven. But -- in contrast to long-term drug treatments -- there is virtually no downside to recommending them, experts say.

Cummings says he often fields questions from families of his patients about what they can do to prevent the disease from happening to them. He recommends supplements of vitamins C and E and omega-3 fatty acids, exercise three times per week for 30 minutes and taking care of one's cardiovascular risk factors such as blood pressure and cholesterol.

Even in people with genetic predisposition for developing Alzheimer's (those who carry the apolipoprotein E-e4 gene have a doubled risk), lifestyle changes can make a difference, Cummings says.

"My experience is that people who know that they're at genetic risk take the environmental interventions much more seriously."

Debra Cherry, executive vice president of the Alzheimer's Assn. California Southland Chapter, says that when she served on the Healthy Brain Initiative, a government workshop seeking evidence-based recommendations for reducing risk, the strongest case made was for aerobic activity.

"I don't know if anyone will ever be able to do a randomized, controlled study, but the evidence is pretty strong that aerobic exercise protects against heart disease and brain disease," she says. "And there's very little risk to doing it."

[R&D and Market analysis of this "game changer" is upon request. Pellionisz; HolGenTech_at_gmail.com, Sept. 27, 2009]


Point of Inflection at the Cold Spring Harbor Laboratory's "Personal Genomes" Conference

[James Watson and Andras Pellionisz in Cold Spring Harbor Laboratory, Sept. 17, 2009]

The "Fractal Approach," based on the "Principle of Recursive Genome Function" that thinks outside the DNA>RNA>PROTEIN box, was presented and embraced. A list of the organizers, keynote speakers, presenters, and some attendees without presentations is available here.

[The Opening Keynote Speech and Introduction on Monday, September 14, was given by Prof. George Church (Harvard), "Setting the Tone for Personal Genomes" - envisioning "zero dollar sequencing" with the emphasis shifted to understanding the function of the genome-epigenome, using an unprecedented avalanche of data analyzed both by brute force and by somewhat smarter ways. The Keynote Speech on Tuesday, September 15, was by C. Thomas Caskey (University of Texas Health Science Center), "Inflection point for genome science". Dr. Caskey put our field into a stunning historical perspective, likening the advancement of technology to the aviation and satellite revolution driven by the revolutionary work of Howard Hughes (personally witnessed by Dr. Caskey...). As the following masterful quartet of articles by Bio-IT World (Kevin Davies) will cover, "Genome Computers" will mark the point of inflection, with the cost of sequencing plummeting and the amount of information skyrocketing. At the core is an algorithmic understanding of HoloGenome regulation.

It may help to recall not only precedents of a "point of inflection" in other fields (aviation and satellites - even this columnist transitioned Neural Network algorithms, learnt from the little brain of birds, to flying F-15 fighters under parallel computer control for NASA), but also at least two precedents in Genomics itself.

It is arguable that the Venter-Collins duel of Government and private industry in the "Human Genome Project," which yielded an official "tie," hinged upon computing (summed up by the wry humor of Craig, answering the question "what is the difference between God and you?" with "We had computers!"). While Dr. Collins focused on "God's language" in an extremely wide coalition (which, by definition, resulted in a computing infrastructure that could perhaps be characterized by the word "scattered"), Dr. Venter's sprint was a laser-focused effort to create, almost from nothing, the world's most powerful and advanced genome computing environment (in terms of hardware, software, and algorithms).

Another example is the triad of "Shotgun Sequencers" (Applied Bio's SOLiD, Roche's 454, and Illumina's Genome Analyzer). In retrospect, since we are at the point of inflection towards nanosequencing by Complete Genomics, Pacific Biosciences, Oxford Nanopore, Helicos (with no end of the list in sight), one could perhaps risk the analysis that it would have been better had AB/454/Illumina not had to develop their "on-rig" IT themselves (since none of the trio was, or even aspired to become, "a leading computer company"), with "Big IT" instead picking up the challenge and providing perhaps better, but certainly more uniform and standard, wide-ranging IT services.

The future will hinge to a great degree on whether "agnostic" but highly "genome informatics-savvy" initiatives, such as HolGenTech's "brute-force-friendly but algorithm-preferred" approach, are scaled appropriately to face our common challenge - Pellionisz; HolGenTech_at_gmail.com, Sept. 23, 2009]


Genetic testing company 23andme may offer GPs a chance to try service

Mark Henderson, Science Editor
From The Times September 15, 2009

[Benji Brin - a mockup by MakeMeBabies.com - AJP]

Doctors could soon be offered discounted scans of their own DNA by a leading personal genomics company, to prepare them for the challenge of using genetic information in patient care, The Times has learnt.

The consumer genetics service 23andMe, backed by Google, is considering launching a cut-price version of its $400 (£240) test for medical professionals, to teach them to interpret genomic information that is now readily available to their patients.

Anne Wojcicki, 23andMe’s co-founder and chief executive, told The Times that she wants to encourage doctors to take her company’s test themselves so they are better placed to help patients who take it and then approach them for advice.

At present, doctors receive little specialist training in interpreting genetic tests that assess people’s inherited risk of developing certain diseases, which can now be bought directly by consumers without medical oversight or counselling.

This has raised fears that people who buy such information could be needlessly worried or falsely reassured by the outcome, and that their GPs [general practitioners] will be poorly placed to help them.

In an interview with The Times, Ms Wojcicki said the providers of such tests have a responsibility to make doctors familiar with them so they can better explain the results to concerned patients.

“Clearly we need to engage with physicians to help them to understand this information,” she said. “One of the things we’ve talked about is we’d love to get physicians comfortable with their own genomes first, have them understand what does it mean, explore the data, see what does it look like, and then go to work with their patients.

“I think that’s probably the way to do it. Physicians should be genotyped. We are talking about ways we could potentially do that. It’s important for physicians to understand what the experience is like; 23andMe is going to start putting more effort into educational material.”

While no discounted product for doctors is yet available, she said the company would be happy to discuss arrangements with doctors who are interested. Ms Wojcicki’s suggestion comes amid growing concern that medical training in genetics is inadequate to prepare GPs to advise patients about DNA tests.

In July, a report from the Lords’ Science and Technology Committee found that such tests were “placing strain on the expertise of doctors, nurses and healthcare scientists, who at present are poorly equipped to use genomic tests effectively and to interpret them accurately, indicating the urgent need for much wider education of healthcare professionals and the public in genomic medicine.”

The Times also reported that the National Genetics Education and Development Centre has begun a review of medical education in genetics .

Ms Wojcicki is also seeking to win over the many doctors who are sceptical of her company’s genotyping service, which examines 550,000 of the 6 billion letters in the human genome for variants that are known to affect the chances of developing 116 diseases and traits, from baldness to bowel cancer.

While such tests have the potential to flag up health risks that an individual might take action to reduce, critics argue that the results provide only an incomplete picture of risk that can be confusing and misleading. [Tell me about anything in life that is "complete" - sometimes even clinically dead persons can be resuscitated - AJP]

Ms Wojcicki, who is married to Sergey Brin, the co-founder of Google, said that many of her customers had learnt valuable health information from the service. When Mr Brin took the test last year, he discovered he had inherited a mutation of a gene called LRRK2, which gives him a lifetime risk of developing Parkinson’s disease of between 20 and 80 per cent.

“The number one thing we get criticised for is that it’s too early, and there’s not enough information, but we get stories every week about how this product has been incredibly useful to someone’s life,” Ms Wojcicki said. “I’ve personally found that: I discovered that my husband was a carrier for Parkinson’s through this.” [Clearly, a newspaper's jargon - but "carrier of the G2019S mutation in the leucine-rich repeat kinase 2 gene (LRRK2), significantly associated with Parkinson's in his type of ancestry lineage" might be confusing for an average newspaper reader - AJP]

She accepted that the test offers only a partial snapshot of disease risk, and that it is often unclear what to do about results. Some criticisms of the service, however, were “like asking what was the value of looking in the mirror before plastic surgery, if there was nothing you could do. It’s a reflection of you, and I think with that information you can understand yourself a lot better.”

Ms Wojcicki said that it would be especially important for companies like hers to work with doctors to interpret genomic information, as the costs of DNA sequencing fall further. It is widely predicted that it will be possible to sequence anybody’s entire genome for less than £1,000 within a year or two, to reveal genetic variations that influence disease risk and response to drugs.

“We want to help [doctors] to make sense of this, we want them to help consumers,” she said. “If you come in with results that tell you your risk of type 2 diabetes is marginally higher than average, how much do you need to worry about that? And how can we stress to individuals that genetics is only part of the story? We still have the environment here, you need to watch out for your potato chip consumption.”

Your Comments

John Fairfield wrote:

What is the evidence that random genetic profiling will help people? None. Genetic profiling helps those with a family history or possibly with subsequent screening adherence.

We know VERY little about the interaction of environment with genes?

What makes you think that this private company will not sell your genetic data to insurance companies?

What happens if this test reveals I have a 50% chance of developing Alzheimer's disease. I may than be turned down for life insurance or have a very high premium to pay, but end up living healthily until I am 100.

Look at Mr Brin's result: 20-80% LIFETIME risk of Parkinson's. Look at the range!! Ok, so what can he do about it now...nothing!

Ms Wojcicki is misleading "I discovered that my husband was a carrier for Parkinson’s"...he is NOT! Lifetime risk is not an accurate indicator of carrier status. Besides, there may be multiple genes involved. Lifetime risk also depends on your ethnicity and where you live.

This company is all about making money out of people's fear.

P.S. I'm a doctor and I will not be taking any DNA test.

[On the same day as this article (Sept. 15), thus unaware of the above, we discussed these very issues in the "Ethics session" of the "Personal Genomes" conference at Cold Spring Harbor Laboratory.

As with every disruptive advancement of science and technology, the phases are "acceptance", "embracing" and "utilization". Clearly, blogger Dr. Fairfield is not in the acceptance phase yet (his accusation resembles hostility).

As we discussed, MDs have at least two rational fears of a scientific-technological explosion. One is that it may diminish their power. The other is that, as one genomics researcher who is also an MD plainly stated at the meeting, general practitioners are so overloaded by clinical cases that they will simply be unable to keep up with full-time scientists and technologists.

23andMe is doing absolutely the right thing - we all agreed on the cardinal importance of education and involvement of both medical profession and the general public.

For MDs, ALL advancements of high-tech (CT, MRI, etc. - run by not-so-highly-educated lab techs) obviously diminished MD power in a primitive sense, since high-tech imaging can see through bodies infinitely better than the naked eye of the Professor. Yet in almost no time at all (in a historical sense) technology was embraced through utilization, and actually catapulted the power of MDs (while creating lots of jobs for new lab techs).

Educating the public is somewhat different. While both George Church and this columnist ("pellionisz" on YouTube), as well as several others, went public with the cardinal paradigm shift that "Your Genome is NOT your destiny" - putting the fear factor to rest - the public will embrace DTC through its utility: when, for example, they can use their existing smart phones to barcode-click products (and environments) to see if they "fit or fix their personal genomes" or, like potato chips for diabetics, should be avoided.

As Dr. Amy McGuire pointed out in at least two of her slides, postmodern genomics will be driven much more steeply by the technology of private industry than by the rate of, e.g., government research or public health care. She also stressed that "the process must be automated". Thus, "education of the public" does NOT require at all that we lift everyone to the level of a Ph.D. in Genomics. Most users of PDAs or smart phones have little idea how the technology works. What matters to them is that, by providing a saliva sample, they can get (if they want to) the PRACTICAL CONCLUSIONS of the analysis, such that they can barcode-click foods, additives, cosmetics, chemicals - even environments (without having to know at all what G2019S might mean). One does not have to show a Ph.D. diploma to get and love the convenience of a smart phone. - Pellionisz; HolGenTech_at_gmail.com, Sept. 25, 2009]
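The "barcode-click" scenario can be sketched in a few lines. This is a hypothetical illustration only: the product database, the genome-derived avoid-list, and the `barcode_click` function are all invented for the example and are not an actual HolGenTech (or 23andMe) API.

```python
# Hypothetical sketch: a phone app maps a scanned barcode to an ingredient
# list and checks it against a short "avoid" list distilled from the user's
# genome report. All names and data are illustrative.

# Practical conclusions of a genome analysis (e.g., diabetes risk -> avoid sugars)
GENOME_AVOID_LIST = {"glucose syrup", "hydrogenated fat"}

# Barcode -> product, as a product-database lookup might return it
PRODUCT_DB = {
    "0012345678905": {"name": "Potato Chips",
                      "ingredients": {"potato", "hydrogenated fat", "salt"}},
    "0098765432109": {"name": "Oatmeal",
                      "ingredients": {"oats", "salt"}},
}

def barcode_click(barcode):
    """Return ('avoid', name, offending ingredients) or ('fits', name, [])."""
    product = PRODUCT_DB[barcode]
    hits = product["ingredients"] & GENOME_AVOID_LIST  # set intersection
    if hits:
        return ("avoid", product["name"], sorted(hits))
    return ("fits", product["name"], [])

print(barcode_click("0012345678905"))  # chips flagged for this profile
print(barcode_click("0098765432109"))  # oatmeal passes
```

The user never sees a variant name like G2019S; the app surfaces only the practical verdict.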


Kuberre: Think Outside the Box

Bio-IT World
By Kevin Davies
September 17, 2009 |

FPGAs move from financial services to life sciences.

At about 3 cubic feet, the box sitting in a corner of Kumar Metlapalli’s modest office in Andover, Mass., doesn’t necessarily strike me as a “next-generation supercomputing platform.” But the box, or HANSA, might be the biggest thing to hit high-performance computing in a long time.

The founder of Kuberre Systems, Metlapalli is an avid proponent of FPGAs (field programmable gate arrays), chips that can be programmed to provide far greater specificity and efficiency than traditional CPUs, yet offer a more affordable solution than large grids, hybrid blade servers, or supercomputers.

HANSA has a scalable architecture that can include from 4 to 64 FPGAs in a 9U cabinet, ranging in price from $50,000 to $500,000. It combines a new hardware design and a rich software stack for use in the HPC market, with memory that scales up to 256 GB. It delivers the equivalent of a 768-CPU server grid or a 1,536-core supercomputer at 1/3 the cost, with 2% of the energy requirements and 1% of the floor space.

While Kuberre has carved a niche in the financial services sector, cracking the life sciences market is both a top priority and a tough challenge. “We’re getting there, but we need to build those relationships,” Metlapalli admits.

Passage from India

An electrical engineer by training, Metlapalli moved to the United States from India in 1991 and earned his Master's degree in computer science from the University of South Carolina, specializing in image processing. He worked for XyVision before being recruited as a "quant" for Wall Street, predicting trends based on historical data.

The idea for HANSA, which means swan (think “Lufthansa”) but also stands for Hardware Accelerator for Numerical Systems and Analysis, came in 2006, while Kuberre was providing a unified platform for financial markets. Kuberre’s sister company in India had built an accelerator card for BLAST utilizing FPGAs. Metlapalli quickly targeted the benefits of FPGA technology to financial services, but recognized the danger of becoming a black box. Given the programming flexibility of FPGA platforms, Metlapalli opted to design a software stack on top of the FPGAs, so that users can write algorithms in their native languages (see, “Swan Structure”). “We can’t be building singleton solutions,” Metlapalli said. “The library must work across verticals and provide flexibility.”

The HANSA architecture makes use of the ScaLAPACK library, originally written for supercomputers and clusters, which breaks matrices into submatrices that can be computed separately and then combined.
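The partition-compute-combine pattern that ScaLAPACK embodies can be shown with a minimal sketch. This is only an illustration of the idea: plain Python lists stand in for distributed submatrices, and no actual parallel hardware (or the real ScaLAPACK block-cyclic layout) is involved.

```python
# Minimal sketch of block decomposition: a matrix product is split into
# independent submatrix products (one per compute unit) and recombined.

def matmul(a, b):
    """Naive dense matrix multiply on lists of lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def block_matmul(a, b, bs):
    """Multiply by splitting `a` into blocks of `bs` rows; each block row
    of the result is an independent sub-problem, combined by stacking."""
    result = []
    for r in range(0, len(a), bs):
        sub = a[r:r + bs]              # submatrix handed to one compute unit
        result.extend(matmul(sub, b))  # combine by stacking block rows
    return result

a = [[1, 2], [3, 4], [5, 6], [7, 8]]
b = [[1, 0], [0, 1]]
assert block_matmul(a, b, 2) == matmul(a, b)  # same answer, computed blockwise
```

Each block-row product touches only its own slice of `a`, which is why the sub-problems can be farmed out to separate boards.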

One application Metlapalli is convinced will work on HANSA is GWAS (genome-wide association studies). GWAS calculations are giant matrices with hundreds of thousands of values (SNPs). On HANSA, Metlapalli says, "one doesn't have to take shortcuts. You have the 256 GB memory to host the data you need, the compute power you need."
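To make the "giant matrix" point concrete, here is a toy association scan. The genotype matrix and phenotypes are made up, and a deliberately simple case-vs-control allele-frequency difference stands in for a real GWAS test statistic (chi-square, logistic regression, etc.); a real study would have hundreds of thousands of SNP columns.

```python
# Toy GWAS-style scan: genotypes form an (individuals x SNPs) matrix of
# 0/1/2 risk-allele counts, and the scan computes one score per SNP column.

genotypes = [   # rows: individuals, columns: SNPs
    [0, 2, 1],
    [1, 2, 0],
    [0, 0, 1],
    [2, 1, 0],
]
is_case = [True, True, False, False]  # phenotype per individual

def snp_scores(geno, case):
    """Per-SNP absolute difference in allele frequency, cases vs controls."""
    scores = []
    for j in range(len(geno[0])):  # one pass per SNP column
        cases = [row[j] for row, c in zip(geno, case) if c]
        ctrls = [row[j] for row, c in zip(geno, case) if not c]
        # allele frequency = mean genotype / 2 (two allele copies per person)
        scores.append(abs(sum(cases) / (2 * len(cases))
                          - sum(ctrls) / (2 * len(ctrls))))
    return scores

print(snp_scores(genotypes, is_case))  # -> [0.25, 0.75, 0.0]
```

Every SNP column is an independent computation over the whole sample, which is exactly the shape of workload a large shared memory and many parallel units favor.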

FPGAs have been around since the ’70s, but have seen little application in life sciences. One exception is Scott Helgesen, who featured them in the original software for the first 454 Life Sciences sequencer. The massive parallelism of FPGAs is finding particular use in military applications such as digital signal processing and Fourier transforms.

FPGAs are one increasingly popular flavor of hybrid computing, which complements CPUs with chips such as a GPU or FPGA. Metlapalli says he’s flipped the role of the CPU, so that the CPU becomes the co-processor, and the majority of operations are performed on the FPGAs.

Unlike a CPU with millions of gates die-cast, an FPGA has millions of gates controlled via software. “You’re programming the chip to perform what you want to perform, in the most optimal fashion. I take FPGA, put a software code on top of it, and everything runs on the hardware.” More logic in hardware translates to more acceleration. With FPGAs, “essentially you can transform one’s personality based on the application you’re solving.” A binary search might take 1000 gates, but because the FPGA has 1 million gates, one can optimally dedicate a particular number of cores to the search while performing secondary searches in parallel.

Get a Life

Metlapalli doesn’t want to be constrained to a single industry. “To pick one vertical, we’d be doing a disfavor to the platform,” he says. “I want pharma to know this solution exists.”

One early prospect is an outsourcing vendor in India that works with most big pharmas. Kuberre is putting together a “business initiative document” under NDA. “They already have an idea of what they want to build,” says Metlapalli, indicating molecular comparisons using tools such as JCHEM. “Think of drug discovery as a funnel—the narrower you make the funnel, the faster the process. That requires more sophisticated computation.”

As for genome centers, Metlapalli says, “We strongly feel that the genome centers need a box like HANSA.” Metlapalli says he’s had encouraging discussions with Matthew Trunnell at the Broad Institute, but “the challenge has been allocating the research resources to look and see how the solutions will be built on this platform.”

With HANSA providing the equivalent of 2.5 racks of nodes, 80 inches tall, at your desk, Metlapalli is convinced that HANSA’s efficiencies can knock out clusters. The challenge is in “motivating these people and getting enough of their time to look at the box and build a solution on top.”

Despite all the hype over cloud computing, Metlapalli says HANSA offers a cost-effective alternative by providing the compute processing at the point of collection. “Take it, collect it, process it… It has the computational power to bypass cloud computing.

“If you have a cluster or cloud, HANSA could be one of the nodes on that. If you need a departmental supercomputer, this is what you need.” It provides the equivalent of 1500 processors or 700 blades.

For example, Metlapalli claims HANSA offers a 1000-fold improvement in the BLAST search algorithm. Based on work for a previous client on a single board with 4 FPGAs, Metlapalli saw an 80X performance boost. “We have 16 boards in HANSA. So it’s 16x80, or 1280 or so.” If you take out the latencies between the boards, maybe 1000X. But life science customers “really don’t care” because BLAST makes up a small piece of their workflow. “They’d rather know how HANSA can solve their own workflow issues.”
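The quoted speedup arithmetic, spelled out. The 80x per-board figure and the 16-board count are from the article; the efficiency calculation is this editor's inference about the inter-board latency Metlapalli alludes to.

```python
# Projected BLAST speedup: one 4-FPGA board measured at 80x, scaled to the
# 16 boards in a HANSA box, minus inter-board latency.

per_board_speedup = 80            # measured on a single 4-FPGA board
boards = 16
ideal = per_board_speedup * boards  # 1280x if boards scaled perfectly
efficiency = 1000 / ideal           # ~1000x quoted => ~78% scaling efficiency
print(ideal, round(efficiency, 2))  # -> 1280 0.78
```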

As a privately funded company, Kuberre runs “a very lean and mean operation,” which is why Metlapalli is reluctant to build demo units. Instead, he challenges potentially interested researchers: “Give us a problem you’re not able to solve. We’ll do the legwork, build the prototype. Tell us that you’re going to buy it! That’s all we need. Just need an hour’s worth of time, saying what the problem is, give us the sample data, this is how the algorithm should work. Boom! We’ll do the rest.”

Swan Structure

The HANSA architecture consists of four layers. On the bottom is the physical hardware—16 boards, each containing four FPGAs. (Each FPGA has 12 processors talking to one memory bank, 12 talking to the other.) The next layer consists of expandable firmware building blocks (for example a binary search algorithm), so users do not have to deal with VHDL. Then comes a C/C++/JAVA API layer, so one of these APIs could be used in multiple building blocks underneath it to execute the programs. The icing on the cake is the user’s own applications and custom algorithms.

“What we’ve done is provide the level of flexibility they need to build their own algorithms in their own native languages,” says Metlapalli. “No one has thought about building a supercomputer utilizing so many FPGAs together in a single box, or how to utilize it with a software stack to solve problems.”

Out of the 16 FPGA boards in the box, five could be doing Monte Carlo simulations and six intense numerical algorithms, while the other boards capture streaming data. “That’s what you can partition through the software. In one box, you’re dividing the personalities of HANSA into sub personalities.” One might be numerical algorithms, another might be pattern matching.

HANSA contains programming capability for C/C++, MatLab, R, and Java. “Imagine running 768 legacy C/C++ programs in parallel without having to make any changes to the legacy code, just do a recompile,” says Metlapalli. Users might want a core library such as BLAST, Smith-Waterman, etc. “We don’t want to build the entire conformation on the FPGA side. I want to provide a library so they can write their own algorithms.” Kuberre provides the ScaLAPACK Library for use out of the box. “But if you want your own custom algorithms, we’ll build those for you.” K.D.

[R&D and Market analysis of this "game changer" is upon request. Pellionisz; HolGenTech_at_gmail.com]


SMRT Software Braces for the Pacific Biosciences Tsunami

September 22, 2009
Bio-IT World
By Kevin Davies

September 18, 2009 | Earlier this year, Pacific Biosciences founder and CTO Stephen Turner ran an animation of a real-time single-molecule sequence trace as a crawl at the foot of his slides for the duration of his talk, demonstrating not merely the impressive length of DNA reads the company could generate, but also its slightly hypnotic quality. “I do hope that some of you will watch the rest of the talk,” Turner said.

The cute animation was devised by a member of Scott Helgesen’s software group at PacBio, which is not surprising. A decade ago, Helgesen and Brad Carvey composed the opening dragonfly CGI sequence for Men in Black, before Helgesen traded New Mexico for New England and a job with 454 Life Sciences.

Helgesen is now part of the software team, headed by VP Kevin Corcoran, that PacBio is depending on to handle the data from PacBio’s single-molecule sequencer next year. The third-generation sequencing system eavesdrops on a grid of DNA polymerase enzymes, tethered to the bottom of nanoscopic wells, as they synthesize DNA strands in real time. As each fluorescently tagged base is snared by the polymerase prior to being incorporated in the new DNA strand, its signal is detected. The method is dubbed single molecule, real time (SMRT).

The vital job of capturing that information and producing the informatics pipeline that converts those signals into pure sequence falls to Corcoran, who together with Helgesen, has ties with almost every competitor in the market. Corcoran formerly ran the sequencing business for Applied Biosystems (AB), and was involved in AB’s due diligence of its own next-gen sequencing acquisition, Agencourt Personal Genomics. He was previously the CEO of Lynx, which merged with Solexa in 2004, two years before Illumina acquired the new entity. Helgesen, director of software engineering, primary analysis and simulation, spearheaded software development at 454 for several years, until leaving in 2006, just as the first Genome Sequencer was released.

Real Time Analysis

PacBio has a fairly large instrumentation software group that writes hard-core firmware and builds real-time operating systems. Once acquired, the data are passed onto Helgesen’s group, which handles the primary analysis—image processing, signal processing, base calling and quality value assignments. From there, the sequence data are subjected to secondary analysis, including consensus calls and assembly.
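The primary-analysis stages named here (signal processing, base calling, quality assignment) can be illustrated with a toy model. The function names, the four-channel trace format, and the quality formula are our invention, not PacBio's pipeline.

```python
# Toy model of primary analysis: a "trace" is a list of frames, each a
# 4-tuple of fluorescence intensities for channels A, C, G, T.

def detect_pulses(trace, threshold=0.5):
    """Signal processing: find (frame_index, channel) of peaks above threshold."""
    return [(i, ch) for i, frame in enumerate(trace)
            for ch, v in enumerate(frame) if v > threshold]

def call_bases(pulses, channels="ACGT"):
    """Base calling: map each pulse's fluorescence channel to a base."""
    return "".join(channels[ch] for _, ch in pulses)

def assign_qualities(pulses, trace):
    """Quality assignment: a toy score proportional to pulse intensity."""
    return [min(40, int(40 * trace[i][ch])) for i, ch in pulses]

trace = [(0.9, 0.1, 0.0, 0.1),   # strong pulse in the A channel
         (0.0, 0.0, 0.8, 0.1),   # G pulse
         (0.1, 0.7, 0.0, 0.0)]   # C pulse
pulses = detect_pulses(trace)    # [(0, 0), (1, 2), (2, 1)]
```

Real base calling is of course far more involved (pulse-width modeling, inter-pulse durations, and so on); the point is only the staged structure of the pipeline.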

“The biggest challenge is real-time processing of the data,” says Helgesen. “The data rates—the amount of data that comes flooding from the sensors—are really high compared to 454, much higher because we’re looking at real-time events.” Unlike at 454, where Helgesen used a CCD camera to integrate photons over time and under his control, “Now, we’re not in charge of the events happening—the molecules just do their thing and we have to watch them.”

One of Helgesen’s passions at 454 was the use of FPGA (field programmable gate array) technology. He’s circumspect on whether he sees a niche for the hybrid computing solution, but Corcoran says: “We’ll either go down the FPGA route or some of these other alternatives. Graphics GPUs are becoming very affordable and more easily programmable.”

For the prototype research instrument, which measures 3,000 DNA polymerase enzymes running in parallel, the software team has to capture the data in real time but doesn’t need to process in real time. When the commercial instrument launches in 2010, however, “the spec for the production system for shipping is capture in real time and process in real time, to keep the throughput going,” Helgesen says.

Helgesen says the processing-throughput trade-off lies between the length of each DNA fragment and the number of fragments being read in parallel. The scheme is scalable, which is neat. “The big issue is the data are coming in so fast, you don’t have time to store it to disc. You cannot capture the original raw signal data, so you have to process that—first-level data reduction in real time. Even when I interviewed here and heard the number, I was like, ‘Man…!’”
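The "first-level data reduction in real time" idea can be sketched as a streaming filter: raw samples are consumed as they arrive and only compact pulse summaries are kept, because the raw signal itself is too voluminous to write to disk. The threshold rule and the data shapes here are illustrative, not PacBio's actual reduction scheme.

```python
# Hypothetical first-level data reduction on a stream: keep only
# (frame_index, peak_value) pairs where the signal crosses a threshold,
# and let the raw samples be discarded as they are consumed.

def reduce_stream(frames, threshold=100):
    """Generator: yields a compact summary per above-threshold sample."""
    for i, value in enumerate(frames):
        if value > threshold:
            yield (i, value)

raw = [12, 7, 180, 22, 3, 150, 9, 8]    # incoming raw samples
reduced = list(reduce_stream(raw))       # [(2, 180), (5, 150)]
```

Because the generator never holds the full stream in memory, the same pattern scales to input rates where buffering the raw data would be impossible.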

Handling that problem is a concerted IT strategy, involving computers with internal blades, data reduction strategies, algorithm optimization, and more.

Reads and Errors

Late last year, PacBio published examples of its first single-molecule sequencing results in a paper in Science. The single-read errors were on the order of 15-20%, but those data were generated almost 12 months ago. “The interesting thing about single-molecule sequencing is that errors are random versus systematic,” says Corcoran. “In Sanger reads, the errors start to get worse the farther you go out. We don’t see that phenomenon.”

Another benefit of PacBio’s approach is molecular consensus. By circularizing the DNA template into a so-called SMRTBell, the polymerase could figuratively “take a couple of laps around the circular molecule, [so] you get phenomenal consensus accuracy of one particular molecule.” Turner reported individual reads of several thousand bases earlier this year.
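The molecular-consensus idea can be sketched as follows: each "lap" around the circular SMRTbell template is an independent, noisy read of the same molecule, and because single-molecule errors are random rather than systematic, a per-position vote across laps cancels them. The reads and the simple majority rule below are illustrative, not PacBio's actual consensus algorithm.

```python
from collections import Counter

def lap_consensus(laps):
    """Majority vote at each position across equal-length laps of one molecule."""
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*laps))

laps = ["ACGTACGT",
        "ACGAACGT",   # random error at position 3
        "ACGTACGA"]   # random error at position 7
# Errors land at different positions in each lap, so the vote recovers
# the true sequence.
```

This is also why consensus mode trades throughput for fidelity, as Corcoran notes: each lap spent re-reading one molecule is a lap not spent on a new one.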

Corcoran says a priority is to drive up the raw accuracy rates as high as possible. The consensus sequencing mode would by definition reduce throughput, but provide additional fidelity when searching for rare mutations. “In Scott’s pipeline, he has huge amounts of raw movies,” says Corcoran. “He has to identify where all the pulses are, assign a base to that pulse in real time, and then, if it’s a molecular consensus run, assign a consensus value to that particular read.”

As for the sequence traces themselves, Corcoran says customers will have the option of saving them, “but they’ll have to be saved off onto some system they provide. We’ll stream them off the instrument in real time as we’re processing.” More likely, they will go down a level and save the base calls and associated confidence values.

I asked Helgesen how PacBio compares with the old days at 454. “It’s definitely more challenging.” At PacBio, Helgesen has a team that is “really strong on simulation, figuring out everything beforehand.” “The best thing about Scott,” adds Corcoran, “is that I was explaining what I was looking for, [and] he instantaneously knew what all my problems were!”

Been There…

Kevin Corcoran was a software engineer at Applied Biosystems who became head of the Genetic Analysis software group. In 1992, AB spun out Lynx (see, “Just Bead It,” Bio•IT World, Feb 2004) to develop antisense therapeutics, but redirected efforts to develop short-read sequencing based on technology developed by Nobel laureate Sydney Brenner. “It was the first massively parallel sequencing in a big way—we were doing 2 million events,” says Corcoran. The MPSS (massively parallel sequencing system) produced 24-base reads, used mainly for transcriptome profiling. The technology had its challenges, but “as a service, it worked very well.”

However, the technology was way ahead of its time. “You were talking to people and trying to explain the benefits of digital expression. Today, everybody gets it! The technology was ten years too early.”

In 2003, Lynx and Solexa jointly bought the assets of a Swiss company called Manteia. With Lynx running out of money and Solexa in need of engineering expertise, they entered into a transatlantic reverse merger. “It made perfect sense,” says Corcoran. “Since we both jointly owned the Manteia technology we both had guns pointed at each other’s heads. They had cash; we were a public company.” The newly public Solexa was then swallowed up by Illumina.

Corcoran opted to return to AB and run the sequencing business for a couple of years. One of his duties, along with Andy Watson, was to identify prospects for AB’s next-gen platform. “AB had a big program, looked at a wide range of technologies. We settled on Agencourt Personal Genomics. We did due diligence on a lot of technologies.”

After that, Corcoran took ten months off and “recuperated.” But with several ex-colleagues reveling at PacBio, he inevitably got the call. Still, he admits to being “very curious about our friends at Oxford Nanopore,” having gotten to know Clive Brown and John Milton during the Solexa merger.

Helgesen’s interview at PacBio was far different from his job interview at 454, where all anybody wanted to know was how he came to create the special effects for the first two minutes of Men in Black! Joining 454, his first taste of biotechnology, Helgesen had no idea if building next-gen sequencing software was possible. “Now, after going through that experience, I’m used to that situation. I’m not as nervous about it. I’m a software engineer.”

Before joining PacBio, he did talk to 454 founder Jonathan Rothberg about his latest venture, Ion Torrent Systems, but Rothberg couldn’t seal the deal. “No way I want to move back to the East Coast,” said Helgesen honestly. Now he gets to enjoy the California climate, and more importantly, as Corcoran points out, join more than “200 people who understand where you’re going. Everybody has this idea of their responsibility.”



Brown & Oxford Nanopore

Bio-IT World,

Sept. 22, 2009
By Kevin Davies

September 22, 2009 | Whether Clive Brown, vice president of development and informatics for Oxford Nanopore Technologies (ONT), is indeed “the most honest guy in all of next-gen sequencing,” (as described by The Genome Center’s David Dooling), is perhaps debatable. But as someone who has already stamped his mark on the sequencing world, his views certainly count for something.

Five years ago, Brown was the director of computational biology and IT for Solexa, helping to spearhead the British company’s successful entry into the next-generation sequencing market, which spurred a $650-million acquisition by Illumina. After a spell at the Wellcome Trust Sanger Institute, Brown joined fellow Solexa alum, vice president of research John Milton, in moving to Oxford to commercialize nanopore sequencing. An intriguing subplot to the business of next-gen is whether Milton and Brown can catch lightning in a bottle again.

Oxford Nanopore is based around the pioneering nanopore research of Oxford University chemistry professor Hagan Bayley. CEO Gordon Sanghera remains close-lipped about the firm’s platform specs, but an elegant paper in Nature Nanotechnology earlier this year showed that ONT’s nanopores can neatly discriminate between the four bases of DNA, based on the degree of current inhibition across the lipid bilayer (see, “Breathtaking Biology,” Bio•IT World, Mar 2009). ONT received further validation by inking an $18-million marketing deal with Illumina.

Under the watchful eye of Oxford Nanopore’s communications director, Zoe McDougall, Brown has to be more circumspect than is his true nature. “Things are on track—without telling you what the track is,” he says helpfully. What he will say is that many of the key risks in ONT’s technology have been addressed, and his team has built the entire informatics subsystem of the instrument.

Among the next priorities is coupling an exonuclease enzyme to the nanopore so that it can successively snip bases off the end of a DNA strand; each base then tumbles into the pore and is read in sequence. ONT was recently granted a patent for its stable bilayer design. “This is a core element of our nanopore sensing system, not just for DNA sequencing,” says McDougall. Milton calls these bilayers “the workhorse of our nanopore chemistry. We use the bilayer chip to focus on single nanopores and we also operate multiple-channel versions for higher throughput experiments.”

What’s the Track?

Before ONT can produce an instrument, Brown has to become essentially a small genome center to test the product for months in house. Brown hired his former Sanger Institute colleague Roger Pettett to build up that infrastructure, as well as the software that goes on the instrument. It is “revolutionary new stuff, but we’re reluctant to talk about that at the moment,” says Brown, though he would say, “It does break the conventional instrument software paradigm.”

Brown says the data throughput on ONT’s sequencer will be high, “many tens of Megabytes/second.” Not as high as some high-tech military applications, but “significantly higher than traditional lab equipment.”
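A bit of back-of-envelope arithmetic shows why that rate matters. At an assumed 50 MB/s (our figure for "many tens of Megabytes/second", not ONT's spec), a single day of continuous acquisition is several terabytes, which is why the data must be reduced on the instrument rather than streamed raw to disk.

```python
# Back-of-envelope data-volume estimate; the 50 MB/s rate is an
# assumption chosen to match "many tens of Megabytes/second".

rate_mb_s = 50                           # assumed sustained rate, MB/s
seconds_per_day = 86_400
per_day_tb = rate_mb_s * seconds_per_day / 1e6   # MB -> TB
# per_day_tb == 4.32, i.e. over 4 TB of raw signal per instrument per day
```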

“Even before we were running chemistry,” Brown says, “we made software that simulated data streams at launch spec rates. We designed interconnects and wiring, computer boards and live software that would process that data. We did it all in parallel. So when it came to plugging a chip in, it all more or less worked.” But Brown knows from experience that the system must be “very, very flexible to change.”

Another priority is to move the data processing close to the point of data generation. “We have already put a huge effort into not outputting raw data, but outputting optimized processed data instead.” Brown has considered running some of the algorithms on GPUs, but worries about the power consumption demands. “The other option is to use FPGAs. They’re good accelerators, very low power requirements, but a bugger to program and so not very agile.” Brown says FPGAs might be used at the end of product development, but not before. “So far we haven’t had any problems in terms of compute speed when dealing with our data, either at the instrument level or centralized datasets.”

Brown says the data processing simulations have been instructive. “It’s quite early, and we’re not scared,” he jokes. Meanwhile, Brown is quietly checking out potential software partners, which he hopes will deal with the quality scored DNA sequence output. In addition to genome centers and large-scale laboratories, ONT is also targeting the bench-top. “In order to have a bench-top sequencer,” says Brown, “we have to provide pretty easy to use bioinformatics solutions. Otherwise, it’s just not going to happen.”

“One of the problems with all these existing sequencers is, even if you automate the sample prep and make the sequencer easy to use, you still end up with a file with a billion short reads in it. This is still beyond the capability of most non-bioinformatically trained postdocs to do anything sensible with.” ONT aims to generate even more sequence with longer individual reads.

Bench-Side Manner

Brown’s goal is to provide, for want of a better term, a “turnkey” bioinformatics solution sitting alongside the sequencer. Brown has met with several potential partners, including one unnamed company that demonstrated that its “software can deal with a whole human genome-type workflow in a day or 6 hours on a typical workstation.” Brown says that looks quite promising.

He also plans to find a partner to liaise with user IT groups and “help us to smooth the early adoption of lots of our systems. I’m more worried about the bench-top side than the high-end side.” Once ONT is fully launched, Illumina will have a large say in that part of the workflow.

Besides targeting the genome centers and the bench-top sequencer sitting next to a lab researcher, Brown thinks that service organizations such as Complete Genomics might prove another fertile market—in other words, “very large sequencing centers that are not traditional genome centers,” focusing on medical sequencing applications. “I think Complete Genomics is a perfect customer for us. In fact I think our machine’s better suited for what they want to do than theirs is!”

As Brown talks, it sounds as if ONT stands for “on track.” Surely there are problems somewhere? “I don’t want to oversell things, and remember we are still very stealthy as a company,” he responds. As at Solexa, “things just aren’t linear in a company like this. You have days when things work beautifully, and long dry periods when things aren’t working. Half of it is just keeping your nerve.”

Certainly Brown has assembled a strong team to build the IT/informatics infrastructure. Nava Whiteford, another Sanger Institute recruit, is adapting existing algorithms and developing a novel file format called Fast5 for scored sequences. Physicist Stuart Reid is driving data quality measurements and some of the basic science feeding into the platform. Lukasz Szajkowski joined from Illumina to manage the writing of the instrument software, which Brown calls “one of the most risky areas, but it’s all on track thanks to him.” Molecular modeler Mick Knaggs has implemented much of his software on GPU-enabled systems.

I don’t suppose the Sanger Institute is too happy about some of their top people being poached. “Yeah, we did have a chat about my recruitment methods,” says Brown honestly.

--- [full interview] ---

What Can Brown Do For Oxford Nanopore?

Clive Brown, vice president of development and informatics for Oxford Nanopore Technologies (ONT), a.k.a. “the most honest guy in all of next-gen sequencing,” as dubbed by The Genome Center's David Dooling, is hoping to catch lightning in a bottle again. Five years ago, he was the head of software for Solexa, spearheading the British company’s entry into the second-generation sequencing market, which spurred a $650-million acquisition by Illumina and early domination of the next-gen sequencing market. After a spell at the Wellcome Trust Sanger Institute, Brown and his fellow Solexa alum, vice president of research John Milton, have shuttled over from Cambridge to Oxford to commercialize the astonishing potential of nanopore sequencing. Oxford Nanopore has not yet revealed details of its future platform, but in early 2009, published a lovely paper in Nature Nanotechnology showing that its alpha-hemolysin nanopores can discriminate between the four bases of DNA (not to mention a fifth, methyl C). With a tidy $18-million marketing deal from Illumina, ONT is working on multiple fronts—chemistry, engineering, and of course, IT and informatics. Kevin Davies spoke to Brown about the company’s progress and prospects.

Bio-IT World: Before we get into Oxford Nanopore, what was your reaction to Stephen Quake’s single-molecule genome paper?

CLIVE BROWN: It’s a little bizarre. The main positive message is that they’ve done a single-molecule human genome. This is perfectly worthy of a good Nature Biotechnology paper. The numbers they’re citing give a respectable throughput of about 2 Gigabases/day. The error rates are higher but they’re going to get that with single molecule fluorescence.

But then there’s this Table 1 (Supplementary Information) with old cost claims. Their own cost seems to be exactly what Illumina is citing for their service sequencing now [$48,000]. And it’s bizarre. They’re using the number of names on the Solexa paper [Nature 2008] as evidence of how many people are required to run an instrument! Well, that Solexa paper was the culmination of 8 years work encompassing the entire development of the platform, so it had everybody’s name on it. The CEO’s name was on there, and he didn’t run any instruments.

They’re setting themselves up for a tagline: ‘Look, we only need three people to run a Helicos machine. And Illumina needs all these people and it’s much more expensive.’ If they’d just stuck to the high ground, i.e. they’ve got a working system that does single-molecule genomes, they’d be a lot better off in the credibility stakes. But, this apparent back-door marketing stuff is ridiculous. For example, their original paper had 20 or 30-odd authors for a 6-kb viral genome.

I think Helicos deserves some kudos. They’ve stuck with it, they’ve had a rough time as a company, and they’ve made it work about as good as it can work with single-molecule fluorescence, with the cameras they have. People have taken the technology outside and they’ve used it successfully. And that’s not trivial. If I was them, I’d have stuck to that message. They should stick to the high ground—you can quote me on that.

What progress have you made since your Nature Nanotechnology paper early this year?

It’s difficult to give you specifics without revealing too much, but things are on track—without telling you what the track is! We’ve taken care of a lot of the key risks in our technology. Not all of them, but a lot of them. We’re building things. On the informatics side, we’ve actually built the entire informatics subsystem of the instrument.

One of the things you have to accomplish at a company like ONT or Solexa is to become a small genome center at some point. Before launching a product, you have to run it in house for months, doing genome-center type things. In some ways it’s harder because the system is unpolished; it therefore requires more effort to manage and run, and thus certain kinds of software infrastructure are required. Roger Pettett [ex-Wellcome Trust Sanger Institute] is building that infrastructure. He’s also working on software that goes on the instrument itself, which is revolutionary new stuff, but we’re reluctant to talk about that at the moment. It does break the conventional instrument software paradigm.

Does the real-time nature of the nanopore platform make a big difference to the informatics?

What we’re doing in terms of data processing is in some ways easier than Solexa, in some ways harder. Our data rate is much higher—it is many tens of Megabytes/second. If you look at the high-end real-time computing world, it’s in the middle of the data rate range. If you look at high end military radar applications, sonar, etc. we’re two-thirds down that scale. But we’re certainly significantly higher than traditional lab equipment. We decided to tackle this data processing problem very early.

So, we’ve designed and built the basic data processing subsystems very early. Even before we were running chemistry, we made software that simulated data streams at launch spec rates. We designed interconnects and wiring, computer boards and live software that would process that data. We did it all in parallel. So when it came to plugging a chip in, it all more or less worked.

The issue of data reduction close to the instrument is one we tackled very early. But one has to be very careful. When you make one of these products, typically the chemistry evolves all the way to launch, and even after launch. So the way you build things has to be agile and flexible enough to keep up with and optimize against those chemistry changes. So you can’t really build a specification-driven rigid system—it has to be high performance but very, very flexible to change. That’s a real challenge; building a high performance system consisting of hardware and software that can evolve rapidly against real data.

Are you trying to move the data processing close to the chip, and if so, why?

When you scale up, it gets harder to move data over a wire. People are complaining about copying data over a network with the current sequencers. Obviously, just imagine that 100X worse. It isn’t feasible. So you have to move more and more of the processing close to the point of data generation. If you look in other parts of the electronics world, they have the same issues, integrating more and more stuff closer to the point of generation. It’s the same with microprocessors, e.g. Intel CPUs. They put more and more on the silicon. So we have already put a huge effort into not outputting raw data, but outputting optimized processed data instead. That’s not to say you can’t output raw data; my philosophy has always been to have an open system that lets people dig around in the raw data and algorithms and understand it all, but you would need to flick a switch to get the raw data out.

Do you plan to use accelerators such as GPUs or FPGAs?

Two things are going on there. There’s what goes on inside the instrument, and then you have these interim experimental data sets to deal with during the product development phase. So, classically, what you do is write data to disc, and then you have an analysis pipeline that runs on your cluster to look at your data centrally. That’s where the Solexa software came from—the Solexa GA pipeline was actually that, but ended up being shipped for various reasons. We have that pipeline equivalent in house, and have been looking at implementing some of its algorithms on GPUs. For our internal computing needs, there are some attractions to this. But you can’t yet economically stick a GPU inside an instrument, because the power consumption gets quite high.

The other option is to use FPGAs. They’re good accelerators, very low power requirements, but a bugger to program and so not very agile. Once you’ve got your algorithms finished, specifiable, at that point, you can stick it on an FPGA. So FPGA is something we might use at the end of the development process, but not during it.

Bear in mind that standard CPUs are getting faster and faster—they’re not too shabby at all! So far we haven’t had any problems in terms of compute speed when dealing with our data, either at the instrument level or centralized datasets. We haven’t had any problems with that at all, but we are constantly trying to drive up the efficiency of data processing.

Have you started simulating the data processing in real time?

Yes, and it’s quite early, and we’re not scared! But I’ll tell you what’s hard—the hardest bit is the wiring! Wiring two bits of circuit board together… The processing side isn’t so bad, it’s all the interconnects, moving data from A to B. We’re building and designing most of the instrument internally. We use some off the shelf components, and have partnered for others, but we do a lot of our own PCB design.

You’ve said you’re having to “beat the software vendors off with a stick.” How do you see yourselves working with them eventually?

Where I see them coming in is in dealing with what comes out of the ports on the back of our sequencer, which hopefully should be a quality scored DNA sequence, possibly with some accessory information for QC. Obviously, we’re targeting high-end large-scale laboratories, but we’re also targeting the bench-top. In order to have a bench-top sequencer that does the kind of applications being pioneered in the genome centers, to have that accessible to any researcher, we have to provide pretty easy to use bioinformatics solutions. Otherwise, it’s just not going to happen, is it? One of the problems with all these existing sequencers is, even if you automate the sample prep and make the sequencer easy to use, you still end up with a file with a billion short reads in it. This is still beyond the capability of most non-bioinformatically trained postdocs to do anything sensible with. Our system will have even more of that, and we are looking at longer reads. Nevertheless, a massive amount of complex data.

We have to provide, next to our sequencer on the bench—I hate to use the word ‘turnkey’—but a pretty polished bioinformatics solution to deal with that. I’ve been talking to a number of interesting companies—there’s a current favorite—we’ve been doing some proof of principle work. They’ve been demonstrating that their workflow software can deal with a whole human genome-type workflow in a day or 6 hours on a typical workstation. For example, it reads in data from a massively fragmented human genome, then executes a standard QC and alignment workflow, and then produces graphical reports and SNP calls. They’ve done it in reasonable time, in my view, both for development effort and speed of execution. I can’t say who it is, yet, but that looks quite promising to me.
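The workflow Brown describes (read in fragments, run QC, align, then report variants) can be caricatured in a few lines. Every step below is a deliberately trivial stand-in for the real tools: the QC rule is a length filter, and "alignment" is an exact substring search against a toy reference. The partner company and its software are unnamed in the text, so nothing here represents them.

```python
# Toy end-to-end workflow: QC filter -> "alignment" -> hit report.
# Real pipelines use quality-aware filters and genome-scale aligners;
# this only illustrates the staged shape of the workflow.

def qc_filter(reads, min_len=4):
    """QC stand-in: drop fragments shorter than min_len bases."""
    return [r for r in reads if len(r) >= min_len]

def align(read, ref):
    """Alignment stand-in: position of an exact match in ref, or -1."""
    return ref.find(read)

ref = "ACGTACGTTTGACC"
reads = ["ACGTA", "TTGAC", "AC", "CGTTT"]   # "AC" fails QC
hits = {r: align(r, ref) for r in qc_filter(reads)}
# hits -> {"ACGTA": 0, "TTGAC": 8, "CGTTT": 5}
```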

Are you writing your own alignment and variant-calling software?

Nava Whiteford is working on a paper at the moment on very long-read sequencing and its informatics. He’s adapting existing algorithms already out there. For contrast, at Solexa, Tony Cox had to come up with a completely new one called ELAND to deal with mountains of short reads. However, we think we can adapt methods that exist already in such a way as to work optimally with our data. We’ve also developed some interesting file formats for our data—called Fast5 for scored sequences. We’ve also co-developed some HDF-5 based raw data container formats and those are all public domain. We’re slowly putting in the groundwork for adoption.
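The idea of a hierarchical container for quality-scored sequence can be sketched without an HDF5 library. The group layout and attribute names below are entirely our invention; the real Fast5 schema is not described in the interview, and a real implementation would use an HDF5 binding such as h5py rather than a flat dict of paths.

```python
# Illustrative HDF5-style container for one quality-scored read:
# slash-separated "paths" stand in for HDF5 groups and datasets.
# All names here are hypothetical, not the actual Fast5 layout.

def make_record(read_id, sequence, qualities):
    """Bundle a read's sequence, per-base scores, and metadata."""
    assert len(sequence) == len(qualities), "one score per base"
    prefix = "/read/" + read_id
    return {
        prefix + "/sequence": sequence,
        prefix + "/quality": qualities,
        prefix + "/attrs": {"length": len(sequence)},
    }

rec = make_record("r001", "ACGT", [30, 32, 28, 35])
```

The appeal of a hierarchical container over flat text is exactly this: raw signal, called bases, scores, and metadata for the same read can live under one addressable tree.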

You say you’re not providing IT solutions, but won’t some early customers want that?

A lot of early customers, because of the current ‘next-gen’ sequencers, will have made the IT investment anyway. A lot of that can be recycled for our sequencer. Our sequencer won’t have the IT overheads of the current sequencers—it’ll be greatly reduced, even though the output will be higher. And unless we get merged again, and all my plans get derailed, I’m going to partner with somebody in terms of the early adoption process. These people will liaise with IT groups and help us to smooth the early adoption of lots of our systems. It won’t be as big a barrier as it was at Solexa, but we will have to put some effort into it. I’m more worried about the bench-top side than the high-end side. Once we go to full commercial launch we are partnered with Illumina and there will have to be agreed and synergistic approaches.

So you’re targeting two very different markets—genome centers on the one hand, the bench top on the other?

Traditionally, there are about 15,000 of the old capillary/gel-based sequencers. They enjoy about 4% growth per year, mostly in forensics and other conservative market segments. There are about 900 next-gen sequencers out there now. They’ve traditionally gone to people who already do some sequencing, e.g. core facilities, genome centers etc. However, what will the future look like? Well, an obvious direction is the bench-top sequencer, which sits next to a postdoc or other lab researcher. We target that. We also target genome centers, in that they can run lots of our machines in a scaled up operation, rather like the [Illumina] Genome Analyzers are currently run.

The thing that isn’t really there yet, which oddly enough is being broached by Complete Genomics, is this idea of very large sequencing centers that are not traditional genome centers. They’re either commercial service operations or they’re run by health providers, and they’re doing medical sequencing or whatever. Those don’t exist yet, but I can’t imagine why they wouldn’t in the future. We go across the board from the very small to the very large. What’s new here is the very small and the very large. The middle bit is the old market, still important but a subset of our ambitions.

There are people out there who want to do these new sequencing applications perhaps at a smaller scale or more occasionally. These applications have been pioneered by people who already bought Solexa or SOLiD, but they’re not quite accessible yet. At the other end of the scale I think Complete Genomics is a perfect customer for us. In fact I think our machine’s better suited for what they want to do than theirs is!

It sounds like everything is on track. Where are the problems?

I don’t want to oversell things, and remember we are still very stealthy as a company. On the other side of the coin, any of the more honest people will tell you things just aren’t linear in a company like this. You have days when things work beautifully, and long dry periods when things aren’t working. Half of it is just keeping your nerve. You’ve just got to plough through it all.

Surely your Solexa experience must be helpful in this regard?

Yeah, I think so. [Former Solexa CEO] John West would always say, “You’ll be amazed what you can get to work if you put your mind to it”. The average age of employees here is just 26-27. So we’re trying to impart that attitude to people who haven’t been through this process before. It’s quite a leadership challenge, actually.

What are some of the people in your group doing specifically?

Let me just say, I think they’re all fantastic. A few examples: Stuart Reid, a physicist by background, is doing a fantastic job. He’s very cross-disciplinary but he works in my group which also deals with product design, specification and systems integration. So a lot of what we’re doing is about data and data quality measures and targets. Stu is driving that, along with some of the fundamental science that feeds into the platform. Roger Pettett, who joined us from Sanger, is doing a critical job actually. He’s a very understated chap, but we have to build a ‘genome center’ here and Roger is driving the informatics infrastructure to accomplish that. Lukasz Szajkowski, who joined us from Illumina, is managing the writing of the instrument software. To my mind, one of the most risky areas, but it’s all on track thanks to him.

Nava Whiteford, who also came from Sanger, wrote the first proper publication on short-read sequencing feasibility. He’s doing a fantastic job across the board on bioinformatics and IT. I’ve got Gavin Harper, a statistician from GSK, who churns through mountains of raw data, measures things, and keeps us all honest. We have Mick Knaggs, a very experienced molecular modeler, who has provided key insights into the nanopore design and optimization. He has implemented a lot of his software on GPU-enabled systems in house. There are many others; they’re all actually very, very good. I’ve been very lucky to get them.

Hmm… The Sanger Institute must love you…

Yeah, we did have a chat about my recruitment methods.

[R&D and Market analysis of this "game changer" is upon request. Pellionisz; HolGenTech_at_gmail.com]


David Dooling: Gangbusters at the Genome Center

Bio-IT World
By Kevin Davies
September 16, 2009

David Dooling joined The Genome Center at Washington University in St. Louis in 2001 from Exxon Mobil, where he’d been developing chemical reaction models. He started as a programmer, writing a lot of software with no life-science background, and picked things up as he went along. He now oversees about half of the informatics group, including Laboratory Information Management Systems (LIMS); the Analysis Developers group, which creates an automated pipeline for the bioinformaticians; and the IT group—infrastructure, network computing, and storage. Kevin Davies spoke to Dooling the same week as his group published the second cancer genome paper, in the New England Journal of Medicine, an important study that identified recurrent mutations in genes not previously associated with acute myeloid leukemia (AML).

Bio-IT World: How has life changed at The Genome Center in the time you’ve been there?

DOOLING: Well, things were good for a while: we had 10 Terabytes of disk, everything was great! Now we have about 3 Petabytes. When I started it was [ABI] 3700s. Then we replaced that fleet with 3730s, generating megabytes a day. Then 454 came along, then Illumina, then SOLiD. It’s been gangbusters ever since.

What’s the current platform setup at The Genome Center?

Right now, we have 454s and Illuminas. We don’t have any SOLiDs any more… We’d purchased one, and [were using] a couple of others. We carried both platforms forward, but there’s a significant expense with each of them: manual labor costs, library preparation, emulsion PCR, DNA input requirements, etc. In cancer research, you just don’t have 3-5 micrograms of DNA. The Illumina has much lower DNA requirements, which we’ve driven even lower. On the informatics side, the lab pipelines and analysis pipelines, we carried both platforms forward but made a decision to concentrate on Illumina… Wrestling with two at a time is troublesome.

Wouldn’t SOLiD’s two-base color-space encoding be advantageous for cancer genomics?

That’s true. With the color-space correction, the reads are more accurate, but I think the accuracy gain is marginal. The coverage you need to be confident you’re sampling both alleles is high enough that the marginally higher error rate you see with Illumina is washed out in the consensus.
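The arithmetic behind "washed out in the consensus" can be sketched with a simple binomial model. This is a minimal illustration with assumed numbers (1% per-base error, 30X depth, a plain majority call), not the Genome Center's actual consensus caller:

```python
from math import comb

def miscall_prob(depth, per_base_error):
    """Probability that at least half the reads covering a site report
    the same wrong base (a consensus miscall), assuming independent
    errors split evenly among the three alternative bases."""
    e = per_base_error / 3.0    # error rate toward one specific wrong base
    k_min = depth // 2 + 1      # strict majority needed to flip the call
    return sum(comb(depth, k) * e**k * (1 - e)**(depth - k)
               for k in range(k_min, depth + 1))

# A 1% per-read error rate is effectively invisible at 30X depth.
print(f"{miscall_prob(30, 0.01):.2e}")
```

Under this toy model the chance of a wrong consensus base at 30X is astronomically small, which is the point Dooling is making about platform error rates.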

You just published a second cancer genome. Where does that fit in with your other projects?

We aim for about 300 genomes in the next eight months. It’s the Washington University Cancer Genome Initiative—150 tumor-normal pairs. 300 genomes, 150 patients. About 1/3 will be AML, 1/3 lung cancer, and 1/3 breast cancer, with a few others probably. That’s completely separate from the 1000 Genomes Project. We’ll also be doing some glioblastomas and ovarians as part of The Cancer Genome Atlas (TCGA). In addition to just tumor-normal pairs, we have a breast cancer quartet where we have the tumor, normal, and a biopsy from a brain metastasis to see the difference between the primary tumor and the metastasis.

What level of fold coverage do you aim for?

We have a gross fold-coverage target of about 30X. But to really determine breadth of coverage we SNP genotype the samples, and we track lane by lane on the Illumina: how many heterozygous SNPs are we seeing both alleles on? Once we get above 95-99% of those SNPs, we say we have sufficient breadth of coverage. In a perfect world, that’s around 23-24X; typically it’s closer to 30X, and can be more than that. For our second AML genome, a very well-behaved genome, 23-24X did the job.
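The breadth-of-coverage check Dooling describes, i.e. what fraction of genotyped heterozygous SNP sites show both alleles among the aligned reads, can be sketched as follows. The function name, data shape, and the 95% cutoff are illustrative assumptions, not the center's pipeline:

```python
def breadth_ok(het_sites, threshold=0.95):
    """het_sites: dict mapping genotyped heterozygous SNP positions to
    the set of alleles actually observed in aligned reads there.
    Returns the fraction of sites where BOTH alleles were seen, and
    whether it clears the (hypothetical) breadth threshold."""
    both = sum(1 for alleles in het_sites.values() if len(alleles) >= 2)
    frac = both / len(het_sites)
    return frac, frac >= threshold

# Toy example: 3 of 4 het sites show both alleles -> not yet enough breadth.
sites = {101: {"A", "G"}, 202: {"C"}, 303: {"T", "C"}, 404: {"A", "G"}}
frac, ok = breadth_ok(sites)
print(frac, ok)  # 0.75 False
```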

Some of the sequencer runs are not usable. How do you judge the quality of alignments?

It’s different for each platform. With Illumina, you’re randomly placing fragments on the flow cell, and sometimes they get too close to each other and you can’t distinguish the signal. You might hear the term “chastity filter,” because they’re not chaste: they mix inappropriately. Those aren’t counted at all. Then there are the reads that don’t align. Those we keep. When we find more complex translocations and the like, we’ll try to find reads that map across them, so those are usable reads. For SOLiD, you have “illegal transitions”: reads that differ from the color-space reference by only one color. You need two adjacent color transitions for a true SNP, so those with only one are filtered out, because either that read does not align there or something was wrong with your color detection at that position. In addition, reads that don’t align aren’t super useful, as only reads that align allow the color-space correction that boosts accuracy.
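The two-adjacent-transition rule can be sketched as a simple filter over color-space mismatches. This is a deliberate simplification: real SOLiD decoding also checks which adjacent color pairs are consistent with a single base change, which this sketch omits:

```python
def classify_color_mismatches(read_colors, ref_colors):
    """Compare a SOLiD color-space read against the color-space reference.
    An isolated single-color difference is an 'illegal transition'
    (sequencing error or misalignment); two adjacent differing colors
    are consistent with a true SNP."""
    diffs = [i for i, (r, f) in enumerate(zip(read_colors, ref_colors))
             if r != f]
    snps, errors = [], []
    i = 0
    while i < len(diffs):
        if i + 1 < len(diffs) and diffs[i + 1] == diffs[i] + 1:
            snps.append(diffs[i])    # adjacent pair -> candidate SNP
            i += 2
        else:
            errors.append(diffs[i])  # isolated -> illegal transition
            i += 1
    return snps, errors

print(classify_color_mismatches("0133012", "0123012"))  # ([], [2]) isolated error
print(classify_color_mismatches("0130012", "0123012"))  # ([2], []) candidate SNP
```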

How much do you collaborate with the other genome centers?

It’s fairly regular. I’ve visited Sanger, Baylor, Broad. Much of the collaboration is on a project-based, results level—let’s share our alignment files, our sequence data. It’s a healthy collaboration and competition. We all like to develop our own tools, but if someone else has a tool, we’re happy to use that.

Can you describe the new data center?

We took possession in May 2008. We’re now completing a second phase of construction. The building is over 16,000 square feet. The data center is about 3600 sq ft. About one fourth was outfitted with cooling and power… Less than a year later, we’re getting the rest of that equipment installed so we can fully utilize the data center. At full capacity, it’ll consume about 4 MegaWatts of power. It’ll have capacity for around 100-110 racks of high-density computing equipment. Average 15 kiloWatts per rack, which is high. Current fully loaded blades are around there.

Are you working with any specific vendors?

For storage, we’re using a software solution called PolyServe, developed by a company that was purchased by HP. We like it, compared to something like Isilon (see, “Isilon’s Data Storage Odyssey,” Bio-IT World, May 2009), because it’s hardware agnostic. We can buy whatever servers, SAN switches, discs we want. If we decide to go away from it, we can still use those discs. It’s a proprietary file system, so we’d have to move all the data off, but we’d have to do that anyway. It’s a parallel file system on the back end that any number of heads can address. It has fail-over capability… We’ve had pretty good success with it.

On the hardware side, we’ve been purchasing HP storage, which has been the cheapest. We’re using HP and Dell servers. Blades, pretty much all Dell. It’s not like we throw stuff away! Over time, we go with whatever works best.

Did you consider commercial LIMS?

I manage about a dozen people in the LIMS group. The LIMS has been developed over a decade... We have evaluated [commercial systems] on several occasions, but not recently. Actually, we’ve talked to the folks at WikiLIMS and Geospiza, but they’re not really designed for our scale. We’re topping tens of millions of transactions per month. We have tables in our database with billions and billions of lines.

You’re an open source advocate. How does that relate to your role at TGC?

Why open source? It’s just better software. Our entire system runs on Linux, Perl, PHP, Apache. We use Oracle but also MySQL and PostgreSQL. We have several thousand cores in our computer cluster and 250 desktop workstations that all run GNU/Linux, maintained by 1.5 system administrators. You’re talking about thousands of systems that can be maintained by 1.5 FTEs. You can’t get that with a Windows solution or a Mac solution. Granted these guys are highly skilled, but if there’s a problem, they can dig into it. At the scale we operate, we’re always breaking things. Whatever people bring in here, it breaks. We need to have the capability to tweak and to have the source code there and the communities that develop around free software. When we have problems, Google is our friend. 99 times out of 100, you’ll find someone who had that problem. With the proprietary solutions, there’s not a lot out there. They may not care about you.

Do you work with any commercial software tools?

We’ve spoken with CLC bio, and we were one of the first people to partner with the Synamatix search tool. We’ve worked with Novocraft. There’s also Real Time Genomics, formerly SLIM Search (see, “The Quest to Make Sequence Sense,” Bio-IT World, Nov 2006). We’ve had that for a couple of years and are talking to them about their next-generation alignment and analysis tools. We look at them, but it’s a tough nut to crack for those folks given the pace at which this field is changing.

Are you seeing much progress in alignment tools?

You can easily make it less of a bottleneck now. We use more than one aligner. We’re very comfortable with MAQ and have been using it for a long time. It’s not as computationally efficient as others, and we’re currently running several others in parallel with MAQ all the way through our pipeline… We’ll take alignments from each tool and run them through the pipeline in parallel. We’re aggressively testing lots of things to see what is optimal… We’re focusing on BWA [from Richard Durbin’s group]. It uses the Burrows-Wheeler transform, as do Bowtie and SOAP2.
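The Burrows-Wheeler transform that BWA, Bowtie, and SOAP2 build their indexes on can be sketched naively via sorted rotations. Production aligners construct it through suffix arrays in roughly O(n log n); this illustrative version is O(n² log n) and only suitable for toy inputs:

```python
def bwt(text):
    """Burrows-Wheeler transform via sorted rotations: append a unique
    sentinel, sort all rotations of the string, and take the last
    column. The result clusters similar characters, enabling the
    compressed full-text index used by BWT-based aligners."""
    s = text + "$"  # '$' sorts before A/C/G/T and marks the string end
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

print(bwt("ACAACG"))  # -> GC$AAAC
```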

Do you have any need for cloud computing?

Yes and no. We’re interested in making our tools more useful to as many people as possible, releasing them open source. Part of that is making them useful in HPC environments, whether clouds, or Open Science Grid or BOINC (the engine behind SETI@Home). The one we’re most aggressively pursuing is Open Science Grid (OSG), a federation of grids that provide end-users computing resources through a granting process. It’s not like Amazon, where they charge by the hour...

The other side is that the utilization of our infrastructure goes through ebbs and flows. It’ll be much more efficient to have a system that could overflow onto OSG in times of stress, rather than have things pile up or build a much larger infrastructure just to support the heaviest utilization periods. We’re also talking to Sanger. In March, we had a Genome Informatics Alliance meeting. Amazon was there, Google, OSG, Microsoft. One of the action items was to work with those folks. Sanger took the lead with Amazon.

How do you deposit data into NCBI’s short read archive (SRA)?

We use Aspera. To the best of my knowledge, that’s the only option NCBI provides; there are other tools, but NCBI does not support them.

Illumina has increased throughput over the past year. Are you experiencing that?

Sure, there are two aspects to the increased throughput: increased read lengths and higher cluster densities. Cluster density is due to software improvements: the software now does a better job of disambiguating overlapping clusters. Read length is essentially down to better chemistry, for example better deblocking of the reversible terminator. Each component of the reaction is not 100% efficient, so you get phasing. Drive the reaction closer to 100%, and you get less phasing and a higher signal-to-noise ratio. Our standard operating mode is 2x75 bp. You can run 2x100, but the error rate is such that it’s not as attractive for us right now.
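The phasing effect is easy to quantify: if each cycle's chemistry succeeds with probability p, only about p^n of the strands in a cluster remain in phase at cycle n, so small efficiency gains compound into large signal gains at long read lengths. A sketch with purely illustrative efficiency values:

```python
def in_phase_fraction(efficiency, cycle):
    """Fraction of strands in a cluster still in phase at a given cycle,
    assuming each cycle's incorporation/deblocking succeeds
    independently with the stated per-cycle efficiency."""
    return efficiency ** cycle

# Illustrative: a 0.4% per-cycle improvement roughly trades a 2x75 bp
# run's signal loss for much cleaner data at cycle 75.
for eff in (0.995, 0.999):
    print(f"eff={eff}: cycle 75 -> {in_phase_fraction(eff, 75):.1%} in phase")
```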

What are the bottlenecks you anticipate in the next 12 months?

I’d be lucky if I could pick the bottlenecks for the next 8 hours! Essentially, to get to where we are right now, we’ve created a very well balanced system. There isn’t one aspect of the pipeline I’m concerned about—I’m concerned about them all in equal measure. Initially, you’re getting the data, and so you buy a lot of disc space. Then you buy more compute nodes, but you can’t get the data to the compute nodes, so you upgrade your network. Now you’re not efficiently using your CPUs, so you rewrite the algorithm in C and make the computation more efficient. Then you find the disc I/O is bad, so you need a more distributed system for higher throughput. We’re getting sustained 10-15 Gigabits per second out of our disc system now. It’s crazy! A year ago, you couldn’t do that. So each time you dial one up, you have to dial the others up. Now they’re all at 11. It’s just a matter of keeping that stuff in balance and enhancing your monitoring/troubleshooting techniques.

For the current generation of sequencing technologies, we’re on a good path. Everything scales really nicely. For PacBio etc. it’s going to be 1-2 years for a production instrument to really gain a foothold. I’m very interested to work with any of these 3rd generation sequencers at very early stages and figure out what the problems are. They’re going to have to deliver data in a very different way. You’re never going to have the equivalent of images—it’s just not possible at that scale. It’s likely that’s going to be much more information than you need, but you won’t know what you need. What sort of systems will be in place?

By the end of this year, you’ll have dozens of whole-genome sequences. Where are the tools to do whole-genome vs. whole-genome comparison? Linking that up with phenotypic information? That’s the other huge challenge.

Could you ever outsource sequencing to someone like Complete Genomics?

Sure, why not? By the time they hit that $5000 mark, other vendors will be hitting that mark. SOLiD said $30,000 for their genome. We’re looking somewhere around what they’re charging now per genome in the not too distant future ($20K range). That’s a fully loaded cost—including instrument depreciation.

[R&D and Market analysis of this "game changer" is upon request. Pellionisz; HolGenTech_at_gmail.com]

Venture Investor Uncovers DNA of Economy

The Street
By Carmen Nobel

On Wednesday September 16, 2009, 6:00 am EDT

BOSTON (TheStreet) -- The key to the human genome, the genetic codes that determine everything about us including how we look, is also the driver of gross domestic product, according to Juan Enriquez, founding director of the Life Sciences Project at Harvard Business School and now a managing director at Excel Venture Management in Boston.

The world's leading authority on the pervasive impact of life sciences, Enriquez contends that genomics has been driving the economy in virtually every sector -- from high-tech to real estate. To those who doubt that, he points to the history of computing, arguing that genetic code is today what binary code was back then.

"If somebody had stood up in 1980 and said the ability to write in ones and zeroes was going to be the biggest driver for businesses, they would have been thrown off the stage," he says.

"But look at what Hewlett-Packard, Google and Intel are doing with some of the largest databases in the world," he says. "A lot of computer companies are trying to figure out what their life-sciences strategies are. Anything you can code as life you can code as digits."

For starters, he points to the fact that Compaq Computer won the deal to provide the supercomputing power for the Human Genome Project at the turn of the century, shortly before the company was bought by Hewlett Packard for $25 billion. "That's what drove that merger," he says.

Enriquez ties life sciences to the housing crisis, maintaining that "the fact that you don't have a research triangle in Detroit means that the average price of a house in Detroit is $7,000. The solution in Detroit is not to fund Chrysler. It's to fund innovation."

Excel Venture Management's portfolio includes Synthetic Genomics, a La Jolla, Calif., company Enriquez co-founded with three other life-sciences heavy hitters in 2005. In July, the company announced a $300 million deal with ExxonMobil to research and develop biofuels from photosynthetic algae.

Another one to watch is Aileron Therapeutics in Cambridge, Mass., which, as Enriquez describes it, has developed a way to maintain the shape of peptides, which enables them to act as keys to understanding the way cellular behavior leads to diseases. He expects the company to be of interest to Big Pharma, including GlaxoSmithKline and Novartis. A managing director of the Novartis Venture Fund is on the board.

At the Technology, Evolution and Design conference held in Long Beach, Calif., in February, Enriquez predicted a next generation of humans who will be able to evolve themselves through science.

"What we're going to see is a different species of hominid," he said. "And I don't think this is a thousand years out. I think most of us are going to glance at it, and our grandchildren are going to begin to live it -- a hominid that takes direct and deliberate control over the evolution of his species, her species and other species. And that, of course, would be the ultimate reboot."

That said, he acknowledges that bringing life-sciences companies to life is no easy feat. "The care and feeding of geniuses is a really hard profession," he says.

"What's interesting to a businessperson and what's interesting to a scientist are not the same thing," Enriquez told an audience of geniuses on Saturday at an event honoring the 25th anniversary of the Center for Excellence in Education. The attendees largely comprised young scientists who were hatching startups at Harvard and the Massachusetts Institute of Technology, also in Cambridge. "Almost no businessman is nearly as smart as the people in this room. And almost nobody in this room is good at business."

"We go through about three CEOs from the time the thing is invested to the time it goes to market," he says.

[Juan Enriquez is the "super expert" in the "Genome Based Economy", especially since the founder of the field, Norman Borlaug, has just passed away at 95 years of age - just as he declared that his first "Green Revolution", culminating in his 1970 Nobel Prize, should kick into a "Second Green Revolution". It is one of the most bizarre mistakes that too many believe "nobody ever made wealth on Genomics". Ask the billions of people who survived famines in India and China because of the "Green Revolution" - and ask Monsanto or DuPont - or Merck, which bought the cancer-stopping small interfering RNA company Sirna for $1.1 Bn. Pellionisz; HolGenTech_at_gmail.com, Sept. 21, 2009]


Collins, Venter among recipients of White House science and tech medals [the oncoming Nobel Prize for Sequencing of the Human DNA - AJP]

updated 5:45 p.m. PT, Thurs., Sept. 17, 2009

Scientific honor roll includes old genetic rivals

The leaders of competing efforts to decode the human genome were cited Thursday for presidential honors, almost a decade after the "genome race" ended in a tie.

Among the recipients of the National Medal of Science listed by the White House are Francis Collins, who led the government-organized Human Genome Project in the 1990s; and Craig Venter, who established a for-profit corporation called Celera Genomics to pioneer a "shotgun" approach to whole-genome sequencing.

Celera made rapid progress on the genome quest, sparking an acceleration in the publicly funded effort. Both groups ended up publishing their results in February 2001 — with the draft from the public project appearing in the journal Nature, and the draft from Celera appearing in Science.

Since then, both Collins and Venter have taken on new challenges. This year President Barack Obama named Collins to head the National Institutes of Health, while Venter's current focus is the development of a synthetic genome.

Collins, Venter and other honorees will receive their medals Oct. 7, the White House announced.

[At the present point of inflection of HoloGenomics, private industry, with its accelerated speed of development, is about to take over from laggard government research, which is still tardy to reward paradigm-shift developments; this compels "the great conciliator" President Obama to declare the field "level". This medal (shared with others) appears to be an unmistakable sign of Venter and Collins sharing a Nobel Prize for their work resulting in the sequencing of human DNA. Pellionisz; HolGenTech_at_gmail.com, Sept. 19, 2009]


Personal Genomes Get Very Personal

Friday, September 18, 2009

A scientist believes he is close to finding the cause of his daughter's disease.

By Emily Singer
MIT Technology Review

After five challenging years of searching, Hugh Rienhoff might be near the end of his quest. The bio-entrepreneur, a clinical geneticist by training, is trying to find the cause of an unusual collection of symptoms in his daughter Beatrice, including muscle weakness, curled fingers, and long limbs. About a year ago, Illumina, a California-based genomics company, sequenced parts of Beatrice's genome, along with those of Rienhoff and his wife. The determined father has spent the last twelve months searching through the data for mutations that only Beatrice possesses.

Her symptoms resemble those of a collection of rare genetic disorders, including Marfan's syndrome, a condition that leads to defective connective tissue and serious heart problems. So far, Beatrice doesn't have any of the mutations known to cause those diseases, and her heart looks healthy. But that has done little to assuage her father's worry. "My primary concern is that she is at risk for vascular disease," he says.

Rienhoff has focused his search on genes involved in the molecular pathway of transforming growth factor beta (TGF-β), a molecule that provides a common thread between different disorders with symptoms resembling Beatrice's. The protein is involved in different aspects of development, including that of smooth muscle. (Prior to the Illumina sequencing, Rienhoff had been doing his own genomic analysis. He bought equipment for amplifying DNA and began isolating genes involved in the TGF-beta molecular pathway from his daughter's blood, sending them out to be sequenced.)

With the help of Vincent Butty, a scientist at MIT, Rienhoff has compiled a list of genetic variations in Beatrice's genome, filtering out those found in both his and his wife's genomes. He is working on the assumption that the genetic culprit arose anew in Beatrice's DNA and would therefore be absent in her parents. Rienhoff and Butty presented the latest findings from their search at the Personal Genomes conference in Cold Spring Harbor this week--so far, they have identified approximately 80 genes that are less active in Beatrice than in her parents.
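The parental-filtering assumption can be sketched as a set difference over variant calls: keep only variants present in the child but in neither parent. The tuple representation and names below are illustrative, not the actual pipeline Rienhoff and Butty used; a real trio analysis would also weigh genotype quality and coverage at each site:

```python
def candidate_de_novo(child, mother, father):
    """Return variants seen in the child but in neither parent,
    i.e. candidates for mutations that arose anew in the child.
    Variants are hashable (chrom, pos, ref, alt) tuples."""
    return child - (mother | father)

# Hypothetical calls: one shared with the mother, one unique to the child.
child  = {("chr15", 48700500, "G", "A"), ("chr2", 1200300, "T", "C")}
mother = {("chr2", 1200300, "T", "C")}
father = set()
print(candidate_de_novo(child, mother, father))
# -> {('chr15', 48700500, 'G', 'A')}
```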

One of the biggest challenges, Rienhoff says, is the software available to analyze the data. "To ask the questions I want to ask would take an army," he says. "I'm trying to connect the dots between being a genomicist and a clinical geneticist. I don't think anyone here realizes how difficult that is. I'm willing to take it on because it matters to me."

Fortunately, Rienhoff has new help in his personal hunt. Last week he sent the sequence information to George Church, a Harvard geneticist who heads the Personal Genome Project. And Rienhoff says he has now been approached by sequencing companies offering to sequence the family's entire genomes.

"I think there is a message--studying rare diseases is informative of common diseases," says Rienhoff. "If we look at the numbers of disorders related to TGF-beta and Marfan syndrome, we might be able to explain a good percentage of aortic aneurisms. The same drug that helps Marfan might help them."

[There seem to be at least two assumptions. One is that "the genetic culprit arose anew ... and would therefore be absent in her parents" - rock solid for a geneticist. The other assumption is compelled by the available technology: to seek variants in the "genes", at least 80 of them less active in Beatrice than in her parents. The second (technical) assumption is to be overridden by sequencing not just the genes (1.3%) but the full genomes of the parents and all three kids, and looking for structural variants in the regulatory ("non-coding") part of Bea's full DNA. This, again, could be done by two classes of software: (a) one using the "brute force" of cross-comparing the two parents and the two kids without Bea's condition against Bea's DNA - essentially not knowing what to look for; (b) the other, targeted approach could look for the relative structural integrity (or lack of it) of the regulatory repeat elements. Since the entire "Junk" (98.7%) used to be dismissed, it is not overly surprising that software focusing on "repeats" is not abundant - in fact, repeats used to be outright thrown away by "repeat maskers". Pellionisz; HolGenTech_at_gmail.com, Sept. 18, 2009]


Personal Genome Conference in Cold Spring Harbor, 14-17 September, 2009

An extended coverage by presenter Dr. Andras Pellionisz will appear here.

Genome sequencing has been extended to reveal not only the A,C,T,G bases of the whole human DNA: "single molecule" (or "nanosequencing") technologies were demonstrated in presentations to show also whether the bases are methylated or not.

Two "single molecule" sequencer companies made such announcement in their presentation on the 17th of September; Oxford Nanopore (Oxford, UK) and Pacific Biosciences (Menlo Park, California).

Sequencing is now extended beyond Genomics to Epigenomic factors. With such an affordable tool on the way, the road is open not only for theories of HoloGenomics (such as The Principle of Recursive Genome Function, presented by Pellionisz at the Cold Spring Harbor meeting, led by George Church and introduced by James Watson) but also for their experimental follow-up.

Pellionisz set the tone of changing times by publicly asking the organizers and participants at the opening session whether the era had come to paraphrase the JFK inaugural slogan:

"Ask Not What Your Genome Can Do For You - Ask What You Can Do For Your Genome"

[Pellionisz; HolGenTech_at_gmail.com, Sept. 17, 2009]

PacBio Shows Proof of Principle for Methylation Sequencing, Direct RNA Sequencing

September 17, 2009

By Julia Karow

COLD SPRING HARBOR, NY – Pacific BioSciences has demonstrated that it can use its single-molecule real-time sequencing technology to identify methylated DNA bases, and to directly sequence RNA molecules. It has also applied strobe sequencing, a sequencing mode that generates multiple gapped reads from a single DNA strand, to map complex structural variations in human fosmid DNA.

At the Personal Genomes conference at Cold Spring Harbor Laboratory this week, PacBio Chief Technology Officer Steve Turner showed that the company's platform, which measures the incorporation of fluorescently labeled nucleotides into DNA by a polymerase in real time, can distinguish methylated from unmethylated bases for two types of nucleotides.

In principle, this will allow users of the platform to simultaneously determine the DNA sequence and its methylation status. By comparison, current sequencing platforms require DNA to be specially prepared to interrogate methylation sites, for example using bisulfite or methylation-specific antibodies.

The detection hinges on differences in the kinetics of the enzymatic reaction, depending on whether a methylated or an unmethylated nucleotide is incorporated, he explained. The ability to distinguish between the two improves if the same molecule is sequenced several times.

So far, the company has shown that it can distinguish naturally methylated adenosine in E. coli DNA from unmethylated adenosine in DNA from the bacterium that has been amplified in a test tube. Methylated adenosine, he said, appears to interfere with base-pairing during the sequencing reaction.

Detecting changes in the kinetics associated with methylated cytosine is “a little harder,” he said, but the company just recently succeeded in distinguishing methylated from unmethylated cytosine, although the signal was not as good as for adenosine. Turner did not mention whether methylation sequencing will be available at the time of the commercial launch of PacBio’s platform, planned for the second half of next year.

In addition to methylation sequencing, the company has also shown proof-of-principle for directly sequencing RNA. This application requires an RNA-dependent type of polymerase, such as reverse transcriptase. The rate of unproductive binding events of nucleotides is higher than for DNA sequencing, Turner noted. So far, the company has been able to read synthetic RNA templates that consist of alternating adenosines and uridines.

Turner said that direct RNA sequencing will not be available at launch, but is expected to be added within a year after the platform is released.

He also mentioned how the company has started to apply strobe sequencing, a mode of sequencing where instead of a single read, the platform generates several shorter reads from the same molecule that are interrupted by “dark” stretches of DNA of a defined size (see In Sequence 5/12/2009).

At present, the technology produces a distribution of reads with an average read length of 1,000 base pairs. However, each of these reads can be split up into several shorter “strobed” reads that cover a footprint of several kilobases of DNA.

Read length is currently limited by photochemical effects that inactivate the polymerase, but Turner said the company is “hard at work” to eliminate these effects. At that point, read length would only be limited by the enzyme itself and could potentially increase to dozens of kilobases, he said.

In a collaboration with Evan Eichler at the University of Washington, an expert in structural variation, PacBio has analyzed human fosmids using strobe sequencing. This allowed the researchers to analyze insertions spanning up to several kilobases in size and to more accurately resolve breakpoints, which would not be possible using conventional paired-end reads and a library with a single insert size. “It’s a powerful way to solve complex tangles,” Turner said.

The company currently has 12 prototype instruments for in-house research as well as collaborative work. These use chips with about 3,000 wells, or zero-mode waveguides, about a third of which are occupied with a single polymerase each.

The commercial instrument will have significantly more ZMWs, but the company is not yet disclosing that number. Chips for the instrument will be “the price of a nice dinner,” Turner said.

Over the next three to four years, the company expects sensors will become available that will enable it to run a million ZMWs per chip.

Turner said customers can expect a “rapid expansion of capabilities over the lifetime” of the first instrument generation. The second generation, he predicted, will be capable of sequencing a human genome at high coverage on a single chip.


CSHL gears up for 2nd annual Personal Genomes meeting

September 14th, 2009

For decades, scientific meetings at Cold Spring Harbor Laboratory (CSHL) have been held in great esteem by scientists for their role in shaping the agenda of molecular biology. Their reputation for relevance continues, as evidenced by results of a survey of nearly 1,000 attendees of biology meetings over the last year. ...

Excitement about 2nd 'Personal Genomes' meeting

These results were announced as preparations reached their final stages for another genomics-related meeting at CSHL. From the 14th to the 17th of September, the Laboratory will host the second annual "Personal Genomes" meeting, which, according to its organizers, will build upon the excitement generated at the inaugural meeting last October.

An editorial in the journal Nature appearing just after that gathering disbanded, late last October, confessed to initial skepticism about whether such a meeting was justified in view of the newness of the field and the paucity of results to date - at the time, the full genomes of only four people had been completed and made public. But, Nature assured readers after its reporter attended the meeting, participants came to understand that in fact the meeting was overdue, if for no other reason than the fact that "increasingly, private companies are offering personal genome scans and genetic tests for sale - and consumers are buying them."

As Nature opined, reflecting the view of many at the Personal Genomes meeting, "scientists can and should help the public sift through" newly available (and often quite fragmentary) genomic information generated for sale by a growing number of start-ups. At the second Personal Genomes gathering, which begins this evening and continues until Thursday, participants will almost certainly discuss the news announced last week that a small firm called Complete Genomics of Mountain View, Calif., claims to have sequenced 14 individual genomes in their entirety and is offering the service commercially for as little as $20,000 per person for orders of eight genomes or more, and an eye-catching $5,000 for groups of 1,000 or more.

About the 'Personal Genomes' Meeting

About 200 participants are expected to attend the four-day "Personal Genomes" meeting, which has been organized by a renowned team of scientists including Dr. George Church of Harvard University and Dr. Elaine Mardis of Washington University, among others. The meeting will open with introductory remarks by CSHL's Dr. James Watson, whose own genome was the first to become publicly available, making him the subject of last year's inaugural meeting.

Dr. Church, a genetics pioneer whose work integrates biosystems modeling with synthetic biology and personal genomics, will give an overview of the field's status in available technology and its current applications. Other notable technology-oriented speakers include Dr. Jonathan Rothberg from Ion Torrent Systems, Inc., and Dr. Steven Turner of Pacific Biosciences, who will discuss "third-generation" sequencing platforms that will soon enter the marketplace.

Many genomics scientists working on cancer are trying to unlock the mystery of cancer's molecular origins and make-up. Molecularly speaking, cancer is a unique disease in every patient, with no two patients sharing the same set of mutations. Dr. Mardis, who is the co-director of Washington University's Genome Sequencing Center, will present on her group's efforts to catalogue all mutations in a quartet of breast cancer patients.

The keynote speech on Tuesday will be given by Dr. Thomas Caskey of University of Texas Health Science Center. "Dr. Caskey was one of the early planners of the Human Genome Project," explains Dr. Mardis. "Now that we are at a stage when genomes are being sequenced in weeks and for medical purposes such as understanding disease causation, his talk will offer a very unique perspective on the past and the future of personal genomes."

The line-up of speakers includes other preeminent scientists in the field, such as Dr. Richard Gibbs, Director of the Human Genome Sequencing Center at the Baylor College of Medicine, who will describe his group's work on sequencing the genomes of patients with diseases caused by defects in single genes; Dr. Steven Brenner of UC Berkeley, who is developing a public database of human genetic variation and its effects, drawing on databases, diagnostic laboratories, and the scientific literature to interpret human genomics data; and many others. A session on the ethical challenges presented by personal genomes will feature a panel of scientists, ethicists and science writers.

"Fostering this type of cross-disciplinary discussion and debate is one of the strengths of CSHL's meetings program," says David Stewart, Executive Director of Meetings and Courses at CSHL. "This is where different fields are brought together and driven forward." The results of Genome Technology's survey would seem to bear him out.

[This meeting will be an inflection point. Thus far the primary goal has been to make genome sequencing affordable through innovative mass-production. With breakthroughs already accomplished on several fronts, it is evident that the primary goal now shifts to the analysis and interpretation of data. - Pellionisz at HolGenTech_at_gmail.com, Sept. 14, 2009]


Apple sheds light on Illumina’s genome app

Friday - September 11th, 2009 - 04:07pm EST by Brian Dolan | Apple iPhone | Consumer Genetics Show | genomics | Illumina | personal genome |

“The iPhone can be an integral part in advancing the fundamental science ... the very complexities of biology and understanding of the human genome can be made accessible through tools like the iPhone,” Jay Flatley, CEO and President of consumer genomics company Illumina, told Apple in a recent interview. “I think it is the convergence of the science and IT technology that today creates a unique possibility to manage our human health in new ways,” Flatley said. “It’s an incredibly exciting time.”

Earlier this year at the inaugural Consumer Genetics Show in Boston, Mobihealthnews reported on, and included the first photos of, Illumina’s concept for an iPhone application, called myGenome, that included information from a person’s genome. Following that sneak peek, Apple published a brief case study that includes a high-level overview of Illumina’s use of iPhones among its sales reps and executives. The article also discusses Illumina’s plans for myGenome. Apple also produced a video with a number of images of the concept iPhone application Illumina is developing.

“Illumina is developing an iPhone application that will allow consumers to carry around their genomic information,” Flatley explained to Apple. “Part of it may be on the phone itself, part of it may be in the cloud that the phone would have access to. It would allow the customer to bring up the application and interact with it live in conjunction with their doctor.”

Illumina told Apple that the completed app aims to “present complex genomic datasets in an easy-to-understand, consumer-oriented interface.”

“The understanding of the human genome, which is very inaccessible to most people, can start to become accessible through iPhone,” Flatley said. “It will be a mechanism for communications, for sharing, and for data management. iPhone can translate something very complicated into something very user-friendly.”

For more on the planned myGenome app: Revisit Illumina’s concept iPhone app announcement at the Consumer Genetics Show

Read this write-up Apple posted to the enterprise section of its iPhone site:

iPhone Meets Genome [the same news at Apple website - AJP]

With employees spread across five continents, effective mobile communications are essential for Illumina, a San Diego, CA-based biotechnology company that designs breakthrough tools for genetic analysis. Using iPhone, sales reps can track customers, executives can manage employees, and everyone can stay in touch. And soon Illumina will make it possible for consumers to carry their personal genomes with them on iPhone.

iPhone was an obvious technology choice, says Jay Flatley, Illumina's President and CEO. "First and foremost, it's a great phone. But what our employees need goes well beyond that. They need a computer in their hands that can do calculations and data searches, and can manage sales using SalesForce Mobile. Because of the flexibility of the interface, iPhone was the ideal tool for us."

With iPhone apps like Workday HR management software and Cisco WebEx Meeting Center, Illumina executives can do everything from tracking payroll to participating in meetings wherever they are. "iPhone has improved the overall productivity of people at Illumina," says Scott Kahn, Illumina's Chief Information Officer. "It's rare that you deploy a tool and don't get any negative feedback. But with iPhone, the first response is usually 'Thank you.'"

Easy Integration

Deploying iPhone within Illumina's existing IT infrastructure couldn't be easier. Using iPhone Configuration Utility, the IT staff can push configuration profiles for their virtual private network (VPN) and enforce passcodes to secure each device. Setting up iPhone to leverage Exchange capabilities is as simple as double-clicking a configuration file, says Scott Skellenger, Senior Director of Global IT Operations. "All we have to do is direct the phone to the Exchange server and input the user's credentials, and they're off and running."

"iPhone has definitely delivered for Illumina," Kahn agrees. "We found it to be an enterprise-ready device primarily because of the security features. Having the ability to remotely wipe the device was key. It also had to have Exchange, and it needed to be web savvy. On iPhone, those features alone opened the door."

Illumina sees even more business benefits with the latest iPhone software and hardware. "Improvements such as cut-and-paste and the device-wide search capability have added extraordinary value," Skellenger says. "With iPhone 3.0 software, we're able to search our emails, access the global address list in a seamless way, and calendar as if we were sitting at our desks."

The Mobile Personal Genome

For Illumina, iPhone is more than a great mobile business device - it's the delivery platform for an ambitious new approach to personalized medicine.

The iPhone SDK has been extremely easy to work with, Flatley says. Though Illumina's developers had never written an iPhone application before, they were able to produce a fully functional prototype of the application within just ten days. When completed, the final application will allow the company to present complex genomic datasets in an easy-to-understand, consumer-oriented interface.

"The understanding of the human genome, which is very inaccessible to most people, can start to become accessible through iPhone," Flatley says. "It will be a mechanism for communications, for sharing, and for data management. iPhone can translate something very complicated into something very user-friendly." At Illumina, the convergence of science with iPhone is helping transform the future of individual health care.

[As reported in this column earlier, two PDA applications were announced on June 10, 2009 at the "first ever" Consumer Genetics Conference in Boston. In the morning, Dr. Pellionisz, Founder of HolGenTech, introduced a demo of using a PDA in the mode of a "Personal Genome Assistant" (PGA), befitting the (IP-protected, see YouTube) Genome Computing Architecture of HolGenTech. There are substantial differences not only in the technology, as reported by Jack Germain in the widely covered TechNewsWorld article (HolGenTech uses barcode scanning in its PGA apps for shopping), but also in philosophy and utility. First, HolGenTech considers putting a personal genome on a PDA that can be lost or stolen an unacceptable risk: if the PDA falls into the wrong hands, precious information can be downloaded even before a "remote wipe-out". Second, it is highly unlikely that the general public will be able, or even want, to understand the science of HoloGenomics, when not only are doctors not yet trained for it, but even some "researchers" still hold the ancient view that "the majority of DNA is junkDNA". What people do need is an easy-to-use practical device by which they can get "genome-based recommendations" for their consumer (as well as prevention) activities. At the upcoming "Personal Genomes" meeting, Sept. 14-17, 2009 in Cold Spring Harbor, Dr. Pellionisz will show key aspects of the Genome Computer Architecture for the "Genome Based Economy" - Pellionisz, HolGenTech_at_gmail.com]


Scientists unlock secrets of Irish potato famine genome

Wed Sep 9, 2009

By Julie Steenhuysen

CHICAGO, Sept 9 (Reuters) - Scientists have unlocked the genetic code of late blight -- the plant pathogen that sparked the Irish potato famine of the 1840s and 1850s -- and it is revealing clues about why it has been such a formidable foe.

They said on Wednesday the Phytophthora infestans or late blight genome is nearly "animal sized" and loaded with extra DNA that allows it to quickly adapt to overcome any defense the host plant might mount.

"The genome is much larger than some of its relatives. The reason is it is full of these jumping genes that copy themselves," said Chad Nusbaum of the Broad Institute of Massachusetts Institute of Technology and Harvard, whose study was published in the journal Nature.

Nusbaum said nearly 75 percent of the genome is filled with repetitive DNA that appears to evolve quickly, allowing for the rapid development of genes that can attack plant hosts.

That could help explain how the disease has been able to attack potatoes genetically bred to resist the infection, he said in a telephone interview.

"The late blight has been a tremendously challenging organism through the ages. There are only poor pesticides to use against it, even now if you use the most horrible stuff," Nusbaum said.

The disease, spread by spores, remains a threat to global food security. In the United States, it is currently killing potato and tomato plants in home gardens from Maine to Ohio and threatening commercial and organic farms.

It causes large mold-ringed olive-green or brown spots on leaves and blackened stems and can wipe out a crop in days.

To understand how the pathogen has been so successful, the researchers decoded its genome and compared it to the genomes of two relatives: P. sojae, which infects soybeans, and P. ramorum, which causes a condition known as sudden oak death in oak and other trees.

They said the late blight genome is 2.5 to four times larger, and has two distinct regions. One is full of copies of DNA that is undergoing rapid change. This area contains just a handful of genes which play a role in plant infection.

The other area contains genes that have been preserved over millions of years of evolution.

Nusbaum said the strategy appears to allow for the rapid birth of many hundreds of "attack genes" that are evolving much faster than the host plant.

[For 40 years Barbara McClintock, with her "jumping genes", was considered a "kook" - until she lived long enough to receive her belated Nobel in 1983. One would guess nobody is laughing any more, when leaders call for "re-thinking long-held beliefs" and when evidence "shakes the whole intellectual basis of molecular biology". It will be interesting to see in Cold Spring Harbor next week (with Chad Nusbaum also presenting) how the challenge shifts from "sequencing" to "genome regulation", in which "repeats" apparently play a key role - Pellionisz, HolGenTech_at_gmail.com, Sept. 9, 2009]


Irish Researchers Sleuth Out Unique Human Genes Originating from Non-Coding DNA

September 02, 2009

By a GenomeWeb staff reporter [see full story there]

NEW YORK (GenomeWeb News) - ... Researchers David Knowles and Aoife McLysaght of the University of Dublin's Smurfit Institute of Genetics compared chimp and human protein and DNA sequences, identifying three human genes that lack orthologues in other species. The researchers tracked down DNA sequences resembling the genes in chimps and other primates. But those sequences don't code for proteins, suggesting the trio of human-specific genes found in the study may have sprung up from non-coding DNA.

Past studies have identified numerous genes that have been duplicated or rearranged throughout evolution, taking on distinct characteristics and functions in different lineages. But less is known about whether - or how - new genes originated from non-coding sequences...

Initially, the researchers found 644 proteins in the human genome that had no BLASTP hits in chimp. They excluded hundreds of genes that corresponded to assembly gaps in the chimp or macaque genome from their subsequent analyses, as well as genes with known or suspected orthologues in other species. The pair also tossed out potentially spurious human genes or annotation artifacts.

In the end, they were left with three genes: CLLU1, which codes for the chronic lymphocytic leukemia upregulated gene 1, as well as C22orf45 and DNAH10OS, which are less well characterized.
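The winnowing described above is essentially a chain of exclusion filters over candidate proteins. As a minimal sketch (field names and the toy input are invented for illustration; the actual study worked on genome annotations and BLAST output, not flags like these), the logic looks like this:

```python
# Hypothetical sketch of the exclusion pipeline: start from human proteins
# with no BLASTP hit in chimp, then discard any candidate that overlaps an
# assembly gap, has a known or suspected ortholog elsewhere, or looks like
# an annotation artifact. Only candidates passing every filter survive.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    in_assembly_gap: bool      # overlaps a gap in the chimp/macaque assembly
    has_other_ortholog: bool   # known or suspected ortholog in another species
    likely_artifact: bool      # spurious gene model or annotation artifact

def filter_candidates(candidates):
    """Keep only the names of candidates that survive every exclusion step."""
    return [c.name for c in candidates
            if not (c.in_assembly_gap or c.has_other_ortholog or c.likely_artifact)]

# Toy input mirroring the study's outcome: three survivors out of the pool.
pool = [
    Candidate("CLLU1", False, False, False),
    Candidate("C22orf45", False, False, False),
    Candidate("DNAH10OS", False, False, False),
    Candidate("SPURIOUS1", False, False, True),
]
print(filter_candidates(pool))  # ['CLLU1', 'C22orf45', 'DNAH10OS']
```

The real pipeline applied these steps to 644 initial candidates, leaving the three genes named above.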

A dozen nucleotide substitutions spanned these three genes. Seven of these substitutions (four synonymous and three non-synonymous) appear to have occurred in the chimp genome, where sequences for the genes are present but non-coding. ...
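The synonymous/non-synonymous distinction above is decided by the genetic code: a substitution is synonymous when the altered codon still encodes the same amino acid. A minimal sketch, using only a small slice of the standard codon table rather than all 64 codons:

```python
# Classify a codon substitution as synonymous (same amino acid) or
# non-synonymous (different amino acid). Partial standard codon table;
# a real implementation would include all 64 codons.
CODON_TABLE = {
    "CTT": "Leu", "CTC": "Leu", "CTA": "Leu", "CTG": "Leu",
    "ATT": "Ile", "ATC": "Ile", "ATA": "Ile",
}

def classify(codon_before: str, codon_after: str) -> str:
    aa_before = CODON_TABLE[codon_before]
    aa_after = CODON_TABLE[codon_after]
    return "synonymous" if aa_before == aa_after else "non-synonymous"

print(classify("CTT", "CTC"))  # synonymous: both codons encode leucine
print(classify("CTT", "ATT"))  # non-synonymous: leucine -> isoleucine
```

Applied to an alignment of the chimp and human sequences, this is how the four synonymous and three non-synonymous changes mentioned above would be tallied.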

Based on these findings and their subsequent analyses, the team concluded the genes originated in parts of the genome that are non-coding in other primates.

Although the functions of the genes are poorly understood, the researchers noted that all three overlap with genes on the opposite DNA strand. In addition, each produces an intronless ORF coding for a short protein.


[Not so very long ago, any notion that contradicted the "Junk DNA" dogma almost automatically got cut out of funding. The "lucid heresy" of FractoGene, which has attributed specific function to "non-coding DNA" since 2002, pales in comparison to the proposal that "non-coding DNA" can actually be transformed into "genes". It is only too bad that today there is no single definition of "gene", and the notion of "non-coding" has long lost its original meaning. Perhaps it can be re-phrased that "most theories are at a loss as to how it contributes to recursive and iterative protein synthesis ('coding')" - Pellionisz, HolGenTech_at_gmail.com, Sept. 9, 2009]


GenomeWeb [see full story there] - September 8, 2009
By Julia Karow

Complete Genomics said this week that it has sequenced, analyzed, and delivered 14 human genomes to early-access customers since March...

The company said it currently has more than a dozen customers, including Pfizer, the Flanders Institute for Biotechnology, Duke University, Brigham & Women's Hospital, the HudsonAlpha Institute for Biotechnology, and the Ontario Institute for Cancer Research, in addition to the Institute for Systems Biology and the Broad Institute, which Complete announced previously...

The pilot projects include disease studies in the areas of melanoma; breast, lung, and colorectal cancer; HIV; and schizophrenia. Complete Genomics said previously that it charges approximately $20,000 per genome for the pilot projects. The firm is targeting a price of $5,000 per genome for the commercial launch of the service, which is slated for January 2010.

Complete Genomics also sequenced a single human genome for the Personal Genome Project, a research study led by Harvard Medical School. George Church, principal investigator of the PGP and a member of Complete Genomics' scientific advisory board, said in a statement issued by the firm that his team has cross-validated the data set it obtained for the genome, including a list of variants, to gauge the technology's accuracy.

"I am pleased with the quality of the data provided," he said. "... I look forward to continuing to work with the company as it scales up the process to sequence thousands of genomes next year."


[The number of fully sequenced individuals in the world just about doubled since March. With 10,000 sequences projected by Complete Genomics in 2010, and Pacific Biosciences kicking in by the second half of 2010 with mass-production of full sequences at an order of magnitude lower price, we have shifted from "get info" (sequencing) to "use info" (Genome Computer analysis of HoloGenome regulation) - Pellionisz, HolGenTech_at_gmail.com, Sept. 8, 2009]


Helicos Sells Multiple HeliScope Sequencers to RIKEN Institute

Tue Sep 8, 2009

CAMBRIDGE, Mass.--(Business Wire)--

Single Molecule Sequencing Technology Will Play Integral Role in Japan's National Sequencing Center

Helicos BioSciences Corporation (NASDAQ:HLCS) today announced that the RIKEN Yokohama Institute Omics Science Center (OSC) has agreed to purchase four Helicos Genetic Analysis Systems. The transaction represents the first multisystem sale of the Helicos system to a large genome center. Shipments are expected to take place in September, with one system remaining in Cambridge to be used in a scientific collaboration between the two parties that began last year.

RIKEN's OSC recently received funding from the Japanese government to assume the role of the country's primary national DNA sequencing center. As part of Japan's renowned RIKEN, the OSC has the unifying objective of developing a multi-purpose, large-scale analysis center to elucidate molecular networks in biological systems. A major component of the national project for the RIKEN OSC is the "Cell Innovation Project", which is aimed at understanding cell function at the molecular level using next-generation and single-molecule sequencing technologies.

In addition to the sale of systems and reagents, RIKEN and Helicos will continue a scientific collaboration that began in November 2008. The collaborative research plan is designed to enable new insights into cell biology and to explore novel methods to ultimately enable single-cell transcriptome analyses.

The OSC Director, Yoshihide Hayashizaki, M.D., Ph.D., will co-manage the research plan with Patrice Milos, Ph.D., Helicos Vice President and Chief Scientific Officer.

"Our relationship with the RIKEN OSC led by Dr. Hayashizaki and RIKEN's commitment to purchase the Helicos systems demonstrates the value of Helicos technology to the scientific community," explained Helicos President Steve Lombardi. "We're proud to have such a prestigious institution as our first multi-system genome center customer. Their commitment is further validation of the scientific and commercial importance of single molecule sequencing."

"The single-molecule sequencing platform developed by Helicos will help accelerate on-going progress in transcriptomics analysis at RIKEN's OSC," remarked Dr. Hayashizaki. "The Helicos system offers us the ability to make unbiased measurements of nucleic acids without amplification, to utilize very low starting amounts of nucleic acid, and to generate over 800 million sequences per run, providing accurate and precise measurements. These benefits of single-molecule sequencing to quantitative expression profiling, including measuring the activity of gene regulatory regions, will greatly expand our range of experimental approaches so that we can reveal more of the unknown biology of the genome to the scientific community."
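The quantitative expression profiling Dr. Hayashizaki refers to amounts to counting sequence reads per gene. A common normalization (standard practice in digital gene expression generally, not something specific to the Helicos platform) is reads per million (RPM), which makes counts comparable across runs of different depth. A minimal sketch with invented gene names and counts:

```python
# Digital gene expression by tag counting: convert raw per-gene read counts
# into reads-per-million (RPM) so that runs of different sequencing depth
# can be compared directly.
def rpm(counts: dict) -> dict:
    """Scale each gene's raw read count to reads per million mapped reads."""
    total = sum(counts.values())
    return {gene: c * 1_000_000 / total for gene, c in counts.items()}

raw = {"FOXP2": 150, "ACTB": 90_000, "CLLU1": 12}
normalized = rpm(raw)
print(round(normalized["ACTB"]))  # the dominant housekeeping gene
```

By construction the normalized values always sum to one million, so a gene's RPM can be read directly as its share of the run.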

[While Complete Genomics and Pacific Biosciences do not appear to follow the "horizontal" marketing of the earlier "shotgun sequencer" companies (Roche/454, the Illumina Genome Analyzer and AB SOLiD were widely marketed to end-users, even to the Beijing Genome Institute), Helicos, as a member of the "new league" of molecular (nano)sequencing, seems to carry on the legacy of horizontal marketing. Time will tell whether horizontal or vertical marketing is better, e.g. for Genome Computer development - HolGenTech_at_gmail.com, Sept. 8, 2009]


23andMe Co-Founder Linda Avey Leaves Personal Genetics Start-Up to Focus on Alzheimer’s Research

by Kara Swisher
Posted on September 4, 2009 at 1:49 PM PT

BoomTown just got the following email from Anne Wojcicki, co-founder of 23andMe, the personal genetics start-up, about the departure of her co-founder, Linda Avey. She will be starting a foundation related to Alzheimer’s disease.

The pair founded the high-profile company–whose Series A investors include Genentech (DNA), Google (GOOG) and New Enterprise Associates, as well as Wojcicki’s husband, Google co-founder Sergey Brin–in 2006.

It has collected almost $23 million in funding.

Avey noted in an email to staff, which is posted in its entirety below: “I also recognize that the company has reached a critical point in its growth where new leadership can take it to the successful heights we all think it can achieve.”

Wojcicki’s email reads, in part:

[Wojcicki's announcement] - I wanted to let you know that Linda Avey will be leaving 23andMe to focus her energy on transforming Alzheimer’s research and treatment, leveraging the 23andMe platform. Linda and I have talked about doing research in Alzheimer’s since the inception of the company. Linda, whose father-in-law recently died from the disorder, will be leveraging 23andMe’s platform as she works to revolutionize the research, treatments and prevention for Alzheimer’s.

Linda will be greatly missed by me and my colleagues but we’re glad she will continue to be in a related field, and we are committed to continuing the work that she and I started three years ago.

And here is the email from Avey to the staff, as well as Wojcicki’s below it and then the official press release:

[Avey to Team] Dear all-

As I trust you all know, 23andMe is very special to me. I also recognize that the company has reached a critical point in its growth where new leadership can take it to the successful heights we all think it can achieve.

I’ve decided that I’d like to focus my efforts on an area that is personally significant and will continue to have a huge impact on our healthcare system–Alzheimer’s disease. Effective today, I’m leaving 23andMe and have begun making plans for the creation of a foundation dedicated to the study of this disorder. The foundation will leverage the research platform we’ve built at 23andMe–the goal is to drive the formation of the world’s largest community of individuals with a family history of Alzheimer’s, empower them with their genetic information and track their brain health using state-of-the-art tools. We’ve always planned to include Alzheimer’s in our 23andWe research mission…I’m just approaching it from a new angle.

Some of you might be aware that my father-in-law suffered from Alzheimer’s and passed away last year. For this reason, Randy and I are motivated to do what we can to improve the understanding of what leads to the debilitating symptoms and what might prevent them from starting in the first place. The ApoE4 association is barely understood but gives us a great starting point.

I’ll miss working with you but will be excited to hear about the progress I know you’ll be making!

All the best,


[Wojcicki to] Team:

As Linda has told you, she will be leaving 23andMe to focus her energy on transforming Alzheimer’s research and treatment, leveraging the 23andMe platform. While I am quite sad to see her leave I am excited and hopeful as she takes on this mission. As Linda’s co-founder and partner over the last three years, it has been clear that revolutionizing research has been a primary passion. Our drive to change health care has always had roots in our personal lives and we have tried to structure 23andMe so that any individual or organization could actively participate in research. Linda and I have talked about doing research in Alzheimer’s since the inception of the company and the need for the Alzheimer’s community to have a strong leader. With Linda’s involvement, I believe that the APOE4 community could be the first asymptomatic community to successfully develop preventative treatments. I hope that going forward we’ll both be able to shake up and transform the health care space, making health care and treatments better for all.

Linda’s departure is also a sign of 23andMe’s maturation. When we started the company, the personal genetics industry did not exist; now it is a thriving and competitive landscape. Our company has grown and we continue to be an innovative industry leader. While our success has been exceptional, it is also clear we have a lot of work ahead. We have created a significant and empowering tool, but we must find new and better ways to promote the value of knowing your DNA. In the weeks ahead, we will outline a strategy for the company that we believe will make genetics a routine part of health care and will lead us to making significant research discoveries.

Linda has been instrumental in making 23andMe what it is today and we thank her for her passion and dedication to the company. We have many exciting opportunities before us, and I look forward to working with all of you to make 23andMe a spectacular success.


Linda Avey to Create Alzheimer’s Foundation

Mountain View, CA–September 4, 2009–Linda Avey, co-founder of 23andMe, an industry leader in personal genetics, announced today that she is leaving the company to start a new foundation focused on Alzheimer’s disease. Ms. Avey’s foundation will leverage the 23andMe research platform to search for causes and treatments for the disease, which afflicts more than 5.2 million people in the United States.

Ms. Avey and Anne Wojcicki founded 23andMe together in 2006. The company provides personalized genetic information through DNA analysis and allows individuals to interact with their private information through a variety of web-based tools.

“I could not be more proud of what we have accomplished in the three years since Anne and I created 23andMe, and I am excited to take the next step in applying my experiences to one of the great health challenges of our time,” said Ms. Avey, who has more than 20 years of experience in the biopharmaceutical industry. “There is a clear need for revolutionary research and concentrated effort to confront Alzheimer’s, and we need to start now in order to make meaningful progress. The resources are out there–my goal is to marshal them to find answers for families, like mine, who have lost family members to such a debilitating disease.”

Concurrent with Ms. Avey’s announcement, Ms. Wojcicki said that 23andMe expects to make its genetic data platform available to Ms. Avey’s foundation in order to advance its research.

“Linda has been a true partner with me over these last three years, an innovative leader for our company and our industry, and instrumental in making 23andMe what it is today,” said Wojcicki. “It is only fitting that she will be making full use of our work together and leveraging the 23andMe platform for a tremendous cause. We look forward to joining her as a partner in her efforts.”

Ms. Avey’s departure announced today takes effect immediately.

[A new stage, heralded by Dietrich Stephan's announcement to create "Ignite", has now switched gears - Pellionisz, HolGenTech_at_gmail.com, Sept. 4, 2009]


Miami Institute for Human Genomics Receives $20M Gift for Research

Institute Renamed John P. Hussman Institute for Human Genomics

[Goldschmidt, M.D., and John Hussman, venture philanthropist]

The Miller School’s Miami Institute for Human Genomics, nationally known for its groundbreaking work in unraveling some of the medical mysteries behind autism and other common diseases, today received a $20 million gift to support its critical research efforts.

Miller School Dean Pascal J. Goldschmidt, M.D., University President Donna E. Shalala and institute director Margaret Pericak–Vance, Ph.D., joined the donor, philanthropist John P. Hussman, Ph.D., in announcing the extraordinary commitment from the John P. Hussman Foundation. The institute will now be known as the John P. Hussman Institute for Human Genomics.

“Today is an absolutely memorable and fantastic day,” Goldschmidt said at the announcement ceremony, an occasion he called “extraordinarily significant” because it is a commitment to the eventual eradication of certain diseases, and because few academic institutions are benefiting from large gifts in a slow economy.

“One can count gifts of $20 million or more to any institution over the past six months on the fingers of two hands,” Goldschmidt continued. “The University of Miami Leonard M. Miller School of Medicine is up there with top medical schools such as Stanford and Yale for receiving such a large gift. The gift is also extraordinary because of the field he is supporting. Genetics is the best opportunity we have to predict the occurrence of illnesses in a given individual.”

President Shalala said the announcement marked “an incredible day” for the University, the medical school and the institute, which was created two years ago and has received an $80 million economic grant from the State of Florida and support from the Dr. John T. Macdonald Foundation.

“It will accelerate the growth of the institute and we will become the most important center of this kind anywhere in the world,” Shalala said at the renaming event held at the institute located in the Miller School’s new state–of–the–art Biomedical Research Building.

When she introduced Hussman, she described him as “a successful human being and civic leader, and a person who is willing to bet with people and institutions he deeply believes in. For that, we are grateful.”

Hussman, whose son was diagnosed with autism 12 years ago at age 3, said he was told then that nothing could be done to treat the disease.

“I am convinced that with the work here at the institute those days will be behind us,” Hussman said.

Since his son’s illness, Hussman has been steadfast in supporting autism research and has long been involved in the research of Pericak–Vance, who is also the Dr. John T. Macdonald Professor of Human Genomics and a world–renowned genetics researcher. She has discovered key genes that impact major human clinical disorders such as Alzheimer’s disease, multiple sclerosis, macular degeneration, cardiovascular disease, and recently autism. Most top scientists are proud to have their work cited by their peers 3,000 to 4,000 times in their lifetime –– the work of Dr. Pericak–Vance has been cited more than 35,000 times already. Such recognition by peers is an index of quality and impact on the field of medicine.

“I am honored to accept this gift,” Pericak–Vance said at the ceremony. “John and I share a passion for autism genetics research. We have the technology here and John’s gift will help make it possible to figure out the answers to this complex problem. This is really exciting.”

Funds from today’s gift will also go toward matching the state’s $80 million economic grant. The original state money was awarded based on recommendations from Enterprise Florida to the Office of Tourism, Trade, and Economic Development.

Hussman met Pericak–Vance and her husband, Jeffery Vance, M.D., Ph.D., professor and chair of the Miller School’s Dr. John T. Macdonald Foundation Department of Human Genetics, about six years ago, and they became friends and research collaborators.

Hussman is president and principal shareholder of Hussman Econometrics Advisors, the investment advisory firm that manages the Hussman Funds. He received his Ph.D. in economics from Stanford University. Prior to managing the Hussman Funds, Hussman was a professor of economics and international finance at the University of Michigan. He established the John P. Hussman Foundation, which provides financial support for projects that have the capacity to save or significantly improve human lives.

Hussman’s visionary support of Pericak–Vance’s work has helped produce some of the most important discoveries in the field of autism genetics, including the identification of a common risk variant for autism earlier this year.

Funds from the gift will support one of the first large–scale autism sequencing projects of its kind. The application of next generation genomic sequencing technology to the extensive autism family dataset will give institute researchers data that will implicate genes responsible for autism risk and explain how those genes cause autism.

The autism sequencing project will create a number of jobs at the institute and bolster the University of Miami Miller School of Medicine’s reputation as a center for cutting–edge research. Since its establishment in January 2007, faculty at the genomics institute have drawn attention to South Florida with breakthrough genomics discoveries in several human disorders including autism, Alzheimer’s disease, Parkinson’s disease, and multiple sclerosis.

It was Pericak–Vance’s early work in autism genetics that caught Hussman’s attention. After reading her published work, Hussman called Pericak–Vance. Shortly thereafter, he decided to invest financial and intellectual resources into her effort to tackle questions about autism genetics. Not only has Hussman lent his financial support to Pericak–Vance’s projects, he has contributed his own unique expertise in data mining to the search for autism risk genes. Based on his scientific contributions, Hussman has been a co–author on several of Pericak–Vance’s autism publications.

While the National Institutes of Health are the largest funding source for genetics research, they tend to fund projects with tried-and-true methods. New and unproven approaches are often the source of radical scientific breakthroughs, yet it is difficult to secure funding for projects that operate outside the box. The collaboration between Hussman and Pericak–Vance has proven that with risk there can be great reward.

In summing up the importance of Hussman’s gift and the work of Pericak–Vance and the institute, Goldschmidt said the collaboration holds benefits for mankind.

“Understanding the susceptibility of a given individual to develop a specific clinical illness gives us the opportunity to prevent that illness in the first place,” Goldschmidt said. “Ailments such as cardiovascular disease, diabetes, cancers, Alzheimer’s, and stroke can be prevented if preventive maneuvers are initiated early on in life. Hence, with genetics, we can shelter our fellow humans from developing the most devastating illnesses known to humans.”

[20 years ago, when the "Fractal Approach to Development of Purkinje brain cells" was published, it violated both of the then-prevailing "axioms" of genomics (the Junk DNA and Central Dogma misnomers). Thus, continuation of NIH support was denied. Today, nobody would deny that Genome Regulation is recursive (and thus recursive fractal iteration should, at a minimum, be paid close attention, to make up for lost time). "Fractal Defects" were shown for: (STM2, Presenilin) Alzheimer's (human), (APP) Amyloid precursor protein; Alzheimer's, Down (human), (Parkin) Parkinson's (human), (SNCA) Parkinson's (human), (CFTR) Cystic Fibrosis (human), (CFTR) Cystic Fibrosis (fugu), (Dysferlin) Muscular Dystrophy (human), (X25, Frataxin) Friedreich's Spinocerebellar Ataxia (human), (11q13) Asthma Susceptibility Complex (human), (HLA) Human Leukocyte Antigen Complex (human) - and with the development of Personal Genome Computing architecture their search was made suitably fast. Moreover, with "structural variants" a business model was developed for making "Genome Based Recommendations" practical, e.g. for shopping with barcode-reader PGAs (Personal Genome Assistants). These developments run decades ahead of routine government research, and engagement with enlightened and personally motivated investors is far superior - Pellionisz, HolGenTech_at_gmail.com, Aug 31, 2009]


Illumina Announces Delivery of the First Genome Through Its Individual Genome Sequencing Service

Business Wire, August 31, 2009

SAN DIEGO--(BUSINESS WIRE)--Aug. 31, 2009-- Illumina, Inc. today announced that it has delivered Hermann Hauser’s genome sequence. Dr. Hauser, Partner, Amadeus Capital Partners Ltd, is the first consumer to purchase Illumina’s individual genome sequencing service working with his physician, Michael Nova, MD, of Pathway Genomics. The genome was completed in Illumina’s CLIA-certified and College of American Pathologists (CAP) accredited laboratory using the Genome Analyzer technology. Over 110 billion base calls were generated, delivering over 30X coverage of the genome. Data analysis showed 300K novel SNPs in the genome that have not been documented elsewhere. This discovery demonstrates the power of whole genome sequencing as an exploratory tool, as these SNPs were novel but not necessarily unique.

“We are very excited to be delivering our first individual genome sequence to Hermann Hauser,” said Jay Flatley, President and CEO of Illumina. “This is a landmark since just two months ago we launched the availability of this service from Illumina. The experience we created for Hermann was not only one of personal genetic exploration, but one that points to a future where genome sequencing will become a routine practice and the information generated will enable physicians to make better healthcare decisions for the individual. This information has long-term value for Hermann as he can continue to access it and gain personal genomic insights as new discoveries are made.”

Dr. Hauser’s genome was delivered by a team consisting of his physician, Dr. Michael Nova, a bioinformatics specialist and geneticist at Illumina’s San Diego headquarters on Thursday, August 20, 2009. The visit included a consultation, facility tour and ceremony during which Dr. Hauser’s genome was delivered on an iMac® computer using GenomeStudio® software as a genome browsing interface.

Hermann Hauser is one of the first of a small, select group of individuals who have had their genome sequenced. “Going through Illumina’s process was very exciting for me personally. I am looking forward to the information on gene variants that will give my doctors guidance on effective treatments and drug dosage based on pharmacogenetic information, for any future medical condition I may develop. This is the beginning of personalized medicine and I am delighted to be there at the start of it. As an early investor in the gene sequencing technology used in this work, I am proud that Illumina has introduced this service to consumers. It fulfills an early dream to substantially reduce the cost of whole genome sequencing,” said Hauser.

Dr. Hauser is a pioneer member of a growing community that is driving education and exchange of information for those who have had their genomes sequenced. As more information becomes available, participants will be in a position to mine their personal genome sequence data to understand their identity in ways which have never been possible before.

[The Personal Genome was sold in a package with Illumina's "GenomeStudio" software. It is impossible to sell a full human genome without a "Personal Genome Computer". This delivery was on an "iMac". The price of sequencing is $5,000, the iMac is $2,000. Illumina sold the full sequence at a value of $48,500. The margin for "genome informatics software" is currently $41,500 per copy - Pellionisz, HolGenTech_at_gmail.com, Aug 31, 2009]


Are we ready for personal genomics?

I have a piece in this month's edition of Prospect that might interest anybody with an interest in the future of personal genomics. You'll need a subscription to read the full article online (or do buy the magazine in the shops -- it also includes Adair Turner's proposals for banking reform, which have received a fair bit of coverage this week).

My broad argument is that while personal genomics services such as 23andMe and deCODEme aren't yet likely to give you reliable information you can use to improve your health, the day when genomic scans can do that is not far off. And if they are to deliver the benefits they could, we need to be ready.

I suggest there are three broad classes of challenge ahead (there are probably many more -- please do share your thoughts in the comments).

First, businesses need to figure out what they are going to do with this information, and how they can make it pay. For personal genomics companies, the challenge is to find a business model that works, which will probably involve interpreting genetic information (perhaps for third parties like the NHS), as well as just collecting it. The pharmaceutical industry, too, needs to consider what pharmacogenomic medicine will mean for its old (and broken) blockbuster model.

The second challenge is for health providers like the NHS, which I've been through in my coverage of last month's House of Lords report into genomic medicine. We need the IT and pathology infrastructure to integrate genomic information into healthcare, and doctors and nurses need to be trained to interpret its insights wisely.

Perhaps bigger than all these challenges, however, is the social one. We need to debate issues such as genetic privacy, and the use of genetic data by insurers, before they become active issues. My own hunch is that there is less to fear from genetic discrimination than many people imagine, but there are no obviously right and wrong answers and we ought to be engaging with this now.

We also need to be thinking actively about our own relationships with our genomes. I think that many of the biggest potential drawbacks of the genomic age, such as false reassurance and needless alarm, stem from a misunderstanding of genetics as primarily a deterministic rather than probabilistic science. Education is our best defence. As I conclude the Prospect piece:

Genetics is too often considered as a deterministic science, revealing traits hardwired by our DNA. This can frighten us when we learn that we carry a gene linked to a dread disease, or breed overconfidence when genetic risks are low. Yet this science more usually works in clues, not certainties, and in concert with the environment. Someone who has a low genetic risk of lung cancer can still raise it substantially by smoking.

Personal genomics will reveal much, but only so much. It should help us to make better lifestyle choices, and help doctors to make better choices about our medical care, but it will not provide an infallible guide to our future health.

Posted by Mark Henderson on August 28, 2009 in Genetics | Permalink | Post to Twitter

[It was only 10 months ago that, in a Google Tech Talk on YouTube, I addressed these very same issues - e.g. when talking about the HoloGenome (Genome plus Epigenome) as a quantum thermodynamical open system with nonlinear dynamics, wherein the entropy of genome information is NOT mandated to hyper-escalate - translated for the man on the street as a "Circle of Hope" that the genome is NOT our destiny - some commenters (even old-school biochemistry professors who are desperate over Genomics having gone Informatics) were simply flabbergasted. I am happy that now I am only one year ahead (at best) - and have used this time to work out the business model in which the "personal genome" will be used by "Personal Genome Computers" (PGCs) synced with "Personal Genome Assistants" (barcode-reading PDAs) for the Genome Based Economy, where consumers will be tooled up to use "Personal Genome Based Product Recommendations" for goods that fit or fix their genome best. This is the business of HolGenTech - Pellionisz, HolGenTech_at_gmail.com, Aug 30, 2009]


Your Chance to get Seriously Wealthy from the Next Wave of “Computational Biology”

By Patrick Cox

August 28, 2009

There’s a radical new approach to biology that treats cellular functions very much like computers and software. It’s a truly transformational emerging discipline.

The sequencing of the human genome has resulted in the emergence of an enormously important new branch in the biotechnological sciences. The most common terms for this field are bioinformatics or computational biology.

You may have read about the discovery, recently, of a new and radically more effective mosquito repellant. Based on molecules found in black pepper, it was not discovered using traditional laboratory methods. Instead, it came about through computer simulations based on knowledge of mosquito cell biology. This is just the tip of the bioinformatics iceberg.

Until recently, cell biology has been something of a “black box.” We could observe how cells functioned, but had little insight into the actual mechanisms. Now, though, scientists are learning how cells work on the molecular level.

Using mathematical models and new technologies for detecting molecular processes, researchers are extracting raw data from DNA and modeling the ways genes work and interact. To understand this field, you should view your own genome as a giant software program for manufacturing proteins.

The process of unraveling and decoding the DNA software involves massive amounts of data collection. Then, once collected, correlation and other forms of computer analysis are performed on those data to figure out cause and effect. How big is this challenge?

Consider this: Each human cell contains about three gigabytes (three billion bytes) of pure data and instructions. If this information were written in book form, it would require 5,000 volumes, each 300 pages long. That’s 120 times larger than the kernel of the Windows operating system, which is about 25 megabytes of code. This data resides, of course, in each cell’s pinpoint-sized nucleus. The human body, in turn, has approximately 100 trillion of these three-gig cells.
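The article's book-volume and Windows-kernel comparisons can be sanity-checked with a few lines of arithmetic (a sketch; the 2,000-characters-per-page density is an assumed value implied by the article's own figures, and "1 byte per base" is the article's simplification):

```python
# Back-of-envelope check of the article's figures (illustrative only).

GENOME_BYTES = 3_000_000_000        # ~3 billion bases, 1 byte/base per the article
CHARS_PER_PAGE = 2_000              # assumed page density
PAGES_PER_VOLUME = 300              # from the article

volumes = GENOME_BYTES / (CHARS_PER_PAGE * PAGES_PER_VOLUME)
print(f"volumes needed: {volumes:,.0f}")  # -> 5,000

WINDOWS_KERNEL_BYTES = 25_000_000   # ~25 MB kernel, per the article
print(f"ratio to kernel: {GENOME_BYTES / WINDOWS_KERNEL_BYTES:.0f}x")  # -> 120x
```

The figures are internally consistent: 5,000 volumes of 300 pages and a 120-fold ratio to a 25 MB kernel both fall out of the 3 GB starting point.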

Add to this complexity about 5,000 different proteins expressed by each cell. Different cells, however, express different proteins. These proteins, known collectively as the proteome, behave as computer commands and serve to communicate between cells.

The decoding of all these systems is, obviously, a huge computational challenge. It has only just begun and it would not be possible, in fact, without recent advances in computer technologies. As more powerful computing comes online, the pace of bioinformatics discovery will accelerate. Quantum computing, because it is particularly suited to sorting out cell biology, will enable a “quantum” leap in understanding.

Today, there are three main areas of research in computational biology. These are genome analysis, protein structure prediction and drug design.

Genomic analysis is, as you would expect, the statistical analysis of genes. As more and more DNA is analyzed in conjunction with individual medical information, more is known. Among other reasons for performing this analysis, scientists are looking for the genes that cause or contribute to diseases.

Protein structure predictions are based on computer models that integrate information about the function of these proteins. This is an immense task, as there are tens of thousands of proteins. Ultimately, understanding the proteome will enable truly personalized medicine, with minimal side effects for patients.

With the knowledge gained from understanding the genome and proteome, computer models of target proteins can be created. Using these virtual proteins, drugs can be designed and tested using in silico simulations before testing in the lab.

The development of these virtual molecules, the heart of computational biology, is ending the practice of shooting blindfolded while hunting for drug candidates. Instead of randomly testing different drug candidates and analyzing the results, the field of candidates can be significantly narrowed using simulations. This radically improves the “hit rate,” increasing the speed of drug discovery and lowering costs.
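The hit-rate argument above can be illustrated with a toy simulation (hypothetical numbers throughout; real virtual-screening scores come from docking or QSAR models, not a noisy Gaussian): even an imperfect computational score, applied before lab testing, concentrates true actives in the shortlist.

```python
# Toy enrichment experiment: pre-screen a candidate pool with a noisy
# in-silico score, then "send" only the top fraction to the wet lab.
# All numbers are hypothetical illustrations.

import random
random.seed(1)

POOL, TRUE_HIT_RATE = 100_000, 0.001          # 0.1% of molecules are active
actives = set(random.sample(range(POOL), int(POOL * TRUE_HIT_RATE)))

def sim_score(mol):
    """Noisy computational score: actives score higher on average."""
    return random.gauss(1.0 if mol in actives else 0.0, 0.7)

ranked = sorted(range(POOL), key=sim_score, reverse=True)
shortlist = ranked[:1000]                      # top 1% goes to the lab
hits = sum(m in actives for m in shortlist)
print(f"hit rate in shortlist: {hits / len(shortlist):.1%} vs 0.1% baseline")
```

Even with substantial score noise, the shortlist's hit rate comes out many times above the 0.1% baseline, which is the "radically improved hit rate" the article describes.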

Moreover, computer cell simulations improve as additional data are collected and integrated back into the models. Significant advances have already taken place in this transformational space. Medicine, incidentally, is only one area that is benefiting from bioinformatics. Many of the benefits are taking place in the agricultural sector. The genetic engineering of microorganisms is another area of enormous potential.

This new science of building and experimenting on virtual molecules may be the most important new experimental tool since John Stuart Mill codified the scientific method in the 1840s. As Moore’s Law (the exponentially increasing power and cost-effectiveness of computers) continues to prove true, so will the power and importance of bioinformatics.

[Though the analyst is a bit sloppy on science (each cell carries two copies of each chromosome, except X and Y in males, so the information is about 6.2 Gbases, and the field is "Genome Informatics" rather than "computational biology", etc.), the business opportunity is remarkable. As for "penny stocks", HolGenTech common stock can presently be purchased at the bargain-basement price of a nickel per share - Pellionisz, HolGenTech_at_gmail.com, Aug 29, 2009]


CSHL scientists develop new method to detect copy number variants using DNA sequencing technologies

The new technique can detect key genetic variations overlooked by current methods

Cold Spring Harbor, N.Y. - A research team led by Associate Professor Jonathan Sebat, Ph.D., of Cold Spring Harbor Laboratory (CSHL) has developed a sensitive and accurate way of identifying gene copy number variations (CNVs). The method, which is described in a paper published online ahead of print in Genome Research, uses new DNA sequencing technologies to look for regions of the genome that vary in copy number between individuals in the population. Capable of detecting a wide range of different classes of CNVs, large and small, this method allows researchers to extract more genetic information from the complete genome sequence of an individual.

CNVs are regions of the genome that vary in the number of copies between individuals. These variants were once considered to be anomalies that occurred rarely among healthy individuals. As the result of a discovery by CSHL Professor Michael Wigler and Dr. Sebat in 2004, CNVs are now recognized as a major source of human genetic variation and methods for detecting CNVs have proven to be an effective approach for identifying genetic risk factors for disease.

Genome sequencing technologies are improving at a rapid pace. The current challenge is to find ways to extract all of the genetic information from the data. One of the biggest challenges has been the detection of CNVs. Sebat, in collaboration with Seungtai Yoon of CSHL and Kenny Ye, Ph.D., at the Albert Einstein College of Medicine, developed a statistical method to estimate DNA copy number of a genomic region based on the number of sequences that map to that location (or "read depth"). When the genomes of multiple individuals are compared, regions that differ in copy number between individuals can be identified.

The new method allows the detection of small structural variants that could not be detected using earlier microarray-based methods. This is significant because most of the CNVs in the genome are less than 5,000 nucleotides in length. The new method is also able to detect certain classes of CNVs that other sequencing-based approaches struggle with, particularly those located in complex genomic regions where rearrangements occur frequently.
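The read-depth idea described above can be sketched in a few lines (illustrative only; the published Sebat/Yoon/Ye method applies a formal statistical model and compares depth across multiple genomes, not this toy single-sample estimator):

```python
# Minimal read-depth copy-number sketch: count mapped read starts per
# fixed-size window and scale against the diploid (median) baseline.

from statistics import median

def copy_number_by_depth(read_starts, genome_len, window=1000):
    """Estimate per-window copy number from read start positions."""
    n_windows = genome_len // window
    depth = [0] * n_windows
    for pos in read_starts:
        w = pos // window
        if w < n_windows:
            depth[w] += 1
    baseline = median(depth) or 1
    # Diploid assumption: median read depth corresponds to copy number 2.
    return [round(2 * d / baseline) for d in depth]

# Toy example: uniform coverage except a duplicated middle region.
reads = list(range(0, 3000, 10)) + list(range(1000, 2000, 10))
print(copy_number_by_depth(reads, 3000))  # -> [2, 4, 2]
```

The doubled read depth in the middle window is called as copy number 4, i.e. a duplication; a deletion would show up symmetrically as depth below the median.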

The development of this novel method is timely. The 1000 Genomes Project was launched in 2008 as an international effort to sequence the genomes of 2,000 individuals across geographic and ethnic regions to catalog human genetic variation. Sebat's team, along with many other groups, has contributed to the production and analysis of these data.

This innovation improves the detection of structural variants from whole genome sequence data, which will lead to improved sensitivity to detect disease-causing CNVs.

[Targeted and random searches of "Structural Variants" in the DNA necessitate Genome Computing. - Pellionisz, HolGenTech_at_gmail.com, Aug 24, 2009]


Complete Genomics Nets $45M in Financing

August 24, 2009

By a GenomeWeb staff reporter

NEW YORK (GenomeWeb News) – Complete Genomics said today that it has raised $45 million from new and previous investors, and it will use the funds to build up its Silicon Valley genome sequencing facility.

The Series D financing included two new investors, Essex Woodlands Health Ventures and OrbiMed Advisors, as well as investments from Enterprise Partners Venture Capital, OVP Venture Partners, Prospect Venture Partners, and Highland Capital Management.

"This new capital will enable us to scale up our facilities in preparation for large customer projects. We now plan to launch our large-scale commercial sequencing center in January 2010 with the goal of sequencing 10,000 human genomes next year," Complete Genomics President and CEO Clifford Reid said in a statement.

The firm plans to offer sequencing services based on its own technology to academic and biopharmaceutical industry researchers who are conducting large-scale genomics studies.

Reid noted that Complete Genomics closed the funding round six months later than the company had planned "due to the collapse of capital markets."

The Mountain View, Calif.-based company also has added two new directors, C. Thomas Caskey, who is an adjunct partner at Essex Woodlands Health Ventures and was president of the American Society of Human Genetics, and Carl Gordon, who is a founding general partner of OrbiMed and a former fellow at The Rockefeller University.

"I have been involved in DNA sequencing for many years and believe that Complete Genomics' technology has the potential to revolutionize the market by providing researchers with access to large-scale genome sequencing," Caskey said in the statement


Complete Genomics seals $45M for cheaper gene sequencing

August 24, 2009 | Camille Ricketts

Complete Genomics, provider of supposedly cheaper and faster DNA sequencing services, has raised $45 million in a fourth round of funding — a huge amount for a biotech company in today’s economic environment. Based in Mountain View, Calif., the company says it will be able to sequence people’s genes in only a few days, and for the bargain price of $5,000.

Apparently, genetic companies are where it's at in the life science market right now. Earlier this month, Pacific Biosciences, another firm looking to speed sequencing practices, commanded a large $68 million in new financing from Wellcome Trust, Monsanto, Sutter Hill Ventures and others.

The money will be used to set up a facility for large-scale sequencing projects, slated to open by the beginning of 2010, the company says. The goal is to sequence 10,000 human genomes in the next year at this center. If it does achieve the lower price tag — down from the typical price of between $100,000 and $350,000 — Complete Genomics could achieve the same commercial success as 23andMe, Navigenics and deCODE, startups that also read customers’ DNA, while providing even more in-depth information.

That being said, the company won’t be targeting the consumer market at first, opting instead to sell to pharmaceutical companies, laboratories and other companies that want to refine their genetic testing practices. One such company, called Knome, completely sequences its customers’ genomes for the astronomical price of $350,000. Some reports say Knome is in licensing talks with Complete Genomics to bring its price point down and broaden its appeal.

Essex Woodlands Health Ventures, OrbiMed Advisors, Enterprise Partners Venture Capital, OVP Venture Partners, Prospect Venture Partners and Highland Capital Management provided the recent round of funding. The company has raised $90 million to date.

[This announcement has special significance. With "affordable sequencing" upon us - though delayed by 6 months due to the collapse of capital markets - strategies are becoming clear at Complete Genomics (and the race with PacBio tightens, since Complete Genomics' lead is down to 6 months from a year). It appears that with the involvement of the new investor Essex Woodlands Health Ventures (HQ Houston) and C. Thomas Caskey on the Board of Directors, and with Boston-based Knome's co-founder George Church on the Board of Science Advisors, Complete Genomics is pitched to a nationwide pool of genome researchers centered on Silicon Valley (as opposed to the mass market of individuals). With considerable concentrations of medical records in both Boston and Houston, correlation of phenotypes and genotypes based on full sequencing may emerge as one main avenue for "affordable sequencing" - Pellionisz, HolGenTech_at_gmail.com, Aug 24, 2009]


Study: Chinese herbs could treat heart disease
August 19, 7:08 PM

Charlotte Health and Happiness Examiner

Researchers at The University of Texas Health Science Center at Houston are actively studying the ingredients in ancient Chinese herbal remedies used to treat heart disease. The results show that the herbs have heart healing properties, akin to prescription medications typically used to improve blood flow to the heart. The findings open the door for further developing the specific ingredients found in Chinese herbal remedies for treatment of heart disease.

Chinese herbs, used to treat heart ailments in Asia for centuries, now being studied by scientists in the university's Brown Foundation Institute of Molecular Medicine for the Prevention of Human Diseases (IMM), have been found to deliver nitric oxide to the body. Nitric oxide lowers blood pressure by relaxing or dilating large blood vessels. The result is improved blood flow to the heart.

The scientists also found that Chinese herbs can prevent plaque buildup and the formation of clots in the arteries that can lead to heart attack.

According to Nathan S. Bryan, Ph.D., the study's senior author and an IMM assistant professor, Chinese herbs “have profound nitric oxide bioactivity primarily through the enhancement of nitric oxide in the inner walls of blood vessels, but also through their ability to convert nitrite and nitrate into nitric oxide."

Thomas Caskey, M.D., IMM director and CEO, says Traditional Chinese Medicines (TCMs) have provided useful tools that help scientists find clues about treating a variety of diseases. "TCMs have provided leads to safe medications in cancer, cardiovascular disease and diabetes. The opportunity for Dr. Bryan's work is outstanding given that cardiac disease is the No. 1 cause of death in the United States.”

The researchers studied Chinese herbs purchased in local stores in the Houston area, including DanShen and GuaLou, which are used to treat chest pain and heart failure. "Each of the TCMs tested in the assays relaxed vessels to various degrees," the authors stated.

Few scientific studies have been performed on ancient Chinese herbs used for centuries in Asia to treat heart disease. The researchers hope to isolate specific components of the herbal remedies. The new study shows that Chinese herbal remedies could be further developed to help treat heart disease.

[It will happen sooner than most think: walking into specialty stores, shoppers will "barcode" Chinese herbs to get a "Genome-Based Recommendation" from their PGA that fits or fixes their genomic profile best - Pellionisz, HolGenTech_at_gmail.com, Aug 20, 2009]


The Single Life - Stephen Quake Q&A

Kevin Davies
Bio-IT World

August 10, 2009 | Working on a single instrument in a single lab, a single research associate generated the first single-molecule human genome sequence in a single month. According to Stanford professor Stephen Quake, a.k.a. “patient zero” and co-founder of Helicos Biosciences, his group’s success is proof of the growing democratization of genomics. Kevin Davies caught up with Quake on the eve of his landmark personal genome publication...

Bio-IT World: This must be very pleasing from six years ago when you managed to sequence all of five bases?

QUAKE: It is. I feel we’ve managed to come full circle.

You talk a lot in the paper about the democratization of gene sequencing.

There’s a table in the supplement which indicates the effort that’s been needed to sequence human genomes up until now. Our work is important at this time in that it is the first case in which you haven’t needed a genome center to sequence a human genome. What we’ve shown is that you can do it with a pretty modest set of resources—a single professor’s lab, one person doing the sequencing, one instrument, lower cost. Those are all order-of-magnitude improvements over what’s been published recently.

That being said, as you’re no doubt aware, the DNA sequencing industry is certainly competitive. Everything is moving fast, very much in flux. All the manufacturers are improving their platform by a factor of two per year. I’m just saying, at this point in time, Helicos is the best platform, and they’re going to be in a dogfight to try to keep that title -- which is good for the scientific community.

There’s very little mention of Helicos, which you co-founded, in the paper. Did you deliberately set out to keep this a separate effort?

Yeah, it’s complicated. One of the reasons is the conflict-of-interest rules of my institutions. I’m part of two institutions – both Stanford and the Howard Hughes Medical Institute. They have almost orthogonal, non-overlapping conflict-of-interest rules, very constraining. One upshot of that is I’m not allowed to collaborate with a company… It’s much more strict on the Hughes side, it’s like one of the Ten Commandments.

You didn’t buy the HeliScope outright, you collaborated with several other faculty?

Exactly. The machine was purchased by the stem cell institute at Stanford. The purchasing process was very transparent. There was a call for bids, it was done in a very open manner... So I benefited from collaborating with those guys. The reason they bought it was not to sequence my genome, but to sequence cancer, tumor stem cell genomes. That’s what’s up next. Mine was just to practice, to show that we could do it and to get the informatics into place.

The supplementary information put the price of your genome at $48,000. Can you elaborate?

Those were just the reagent costs. The amortized machine cost is about another $10-20,000.

Why didn’t you name yourself in the paper as Patient Zero?

Well, you know, we wanted to retain some semblance of dignity for the scientific literature! It’s really irrelevant for the purposes of the paper.

Your grad student wrote the variant calling algorithm. Was that out of necessity?

In fact, Helicos wrote a mapper but not a base caller. We used their mapper, which is tuned to the error profile of the instrument. All the mapping software packages have been written with particular instrument performance in mind. For example, Maq and ELAND are written basically for the Illumina platform, where the dominant error is substitution. For Helicos, the dominant error is deletion, and that has consequences for how you do the algorithm. We used the Helicos mapper [IndexDP], but then, all the base callers are tied to the mapping software. So ELAND and Maq will call the bases, but it’s all linked into how they do the mapping. So we ended up writing our own base caller.

If you look through the literature, the way base calling is done in the other published genomes is still rather ad hoc. I wouldn’t say there’s real consensus. People put on arbitrary conditions. Some people like to use a priori knowledge of human variation. A database like dbSNP can be used, but we wanted to take a very rigorous approach where we ignored everything that is known about human variation, and just try to call the genome based on the data. Then use what’s known about human variation to validate the calling. So that was another reason.
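The platform-dependence Quake describes can be illustrated with a toy weighted edit distance: a mapper tuned for a deletion-dominant platform charges less for gaps in the read than for mismatches, and a substitution-dominant platform does the reverse. This is only an illustrative sketch with invented costs and an invented example read; it is not the actual IndexDP, Maq, or ELAND algorithm.

```python
# Illustrative sketch (NOT the actual Helicos IndexDP mapper): align a read to a
# reference window under a cost model tuned to the platform's dominant error.
# On deletion-dominant platforms (e.g. Helicos "dark bases") missing read bases
# are cheap; on substitution-dominant platforms mismatches are cheap instead.
# All costs below are made-up numbers for illustration only.

def align_cost(read, ref, sub_cost, del_cost, ins_cost):
    """Weighted edit distance between a read and a reference window."""
    n, m = len(read), len(ref)
    # dp[i][j] = minimum cost to align read[:i] against ref[:j]
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + ins_cost      # extra base in the read
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] + del_cost      # reference base missing from read
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = dp[i - 1][j - 1] + (0.0 if read[i - 1] == ref[j - 1] else sub_cost)
            dele = dp[i][j - 1] + del_cost      # skip a reference base (deletion in read)
            ins = dp[i - 1][j] + ins_cost       # skip a read base (insertion in read)
            dp[i][j] = min(match, dele, ins)
    return dp[n][m]

read = "ACGTGCA"     # read with one base dropped relative to the reference
ref = "ACGTTGCA"
# Deletion-tolerant scoring (Helicos-like) rates this alignment as cheaper
# than substitution-tolerant scoring (Illumina-like) does.
helicos_like = align_cost(read, ref, sub_cost=3.0, del_cost=1.0, ins_cost=3.0)
illumina_like = align_cost(read, ref, sub_cost=1.0, del_cost=3.0, ins_cost=3.0)
```

Under the deletion-tolerant model the one-base gap costs 1.0; under the substitution-tolerant model the same alignment costs 3.0, which is why a caller tuned for one platform's errors mis-scores reads from the other.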

The genome coverage was 90%. Would you get higher coverage with more reads?

There are very repetitive parts of the genome that don’t map well. Most people aren’t mapping to the whole thing. The Chinese one was also 91-92%, something like that.

The paper talks about the deletion errors...

Yeah, that’s the primary source of error – deletions due to these ‘dark bases.’ One of the reasons this is an interesting result for the academic literature is: Is it possible to sequence the human genome with reads that are a little shorter, and with a different dominant error mode, than on other platforms? We show that it’s definitely possible.

You didn’t get into this in the paper, but you’ve done some preliminary analysis of genetic conditions. Did you use the Church lab’s Trait-o-matic program?

That’s right. George [Church] was very kind, and ran it through Trait-o-matic. That’s where we got a preliminary annotation. We didn’t discuss that in the manuscript, which was more about the technical aspects of the sequencing. We’re preparing another paper on the annotation. In fact, my medical colleagues have gotten really interested in this. There’s a small army doing a hand annotation for things that aren’t covered in Trait-o-matic yet, like pharmacogenomics. That’s going to be quite a lot of fun going forward.

You’ve said earlier this year (in the New York Times) that your daughters have pretty severe peanut allergies. Where do you start looking for that?

Yeah, that’s a really interesting question. The genome is maybe not the answer to that, right? The immune system has this interesting property that it rearranges the genome. All the immunoglobulin genes are rearranged in B cells and T cells. That’s more of an epigenetic question, one I’ve got a great interest in. We published a paper this spring in Science describing how to sequence all the expressed antibodies in a model organism, zebrafish in that case. I’m taking a more direct epigenetic approach to these questions when the immune system goes haywire. It’s possible classical genetics will be helpful. My personal opinion at the moment is the way these technologies can be used is to measure immune repertoires and understand what’s happening from a more physiological point of view.

What other research uses do you foresee using the HeliScope?

We already have three more genomes in the can related to leukemia and cancer. We’re neck deep trying to analyze those and understand what they mean.

After a tough 2008 for Helicos, this must be a very timely publication?

It’s hard to say whether there will be any impact. It’s kind of a David vs Goliath battle. There are four commercial platforms out there right now, and three of them are billion-dollar companies. The fourth is Helicos, which is a scrappy little bunch -- they’re trying to hang on! I think they’re fantastic, and I’m hoping they’re going to end up at the top of the heap.

[Lest people get the misimpression that a single person with two helpers can sequence their own DNA (using chemicals costing $48k), one may note that Prof. Quake is not exactly an "average man in the street". It takes being a co-Founder of a genome sequencing company and a Stanford Professor to do that, plus access to the sequencer - Pellionisz, HolGenTech_at_gmail.com, Aug 20, 2009]


The Impact of the Genome

Aug 18th, 2009 | By Patrick Cox | Category: Stock Market Investing

Currently, medicine is, to a large degree, a “one size fits all” proposition. Doctors watch for adverse effects and check personal and family histories. Medical technologies, however, are designed for the general population, not individuals. That’s going to change.

Moreover, there will be huge profit opportunities, in many enabling technologies, for those who invest accordingly. And today I’m going to tell you about a company that will hand you your best chance to make a transformational fortune.

We know that many current treatments work on some people, yet not others. Some drugs are safe for many people, but have dangerous side effects for others. This is because all of us have individual differences in our genetic code based on heredity and environment. Even slight differences can lead to very different reactions to medications.

This has created serious regulatory problems. Drugs are denied regulatory approval not because they do not work, but because some fraction of the population suffers adverse effects. As a result, we are often denied incredibly effective therapies simply because they are not universally effective.

This shockingly primitive state of affairs exists because, until very recently, we simply have not had the tools to get to the genetic roots of disease. Scientists and pharmaceutical companies have not known precisely how a particular drug’s chemical profile interacts with a genetic one. Medical science, in turn, has been unable to tailor drugs to work with a specific genetic makeup.

This is rapidly changing. Just a few short years ago, the human genome was first mapped. The genome, as you know, is the entire collection of genetic code that defines us at a biological level. Now scientists are studying single genes and their individual expressions.

It is meaningful, from the investor’s perspective, that Dr. Francis Collins, the head of the Human Genome Project, has just been selected by the Obama administration to head up the National Institutes of Health. Collins has long been a prominent champion for using the knowledge gained from the human genome to accelerate personalized medicine.

This is important because institutional forces, with lobbying clout, always resist change. Much of Big Pharma and its regulators are vested in the “one size fits all” model. Many of the old players fear personalized medicine because it threatens the existing hierarchy. Collins’ presence at the top of the NIH will help counter this institutional resistance.

Incidentally, Collins has stated that genomics is currently where the computer industry was back in the 1970s – at the beginning of a technological revolution. While he was speaking in scientific terms, we should remember that the ’70s was also the right time to begin investing in a diversified portfolio of breakthrough computer technologies. Those who did so, despite claims that it was too risky or early, were made rich.

Dr. Collins is not alone in his views about personalized medicine. Dr. Andrew von Eschenbach, FDA commissioner under George W. Bush, urges that the FDA approval process be overhauled and streamlined to help accelerate the adoption of personalized medicine. He is on record predicting that the medical industry will, in fact, undergo this profound metamorphosis.

I won’t pretend, by the way, that the prospect of socialized US medicine does not threaten the pace of this transformation. If American pharma’s prices and profits are controlled by the same people who run the Post Office and Medicare, it will not be good for R&D. It will not, however, stop progress. It will only shift it offshore.

Canada and much of Europe have squelched innovation in their countries by nationalizing health care. Rather than allowing drug companies the profits needed to fund future medical technologies, they mandate cheap care. This is why we regularly see politicians from these countries coming to the US to avoid long delays or get therapies unavailable in their own countries. I live in Florida, incidentally, and a million or so Canadians winter here annually. The weather is a factor, of course, but so is our superior medical care....

I speak regularly with the CEOs of some of the most important breakthrough medical companies. Universally, they tell me the same thing. They are all constantly courted by Asian investors who come with the blessings of their political leaders. These American CEOs are saddened, as am I, by the prospect that they may be forced offshore. They are, though, unwilling to halt the progress of medical science in the misguided quest for lower medical costs. I maintain hope, by the way, that Americans will stop this self-destructive move toward socialist health care.

In Greek mythology, Proteus was the son of Poseidon, who could change his shape at will. From this comes the adjective “protean,” meaning versatile, flexible and adaptable. It is no coincidence that this also describes the proteins expressed by our genes.

By now, the public is somewhat aware of genome progress. Now that the code is cracked, however, we know that it was simply the first step in the process of developing truly personalized medicine.

Though our genome contains the basic information that determines our biology, our proteome is the entire domain of protein chemistry that regulates the structure and functioning of our individual cells. By extension, the proteome determines how each of our bodies function. Everyone’s proteome is unique, because each of us has a unique genome and has been exposed to unique environmental factors.

The human genome contains a staggering amount of information. If it were a book, it would contain a billion words. Yet consider this: Each individual gene can determine the cellular manufacture and function of many, many proteins. Genes are merely the instructions for making proteins. Unlike our genome, which stays mostly the same over time, our proteome is always in a state of flux.

Proteomics concerns itself with these proteins and their interactions. These interactions determine the course of nearly all human diseases. They also open up entire new avenues of treatments and investment.

One important proteomic avenue is cancer chemotherapy. A recent study of personalized medicine by Scottsdale Healthcare showed that when cancer patients were individually profiled at the molecular level, treatments were more successful. Tumors that had resisted shrinkage using several courses of conventional chemotherapy were successfully treated when the patient’s individual genetic makeup was used to customize treatment.

Many of these personalized treatments use therapeutic monoclonal antibodies directed against specific proteins. They work only, however, in specific tumors that strongly express that particular protein. For example, tumors need to develop new blood vessels in order to grow. If the protein instructions are known, antibodies can be developed that prevent new blood vessel formation by these tumors. Antibodies can also be developed against other growth factors that feed the tumor’s growth.

We have already seen big investor successes in this arena. Early investors in Genentech struck gold. Genentech (NYSE:DNA), now owned by Roche, was the first company to develop a targeted proteomic cancer therapy when it brought the breast cancer drug Herceptin to the market in 1998. Yet Herceptin is effective only in less than a third of breast cancer patients. In some, it can trigger dangerous cardiac side effects.

The FDA, therefore, has approved procedures to test the breast cancer for the genetic protein expression that is specifically targeted by Herceptin. Women can now be individually screened for overexpressing the particular HER2 protein that Herceptin targets.


Patrick Cox

[If someone had invested $100,000 in Genentech at its Foundation, when most weren't even sure they had detected a paradigm shift, how much would the person be worth today? Try this at a late summer barbecue party: "I am investing in the business opportunity that 'one size fits all' does not apply in Genomics". Would you make people think? - Pellionisz, HolGenTech_at_gmail.com, Aug 19, 2009]


Collins sets out his vision for the NIH

Translational research and neglected diseases are on the agenda for incoming director.

Nature, News
Elie Dolgin


On his first day in the job, the new director of the US National Institutes of Health (NIH) laid out a five-point road map for the agency — which includes focusing greater attention on translational research, neglected diseases and health-care reform. But Francis Collins's top priority will be tackling budget constraints after the $10.4-billion boost from the economic stimulus package runs out in 2010.

"I don't want you to think that all it's going to take is a few speeches or maybe a little playing the guitar for this to be successful," he said at a 17 August town-hall meeting at NIH headquarters in Bethesda.

For example, he estimated that only around 3% of applicants for the stimulus-backed 'challenge grants' will be funded, although some have predicted success rates of below 1%. Collins has raised red flags about losing a generation of young scientists if the NIH budget drops or flatlines. The budget for fiscal year 2010 has not yet been finalized, but the administration of President Barack Obama has requested a 1.4% increase over the 2009 budget of $30.6 billion.

Collins, who headed the National Human Genome Research Institute in Bethesda from 1993 to 2008, made the case that investment in biomedical research creates jobs and offers quick economic returns. Looking ahead, he said that the agency should devote more money to "five areas of special opportunity". First, to applying high-throughput technologies in genomics and nanotechnology to discover the genetic bases of diseases including cancer, autism, diabetes and neurodegenerative disorders. Second, to developing diagnostics, preventative strategies and therapeutic tools through more public–private partnerships. Third, to reining in the costs of health care with comparative-effectiveness research and personalized medicine. Fourth, to expanding research into diseases affecting the developing world. Finally, to increasing budgets and investing in training and peer review to achieve a predictable funding trajectory for the research community...

["Junk DNA diseases" (better called "genome regulation diseases"), prevention by personalized medicine in public-private partnership are seen as absolute top priorities by this column as well. Pellionisz, HolGenTech_at_gmail.com, Aug 19, 2009]


Knome - Offering consumers whole-genome sequencing--and software to interpret it
MIT Technology Review


Thanks to Jorge Conde, anyone can have his or her genome sequenced and scoured for clues to future health--all for just under $100,000. Conde is the driving force behind Knome, a personal-genomics startup founded in 2007 that is the first to offer whole-genome sequencing directly to consumers. The approach sets Knome apart from other consumer genomics companies, which analyze just a fraction of an individual's DNA for a few hundred dollars.

Conde, Knome's cofounder and CEO, thinks that the commercial value in personal genomics will lie less in sequencing itself than in interpretation. So the company has developed software to manage, protect, and analyze genetic data; the software combs online databases for the latest scientific findings that have been validated, ranks their relevance, and then uses them to probe an individual's DNA sequence for helpful information.

While Knome's service is still unaffordable for most people, the cost of DNA sequencing is plummeting--from millions of dollars in 2006 to tens of thousands in 2009. Conde believes that when the price of genome sequencing eventually lands within reach of the average consumer, possibly within the next five years, Knome's whole-genome focus will put it far ahead of other companies. --Emily Singer

[Knome successfully validated the Personal Genome Computer market. The rest is not merely a question for individual companies, but (just as in the case of the Personal Computer) - if the Boston region or Silicon Valley (or Seattle...) profits the most. With the "IBM PC", the East Coast Computer Establishment took the lead, but probably nobody doubts that the Seattle-based Microsoft was the big winner, with their PC-customized OS and Killer Apps. Pellionisz, HolGenTech_at_gmail.com, Aug 18, 2009]


Harris: Silicon Valley may be getting its mojo back

By Scott Duke Harris
San Jose Mercury News Columnist

Is Silicon Valley gradually getting its mojo back? The evidence may be anecdotal, but this week brought news that Facebook acquired FriendFeed, VMware purchased SpringSource and security firm Fortinet registered for an initial public offering on Wall Street.

And Wednesday, Pacific Biosciences, based in Menlo Park, added to the upbeat news, announcing $68 million in new financing for the company that boasts a "pioneering" role in developing a transformative DNA sequencing technology. The new financing included strategic investments from the Wellcome Trust and Monsanto.

Pacific Biosciences' technology "has the power to unlock how inheritance and environmental factors affect human health," Wellcome Trust Chairman Bill Castell said in a news release.

Pacific Biosciences CEO Hugh Martin said the strategic investments from Wellcome, "the premier trust for health care," and Monsanto, "a global leader in agriculture," represent further validation of the company's potential and "the promise of our disruptive technology platform."

He added: "We have been able to raise a total of $188 million since last summer, during a very difficult economic market."

At a time when clean tech has become the hottest trend in venture investing, the life science sector has quietly sustained a strong presence and may prove the safer bet, say observers such as Tracy Lefteroff, the lead venture industry manager for PricewaterhouseCoopers. While the maturity of the valley's hardware and software businesses makes new breakthroughs difficult, fields such as genomics and proteomics (the analysis of proteins) are in the early stages — and aging baby boomers are a growing market for therapies and medical devices.

Pacific Biosciences was founded in 2004 and has received an Advanced Sequencing Technology Award grant from the National Human Genome Research Institute. The company said it expects to launch the commercial version of its SMRT (single molecule real time) Sequencing System in the second half of 2010.

The new funding also added Sutter Hill Ventures to Pacific Biosciences' long list of investors, which also includes Deerfield Management, Intel Capital, Morgan Stanley, Redmile Group, T. Rowe Price, Mohr Davidow Ventures, Kleiner Perkins Caufield & Byers, Alloy Ventures, Maverick Capital, AllianceBernstein, DAG Ventures, Teachers' Private Capital and Blackstone Cleantech Venture Partners.

Dyyno-mite debut? Dyyno, creator of a new high-definition online video platform that CEO Raj Jaswa says will enable "ubiquitous use of video for business and entertainment," this week launched the first commercial version of its technology.

Based on technology licensed from Stanford University and financed by Artiman Ventures and Startup Capital, Dyyno claims to have reduced the cost of video over the Internet from $50 a terabyte to less than $2 a terabyte.

Dyyno's system, available on a subscription basis, has been embraced by Cisco Systems' popular WebEx business collaboration and is creating an emerging "spectator mode" for Xfire and other companies in the online gaming industry. Jaswa said the company also anticipates the Dyyno platform being used for education, civic affairs and "citizen journalism."

Openings at Simply Hired: Unemployment remains high, but the Silicon Valley startup that bills itself as "the largest job search engine" this week announced it has secured $4.6 million in new venture capital funding as it crosses a milestone toward profitability.

Simply Hired expects to do some hiring itself, announcing plans to increase its staff from 50 to 80 by the end of the year.

The Mountain View company has raised $22.3 million in financing to date and says it will use the new financing from IDG Ventures and Foundation Capital to expand domestically and internationally. Simply Hired said it now operates in 13 countries and seven languages across five continents.

[Some key investors are seriously worried about the sustainability of growth once "affordable full human DNA sequences" are here (i.e. right now). Should a ramp-up of DNA analysis not make the sequences useful and carry them to the end-market of customers in time, the "mass production" of DNA sequences would drown the market in a "DNA flood". This gap in the "supply chain" will trigger a shift of investment from "get info" to "use info"; Pellionisz, HolGenTech_at_gmail.com, Aug 16, 2009]


The DNA discount

The falling cost of genetic testing opens a whole new market

Geoffrey Shmigelsky says the best money he ever spent on his health was a $1,000 test he took a year ago. The 41-year-old—who sold Calgary’s largest internet service provider, PSINet, for a “stupid amount of money” a decade ago—spat into a test tube and FedExed the vial to biotech start-up 23andMe in California. Eight weeks later, he sat down to find out his risk for developing everything from heart disease to Alzheimer’s, schizophrenia to prostate cancer—even how likely he was to go bald.

Most of the information was interesting, but benign. However, Shmigelsky did discover that he’s 10 times more likely than average to develop glaucoma and 50 per cent more likely to develop age-related macular degeneration of the eyes. So now he takes lutein, a dietary nutrient that significantly decreases his risk of developing the ocular disease. He’s also learned that he carries a gene putting him at “extremely high risk” for developing gallstones, so he has frequent, thorough ultrasounds to screen for them. “I can do something today to reduce my risks going forward,” he says. “It’s empowering.”

Not so long ago, predictive genetic tests were the stuff of science fiction, but DNA sequencing is rapidly becoming a new growth area in biotech. There’s no doubt that many people are interested in finding out whether they are a genetically advantaged athlete, or have a particularly high risk of developing a disease—the only question is how much they’ll pay to find out. Right now, the tests can cost as much as US$50,000, but prices are coming down quickly and the race is on to provide fast results at mass-market prices that the whole family can afford. At stake is a whole new market that could be worth billions.

Already, entrepreneurs are offering consumers enticing glimpses of what might be in their genetic cards. For US$149 and a cheek swab, Atlas Sports Genetics can tell parents whether to steer their toddler toward power sports like football or endurance sports like distance running. For US$995, ScientificMatch.com promises to pair clients with DNA-friendly mates who will give them more orgasms and produce healthier children. Two months ago, Icelandic start-up deCODEme began offering a $195 test analyzing your risk for six common cardio conditions including heart attack, stroke and aneurysm.

Such tests are essentially the Coles Notes of DNA decoding, giving you just a hint of what’s inside. But at the pricier end of the field are companies that can completely unravel your DNA, and prices are coming down here too. Illumina, a San Diego biotechnology company, can now deliver a complete genetic decoding for US$48,000. That may sound like a lot, but consider that just seven years ago, when the first complete human genome was decoded, the procedure cost $2.7 billion. Since then the price has dropped from $1 million two years ago, to $350,000 a year ago, to $100,000 a few months ago, to $48,000 today. Complete Genomics, a Silicon Valley biotech start-up, now has plans to sequence human genomes for $5,000. Some scientists predict that within a decade, the sequencing technologies used by Illumina will be cheap enough to be integrated into widely available health care programs.
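A back-of-the-envelope calculation on the article's own figures (the years are approximated from phrases like "seven years ago," so treat the result as indicative only) shows how steep that price decline is:

```python
# Rough check on the sequencing cost curve quoted in the article.
# Year assignments are approximate, inferred from the text itself.
import math

costs = {            # year -> US$ cost of a complete human genome, per the article
    2002: 2.7e9,     # first complete human genome ("seven years ago")
    2007: 1.0e6,
    2008: 3.5e5,
    2009: 4.8e4,     # Illumina's current price
}

# Average halving time implied by the first and last data points:
years = 2009 - 2002
fold = costs[2002] / costs[2009]            # ~56,000-fold drop
halving_time = years / math.log2(fold)
print(round(halving_time, 2))               # roughly 0.44 years per halving
```

A halving time well under a year outpaces the factor-of-two-per-year improvement Quake attributes to the sequencing platforms themselves, which is the article's point about the speed of this market.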

There are still a few hurdles to jump before we see DNA kits at the drugstore, however. For starters, some big names are still questioning the science behind the tests. Muin Khoury, director of public health genomics at the U.S. Centers for Disease Control and Prevention, says genes are not destiny, and mutations that put us at risk for a disease certainly don’t guarantee that we will develop it. Most diseases result from an interaction between genes and environment, he says, and you need to understand both parts of the puzzle to gauge the risk.

Perhaps the main barrier to widespread genetic testing, however, will be convincing people that this is a service they need. After all, sometimes the tests deliver bad news that doesn’t help you at all. Google co-founder Sergey Brin, whose wife Anne Wojcicki helped found 23andMe, discovered that he has a genetic mutation that sharply raises his risk for developing Parkinson’s disease. The 35-year-old billionaire, who gives himself “50-50 odds” of developing the neurological disorder, has announced funding for a major new study of its genetic underpinnings, but there’s little else he can do.

Such diseases are rare, however, and if you find out you have a higher-than-average risk for ailments such as diabetes or heart disease, there’s plenty you can do to improve your odds. That’s why boosters like Shmigelsky suggest the coming decade will be the “decade of genomics,” and that eventually all children will have their genomes sequenced at birth. The “genie is out of the bottle,” says Khoury. And as the price continues to drop, that genie will only grow stronger.

[Your DNA thus has to be matched to the "environment" for prevention - the DTC model of HolGenTech; Pellionisz, HolGenTech_at_gmail.com, Aug 14, 2009]


Health 2.0 could shock the system

Financial Times
By Esther Dyson
Published: August 12 2009 19:41 | Last updated: August 12 2009 19:41

Can the grassroots internet do for health what it is doing for politics?

America’s political culture has been revitalised in recent years as citizens have become newly vocal and engaged, in large part through the web. The internet gives all of us the ability to get involved, whether by signing up to a presidential candidate’s mailing list, participating in local politics through blogs or even just sharing photos and interests on social networking sites.

Whatever happens with President Barack Obama’s plans for healthcare reform, the same thing is starting to happen in the field of health itself – that is, that happy state in which you do not need much from the medical system. Remember: “That government is best, which governs least”?

Indeed, healthcare is ripe for the kind of revolution we are having in politics, due both to a backlash against the dysfunctionality of the old system and to the power that the internet gives people to collect information and organise themselves.

The internet is changing people’s expectations of what they have a right to know and say: just as they expect to know more about their politicians, they expect to know more about their own health institutions – and to criticise them publicly. Websites let people rate their own doctors and hospitals, even as public pressure and occasionally public rules demand more and more transparency about performance and outcomes.

With these resources, people are taking a more active role in their own health. Instead of relying on the medical establishment, they are searching for information on the internet in order to do for themselves what institutions cannot or do not.

They are using online tools to generate and manage information about themselves. Health and healthcare are personal in a way that politics is not. Following the lead of diabetics and others with chronic diseases who monitor themselves and share data on sites such as PatientsLikeMe, healthy people are monitoring their own blood pressure, exercise and other data.

If this sounds bizarre, remember that just a few years ago it would have seemed odd to “manage” your friendships online; now that is taken for granted. People create and share user-generated content; why not collect and share user-generated data?

Hundreds of start-ups, not to mention Google and Microsoft, are developing tools and services to let individuals manage their own health information, even as the government encourages institutions to make all health data electronic.

But those medical institutions are talking about electronic medical records for their own use; the patients are perceived as subjects of the data, not owners. HealthDataRights.org is a website where people can sign up to assert their rights to their own health data. That sounds obvious, but have you tried getting your own records lately? More specifically, in some US states (New York and California), there are regulations that prevent you from seeing your own genetic data without a doctor’s involvement.

From the public-health point of view, all this is meaningful only if it can affect behaviour and, in turn, result in a healthier population overall. The establishment is sceptical, but consider how attitudes and behaviour have changed on smoking. Consider the impact of community support (Alcoholics Anonymous, for example) on the behaviour of people who recognise they have a problem.

We have only begun to see how good information – and specific, personal, quantifiable information – can affect people’s behaviour. It is hard to eat and exercise right if you cannot see the impact. But better information and modelling tools can shed light on how, for example, a particular diet might make you look and feel years from now. Data on blood pressure trends, meanwhile, can have an immediate impact on behaviour.

Not everyone will become a “self-quantifier”, but the choices of those who do will shape overall consumer consciousness and, in turn, the products and services offered by restaurants and food companies, the popularity of gyms and exercise overall, and the role models we emulate.

Ultimately, of course, individual involvement can have an impact on the healthcare system too. Whatever Congress decides about funding and restructuring healthcare, increasingly informed individuals will make better choices about doctors, treatments and hospitals – providing market-style feedback that has been lacking in the system until now.

Doctors are horrified that they could be chosen in the same way someone would choose a restaurant – but why not? We pick a restaurant for a variety of reasons, even as we rely on government inspection to make sure the food is safe. It is now up to the medical establishment to start presenting information in a way that is truthful, relevant and intelligible. It is not beyond the capacity of the public to understand that hospitals treating sicker patients might have worse outcomes, and that one therefore needs to concentrate on relative outcomes. If people can understand baseball statistics, they can understand that a 30 per cent chance of cancer is neither a death sentence nor a licence to avoid screening.

Real improvement in the healthcare system depends only partly on who pays. It also has to do with who makes the choices – and whether they have enough information and incentives to do so wisely. In the end, all politics is local. And health begins at home.

The writer is involved with a number of “health 2.0” organisations, including 23andMe, PatientsLikeMe, Patients-KnowBest, ReliefInsite, Keas, Epernicus, HealthDataRights.org and the Personal Genome Project, for which she has published her entire genome online

[As pointed out on YouTube, with the Internet the "Health sector" will blur with, e.g., "Web searching", and in the Genome-Based Economy people will get personalized, quantitated, Genome-Based Recommendations. As the cost of health care rises and its quality declines, individuals will seek out modern means of prevention (e.g. through genome testing and genome analysis, leading to Personal Genome Assistants in their hands, to barcode products that fit or fix their genome). Soon, the government will realize that it simply does not have the budget and gray matter to invent, develop and deploy the 21st-Century technology of Health 2.0, free for all. Very soon, the government will cry uncle: "Prevention!!!" It costs the government nothing to let private industry thrive on inventing, developing and deploying modern electronic means of prevention. People already know that prevention is cheaper than hospital bills and sky-high medical insurance. The most expensive DTC genome testing costs $999, and some can be had for $99. How do these prices compare to hospital bills and a yearly health insurance premium? Pellionisz, HolGenTech_at_gmail.com, Aug 13, 2009]


Nancy Brinker: Finding a national cancer strategy
06:18 PM CDT on Monday, August 10, 2009

[Earlier coverage in this column on Nancy Brinker. - Breast Cancer Prevention may be the target of a pilot-project of "Personal Genome Assistant" for genome-based-shopping-for-prevention (presently, GooglePhone, but Apple's iPhone is already in the "Genome Space" and Texas Instruments, HP and other PDA-s are also considered by HolGenTech - AJP]

[Must be thinking of Susan ... - AJP]

As a 9-year-old girl, in 1955, I remember waking to church bells ringing in my hometown of Peoria, Ill. Mom and Dad told me that factories stopped and teachers wept as news spread of a polio vaccine.

We celebrated because life would be better - free of a calamitous disease discussed only in whispers, free of the sight of friends in iron lungs and leg braces. We lined up in the school cafeteria and gladly thrust our arms forward for the injections that would let us go wherever we wanted, unafraid of this deadly virus.

We knew then that Americans could conquer even the most feared disease of our generation when we marshaled the national will and did the hard work.

This was the inspiration for two little girls a few years earlier, me, age 5, and my sister, Susan G. Komen, age 8, to hold a fundraiser that only little girls would hold for polio — a talent show in our backyard. Years later, when Susan died of another feared disease — breast cancer - and I later was diagnosed with the disease, I would marshal that same spirit to challenge this cancer with a foundation named in my sister’s memory - Susan G. Komen for the Cure.

Fueled by passionate and determined women and men around the world, we created a global movement against breast cancer, working the long hours and raising the funds that would lead to a cure. We’ve made tremendous progress in just one generation, with five-year survival rates of 98 percent for cancers that haven’t spread from the breast (compared with 74 percent in Susan’s time). We today have new treatments, better detection and frank discussions about a disease that was once also discussed only in whispers.

Most satisfying to me, as Goodwill Ambassador for Cancer Control to the World Health Organization, is that what we’re learning in breast cancer research is bringing new hope in the fight against all forms of cancer.

This is critical, because cancer is quickly becoming the leading killer worldwide: One in two men and one in three women will be stricken in their lifetimes. While we seek the cures, we are at a place today where many cancers can be, at the very least, chronic diseases, saving many from a virtual death sentence.

We need America to lead again. We need the agreement of lawmakers. We must make our voices heard in Washington for a national cancer strategy: to broker ideas, to meet each other halfway, to allow physicians to determine how best to help cancer patients, to speed up discovery of the cures and to make treatments available for all.

Everywhere I go, I hear the stories of husbands, wives, mothers, sisters, aunts and uncles, cut down by cancer, just as my sister was cut down at the age of 36. I marvel at the strength of the survivors and their resolve to find a cure. The national will is there.

We must put that national will to work. We must meet - with clear minds and hearts full of hope - this challenge, to create a world where cancer is conquered. The world waits for America to lead, and for the church bells to ring once again.

Dallas’ Nancy Brinker will receive the Presidential Medal of Freedom, our nation’s highest civilian honor, tomorrow for her work leading Susan G. Komen for the Cure and the global cancer movement. She is Goodwill Ambassador for Cancer Control for the United Nations’ World Health Organization. Her e-mail address is news_at_komen.org.


[An Open Letter - too long for "comments" field of the above article]

Dear Ms. Brinker:

Beyond congratulations - difficult to express in words - on the highest decoration in the USA, I am sending you this letter to request your participation as an Originator-Founder of the International HoloGenomics Society. By joining the 60+ existing Founders, not as an individual but as the Originator-Founder of your "Susan G. Komen Breast Cancer Foundation", you will open a new category of Founders in our Society, representing "Charitable Organizations advocating the cure of diseases with roots lying also elsewhere than in the minuscule exonic protein-coding DNA".

With your help, the international community may hold the World Inaugural of the post-ENCODE science of HoloGenomics, focused on HoloGenome Regulation. Let us join forces at what we consider a crucial stage: formulating the right strategy before plunging into a Second War on Cancer, so that this time we can get it right! Please note the following explanation of where efforts have thus far been confounded and why we can now succeed.

Make (Prior) Strategy, Not War!

A repeat of Nixon's 1971 "War on Cancer" was called for in an Op-Ed article in The New York Times on August 7, 2009 by the chancellor emeritus of Cold Spring Harbor Laboratory, Nobel Prize winner Dr. James Watson.

Haven’t we won the First War? From his call it appears we haven’t. He concludes: “While overall cancer death rates in the United States began to decrease slowly in the 1990s, cancer continues to take an appalling toll, claiming nearly 560,000 lives in 2006, some 200,000 more fatalities than in the year before the War on Cancer began. Any claim that we are still “at war” elicits painful sarcasm. Hardly anyone I know works on Sunday or even much on Saturday, as almost no one believes that his or her current work will soon lead to a big cure”.

“The War on Cancer is a bunch of sh@t,” Watson sniped earlier, in his trademark style of great subtlety, on the website of peopleagainstcancer.com, along with other depressing opinions - as the success story of breast cancer does not apply to all cancers:

Despite nearly 3 trillion dollars spent on research and treatment, America has no plan to prevent cancer and no settled strategy to deploy real innovation into a new “war on cancer”.

It has been estimated that less than two percent of the budget of the National Cancer Institute and the American Cancer Society is focused on prevention. Of this paltry sum, most is spent on diagnostic procedures like colonoscopies and mammograms which are wrongly included in the budget as prevention.

Pauling, one of the premier minds of the 20th century, said: “I have never observed a disease that was not directly linked to a significant nutritional deficiency.”

Considering the common wisdom that “in between two wars, generals prepare to win the war they fought before”, the title of Watson’s call appears well-chosen: “To Fight Cancer, Know the Enemy”.

Perhaps we missed it the first time around?

Boy, did we! In 38 years, why have we not found “The Cancer Gene”? Maybe because no such single entity exists. Note that while Nixon declared a “war” in 1971, Ohno declared “Junk DNA” in 1972. By establishing the (false) premise that only the “genes” count (1.3%) and labeling the rest (98.7%) “Junk”, the vast majority of the DNA was consigned to be overlooked for at least the next 35 years. With Nobelist Gilbert's 1978 discovery of introns (the non-coding parts within "genes"), investigation was further narrowed to the exons alone (the protein-coding sequences of genes, much less than 1.3% of the human DNA), leaving vast seas of intergenic and intronic sequences in the dark along with the "Junk DNA". Where are BRCA1, BRCA2, etc. lurking in this “shell game” (and have we figured out the maze of their derailments)?

In the 21st Century, the “Junk DNA” has been taken down from the attic and out of the garage. The International PostGenetics Society (founded in 2005; now the International HoloGenomics Society) became the first international organization to officially abandon the “Junk DNA” misnomer as a scientific concept, at its European Inaugural Symposium in Budapest (2006) - 8 months before the NIH-led “ENCODE” did so in June of 2007.

While at Stanford even student rappers live it up “regulatin’ genes” (see YouTube), Dr. Watson - perhaps because he championed “gene discovery” for so long - falls a bit shy of calling the “enemy” by name: “cancer is a genome regulation disease” (see e.g. here). No “cure” is likely before a full understanding, or at least before one or two principles are re-thought, as the mastermind of ENCODE, Francis Collins, requested upon releasing the results of a veritable “genome revolution”.

Now we know that the Genome is not a complete system without considering the often carcinogenic Epigenomic factors, with Genomics and Epigenomics uniting in HoloGenomics.

We can clearly point to our lack of understanding of HoloGenome Regulation as the enemy, and the plan is to focus on this single most crucial issue. With your help, as a first step in implementing the strategy, not only Stanford rapper students but an international scientific community will convene to collectively "re-think some long-held views".

The US government called for this "re-think" in 2007, and the US is expected to lead, but we lack a mandate to govern the World, and even the US Government's Departments and Institutes are competitively segmented.

It takes global movements, like yours, to devise the best strategy for the allied forces that can be victorious for humankind.

We await your response. Thanks very much.


Andras J. Pellionisz, Ph.D.

Founder of International HoloGenomics Society


"To Fight Cancer, Know the Enemy" “The War on Cancer is a Bunch of Sh@t.” [Both by Nobelist James Watson]

Op-Ed in The New York Times
Published: August 5, 2009
Cold Spring Harbor, N.Y.

["The War on Cancer is a Bunch of Sh@t" - an earlier statement by Watson - see analysis below the NYT article]

THE National Cancer Institute, which has overseen American efforts on researching and combating cancers since 1971, should take on an ambitious new goal for the next decade: the development of new drugs that will provide lifelong cures for many, if not all, major cancers. Beating cancer now is a realistic ambition because, at long last, we largely know its true genetic and chemical characteristics.

This was not the case when President Richard Nixon and Congress declared a “war on cancer” more than 35 years ago. As a member of the new National Cancer Advisory Board, I argued that money for “pure cancer research” would be a more prudent expenditure of federal funds than creating new clinical cancer centers. My words, however, fell on deaf ears, and the institute took on a clinical mission. My reward for openly disagreeing was being kicked off the advisory board after only two years.

While overall cancer death rates in the United States began to decrease slowly in the 1990s, cancer continues to take an appalling toll, claiming nearly 560,000 lives in 2006, some 200,000 more fatalities than in the year before the War on Cancer began. Any claim that we are still “at war” elicits painful sarcasm. Hardly anyone I know works on Sunday or even much on Saturday, as almost no one believes that his or her current work will soon lead to a big cure.

A comprehensive overview of how cancer works did not begin to emerge until about 2000, with more extensive details about specific cancers beginning to pour forth only after the 2003 completion of the Human Genome Project (a breathtaking achievement that the Italian-born virologist and Nobel laureate Renato Dulbecco foresaw in 1985 as a necessary prerequisite for a deep understanding of cancer). We shall soon know all the genetic changes that underlie the major cancers that plague us. We already know most, if not all, of the major pathways through which cancer-inducing signals move through cells. Some 20 signal-blocking drugs are now in clinical testing after first being shown to block cancer in mice. A few, such as Herceptin and Tarceva, have Food and Drug Administration approval and are in widespread use.

Unfortunately, virtually none of these new drugs leads to a lifelong cure. In most instances, they can offer only modest extensions in survival time. This is partly because there are often many types of cancer-causing genetic “drivers” within single cancer cells. While getting a DNA diagnosis for the drivers of every individual cancer would help us to prescribe more specific regimens of chemotherapy, given the inherent genetic instability of most cancer cells, the use of drugs acting against single drivers would all too soon lead to the emergence of genetic variants driven by increasingly destructive second, if not third, drivers.

Most anticancer drugs, then, will probably never reach their full potential unless they are given in combination with other drugs developed against second or even third drivers. Yet current F.D.A. regulations effectively prohibit testing in combination new drugs that, when given alone, have proved ineffective.

While targeted combination chemotherapies would be a big step forward, I fear we still do not yet have in hand the “miracle drugs” that acting alone or in combination would stop most metastatic cancer cells in their tracks. To develop them, we may have to turn our main research focus away from decoding the genetic instructions behind cancer and toward understanding the chemical reactions within cancer cells. ...

This discovery indicates that we need bold new efforts to see if drugs that specifically inhibit the key enzymes involved in this glucose breakdown have anti-cancer activity. In the late 1940s, when I was working toward my doctorate, the top dogs of biology were its biochemists, who were trying to discover how the intermediary molecules of metabolism were made and broken down.

After my colleagues and I discovered the double helix of DNA, biology’s top dogs then became its molecular biologists, whose primary role was finding out how the information encoded by DNA sequences was used to make the nucleic acid and protein components of cells. Clever biochemists must again come to the fore to help us understand the cancer cell chemically as well as we do genetically.

While the major pharmaceutical and biotechnology corporations have the financial means to exploit their most promising drug candidates, that is not true of many of the smaller biotechnology companies that are doing highly innovative work. Their financing from venture-capital firms has drastically dwindled in this recession. The National Cancer Institute should come to their rescue, providing funds that will let their products move through animal testing to the exploratory phases of clinical testing in humans.

At the same time, the institute should provide much more money to major research-oriented cancer centers to let them take on the low probability-high payoff projects that pharmaceutical giants and, increasingly now, the big biotech companies almost reflexively turn down.

Restarting the War on Cancer has to start at the top: in 1971, Congress decided that the president, not the head of the National Institutes of Health, should appoint the director of the National Cancer Institute. Yet like all too many outposts of the White House, the institute has become a largely rudderless ship in dire need of a bold captain who will settle only for total victory. President Obama must choose strong new leadership for the institute from among our nation’s best cancer researchers; it also needs a seasoned developer of new pharmaceuticals who can radically speed up the pace at which anticancer drugs are developed and clinically tested.

I expect that my views will provoke rebuttals from prominent scientists who feel that it’s not yet the time to go all out against cancer, and that until victory is more certain we should not further tap our limited coffers for more big-cancer money. While they are right that victory will not come from money alone, neither will it come from biding our time.

[As shown e.g. by Nancy Brinker's article (above), the debate may not revolve around the question of whether a new attack on cancer is necessary, but rather around whether we have the right strategy before breaking out a war. (Pellionisz, HolGenTech_at_gmail.com, Aug 11, 2009)]


Not 'Genomic Junk' After All: LincRNAs Have Global Role In Genome Regulation

ScienceDaily (July 20, 2009) Earlier this year, a scientific team from Beth Israel Deaconess Medical Center (BIDMC) and the Broad Institute identified a class of RNA genes known as large intervening non-coding RNAs or "lincRNAs," a discovery that has pushed the field forward in understanding the roles of these molecules in many biological processes, including stem cell pluripotency, cell cycle regulation, and the innate immune response.

But even as one question was being answered, another was close on its heels: What, exactly, were these mysterious molecules doing?

They now appear to have found an important clue. Described in the July 14 issue of the Proceedings of the National Academy of Sciences (PNAS) the scientific team from BIDMC and the Broad Institute shows that lincRNAs – once dismissed as "genomic junk" - have a global role in genome regulation, ferrying proteins to assist their regulation at specific regions of the genome.

"I like to think of them as genetic air traffic controllers," explains co-senior author John Rinn, PhD, a Harvard Medical School Assistant Professor of Pathology at BIDMC and Associate Member of the Broad Institute. "It has long been a mystery as to how widely expressed proteins shape the fate of cells. How does the same protein know to regulate one genomic location in a brain cell and regulate a different genomic region in a liver cell? Our study suggests that in the same way that air traffic controllers organize planes in the air, lincRNAs may be organizing key chromatin complexes in the cell."

Inspired by a lincRNA called HOTAIR -- which is known to bind key chromatin modifier proteins and to assist in getting these proteins to the proper location in the genome – the researchers hypothesized that other lincRNA molecules might be playing similar roles.

"DNA wraps around partner proteins to form a structure called chromatin, which affects which genes are 'turned on' and which are 'turned off'," explains first author Ahmad Khalil, PhD, a scientist in the Department of Pathology at BIDMC and the Broad Institute. "Chromatin does this through a process of compaction; by determining which areas to compact and which to leave open, chromatin successfully determines which genes are accessible for transcription."

But he adds, it has been a mystery as to how this chromatin structure is so precisely targeted by specific enzymes – and not others.

"By utilizing a technology known as RIP-Chip we were able to examine RNA-protein interaction on a large scale and determine which lincRNAs are associated with each enzyme we examined," he adds. To analyze this tremendous amount of data, coauthor Mitchell Guttman, PhD, a bioinformatician at BIDMC and the Broad Institute, used a mathematical algorithm that identified which lincRNAs are bound by chromatin-modifying enzymes and which are not.

"This analysis revealed that 20 to 30 percent of lincRNAs are bound by three distinct chromatin-modifying complexes," adds Khalil. "By depleting several of these lincRNAs from cells, we were able to show a significant overlap between the genes which become affected by the depletion of lincRNAs and the depletion of the enzymes. This provided us with the evidence that these proteins work together with lincRNAs to target specific regions of the genome."
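The "significant overlap" Khalil describes - between genes affected by depleting a lincRNA and genes affected by depleting its partner enzyme - is the kind of comparison typically scored with a hypergeometric test. A minimal sketch with invented gene sets (the paper's actual data and statistics may differ):

```python
from math import comb

def hypergeom_overlap_pvalue(universe, set_a, set_b):
    """P(overlap >= observed) if set_a and set_b were drawn
    independently from `universe` (hypergeometric upper tail)."""
    N, K, n = len(universe), len(set_a), len(set_b)
    k = len(set_a & set_b)  # observed overlap
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Hypothetical example: genes dysregulated after lincRNA depletion
# vs. genes dysregulated after depleting the candidate enzyme.
universe = {f"gene{i}" for i in range(1000)}
linc_dep = {f"gene{i}" for i in range(0, 60)}      # 60 genes
enzyme_dep = {f"gene{i}" for i in range(30, 100)}  # 70 genes, 30 shared
p = hypergeom_overlap_pvalue(universe, linc_dep, enzyme_dep)
print(f"overlap = {len(linc_dep & enzyme_dep)}, p = {p:.3g}")
```

With 30 genes shared where about 4 would be expected by chance, the tiny p-value is what would justify calling the overlap significant.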

Standard "textbook" genes encode RNAs that are translated into proteins, and mammalian genomes contain about 20,000 such protein-coding genes. Some genes, however, encode functional RNAs that are never translated into proteins. These include a handful of classical examples known for decades and some recently discovered classes of tiny RNAs, such as microRNAs. By contrast, lincRNAs are thousands of bases long. Because only about 10 examples of functional RNAs were previously identified, they seemed more like genomic oddities than key players. With these latest findings, which also uncovered an additional 1,500 lincRNAs, it's clear these RNA molecules are no mere messengers – they have demonstrated that they can and do play a leading role.

"Much work still remains to be done but we could one day envision utilizing RNA to guide personalized stem cells to specific cell fates to restore diseased and degenerative disease tissues," notes Rinn.

This study was funded by grants from the National Institutes of Health, and support from the Damon Runyon Cancer Foundation, the Smith Family Foundation and the National Human Genome Research Institute.

In addition to Rinn, Khalil, and Guttman, coauthors include Broad Institute investigators Eric Lander (co-senior author), Manuel Garber, Aviva Presser, Bradley Bernstein, and Aviv Regev; Maite Huarte, Dianali Rivera Morales, and Kelly Thomas of BIDMC and the Broad Institute; and Arjun Raj and Alexander van Oudenaarden of the Massachusetts Institute of Technology.

[Eric Lander, with his first degree in mathematics, and now on the Science Advisory Board to the President, clearly identifies regulation as a key to our understanding of genome function. (Pellionisz, HolGenTech_at_gmail.com, Aug 7, 2009)]


Senate confirms new NIH director [Francis Collins]

(AP) – 27 minutes ago

WASHINGTON — The Senate on Friday confirmed Dr. Francis Collins, a scientist who helped unravel the human genetic code, as director of the National Institutes of Health.

Collins led the Human Genome Project that, along with a competing private company, mapped the genetic code — or, as he famously called it, "the book of human life." He was awarded the Presidential Medal of Freedom, the highest civilian award, but may be more widely known for his 2007 best-selling book, "The Language of God: A Scientist Presents Evidence for Belief."

"Dr. Collins is one of our generation's great scientific leaders," Health and Human Services Secretary Kathleen Sebelius said in a statement. "Dr. Collins will be an outstanding leader. Today is an exciting day for NIH and for science in this country."

The NIH is the nation's premier medical research agency, directing $29.5 billion to spur innovative science intended to lead to better health.

Collins, who was nominated to the post by President Barack Obama in July, was confirmed by voice vote.

NIH is familiar turf: Collins spent 15 years as the NIH's chief of genome research, before stepping down last year to, among other things, work with Obama's campaign. He also helped found the BioLogos Foundation, a Web site formed by a group of scientists who say they want to bridge gaps between science and religion.

[The "voice vote" - bypassing regular Congressional hearings - is important, since the end of the government fiscal year is rapidly approaching. (Pellionisz, HolGenTech_at_gmail.com, Aug 7, 2009)]


Your Genome - There is an App for that

By Jack M. Germain
08/03/09 6:00 AM PT

Analyzing one's own genome can be fascinating and informative. Combined with a smartphone and the right software, it can also provide information on how to live a healthier life and make informed decisions about everything, down to which products to buy and which to avoid at the supermarket.

Don't look now, but we may be about to enter the genome-based economy.

.... You could carry a specially tuned smartphone able to scan the UPC (universal product code) of any clothing or food item you buy.

These new inventions - what Dr. Andras Pellionisz calls your "Personal Genome Assistant," or PGA - will ferret out substances that are harmful given the one or more conditions mapped in your personalized genome report. Pellionisz is the founder of HolGenTech.

Pellionisz has already proven the concept of the PGA with several working models that scan supermarket UPC codes. His firm also produces genome reports for customers seeking help with their health issues.

Lest you think that Pellionisz is off by himself in left field, know that several other companies are already deeply involved in applying modern genomic knowledge into useful applications. One such firm is 23andMe... Also, let's not forget ... George Church when he sold on eBay his Knome genome computer [software] and genome sequencing package at auction for US$68,000.

"The benefits will revolutionize healthcare and disease prevention in years to come," Pellionisz told TechNewsWorld.

...The U.S. government began the Human Genome Project in the late 1980s under the sponsorship of the NIH (National Institutes of Health); it took 13 years and $3 billion to complete, said Pellionisz.

A New View

In Pellionisz's perspective, that was a lot of money spent for very incomplete results. The Human Genome Project identified only 30,000 genes. "The project missed 110,000 genes," he said, referencing the latest gene research.

A new era of research is marked by what Pellionisz called "hologenomics." ...

No Junk DNA

So-called junk DNA represents a sizable portion of the DNA sequence of a genome. Longstanding opinion has held that so-called junk DNA is just that -- junk. Recent research, however, suggests that some junk DNA may have a job after all.

Long-held beliefs may need re-working. For some 50 years the paradigm has been that DNA [information] only moves in one direction -- from DNA to RNA to protein, explained Pellionisz.

"... thinking [was] that the DNA you were born with is the DNA you die with and nothing such as food or diseases can change it," he suggested.

Some estimates hold that junk code is responsible for more than 150,000 diseases. Recursion and [the function of] junk DNA come into vital play. DNA can be affected and changed. This gives new hope, and new cause for investment.

Platform for Health

New ideas about the true role of so-called junk DNA are fostering a keen awareness, among both the scientific world and health-conscious customers, of new businesses offering a sort of genome counseling....

The PGA scan device is tuned to the specific results of the customer's genome analysis. Is that too hefty a price to pay for better insight into one's health? Or is the shortage contrived to drive up the demand?

"Limited supplies and higher selling price are typical for any industry developing new technology. A genome computer will be much faster at processing the analysis," Pellionisz said, noting that the prices will drop as momentum builds.

Machine Control

Today, direct-to-customer genome testing companies such as 23andMe rely on Illumina/Affymetrix microarray technology to analyze up to 1.6 million SNPs (single nucleotide polymorphisms and point mutations among the 6.2 billion A,C,T,G letters/nucleotide bases of human DNA). ... nano-sequencing technology... holds the promise of affordable and readily available personal genome analysis by the end of 2009.
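Microarray results of this kind are commonly delivered to the customer as a flat text file, one SNP per line. A hypothetical sketch of parsing such a file (the tab-separated rsid/chromosome/position/genotype layout is assumed for illustration; real files run to roughly 1.6 million rows):

```python
import io

def parse_raw_genotypes(fh):
    """Parse a raw genotype file: tab-separated columns
    rsid, chromosome, position, genotype; '#' lines are comments."""
    calls = {}
    for line in fh:
        if line.startswith("#") or not line.strip():
            continue
        rsid, chrom, pos, genotype = line.rstrip("\n").split("\t")
        calls[rsid] = {"chrom": chrom, "pos": int(pos), "genotype": genotype}
    return calls

# Invented two-row example file.
raw = io.StringIO(
    "# rsid\tchromosome\tposition\tgenotype\n"
    "rs4420638\t19\t50114786\tAG\n"
    "rs429358\t19\t50103781\tCT\n"
)
calls = parse_raw_genotypes(raw)
print(calls["rs429358"]["genotype"])  # CT
```

Indexing the calls by rsID is what makes the later lookup step - checking a customer's genotype at a risk-associated SNP - a constant-time operation.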

23andMe provides detailed analysis in a genome report on a customer's DNA. The company has no devices to help customers apply the information they learn about their DNA, but the analysis often provides more information and insight into a person's health risks. This forms the basis for enlightened change.

In some cases, the customers can apply the information to make more informed choices about the way they live. In other cases the genome analysis reinforces what customers already knew or suspected, given their family heritage, according to Esther Dyson, a director on the board of 23andMe.

... The company used to offer the genome analysis for $1,000. Now that price is $399, the result of a larger customer base now, according to Dyson.

Device World

Pellionisz debuted a consumer application for personal genomes, running on an Android-based smartphone, at the inaugural Consumer Genetics Show in Boston last spring. Later that same day, Illumina CEO Jay Flatley featured a different business-model application for personal genomes on the iPhone.

Using the Android device's built-in barcode reader, Pellionisz demonstrated how personal genome computing can detect genome-friendly and genome-supportive products, from foods to cosmetics to building materials. In the demonstration, the device user was assumed to have a genomic proclivity to Parkinson's Disease.

The demonstration leveraged the handset's barcode reader to capture product information. It used a product rating scale to identify any product's prevention efficacy. These demonstrations of personal genome handheld device applications could well be the tip of a future genomic iceberg.
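The demonstration's flow (scan UPC, resolve it to ingredients, score the ingredients against the user's genomic risk profile) can be sketched as below. The product data, risk weights, and scoring rule are invented for illustration and are not HolGenTech's actual algorithm:

```python
# Hypothetical "Personal Genome Assistant" rating loop: a scanned UPC
# is resolved to ingredients, which are scored against substances
# flagged for the user's genomic risk profile.

PRODUCTS = {  # UPC -> ingredient list (invented data)
    "012345678905": ["water", "caffeine", "sugar"],
    "098765432109": ["water", "green tea extract"],
}

# Invented rule set for one user's profile: negative weights for
# substances to avoid, positive for supportive ones.
RISK_WEIGHTS = {"caffeine": -2, "sugar": -1, "green tea extract": +2}

def rate_product(upc, weights):
    """Return (score, flagged ingredients) for a scanned UPC."""
    ingredients = PRODUCTS.get(upc, [])
    score = sum(weights.get(i, 0) for i in ingredients)
    flagged = [i for i in ingredients if i in weights]
    return score, flagged

score, flagged = rate_product("098765432109", RISK_WEIGHTS)
print(score, flagged)  # 2 ['green tea extract']
```

The essential design point is that the genome analysis is done once, offline, and compiled into a small weight table; the handheld device then only performs cheap lookups at scan time.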

The use of such devices, Pellionisz said, can affect individual choices and create new awareness and understanding of how the world around us impacts the one within us. The personal genome accessed via handheld applications could present new insights to the near-term future.

Super G Computer

The development of less costly genome computers is also part of this platform for a genome-based economy.

"We look to [both serial and parallel] chipmakers -- Intel, AMD, [and] Xilinx, ... Altera, [respectively] -- and to integrators like HP, Dell, DRC and even IT giants like Google and Microsoft, for next developments in parallel processing to produce HPC [hybrid serial/parallel Personal Genome Computers], desktop and server lines as the IT infrastructure of the genome based economy," said Pellionisz.

HolGenTech software, built from some open source but mostly proprietary code, will yield several hundred-fold acceleration once ported to parallel processors, Pellionisz claimed. This will make it possible to fine-comb a person's genome.
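The "fine-combing" acceleration rests on a standard decomposition: split the sequence into overlapping chunks and scan them concurrently. The toy sketch below uses Python threads only to illustrate the chunk/merge pattern; the motif is arbitrary, and a real port of the kind described would target processes, GPUs or FPGA fabric rather than threads:

```python
from concurrent.futures import ThreadPoolExecutor

MOTIF = "GAGGAG"            # arbitrary toy motif
OVERLAP = len(MOTIF) - 1    # chunks overlap so boundary-spanning hits survive

def count_motif(chunk):
    return chunk.count(MOTIF)

def parallel_scan(sequence, workers=4):
    """Count motif occurrences by scanning overlapping chunks concurrently."""
    size = max(1, len(sequence) // workers)
    chunks = [sequence[i:i + size + OVERLAP]
              for i in range(0, len(sequence), size)]
    with ThreadPoolExecutor(workers) as pool:
        return sum(pool.map(count_motif, chunks))

genome = "ACGT" * 1000 + "GAGGAG" + "ACGT" * 1000
print(parallel_scan(genome))  # 1
```

Because each chunk is independent, the same decomposition maps directly onto the hybrid serial/parallel hardware named in the article; the merge step (here, a sum) is the only serial bottleneck.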

Sound Business

The potential for fertile business opportunity for companies that participate in the genome economy is strong. However, like any business, an element of risk is involved.

"The World Wide Web was unexpectedly easy. A few people had Internet access but didn't do much with it. In one year ... it all went from zero to millions. Other things take longer. For instance, The Apple I computer left people wondering what to do with it. Apple sold millions of its Apple II computer," George Church, creator of the Knome genome computer [software], told TechNewsWorld.

However, vendors already providing genome-related services are seeing growing interest in the field, said both Dyson and Church. The same can be said for investors in these companies.

Investors like products they can wrap their hands around, and are often attracted to being among the first to back a new product concept. This will appeal to those investors with a widget mentality, noted Church....

[Perhaps the two most important business aspects are the following. (1) There is an imminent flood of "full human DNA sequences": Complete Genomics is avalanching the field with 10,000 full DNA sequences in 2009, at $5,000 each, and is planning 20,000 in 2010; Pacific Biosciences will escalate the supply with $100-range full sequences a year from now. This supply of raw data is simply not matched by the capacity for the ever-more-complex full DNA analyses it demands. (2) Even where analysis of full DNA is available at a price (Illumina provides full DNA analysis and interpretation at $48,000 at present, and very partial genomic testing results can be had for $99/$399/$999 from 23andMe and Navigenics), the genomic results are not connected by suitable instrumentation (a Personal Genome Assistant) to practical means of prevention. HolGenTech addresses both: its software architecture for the Genome Based Economy runs genome analysis on Personal Genome Computers (genome software on available hybrid serial/parallel hardware) and deploys the results through its Genome Based Recommendation Engine, turning PDAs into Personal Genome Assistants that support genome-based consumer decisions. Pellionisz, HolGenTech_at_gmail.com, Aug 3, 2009]


Roche and Google.org start initiative for early discovery of new diseases

Roche Applied Science and Google.org recently started a joint project to demonstrate the feasibility of developing a multidisciplinary surveillance, research and response system. This system will enhance the ability to predict and prevent emerging infectious diseases in East Africa, a region known as one of the major hotbeds for the emergence of new infectious viral agents and new strains of known viruses.

Roche has donated a Genome Sequencer FLX system as backbone of this project. “We are proud to work with Google.org, and the dedicated research organizations in Kenya to bring this technology to a region of the world where novel viruses frequently emerge. We are confident that access to the 454 Sequencing Systems will improve monitoring of novel infectious diseases and enable faster discovery in case of an outbreak,” said Chris McLeod, CEO of 454 Life Sciences, a member of the Roche Group.

The project will focus primarily on arboviruses (arthropod-borne viruses), a large group of viruses which frequently cause emergent disease and are transmitted by blood-sucking insects and their arthropod cousins, such as ticks. The first disease the project will tackle is Rift Valley Fever (RVF), a lethal disease of livestock and people caused by an arbovirus spread by mosquito vectors. The initiative will:

- Survey human, livestock, wildlife and vector populations to monitor the circulation, transmission and maintenance of arboviruses within them, with a focus on RVF virus

- Employ state-of-the-art genomics and knowledge management systems to advance understanding of the dynamics and diversity of disease-causing agents, their vectors and hosts, and

- Link this wealth of new information to existing risk information and decision support tools to provide early warning of disease outbreaks and enable rapid responses to control them.

Google.org, the philanthropic arm of Google, provided a US$5-million grant to icipe (International Centre of Insect Physiology and Ecology) and partners late last year to enhance insect-borne disease discovery and surveillance in East Africa. In Nairobi, icipe, ILRI (International Livestock Research Institute), Kenya’s national organizations for health (Ministry of Health, Ministry of Public Health and Kenya Medical Research Institute), livestock (Department of Veterinary Services and Kenya Agricultural Research Institute) and wildlife (Kenya Wildlife Services) have been chosen to participate in the project.

“We will be fortunate to have a GS FLX instrument initially donated to support the Arbovirus infection surveillance in Kenya, with plans to subsequently train and bring on board other research partners in East Africa. The region has experienced large epidemics of arboviral diseases, such as Rift Valley Fever, Dengue and Yellow Fever just to mention a few. Surveillance to monitor circulation of such agents is critical in informing public health decision for early warning and response,” says Christian Borgemeister, Director General of icipe.

Rosemary C. Sang, research scientist, added: “This new technology will play a very important role in promoting the capacity of surveillance and research groups in East Africa, to leap over constraints of currently available technologies and be able to discover infectious agents circulating unrecognized in our environment and monitor the evolutionary trends in the viral pathogens in order to remain current with the diagnostics and management options.”

Emerging infectious diseases are a significant burden to our global economies and public health systems. Approximately 70% of emerging diseases are zoonotic, meaning that they are transmissible between humans and animals. Through the study of insects, icipe works to improve food production, human, animal, environmental health and natural resource conservation. In addition, it captures indigenous expertise, builds and strengthens local capacity. icipe conducts research and develops methods for pest control that are selective, non-polluting and not susceptible to resistance. The aim is to develop affordable, sustainable and conservation oriented solutions.

The Roche Genome Sequencer platform will be established within the Nairobi laboratories of ILRI and a regional joint venture called Biosciences Eastern and Central Africa (BecA). The ILRI-BecA Hub provides a biosciences research and bioinformatics platform linked to a network of laboratories distributed throughout Eastern and Central Africa, serving a new generation of African scientists.

The high-performance 454 Sequencing System has proven a powerful pathogen discovery tool in a series of recent novel virus outbreaks. In late 2008, for example, the system was used to discover a new zoonotic arena virus responsible for a highly fatal hemorrhagic fever outbreak in South Africa. Earlier that year, as reported in the New England Journal of Medicine, the system was used to identify a previously undetected virus responsible for the death of three transplant recipients in Australia.

[Google and Roche co-invested in Genomics (23andMe.com) - thus it is not surprising that they closely collaborate for charitable purposes. Pellionisz, HolGenTech_at_gmail.com, July 31st, 2009]


How To Make A Fortune From The Personalized Medicine Revolution
Jim Nelson Thursday, July 30, 2009

Many of the big transformational technologies set to change the science of medicine are based on single simple concepts. These include stem cells and RNA interference. There is another transformational change coming, however, that involves a huge array of technologies. I’m talking about “personalized medicine.”

Currently, medicine is, to a large degree, a “one size fits all” proposition. Doctors watch for adverse effects and check personal and family histories. Medical technologies, however, are designed for the general population, not individuals.

That’s going to change…

The Problem With the “Normal Curve”

We know that many current treatments work on some people, yet not others. Some drugs are safe for many people, but have dangerous side effects for others. This is because all of us have individual differences in our genetic code based on heredity and environment. Even slight differences can lead to very different reactions to medications.

This has created serious regulatory problems. Drugs are denied regulatory approval not because they do not work, but because some fraction of the population suffers adverse effects. As a result, we are often denied incredibly effective therapies simply because they are not universally effective.

This shockingly primitive state of affairs exists because, until very lately, we simply have not had the tools to get to the genetic roots of disease. Scientists and pharmaceutical companies haven’t precisely known how a particular drug’s chemical profile interacts with a genetic one. Medical science, in turn, has been unable to tailor drugs to work with a specific genetic makeup.

The Impact of the Genome

This is rapidly changing. Just a few short years ago, the human genome was first mapped. The genome, as you know, is the entire collection of genetic code that defines us at a biological level. Now scientists are studying single genes and their individual expressions.

It is meaningful, from the investor’s perspective, that Dr. Francis Collins, the head of the Human Genome Project, has just been selected by the Obama administration to head up the National Institutes of Health. Collins has long been a prominent champion of using the knowledge gained from the human genome to accelerate personalized medicine.

This is important because institutional forces, with lobbying clout, always resist change. Much of Big Pharma, and its regulators, are vested in the “one size fits all” model. Many of the old players fear personalized medicine because it threatens the existing hierarchy. Collins’ presence at the top of the NIH will help counter this institutional resistance.

Incidentally, Collins has stated that genomics is currently where the computer industry was back in the 1970s - at the beginning of a technological revolution. While he was speaking in scientific terms, we should remember that the ’70s was also the right time to begin investing in a diversified portfolio of breakthrough computer technologies. Those who did so, despite claims that it was too risky or early, were made rich.

Dr. Collins is not alone in his views about personalized medicine. Dr. Andrew von Eschenbach, FDA commissioner under George W. Bush, urges that the FDA approval process be overhauled and streamlined to help accelerate the adoption of personalized medicine. He is on record predicting that the medical industry will, in fact, undergo this profound metamorphosis.

[Francis Collins, once officially nominated and confirmed by Congress is likely to steer NIH forcefully into the direction of Personalized Medicine and Prevention. Pellionisz, HolGenTech_at_gmail.com, July 30th, 2009]


The 15-Minute Genome: Faster, Cheaper Genome Sequencing On The Way

ScienceDaily (July 29, 2009) — In the race for faster, cheaper ways to read human genomes, Pacific Biosciences is hoping to set a new benchmark with technology that watches DNA being copied in real time. The device is being developed to sequence DNA at speeds 20,000 times faster than second-generation sequencers currently on the market and will ultimately have a price tag of $100 per genome.

Chief Technology Officer Stephen Turner of Pacific Biosciences will discuss Single Molecule Real-Time (SMRT) sequencing, due to be released commercially in 2010, at the 2009 Industrial Physics Forum, a component of the 51st Annual Meeting of American Association of Physicists in Medicine, which takes place from July 26 - 30 in Anaheim, California.

A decade ago, it took Celera Genomics and the Human Genome Project years to sequence complete human genomes. In 2008, James Watson's entire genetic code was read by a new generation of technology in months. SMRT sequencing aims to eventually accomplish the same feat in minutes.

The method used in the Human Genome Project, Sanger sequencing, taps into the cell's natural machinery for replicating DNA. The enzyme DNA polymerase is used to copy strands of DNA, creating billions of fragments of varying length. Each fragment -- a chain of building blocks called nucleotides -- ends with a tiny fluorescent molecule that identifies only the last nucleotide in the chain. By lining these fragments up according to length, their glowing tips can be read off like letters on a page.

Instead of inspecting DNA copies after polymerase has done its work, SMRT sequencing watches the enzyme in real time as it races along and copies an individual strand stuck to the bottom of a tiny well. Every nucleotide used to make the copy is attached to its own fluorescent molecule that lights up when the nucleotide is incorporated. This light is spotted by a detector that identifies the color and the nucleotide -- A, C, G, or T.

By repeating this process simultaneously in many wells, the technology hopes to bring about a substantial boost in sequencing speed. "When we reach a million separate molecules that we're able to sequence at once … we'll be able to sequence the entire human genome in less than 15 minutes," said Turner.

The speed of the reaction is currently limited by the ability of the detector to keep up with the polymerase. The first commercial instrument will operate at three to five bases per second, and Turner reports that lab tests have achieved 10 bases per second. The polymerase has the potential to go much faster, up to hundreds of bases per second. "To push past 50 bases per second, we will need brighter fluorescent reporters or more sensitive detection," says Turner.
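Turner's two figures, a million parallel molecules and a 15-minute genome, are mutually consistent, as a quick back-of-the-envelope check shows:

```python
# How fast must each well read for a 15-minute human genome at 1x coverage?
GENOME = 3_000_000_000   # haploid genome, bases
WELLS = 1_000_000        # parallel molecules Turner targets
MINUTES = 15

needed_per_well = GENOME / (WELLS * MINUTES * 60)
print(round(needed_per_well, 2))  # 3.33 bases/second per well
```

That per-well rate sits squarely inside the three-to-five bases per second quoted for the first commercial instrument, so the 15-minute claim hinges on scaling the well count, not on pushing the polymerase harder.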

The device also has the potential to reduce the number of errors made in DNA sequencing. Current technologies achieve an accuracy of 99.9999 percent (three thousand errors in a genome of three billion base pairs). "For cancer, you need to be able to spot a single mutation in the genome," said Turner. Because the errors made by SMRT sequencing are random -- not systematically occurring at the same spot -- they are more likely to disappear as the procedure is repeated.
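The claim that random errors "disappear as the procedure is repeated" while systematic ones do not can be illustrated with a small majority-vote simulation; the error rate and read count below are arbitrary choices for the sketch:

```python
import random

random.seed(0)  # deterministic for the demonstration

def consensus(true_base, reads, p_error, systematic=None):
    """Majority-vote base call over repeated reads of one genome position."""
    calls = []
    for _ in range(reads):
        if systematic is not None:
            calls.append(systematic)  # same wrong call on every pass
        elif random.random() < p_error:
            calls.append(random.choice([b for b in "ACGT" if b != true_base]))
        else:
            calls.append(true_base)
    return max(set(calls), key=calls.count)

# Random errors wash out under repetition...
print(consensus("A", reads=15, p_error=0.15))                  # A
# ...but a systematic error at the same spot survives any number of repeats.
print(consensus("A", reads=15, p_error=0.15, systematic="G"))  # G
```

This is why the randomness of SMRT's errors matters more than their raw rate: repeated passes over the same molecule drive the consensus error toward zero.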

The talk, "Single Molecule Real-Time DNA Sequencers," was given on July 27, 2009.

[Confirmation of an earlier announced astonishing $100-range price and fraction of an hour speed of full human DNA sequencing by Pacific Biosciences in about a year from now is both a report that the brilliant project proceeds right on schedule, and a great promise for those who are ready for the "Dreaded DNA Data Deluge". Pellionisz, HolGenTech_at_gmail.com, July 29th, 2009]


Programming cells by multiplex genome engineering and accelerated evolution

Harris H. Wang, Farren J. Isaacs, Peter A. Carr, Zachary Z. Sun, George Xu, Craig R. Forest & George M. Church

Nature advance online publication 26 July 2009 | doi:10.1038/nature08187; Received 6 March 2009; Accepted 29 May 2009; Published online 26 July 2009

Department of Genetics, Harvard Medical School, Boston, Massachusetts 02115, USA
Program in Biophysics, Harvard University, Cambridge, Massachusetts 02138, USA
Program in Medical Engineering Medical Physics, Harvard-MIT Division of Health Sciences and Technology, The Center for Bits and Atoms, Media Lab, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
Harvard College, Cambridge, Massachusetts 02138, USA
George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332, USA

[The cyclical nature of the process of parallel and continuous directed and accelerated evolution - AJP]

The breadth of genomic diversity found among organisms in nature allows populations to adapt to diverse environments [1, 2]. However, genomic diversity is difficult to generate in the laboratory and new phenotypes do not easily arise on practical timescales [3]. Although in vitro and directed evolution methods [4-9] have created genetic variants with usefully altered phenotypes, these methods are limited to laborious and serial manipulation of single genes and are not used for parallel and continuous directed evolution of gene networks or genomes. Here, we describe multiplex automated genome engineering (MAGE) for large-scale programming and evolution of cells. MAGE simultaneously targets many locations on the chromosome for modification in a single cell or across a population of cells, thus producing combinatorial genomic diversity. Because the process is cyclical and scalable, we constructed prototype devices that automate the MAGE technology to facilitate rapid and continuous generation of a diverse set of genetic changes (mismatches, insertions, deletions). We applied MAGE to optimize the 1-deoxy-d-xylulose-5-phosphate (DXP) biosynthesis pathway in Escherichia coli to overproduce the industrially important isoprenoid lycopene. Twenty-four genetic components in the DXP pathway were modified simultaneously using a complex pool of synthetic DNA, creating over 4.3 billion combinatorial genomic variants per day. We isolated variants with more than fivefold increase in lycopene production within 3 days, a significant improvement over existing metabolic engineering techniques. Our multiplex approach embraces engineering in the context of evolution by expediting the design and evolution of organisms with new and improved properties.
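The reported combinatorial scale is easy to sanity-check: the number of reachable genome variants is the product of the per-site allele counts across the targeted loci. The per-site degeneracies below are hypothetical placeholders chosen only to show how a 24-site pool reaches the billions, not the actual oligo design of Wang et al.:

```python
from math import prod

# Hypothetical degeneracies for 24 targeted sites:
# 16 sites with 2 alleles each, 8 sites with 4 alleles each.
site_degeneracies = [2] * 16 + [4] * 8

variants = prod(site_degeneracies)
print(f"{variants:,}")  # 4,294,967,296 -- the ~4.3 billion order reported
```

The exponential growth of the product is the point of multiplexing: serial single-gene methods explore these sites one at a time, while MAGE samples the whole combinatorial space in cycles.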

[A cyclical, directed and accelerated evolution, applied to produce industrially important biosynthesis of desired materials is revolutionary. Pellionisz, HolGenTech_at_gmail.com, July 29th, 2009]


Double chromosomes equals more plant power

By Karin Kloosterman
Israel 21 C
July 26, 2009

Biofuels are alternative energy fuels produced from living organisms or metabolic byproducts (organic or food waste products). If we could just find a more efficient way to unlock their energy, and to minimize the amount of land and water resources needed to grow them, they could replace the polluting and limited reserves of fossil fuels currently in use.

Now "Kaiima Bio-Agritech" of Israel believes that it has found a way to do just that.

"The oil is going to end," Ariel Krolzig, product manager of Kaiima, tells ISRAEL21c. "It's a question of time. In the last few years no new oil fields have been found. Why are countries like Brazil looking for alternatives?" he asks rhetorically.

Sporting a sage-like beard, Krolzig is standing beside the star of his likely success story, a castor oil plant. He proceeds to describe the method developed by Kaiima that doubles a plant's chromosomes from a set of two to a set of four.

This doubling results in higher cell activity, increased photosynthesis and better adaptation to local conditions in the field. Most importantly, it more than doubles the plant's biofuel potential.

Castor oil could save the day

Companies around the world are now field testing Kaiima's seeds for the castor oil plant. "There are about 120 different purposes for it," says Krolzig, stressing that biofuel is among them.

The chromosome doubling that Kaiima can now induce also occurs naturally. When it does, plants with four sets of chromosomes typically show advantages over those with just two sets in each nucleus.

For some time now, plant breeders and scientists have been trying to encourage this doubling or "polyploidy" in certain plants with high economic value, using artificial methods including colchicine treatment, nitrous oxide treatment and temperature shock.

However, these methods have caused damage to the plants' DNA and ultimately to the plants themselves. Using a biotechnology technique called CGM (Clean Genome Multiplication), Kaiima has found a way to create polyploidy in plants, without encroaching on their DNA.

Kaiima believes that its new castor oil plants (sold as seeds) will revolutionize the biofuel industry. By using its CGM technique, the company brings about dramatic increases in the plants' yields and energy, while using less water and land.

Great potential, no drawbacks

And an added benefit, which should mollify the sizable resistance to organisms that are altered in any way: "It's not transgenic, it's not a genetically modified organism (GMO)," Krolzig asserts.

Explaining why the research was conducted on castor plants, Krolzig says that the castor plant, grown mainly in India and China, is widely utilized in the chemical, plastic and cosmetic industries and also as a lubricant that doesn't break down under high temperatures, for use in high-speed cars and airplanes.

A non-edible crop, castor can be grown on poor quality land that isn't suitable for other kinds of food crops. This means that growing it won't influence global food prices on a large scale, unlike other biofuel crops such as sugarcane or corn.

Until now, the problem with castor oil has been that it is very expensive to produce, relative to its yields. Previously, the highest yield of oily beans from castor has been about 1.5 to 1.6 tons of beans per hectare, half of which is oil - about 750 kilograms.

"We have varieties that yield five to 10 tons of seeds per hectare. At this yield, castor starts to be profitable as a biofuel," Krolzig declares.
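Translating those bean yields into oil, and assuming the stated ~50 percent oil fraction also holds for the new varieties (the article does not say so explicitly), the claimed improvement works out as follows:

```python
OIL_FRACTION = 0.5                        # ~half of bean mass is oil, per the article

baseline_oil = 1.55 * OIL_FRACTION        # ~1.5-1.6 t beans/ha -> ~0.775 t oil/ha
improved_low = 5 * OIL_FRACTION           # 2.5 t oil/ha
improved_high = 10 * OIL_FRACTION         # 5.0 t oil/ha

print(round(improved_low / baseline_oil, 1))   # 3.2x at the low end
print(round(improved_high / baseline_oil, 1))  # 6.5x at the high end
```

A three- to six-fold jump in oil per hectare is what moves castor across the profitability threshold Krolzig describes.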

Before closing any big deals, prospective clients are testing Kaiima's claims in Mexico, Spain, Argentina and other locations. "We just started selling now; the customers want to try them first," adds Krolzig, explaining that living biological material may behave differently in different parts of the world.

Mitigating the dangers of global warming

Food crops that have undergone Kaiima's CGM technique tend to show a greater tolerance to high temperatures and poor soil conditions. The company believes it will be able to produce rice varieties which can withstand ground temperatures higher than 35 degrees Celsius. This bio-technology may grant us some global food security if the dire predictions about global warming prove accurate.

In addition, Kaiima says that its plant varieties may even mitigate the dangers of global warming. Plants that undergo CGM can absorb twice as much CO2 per unit leaf area and their leaves are twice as big. They also use 20-30 percent less water per accumulated biomass unit, according to the company. Kaiima's conclusion is that CGM can be used to effectively mitigate global CO2 emissions and save water.

Kaiima was founded in 2002 by Amit Avidov, an agronomist with 30 years' experience in seed breeding. (The company was originally named Bio Fuel, but changed its name in 2006.) Before founding Kaiima, he worked for Morning Seeds and Top Seeds, and was chief breeder at De Ruiter Seeds, a Dutch seed company later sold to Monsanto.

At present, Kaiima is involved in projects to multiply the genomes and increase the yield of other plants for fuel and food. They are working with jatropha, rapeseed (canola), rice, wheat, sugarcane and eucalyptus.

Based in Ramat Yishai, Kaiima employs between 60 and 80 people and all its operations are in Israel. It is backed by the venture funds Draper Fisher Jurvetson and DFJ-Tamir Fishman, and recently raised $8 million in investment money.

Krolzig sums up the company's raison d'être: with biofuels, we are "not disturbing the balance."

[Draper Fisher Jurvetson appears to be in the "driver's seat". The firm made a very early investment in Venter's "Synthetic Genomics", and with Craig Venter's turn from hydrogen to genome-based oils the leverage seems to be in how to make it profitable. Pellionisz, HolGenTech_at_gmail.com, July 26th, 2009]


Human Genome doubles quarterly revenue

Washington Business Journal

by Vandana Sinha Staff Reporter

Wednesday, July 22, 2009, 5:12pm EDT

Human Genome Sciences Inc., a frequent newsmaker this week, more than doubled its quarterly revenue while shaving its quarterly losses down by 18 percent in the period from April through June, though it still fell short of analyst expectations.

The Rockville biotech, which this week released news about positive clinical trial results and a new federal contract for its anthrax treatment, said it recorded $26.7 million in revenue in the second quarter this year, up from $11.6 million in the same period last year.

That revenue was split three ways by final payments from the U.S. government on its first purchase of Human Genome Sciences’ (NASDAQ: HGSI) anthrax treatment, by payments from its partners Novartis AG and GlaxoSmithKline PLC for hitting certain milestones in their joint drug development and by customers for manufacturing and development services.

Meanwhile, the company pared down its second-quarter losses from $80.1 million, or 59 cents per share, last year to $65.4 million, or 48 cents per share, this year.

The company missed Wall Street’s targets for the quarter, when analysts forecasted HGSI would generate an average $30.8 million in revenue and record 34 cents per share in losses. As a result, after the earnings were announced after the closing bell, HGSI shares took a 15-cent, 1 percent hit in after-hours trading as prices dipped from $14.05 to $13.90.

That’s been a negative blip in a positive trading spree this week, which more than quadrupled the company’s stock price from $3.32 at closing bell July 17 to $14.05 at closing bell Wednesday. That reaction followed the news that HGSI scored unexpectedly positive results in clinical trials for Benlysta, its lupus drug, and also garnered its second contract, valued at $151 million for three years, with the federal government to ship 45,000 more orders of its anthrax treatment.

Human Genome Sciences said it’s on track to file an application this fall to start marketing its first commercial product, Zalbin, for patients with chronic hepatitis C. Investors were less enthusiastic about that drug’s sales prospects, however, since its clinical trial results did not fulfill their expectations when matched with rival drugs.

The company only spent $2 million in the first six months of this year, bringing its total cash and investments down to $370.9 million as of June 30.

At midpoint in the year, HGSI is four-fifths of the way toward its revenue goal of $250 million for this year. Officials had also forecasted that the company would limit its spending this year to $25 million. Its next milestone will come in November, when it expects the second batch of advanced clinical trial data on its lupus drug.

[From $3.32 to $14.05 in under a week - it is quite a trip. Pellionisz, HolGenTech_at_gmail.com, July 20th, 2009]


SNPs in Non-Cancerous Tissue May Differ From Those In Blood, Study Finds

NEW YORK (GenomeWeb News), July 18, 2009 – A new paper by Montreal researchers is providing evidence that the gene variants found in some non-cancerous tissues may differ from those present in blood samples from the same individual.

"The usual dogma is that your DNA is the same all over the place," senior author Morris Schweitzer, an endocrinologist and lipidologist with McGill University and the affiliated Lady Davis Institute for Medical Research at Montreal's Jewish General Hospital, told GenomeWeb Daily News. But, he said, his team's work suggests that isn't the case.

The researchers, who were studying a condition called abdominal aortic aneurysm, or AAA, found that SNPs in a gene called BAK1 were different in aortic tissue than in blood samples, even in samples taken from the same individuals. The work appears in this month's issue of the journal Human Mutation. Based on the findings, those involved in the study are urging caution in interpreting genetic associations based on DNA from blood samples alone.

"Traditionally when we have looked for genetic risk factors for, say, heart disease, we have assumed that the blood will tell us what's happening in the tissue," lead author Bruce Gottlieb, a researcher affiliated with McGill and the Lady Davis Institute for Medical Research, said in a statement. "It now seems this is simply not the case."

AAA is a cardiovascular disease characterized by a ballooning of the abdominal aorta, which supplies blood to the abdomen and much of the lower body. Although there are often no obvious symptoms, the disease can increase the risk of aortic rupture, a potentially fatal event. AAA affects roughly five to nine percent of the North American population, particularly men over 60 years old, and is more common in individuals with cardiovascular risk factors such as smoking, hypertension, or high cholesterol.

Because chronic apoptosis has been implicated in AAA, the researchers decided to investigate a gene called BAK1, which codes for an apoptosis-activating protein, in AAA patients. They used Sanger sequencing to sequence BAK1 cDNA from diseased abdominal aortic tissue and matching blood samples from 31 AAA patients.

When they looked at these sequences, the researchers were surprised to find that the BAK1 sequences in the aortic tissues differed from that in the matched blood samples. The aortic tissue carried a version of the gene containing three SNPs that are rare in the blood. In contrast, the matched blood samples contained a version of BAK1 that did not contain any of the three SNPs found in the aortic tissue samples.

On the other hand, when the team sequenced BAK1 cDNA from healthy aortic tissue obtained from a Quebec transplant service, they found the same three SNPs as in the aortic tissue from the AAA cases. The researchers verified their findings by sequencing both strands of DNA and repeating the sequencing several times.
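The core comparison in the study, diffing variant calls from two tissues of the same individual, reduces to simple set arithmetic. The rsIDs and call sets below are invented for illustration, not the BAK1 variants reported:

```python
def tissue_discordance(blood_snps, tissue_snps):
    """Return (SNPs seen only in tissue, SNPs seen only in blood)."""
    return tissue_snps - blood_snps, blood_snps - tissue_snps

# Hypothetical per-individual call sets from matched samples.
blood = {"rs1001", "rs1002"}
aorta = {"rs1001", "rs1002", "rs2001", "rs2002", "rs2003"}

tissue_only, blood_only = tissue_discordance(blood, aorta)
print(sorted(tissue_only))  # ['rs2001', 'rs2002', 'rs2003']
print(sorted(blood_only))   # []
```

A non-empty `tissue_only` set from matched samples of the same person is exactly the kind of discordance that challenges the "your DNA is the same all over" assumption, and it is what the caution about blood-only GWAS profiles rests on.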

So far, Schweitzer said it's unclear whether these BAK1 differences in the blood and aortic tissue are the consequence of RNA editing, which changes the messenger RNA but not the gene, or DNA editing, which involves differences in the gene itself.

Down the road, he and his team intend to use pyrosequencing to look at BAK1 sequences from healthy and diseased abdominal aortic tissues in more depth — an approach that could provide insights into whether that tissue contains both majority and minority BAK1 sequences. If that's the case, it would bolster the idea that different genetic sequences can arise through selective pressures in different tissue, Schweitzer added, "You may have different tissue selectivity for different DNA phenotypes."

The team is also interested in investigating whether the pattern they detected holds true in BAK1 sequences from genomic DNA and learning more about whether there are differences in the expression or activity of different BAK1 variants.

Based on the evidence so far, Schweitzer believes the BAK1 differences his team detected resulted from developmental rather than somatic DNA alterations. Such a pattern may not hold true for all genes, he said, but the BAK1 story suggests there could be other genes that vary slightly between blood and other tissues.

That, in turn, highlights the need to assess genetic profiles specifically in tissues of interest, Schweitzer argued, though he noted that that is a lofty goal given the fact that it is difficult or ethically impossible to collect some types of tissue from living individuals.

He and his colleagues also suggested that their findings raise questions about GWAS, many of which rely on DNA profiles obtained from blood samples.

"Genome-wide association studies were introduced with enormous hype several years ago, and people expected tremendous breakthroughs," Gottlieb said in a statement. "Unfortunately, the reality of these studies has been very disappointing, and our discovery certainly could explain at least one of the reasons why."

In an e-mail message to GenomeWeb Daily News, Navigenics Co-founder and Chief Science Officer Dietrich Stephan said the team's work is interesting and deserves further investigation.

"Differences between the germ-line genome and somatic cells is well established in cancer. It is also well described that chimeras can result from early DNA changes in early embryonic development that propagate to form regional differences in the genome across the body," Stephan noted. "It is intriguing to think that such mechanisms could result in common phenotypes, and is a topic that warrants deeper study."

Even so, he does not believe the findings are a blow to the results or rationale behind GWAS in general. Researchers have gained "incredible insight" into disease mechanisms using GWAS over the past few years, Stephan said in his message, adding, "It is much more likely that the missing heritability that we are all searching for will be accounted for by rare DNA variants, copy number variants, and heritable epigenetic modifications than by this mechanism."

[Dogmas disappear, as if they had only been tenable in the quicksand of a remote backyard, among those unable to proceed with abstract sciences. Dr. Dietrich Stephan is absolutely right in stating the necessity of deeper studies. Pellionisz, HolGenTech_at_gmail.com, July 18th, 2009]


Complete Genomics Raises $14.5M as Part of Series D Financing

Wednesday, July 22, 2009, 5:12pm EDT
July 07, 2009

CEO Cliff Reid told In Sequence that the amount raised in the sale represents "a portion" of the company's Series D financing, which is not yet complete.

[Complete Genomics presently holds the world record, delivering a full human DNA sequence for $5,000 - for 10,000 individuals in the rest of 2009, and for about 20,000 individuals in 2010. This upscaling of leading-edge production calls for significant resources. Pellionisz, HolGenTech_at_gmail.com, July 8th, 2009]


Exxon Sinks $600M Into Algae-Based Biofuels in Major Strategy Shift

KATIE HOWELL of Greenwire

Published: July 14, 2009

Oil giant Exxon Mobil Corp. is making a major jump into renewable energy with a $600 million investment in algae-based biofuels.

Exxon is joining a biotech company, Synthetic Genomics Inc., to research and develop next-generation biofuels produced from sunlight, water and waste carbon dioxide by photosynthetic pond scum.

"The world faces a significant challenge to supply the energy required for economic development and improved standards of living while managing greenhouse gas emissions and the risks of climate change," said Emil Jacobs, vice president of research and development at Exxon Mobil Research and Engineering Co. "It's going to take integrated solutions and the development of all commercially viable energy sources, improved energy efficiency and effective steps to curb emissions. It is also going to include the development of new technology."

Exxon Mobil's collaboration with Synthetic Genomics will last five to six years, Jacobs said, and will involve the creation of a new test facility in San Diego to study algae-growing methods and oil extraction techniques. After that, he said the company could invest billions of dollars more to scale up the technology and bring it to commercial production.

"We're not claiming to know all the answers," said Craig Venter, founder and CEO of Synthetic Genomics, which has so far done early work on algae strains. "There are different approaches to what is truly economically scalable, so we're testing things and giving a new reality to the timelines and expectations of what it takes to have a global impact on fuel supply."

Jacobs and Venter are mum about the specific technology the collaborative effort would employ. They said the team would investigate all options, including growing organisms in open ponds and in closed photobioreactors.

They added that they were likewise uncertain what end-product fuels would result from the collaboration. Other startup companies have announced that they were producing both synthetic crude and biodiesel using photosynthetic algae (Greenwire, April 28).

"As far as products to expect from this program, our intent is to make hydrocarbons that look a lot like today's transportation fuels," Jacobs said. "We want to produce hydrocarbons that look like today's refinery products, that can go into a refinery to be processed along with other petroleum streams and then used in the transportation fleet or even jet fuel. And we think we've got a good chance of doing that."

Exxon Mobil launched the partnership after years of being publicly opposed to investing in renewable energy. Privately, though, Jacobs said the company has been investigating the sector for years.

"It's fair to say that we looked at all the biofuels options," Jacobs said. "Algae ended up on top."

Others in the algae-biofuels industry say Exxon Mobil's investment validates the sector.

"A couple years ago, the petroleum institute said there's only a couple of years left for oil, and now they're really finally acting on that," said Riggs Eckelberry, president and CEO of OriginOil Inc. "Algae is the feedstock to overtake petroleum. It's the real alternative to petroleum."

Environmentalists were more cautious in their appraisal of the Exxon Mobil-Synthetic Genomics plan.

"They've never done anything like this before -- invested real money in the renewables sector," said Kert Davies, research director at Greenpeace. "We've always said [the oil industry] has to be part of the climate change solution. We can't solve anything without companies like Exxon helping."

He added, "I'm guarding my optimism."

Exxon Mobil's timing is noteworthy, Davies said, because of the ongoing energy and climate legislative fight.

"It's interesting timing as the oil companies are struggling to find a place at the table," Davies said.

Renewable fuels standard

While Exxon Mobil's investment marks a sea change in activity in the sector, significant challenges remain in place to achieve wide-scale commercial development.

Next-wave biofuels that could reduce carbon emissions and displace oil imports are politically popular but have not moved into commercial production as fast as supporters would have hoped. Biofuels overall got a boost through a 2007 law that expands the national renewable fuels standard, or RFS, to reach 36 billion gallons by 2022.

But Senate Energy and Natural Resources Chairman Jeff Bingaman (D-N.M.) said the RFS expansion is too restrictive and could freeze out emerging technologies -- including algae-based biofuels. He is calling for changes that would make it more "technology- and feedstock-neutral" to accommodate fuels that could ultimately prove superior in several respects.

"Algae-based fuels are the most obvious example, which, despite having characteristics superior to any renewable fuels in commercial production today, have no home in the RFS," Bingaman said in a column about the standards published in the Politico newspaper.

Senior reporter Ben Geman contributed.

[For Venter, this is a major turn from H2 economy towards synthetic gasoline. Pellionisz, HolGenTech_at_gmail.com, July 14th, 2009]


Navigenics Cuts Price of Screening Service 60 Percent

July 17, 2009

By a GenomeWeb staff reporter

NEW YORK (GenomeWeb News) – Consumer genomics firm Navigenics has cut the price of its comprehensive genetic screening service by 60 percent, from $2,500 to $999, the firm announced on its blog, The Navigator.

A thousand dollars buys customers a one-year subscription to Navigenics' Health Compass testing service, which includes genetic risk data for 28 diseases and conditions; an hour-long session with a genetic counselor; and updates to risk scores as new gene associations and diseases are added to the service.

Navigenics launched Health Compass in 2007. The company conducts its genetic risk scans using Affymetrix arrays, and earlier this year it took over ownership of Affy's CLIA lab.

Also earlier this year, Navigenics launched a pared down version of its service, Annual Insight, that costs $499 and provides individuals with information about their genetic predisposition for 10 common health conditions, including breast, colon, and prostate cancer, celiac disease, Crohn's disease, type 2 diabetes, glaucoma, heart attack, macular degeneration, and osteoarthritis. The company markets this version of its service to doctors, aiming to drive the integration of genomic information into people's yearly medical checkups.

Navigenics is not the only firm to drop the price of its DTC genomics service. Last year, one of its competitors, 23andMe, lowered the cost of its service from $999 to $399, citing technological advancements in the Illumina platform.

Further competitive pressure has come from the launch this week of Pathway Genomics, a San Diego-based DTC genetic testing firm that is offering genotyping and analysis for disease risk and heredity for under $250.

[Whenever there is fierce competition among vendors, the big winner is always the customer. Now the choice among $99, $399, $499, and $999 services is one for which customers may need to seek advice on which to go for - or to ask for "second opinions" from competitor companies at the outset. Pellionisz, HolGenTech_at_gmail.com, July 17th, 2009]


Collins nominated to head NIH

Science News
July 8th, 2009
By Janet Raloff Web edition : 5:21 pm

Today, President Obama confirmed his intent to nominate geneticist Francis Collins to head the National Institutes of Health. This federal agency’s $30 billion a year research budget is dwarfed only by R&D spending on defense.

“Dr. Collins is one of the top scientists in the world,” the president said in a prepared statement. “His groundbreaking work has changed the very ways we consider our health and examine disease. I look forward to working with him in the months and years ahead.”

NIH is an institution Collins knows well, having worked there for the past 16 years. He arrived in 1993 to direct the National Human Genome Research Institute, which focused on sequencing the entire human genetic blueprint — a feat it accomplished in 2003. His work has also uncovered several genes linked to disease, including ones that play a role in cystic fibrosis, neurofibromatosis, Huntington's disease, an endocrine cancer syndrome, and “adult onset” (type 2) diabetes. Although Collins resigned his post at NIH’s genome research center last year, he has continued to work with people there. He also has been putting the finishing touches on a book, due out next year: The Language of Life: DNA and the Revolution in Personalized Medicine.

Collins didn’t grow up anticipating such a career. He started out as a chemist with no interest in the messiness of living systems. Only after attaining a PhD in physical chemistry at Yale did Collins consider moving into biology. He eventually went to med school at the University of North Carolina at Chapel Hill and became an internist. But in the early ‘80s he returned to Yale to study human genetics, and his interest in the chemistry of life has never waned since.

NIH bills itself as “the steward of medical and behavioral research for the nation.” Toward that end, it funds science on site and in labs around the nation “in pursuit of fundamental knowledge about the nature and behavior of living systems and the application of that knowledge to extend healthy life and reduce the burdens of illness and disability.”

Collins has been elected to the Institute of Medicine and the National Academy of Sciences, and in late 2007 received the Presidential Medal of Freedom — the nation's highest civil award — for his contributions to genetic research.

The nominee also has a renowned spiritual side, something Collins described in his 2006 book: The Language of God: A Scientist Presents Evidence for Belief.

A number of organizations have already praised Collins' nomination. Among them: the American Association for the Advancement of Science, in Washington, D.C.

"This is a terrific nomination," said AAAS CEO Alan I. Leshner, in a brief statement this afternoon. "Dr. Collins has the scientific stature to sit at the table with the nation's top scientists, and he has the additional ability to discuss science in very clear ways with both the public and policy makers." Leshner added that it also doesn't hurt "to have a very credible geneticist heading NIH at a time when we are pursuing so vigorously the promise of personalized medicine, based on genomics."

[This terrific news is hardly a surprise - perhaps only belated. It seems very safe to predict that Dr. Collins' confirmation will sail through Congress without a hitch. Pellionisz, HolGenTech_at_gmail.com, July 9th, 2009]


The 23andMe Research Revolution

Viva la Revolución de … 23andMe
Genomeweb, July 08, 2009

With its new Research Revolution, 23andMe says it will "jumpstart genetic research" into certain diseases by engaging the community. According to the Spittoon, this project will help "large groups of people to assemble themselves into large-scale genetic studies." People vote for the disease they are interested in and once 1,000 people are signed up, the study commences — participants receive the scaled-down Research Edition of the 23andMe service for $99. The 10 diseases under study are: ALS, celiac disease, epilepsy, lymphoma and leukemia, migraines, multiple sclerosis, psoriasis, rheumatoid arthritis, severe food allergies, and testicular cancer. Genetic Future's Daniel MacArthur weighs in. "Let me be perfectly frank - it's unlikely that a genome-wide association study with only 1,000 patients will reveal any novel genetic associations, especially for those diseases on the list," he writes. He later adds that "23andMe's goals are clearly far beyond this: they aim to build stable, self-sustaining communities of potential research participants, that add new members over time and are available to add further trait data."

[This absolute masterstroke accomplishes two unprecedented revolutions, plus a bonus. First, it brings down the price of genomic testing, using the industry's best microarray, to under $100. Considering that 23andMe presently tests for 116 genomic traits and conditions, your cost is about 85 cents per potentially lethal disease - much less than a cup of coffee these days. Thus, participation is no longer a matter of money. Second, nobody can any longer claim that research of a certain genomic condition is beyond his/her control; if his/her Congressional Representative does not vote for research, the consumer can both vote the representative out of office and vote in the research he/she requires via 23andMe. "Enabling consumers" will not stop at, e.g., establishing "severe food allergies". Soon - see HolGenTech's PDA demo and Press Release - once a proclivity for a condition is established, consumers can "tool up" when going "safeDNA" shopping. As a third bonus, even "the scare factor" can be eliminated: the consumer may or may not wish to know his/her proclivity for conditions that he/she can't even pronounce, yet can still get a PDA recommendation, just by barcode-pointing, on which unpronounceable ingredient lurks in an otherwise attractive-looking item - for prevention, or avoidance. Pellionisz, HolGenTech_at_gmail.com, July 9th, 2009]


[Excel] VC Investor Launches $125M Life Sciences Fund

July 07, 2009
GenomeWeb staff reporter

NEW YORK (GenomeWeb News) – Excel Venture Management, a venture capital firm based in Boston, announced today that it has launched the Excel Medical Fund with $125 million to be invested in early- to late-stage life science platform companies.

Excel said that the new fund will focus its investments on healthcare information technology, services, diagnostics, medical devices, and life sciences platforms with applicability to adjacent industries such as energy, chemicals, defense, and agriculture.

The new fund also has already established investments in two genomics industry players: BioTrove and Synthetic Genomics.

The investment team for the fund includes Managing Directors Rick Blume, Steven Gullans, Juan Enriquez, and Enrico Petrillo; Venture Partner Donald Crothers; and Vice Presidents Caleb Winder and Tricia Moriarty.

"Excel's portfolio will be balanced across the many interrelated sectors that are impacted by life science technologies," Enriquez said in a statement. "The rapid adoption and expansion of these technologies into major industries such as energy, chemicals, defense, and agriculture represents an enormous investment opportunity. We believe that we are just seeing the tip of the iceberg."

[Juan Enriquez is singled out for his pioneering of the "Genome Based Economy" - read his 2009 update below. Pellionisz, HolGenTech_at_gmail.com, July 7th, 2009]


How Biotech Will Reshape The Global Economy

Life Science Leader
Juan Enriquez

If you are a business executive today dealing with teetering financial markets and a weak economy, it’s doubtful you are thinking much about genomic literacy. But how well you, your company, and your industry understand this new, still esoteric language may have much to do with your company’s long-term survival and prosperity.

Biology is likely to become the greatest single driver of the global economy. The coming changes are not so much a second industrial or green revolution but the dawn of the organic age — organic in the sense that what we make and how we make it will be tied to understanding and reading life, and to programming life for specific purposes. We do this now with food and textiles. But the use of this powerful language/programming is spreading fast and far. In the future, wealth creation could be closely tied to life sciences, much as it is currently tied to digits.

If you think this sounds farfetched, ask someone who lived in the early 1970s whether people thought India, Ireland, Korea, Silicon Valley, or Taiwan would be centers of technology and new wealth. A few bold digital geeks argued computers would revolutionize not just how information is gathered and disseminated, but almost every business on the planet and more than a few countries. Over the past few decades, most new jobs, wealth, and growth were created in the knowledge and digital realm. And while venture capital represented only about 0.2% of U.S. GDP, the companies it created generated about 17% of economic activity. The Internet changed virtually every industry. Yet as far-reaching as the digital revolution was, the ability to code life will likely reach even further.


There are many competitors in this new race. What started out as an obscure subspecialty primarily of interest to the pharmaceuticals industry has now spread to agriculture, chemicals, biodefense, and energy. Life sciences are a key component of many national development plans. Brazil leads the world in biofuels and preventing citrus diseases, thanks to R&D programs launched decades ago. Korea, despite recent scandals, continues investing in cloning and tissue engineering. Japan leads in fermentation technologies and is growing plastic car parts from bacteria and plants. Singapore considers life sciences a vital part of its development strategy and invests hundreds of thousands of U.S. dollars in each of its graduate students. China is building a genome city, while the United Arab Emirates attempts to use scale and petrodollars to leapfrog everyone. For now, the United States remains, by far, the leader in R&D and new venture creation.


What’s driving all this? In the mid-20th century, we discovered that all life is coded in four bases (adenine, thymine, guanine, and cytosine). Then we learned how this simple code translates into amino acids and finally builds the hundreds of thousands of proteins that generate life and guide its operations.
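The coding step described above - bases read three at a time as codons, each specifying an amino acid - can be sketched in a few lines. The table here is a toy subset for illustration, not the full 64-entry genetic code:

```python
# Toy codon table covering only the codons used below; the real genetic
# code maps all 64 codons to 20 amino acids plus stop signals.
CODON_TABLE = {
    "ATG": "Met", "TTT": "Phe", "GGC": "Gly", "AAA": "Lys", "TAA": "STOP",
}

def translate(dna):
    """Translate a DNA coding sequence codon by codon, halting at a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE[dna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return "-".join(protein)

print(translate("ATGTTTGGCAAATAA"))  # Met-Phe-Gly-Lys
```

Biotechnology's "slight modifications" amount to editing the input string; synthetic biology, discussed below, amounts to writing it from scratch.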

Biotechnology first allowed us to slightly modify a few life forms to produce medicines, new seeds, and a few chemicals. Then, in just over a decade, the development of rapid sequencing of a virus, bacteria, plant, fly, and, eventually, human has unlocked the entire gene code, or genome, of living things. Now, during the past year, at least three discoveries have fundamentally altered how we think of the code of life and what we can achieve using it. First, two labs, one in Japan and one in the United States, showed how adult cells can become pluripotent. That means human skin cells can be shocked to reboot from scratch. They forget what they are specialized to do and go back to their original state, just after conception. This is an important discovery because all complex organisms start out as undifferentiated, pluripotent cells. And each of our cells, with the exception of some blood cells, contains our entire genome. So, in theory, each of our cells can produce all of our body parts.

A second major paper showed that we can reprogram cells. A few months ago, Synthetic Genomics and the Venter Institute took a cell, inserted long strands of DNA, and booted up a cell as a different species. This is important because half the biomass on Earth is made up of microscopic organisms. And it is these small, single-celled (or few-celled) creatures that have created much of our environment; they help plants grow and humans digest, and they turn plants into oil and gas.

Finally, we are learning to write the code from scratch. In January 2008, the same group synthesized the world’s largest organic molecule. Consider it a life software package. By stringing together very long strands of DNA, companies will be able to program cells to do specific things. In fact, high school and college students are already beginning to do this using the Massachusetts Institute of Technology’s open-source BioBricks. If cells become programmable hardware, they will become micro-machines that make myriad organic products, including textiles, chemicals, medicines, and fuels. By December 2008, we were able to assemble an entire cell software package in a day instead of years.

Taken together, these three discoveries, plus the increasing availability of standardized cheap cellular components, suggest that we will be able to write out a life code, which will in turn allow us to program a cell to execute a desired function. And this function is rapidly scalable since gene code builds its own hardware.


As we begin to read, reproduce, and program life, we will change many industries, including agriculture, biotechnology, chemicals, defense, energy, insurance, IT, leather, medicine, real estate, pharmaceuticals, and textiles. About 70% of the grain we consume in the United States and Canada already is genetically modified. Some of our cars, clothes, corn, food, gasoline, IT, leathers, medicines, and plastics are produced organically using life science technologies. Companies as diverse as Aetna, BP, GE, Google, Intel, Kaiser Permanente, L’Oréal, Monsanto, Nestlé, Novartis, and P&G are investing heavily in the application of life science technologies.

Beyond pharma, biotech, and agriculture, the greatest initial impact of life sciences will likely be in energy. After all, a leaf is simply a solar panel that powers a plant. Oil and gas are rotted plant and bacterial tissues—in essence, sunlight concentrated in organic matter. Using genetic modifications, we will be able to produce gasoline directly from plants or bacteria. Similar techniques will allow us to go directly from plants to tertiary petrochemicals that produce dyes, fine chemicals, paints, plastics, polyesters, and rubber.

DuPont already is growing your next breathable, water-repellent jogging suit using bacteria. Thanks to a Cargill–Dow joint venture, your disposable cup or lunchtime salad container may already be a biodegradable plastic grown from plants. Toyota Motor is making some of its car parts using life sciences and is launching medical, foodstuff, and chemical divisions. These are just first steps.

Just as the ability to code digits created an unprecedented burst of wealth as well as a large-scale restructuring of industries and the rapid rise of some very poor countries, life sciences will likely produce a restructuring of industry. It will drive mergers and acquisitions such as GE’s purchase of Amersham, a British pharmaceutical. DuPont’s earnings increasingly are driven by a seed company, Pioneer. And the trend is becoming global: Japan’s Daiichi Sankyo bought control of India’s Ranbaxy. The next eBay, Google, Intel, and Microsoft will be a company that uses life forms to create new products.

The life code is a lever and perhaps the most powerful instrument human beings have ever used. It will make the Industrial Revolution seem simple, even quaint. It will become the world’s dominant language, and all of us will have to be literate to thrive.

About the Author

Juan Enriquez is a Managing Director of Excel Medical Ventures. He previously served as founding director of the Life Sciences Project at Harvard Business School and is the author of As the Future Catches You: How Genomics & Other Forces are Changing Your Life, Work, Health & Wealth.

[See references for Juan Enriquez' bestseller at "Genome Based Economy". Pellionisz, HolGenTech_at_gmail.com, July 7th, 2009]


Biology's Odd Couple

Jun 27, 2009
By Lily Huang

George Church (left), Craig Venter (right)

Healthy Competition Advances the Field of Biology

About 10 years ago, biology entered betting season. An upstart scientist named J. Craig Venter jolted the genetics establishment by launching his own gene-sequencing outfit, funded by commercial investment, and setting off toward biology's holy grail - the human genome - on his own. It was Venter versus the old guard - old because of where they got their money (governments and trusts) and the sequencing technique they wanted to hold onto. Venter won that race, and not because he got there first. By combining the freedom of academic inquiry and commercial capital, he came up with a new way of doing science so effective that it forced the old institutions to either ramp up or play second fiddle.

With Venter's momentum, biology has continued to surge into new territory, but now he's not alone in pushing the pace. In fact, with his staff of hundreds at the J. Craig Venter Institute, he is looking dangerously like the establishment he raced past almost a decade ago. Another maverick in the stable, Harvard biologist George Church, is a titan in the academic world, tackling the major challenges of genomic-age biology with an ingenuity distinct from Venter's. Both are building on the foundation of DNA sequencing, trying to drive down the cost of decoding individual genomes and - the more radical enterprise - using their digital control of cells and DNA to design new organisms. Between them, Venter and Church direct or influence a major portion of work in both sequencing and synthetic biology, including three different commercial efforts to develop bacteria that could produce the next generation of biofuels.

There's reason to believe that Church has a decent chance of unseating Venter as biology's next wunderkind. The field of genomics is only at the beginning of its growth spurt - sequencing, it turns out, was just phase one. Far from producing answers, the sequenced genome has instead led scientists into a thicket of questions: What exactly do combinations of genetic code produce in an organism over a lifetime? If we can read the script, can we also write it? Leading science out of the genomic wilderness arguably calls for a vision more deeply imaginative than the task of the Human Genome Project, which was clearly framed and, at heart, a code-reading slog. Radical invention - the kind of out-of-left-field inspiration that makes a thinker either brilliant or totally unrealistic - is the strength of Church, as opposed to Venter, who is more of an aggregator, a connector of existing ideas and methods. The script of this new biology is largely unwritten, and just because Venter turned the first page doesn't mean that in the end his vision will prevail. "Sometimes," Church says, "it's best to be second."

The quest for ideas farther afield may be one reason Venter joined the Harvard faculty this spring - his first academic post since 1982. (Venter declined to be interviewed for this article.) He and Church are even members of the same research initiative, called Origins of Life, where they're investigating life in its most basic genetic and molecular forms. Venter's participation is a sign of just how widely applicable the high-concept work of the university could be. More than ever, over the uncarved terrain of the new biology, Venter and Church are blurring the distinction between the academic and the commercial. Steven Shapin, a Harvard historian of science, says that at this point we must "stop categorizing - and just look at what these people are doing." On top of all the daring science, Venter and Church are also conducting a "sociology experiment": "They're making up their own social roles," Shapin says, "making up themselves." All the while, Church insists that he and Venter are "not right on top of each other" but are "part of the same ecosystem," fulfilling different roles. Then again, Shapin points out, "the lion and the wildebeest are in the same ecosystem." The question is, who's the lion?

If you were to speak of George Church as an underdog to any of his university peers, you would probably get a laugh: with more than a dozen graduate students and 18 postdoctoral researchers, he runs one of the biggest labs in the richest university in the world. Next to Venter's institute, though, his still feels like a scrappy outfit in the corner. But he likes it this way: "Sometimes - not always - the smaller operation is more nimble," he says. Church's group has produced prototypes for some of the second-generation DNA-sequencing machines, which he hopes will help bring down the cost of sequencing genomes to the point where your genes can be consulted as routinely as X-rays.

At the moment, both Venter and Church are working to construct rudimentary organisms. The promise of this technology is difficult to exaggerate. By altering the chemistry of organisms, manipulating genomes and even constructing parts of cells, they can engineer tools out of living things. Both Church and Venter think of cells as machinery. Announcing his latest breakthrough in March with the synthesis of ribosomes, the all-important protein generators of the cell, Church used a hot-rod analogy: "It's like the hood is off and you can tinker directly." Venter has described his own work with reengineering cells in terms of a PC: "We can boot up a chromosome ... boot up a cell."

As Church and Venter lay the groundwork for a new way of understanding and using biology, their respective approaches reveal their essential differences. Venter's great stride toward designing life forms was in transplanting the genome of one bacterium into another - two different species of the genus Mycoplasma. The transplanted genome took over its new cells and turned them into cells of its own species. Preceding Church at the Harvard lectern in March at an Origins of Life symposium, Venter described this as creating "software that makes its own hardware" - but in truth both software and hardware were already present and living; he came up with a different combination, and got it to do something completely novel. Church, in making ribosomes, has surmounted a different kind of barrier. The ribosome is regarded as the living cell's most irreducible part, and something common to every kind of cell - those that make up bacteria as well as plants and humans. The physicist Freeman Dyson has spoken of the ribosome as the key to the origin of life; two years ago, at an intimate gathering of some of the world's most imaginative scientists on a Connecticut farm, Dyson told Church, Venter and the three other researchers present that "the invention of the ribosome is the central mystery" of how living things ever came to be. Church has now managed to take a ribosome apart and build it up again, which means he can make something even more primitive - until, with a simple collection of atoms, he jump-starts a living organism of his own making. "I'm not quite ready to say that we have connected all the dots," he says, but it's now conceivable that "you can get from chemicals to RNAs, to smallish ribosomes, to full ribosomes, and then to a cell."

Right now, for both scientists, the bacterial equivalent of a hot rod is an organism that can consume carbon dioxide and make engine fuel. Last year Venter told Newsweek that Synthetic Genomics, the commercial counterpart of his nonprofit research institute, was one or two years away from producing its first fuels. Church, though, had already founded a startup, LS9, in 2005 to develop a commercial product. The idea behind both ventures is to exploit the ability of natural bacteria to turn sugar into fatty acids, which are only a few chemical steps removed from diesel fuel.

At this stage, both Church and Venter welcome a crowded playing field, with different startups testing a variety of approaches, but this race, more than that for the human genome, has a far more tangible prize for whoever is first-or maybe, if they succeed better, second. "There will be convergence on whatever works," Church says. "Until there's actually somebody making a lot of money, there's not going to be convergence." In the meantime, Church cheerfully points out that Venter is manipulating the wrong kind of bacterium. While he and others are using E. coli, Venter has stuck with Mycoplasma, which has very few genes to manipulate but grows far more slowly and has a sensitive membrane, so that it is likely to come apart on contact with the fuel it's meant to produce. "He's like Captain Ahab," says Church of Venter. "The Mycoplasma is his white whale. He decided that small is beautiful and he's going to synthesize it. Partly because he wasn't prepared to change the technology enough so he could synthesize something bigger."

Church is, foremost, an inventor in the purest sense, someone who would make something completely new to perform a function that no one even thought might be helpful. His chief preoccupation in graduate school was making an automated DNA sequencer that could process vast amounts of data as quickly as possible. In 1979, even people in his own lab didn't see why you would ever want something like that. "That was really ridiculously out of touch with where the market was," Church admits now, but his eyes smile. Years later, Leroy Hood, at Caltech, made the prototype that became the ABI 3700, the first-generation automated sequencer that inspired Venter to crash through the gates of the genome. Hood disparaged that early model as the equivalent of a Ford Model A, but Venter couldn't wait; he pushed on with it, worked out the inevitable bugs and, by running 300 imperfect machines instead of 230 perfect ones, ground out the human genome. Church, though, was already working ahead.

Venter's genius lies in using invented technologies and techniques to produce unexpected breakthroughs. The ABI 3700s, those Model A's, nevertheless became famous because of what he got them to do. The shotgun sequencing technique didn't originate from him, but he showed the range of its utility, first by sequencing whole genomes, and then by taking genetic snapshots of the ocean and the earth's soil by sequencing samples of living things. When he saw how the ABI machine worked, he realized that all the parts needed for a new genomic age were now in place: a collection of complementary DNA plasmids; a company that purified those plasmids, so they could be sequenced; an automated sequencer; and a public database where sequences of genes could be stored. The connections Venter saw between these four groups gave rise to his vision.

There is a price, though, to precipitous application: though Venter sequenced the first diploid human genome (his own, completed in 2007) for far less than the $3 billion originally projected by the federal budget, it was still on the order of $70 million-for one genome. Church, using his own second-generation sequencing instruments just two years later, has now sequenced 95 percent of his genome, while running a tab of about $5,000. He simultaneously sequenced the genomes of nine other people, too, to launch the Personal Genome Project, an open database of genomes matched with each individual's phenotypic traits and medical history. The aim is to amass a statistically significant pool of data that would begin to show the complex connections between a person's genes and the traits and diseases that actually manifest in one's life. The project now has more than 13,000 volunteers for sequencing, and Church hopes to collect 100,000. None of this would have been possible with first-generation sequencing technology, and, says Church, "I didn't really want to do it until the price was right."

When asked, at the Connecticut retreat, how their work was different, Church replied, "Craig is more productive." To which Venter graciously added, "I use George's techniques." As they build the new biology, they have moved closer and closer into each other's orbit, perhaps the better to see, in the work of the other, how the future is shaping up. And though their work gets at the core of living things-in ways that may give humans control over the very process that created life-they are capable of an almost comical diffidence. This isn't "playing God": "You're certainly not creating a universe," said Church at the discussion table in Connecticut. "You're constructing things."

"You're only so big," Venter added.

"Pretty small," agreed Church. "Pretty small."

[When I had the pleasure of listening to both at the DOE Conference, I could not help branding George Church "The Edison of postmodern Genomics", and Craig Venter "The Tesla of postmodern Genomics". While every analogy is imperfect, neither of them seemed to mind, and I encountered many who found the analogy reasonable, though obviously with some major differences. Pellionisz, HolGenTech_at_gmail.com, June 30th, 2009]


Beyond the Book of Life

Published Jun 27, 2009
By Stephen S. Hall

[Genomics and EpiGenomics are the two sides of the same coin; HoloGenomics. A new science, based on recursion - translating into new business - AJP]

Roll over, Mendel. Watson and Crick? They are so your old man's version of DNA. And that big multibillion-dollar hullabaloo called the Human Genome Project? To some scientists, it's beginning to look like an expensive genetic floor pad for a much more intricate—and dynamic—tapestry of life that lies on top of it.

There's a revolution sweeping biology today—begrudged by a few, but accepted by more and more biologists—that is changing scientific thinking about the way genes work, the way diseases arise and the way some of the most dreadful among them, including cancer, might be diagnosed and treated. This revolution is called epigenetics, and it is not only beginning to explain some of the biological mysteries that deepened with the Human Genome Project; because of a series of accidental events, it is also already prolonging the lives of human patients with deadly diseases.

Over the past several years, and largely without much public notice, physicians have reported success using epigenetic therapies against cancers of the blood and have even made progress against intractable solid-tumor malignancies like lung cancer. The story is still preliminary and unfolding (dozens of clinical trials using epigenetic drugs are currently underway), but Dr. Margaret Foti, chief executive officer of the American Association for Cancer Research, recently noted that epigenetics is already resulting in "significant improvements" in cancer diagnosis and therapy. "It's really coming into its own now," she said. Leaping on the bandwagon, the National Institutes of Health made epigenetics the focus of one of its cutting-edge "Roadmap" initiatives announced last fall.

"I think we were all brought up to think the genome was it," says C. David Allis, a scientist at Rockefeller University whose research in the 1990s helped catalyze the current interest in epigenetics. "But even when the genome was a done deal, some people thought, 'Is that the whole story?' It's really been a watershed in understanding that there is something beyond the genome."

The emergence of epigenetics represents a fundamental rethinking of how molecular biology works. Scientists have learned that while DNA remains the basic text of life, the script is often controlled by stage directions embedded in a layer of biochemicals that, roughly speaking, sit on top of the DNA. These modifications, called epimutations, can turn genes on and off, often at inappropriate times. In other words, epigenetics has introduced the startling idea that it's not just the book of life (in the form of DNA) that's important, but how the book is packaged.

At one level, this higher order of control makes perfect sense. Biologists have long known that developing organisms—humans included—need a full complement of genes at the moment of fertilization, but that many genes subsequently get turned on and off as the embryo develops. In humans, this is a lifelong process. There are genes for a fetal version of hemoglobin, for example, and then an adult version that kicks in after birth; through epigenetic control, the fetal genes are permanently turned off at a certain stage of development, and the adult genes are permanently activated. As each one of us developed from a fertilized egg, stem cells in the early embryo matured into brain cells, liver cells and indeed several hundred specialized cells and tissues; at each step of that maturation process, our DNA was modified. When we entered puberty, quiescent genes were suddenly activated. And as we age, the dings of earlier life experiences seem to shape the activity of our DNA. Many if not most of those changes are epigenetic in nature, where the DNA itself remains unchanged, but the packaging has been dramatically perturbed; animal experiments suggest that environmental factors, from childhood diet and maternal care to stress, can play epigenetic havoc with our basic DNA hardware.

The interest in epigenetics has assumed critical mass in the past 10 years for several reasons. The Human Genome Project, often touted as "biology's moonshot," provided the basic text of life, in the form of the complete human sequence of DNA, but scientists have had a hard time linking specific genetic causes to many common illnesses. The role of "misspelled" DNA (in the form of both classic mutations and genetic variation, first teased out in the 19th century by the monk Gregor Mendel) has turned out to explain, in the words of a recent New England Journal of Medicine commentator, "only a small fraction of disease." "We were all raised on the Watson and Crick concept of DNA-driven inheritance," Allis says. "It turns out that epigenetics may be even more responsible for gene expression and disease than DNA alone, especially in more advanced multicellular organisms." In the 1990s, meanwhile, scientists like Allis reported basic but breathtaking discoveries that showed how several groups of enzymes, common to every cell, could create epimutations without ever changing the DNA script.

Basic research has shown that enzymes can tamper with genetic information in at least two distinct ways. In some cases, the on-off switch of a gene can be smothered when an enzyme attaches chemicals to the DNA; known as DNA methylation, this process essentially silences a gene that should be on. In other cases, a separate class of enzyme improperly disrupts the normal cellular packaging of DNA. Typically, the gossamer thread of DNA is wound around a spool of protein called histone; when this second class of enzymes strips away part of the packaging, the DNA becomes so tightly wound up that it can't loosen up enough to be read by the cell. In effect, the slip jacket for specific genes is so tight that it's impossible to crack open the spine and get a glimpse of the genetic text. Conversely, sometimes genes that should remain permanently interred in a tomb of histone suddenly come back to life, like some cellular version of Night of the Living Dead.
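The two silencing mechanisms described above—methylation smothering a gene's on-off switch, and histone-packaging changes locking the DNA too tightly to be read—can be sketched as a toy state machine. This is purely illustrative (the class and function names are invented for this sketch, not taken from any real bioinformatics library):

```python
# Toy model of the two epigenetic "locks" described in the article:
# promoter methylation and compacted histone packaging. A gene is
# readable only when neither lock is engaged; an "epimutation" flips
# a lock without ever changing the DNA sequence itself.

from dataclasses import dataclass

@dataclass
class Gene:
    name: str
    promoter_methylated: bool = False   # DNA methylation "duct tape"
    histones_compacted: bool = False    # too-tight "slip jacket" packaging

    def is_expressed(self) -> bool:
        return not (self.promoter_methylated or self.histones_compacted)

def methylate(gene: Gene) -> None:
    """Methyltransferase-like step: silence the gene (an epimutation)."""
    gene.promoter_methylated = True

def demethylate(gene: Gene) -> None:
    """Azacitidine-like step: strip the methylation and restore reading."""
    gene.promoter_methylated = False

tumor_suppressor = Gene("TSG1")
assert tumor_suppressor.is_expressed()        # normal cell: gene on
methylate(tumor_suppressor)                   # epimutation: gene silenced
assert not tumor_suppressor.is_expressed()
demethylate(tumor_suppressor)                 # epigenetic therapy reverses it
assert tumor_suppressor.is_expressed()
```

The point of the sketch is the asymmetry with classic mutation: the `name` and the sequence it stands for never change, yet expression toggles—which is exactly why epigenetic drugs can reverse silencing that a mutation-targeted drug could not.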

In the past five years, the evidence has become "absolutely rock solid" to cancer researchers that epigenetic changes play a fundamental role in cancer, according to Robert A. Weinberg, an elder statesman of cancer biology at the Whitehead Institute in Cambridge, Mass. DNA methylation, he adds, "may ultimately be far more important than gene mutation in shutting down tumor suppressor genes," one of the cell's main mechanisms to short-circuit an incipient cancer.

Each epigenetic change seems to leave a chemical flag, or "mark," on the DNA, and hence researchers are intensely cataloging these marks into "epigenomes" as a possible clue to diagnosis, prognosis and perhaps even prevention of disease. Unlike genetic markers, which reveal small "typographic" variations in the spelling of genes, epigenetic markers indicate places where entire genes have been silenced or activated. Paula Vertino of the Emory University School of Medicine, for example, has identified patches of DNA that seem especially prone to be inappropriately silenced or activated in breast and lung cancer; researchers at Johns Hopkins have used epigenetic markers in brain-cancer cells to predict which patients are likelier to benefit from chemotherapy. Recent laboratory findings suggest that deciphering the layers of genetic control modifying DNA has implications not just for cancer, but also for chronic diseases associated with aging, like heart disease and diabetes; for mental disorders like autism and depression; for stem-cell biology; and even for our notions of what constitutes an inherited disease. Everything is up for grabs.

"There's only one genome," says Wolf Reik, professor of epigenetics at the University of Cambridge in England, "but hundreds of epigenomes." And unlike string theory in physics, for example, epigenetics is neither an exotic intellectual idea nor a theory awaiting verificationby future data. The biology is real, and the practical effects have already reached the bedside.

In the 1990s, Stephen Baylin of Johns Hopkins University led the effort showing that epigenetic changes in DNA were associated with cancer; in fact, disruptions in tumor suppressor genes, which normally protect cells against cancer, are more often due to epigenetic silencing than outright mutation. In May, Baylin and Peter Jones of the University of Southern California received a three-year, $9.1 million grant to launch accelerated testing of epigenetic therapy in patients with lung, colon and breast cancer, with interim results promised within a year. The Hopkins group has presented preliminary results at recent meetings showing that a combination of two epigenetic drugs produced several responses (including one complete remission) in patients with advanced lung cancer. "The trials are still ongoing, and we don't know what percentage of patients will respond, if it will be 10 or 20 percent," says Baylin. "But we have had very robust responses, of both primary tumors and metastases, in non-small-cell lung cancer." "That's just extraordinary," says Foti of AACR, noting the poor prognosis for patients with these advanced cancers.

If the amount of clinical testing seems surprising, it's probably because the medical part of the epigenetics story is unfolding in reverse: doctors had the drugs long before they had a theory suggesting how to use them properly. Indeed, several of the drugs now being tested against cancer have been around for decades, but in the past were used in the wrong way for the wrong reason. Azacitidine, for example, was first discovered in Czechoslovakia in the 1960s as a traditional chemotherapy drug, and doctors used it to kill cancer cells the old-fashioned way: giving as much as patients could tolerate. Jones, a South African by birth who now heads the Norris Comprehensive Cancer Center at USC, discovered in the 1980s that the drug had another mode of action: it could turn genes back on by stripping away the "duct tape" of DNA methylation that muffled genes. This suggested a different kind of attack on cancer—not by killing cancer cells outright, but by reversing the epigenetic changes that make a cell cancerous in the first place.

In the 1980s, as a young oncology fellow at Mount Sinai School of Medicine in New York, Lewis Silverman proposed testing azacitidine as an epigenetic drug—that is, at lower doses than is typical for traditional chemotherapy, where it still might be effective at reversing silenced genes. Silverman has since shown that low doses of the drug reduce the symptoms of a type of leukemia and allow patients to live longer. The Food and Drug Administration approved azacitidine in May 2004; the drug is now marketed as Vidaza.

A different class of epigenetic drug has emerged from work at Harvard, Columbia and Memorial Sloan-Kettering Cancer Center in New York. In addition to the silencing effect of methylation, genes can be turned on and off by enzymes that tighten or loosen the packaging of DNA. Paul Marks and Ronald Breslow at Columbia created a small molecule, called vorinostat, that blocks the action of the enzymes that tamper with DNA's packaging, thus turning inactivated genes back on. That drug was approved by the FDA in 2006 for a rare form of lymphoma and is now being tested against a number of other cancers; Merck markets the drug as Zolinza. Part of the current clinical excitement is that there are already hints that combinations of these and second-generation drugs may be more effective at reversing the epigenetic changes in cancer cells.

Researchers remain guarded in their optimism. Jean-Pierre Issa of the M.D. Anderson Cancer Center concedes that the first-generation epigenetic drugs have not included a home run like Gleevec, the molecular treatment for chronic myeloid leukemia that produces dramatic and lasting remissions. And it is not unusual for deleterious side effects to become more apparent as drugs are used more widely—a particular concern in the case of drugs that have the potential to modify gene expression broadly in normal cells. But people who have witnessed the explosion of promising results in the past year have difficulty suppressing their excitement. "The promise is staggering," says Allis.

The stakes in epigenetics go well beyond clinical therapies, however. There have been hints from laboratory experiments and epidemiological studies that epigenetic changes in one generation—caused, for example, by smoking or diet—can be passed on to children and even grandchildren. Reik, who is also associate director of the Babraham Institute in Cambridge, is investigating how the overlay of epigenetic changes is erased from DNA when mice make their germ cells—how all the epigenetic changes, like some microscopic version of duct tape, get stripped off the DNA that goes into the sperm in males and eggs in females. "People are now beginning to realize that there are probably things that don't get wiped out or erased in the germ cells," he says, "so these are so-called epimutations that can be passed on from parents to children and to grandchildren—not genetic changes passed on, like Mendel, but an epimutation.

"We don't know how common this might be," Reik adds, choosing his words carefully, "but it's potentially quite revolutionary. It's not only challenging Mendel, but potentially challenging even Darwin. We are very careful when we talk about these things."

[Well, both Francis Crick and Francis Collins were very, very explicit. Francis Crick, when he re-confirmed his 1956 mistake of the "Central Dogma" in 1970 - rather than allowing it to gracefully expire - upped the ante by stating that if it were shown that information could flow from proteins to nucleic acids, then such a finding would "shake the whole intellectual basis of molecular biology". Also, when releasing the "ENCODE Pilot Results" in 2007, Francis Collins stated "The scientific community will need to rethink some long-held views". With both the "Central Dogma" and "Junk DNA" theoretical mistakes gracefully dismissed, The Principle of Recursive Genome Function a year ago (June 20, 2008) did exactly what both Drs. Crick and Collins called for - Pellionisz, HolGenTech_at_gmail.com, June 29th, 2009]


See how shopping with your PDA and personal genome can advise you on best choices for you

DEMONSTRATION of Personal Genome Shopping


DISCUSSIONS of Personal Genome Shopping here and here

VOICE your Contribution to the Discussion in e-mail

HolGenTech Demonstrates first-ever PDA Combination with High Performance Genome Computing at Boston Consumer Genetics Conference

PRweb Press Release
June 19, 2009

A stunned audience embraced the Genome Based Economy when HolGenTech Founder Dr. Andras Pellionisz debuted the first-ever consumer application for genome computing architected with High Performance hybrid GenomePC combined with a Personal Digital Assistant at the Consumer Genetics Conference in Boston on June 10. Affecting individual choices and creating new awareness and understanding of how the world around us impacts the one within us, the personal genome used by PDA presented a new vision of customers tooling up for the future imposed on us by breakthrough genomics and computing.

Sunnyvale, CA (PRWEB) June 19, 2009 -- The Google Phone demonstration introduced the imminent reality of the Genome Based Economy, as presented by HolGenTech Founder Dr. Andras Pellionisz at the "first ever" Consumer Genetics Conference in Boston the morning of June 10. Within hours, Illumina's CEO Jay Flatley featured a different business model application for personal genomes in the Apple iPhone. The envisioned screenshots of handheld device applications intended for personal genomes stunned the audience of approximately 400 with a view into how practical applications of our personal genomes will change everything. Dr. Pellionisz illuminated the potential of the personal genome when applied to shopping in the Genome Based Economy.

Using the Google Phone's built-in bar code reader, Dr. Pellionisz demonstrated how personal genome computing can detect genome-friendly and genome-supportive products from foods to cosmetics to building materials and beyond. In the envisioned application, the PDA user was assumed to have a genomic proclivity to Parkinson's disease. The demonstration leveraged the Google Phone's bar code reader to capture product information and a product rating scale to identify the prevention efficacy of any product under consideration. The consumer is equipped to make immediate product comparisons based on both personal health-preferences and genomic information with special consideration of the disease or syndrome of concern.
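The shopping flow just described—scan a bar code, look the product up, and score it against the user's genomic predisposition—can be sketched in a few lines. All product data, ingredient names, and scores below are invented for illustration; the actual HolGenTech scoring algorithm is not public:

```python
# Hypothetical sketch of genome-aware product rating: a scanned bar code
# is resolved to a product, and each ingredient is scored against a
# knowledge base for the user's predisposition (here, Parkinson's).
# Every value in these tables is a placeholder, not medical advice.

# Toy product database keyed by bar code (EAN/UPC string).
PRODUCTS = {
    "0123456789012": {"name": "Green tea",   "ingredients": {"EGCG"}},
    "0987654321098": {"name": "Herbicide X", "ingredients": {"paraquat"}},
}

# Toy knowledge base: ingredient -> effect score for the predisposition
# of concern (+1 supportive, -1 to avoid; 0 for anything unlisted).
PARKINSONS_EFFECTS = {"EGCG": +1, "paraquat": -1}

def rate_product(barcode: str, effects: dict) -> int:
    """Return a simple prevention-efficacy score for a scanned product."""
    product = PRODUCTS[barcode]
    return sum(effects.get(i, 0) for i in product["ingredients"])

assert rate_product("0123456789012", PARKINSONS_EFFECTS) == 1   # recommended
assert rate_product("0987654321098", PARKINSONS_EFFECTS) == -1  # avoid
```

In a real deployment the two dictionaries would be replaced by a product-information service and a curated gene-environment knowledge base, with the user's genome analysis selecting which effect table applies.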

PDA Recommended Consumer Activity in the Genome Based Economy.

The HolGenTech Google Phone demonstration illustrated practical application of the personal genome, emphasizing how Personal Genome Analysis can be used as a tool for an individual to pursue Personalized Consumer Activity based on one's DNA. The demonstration elucidated how disease prevention--or health--hinges on optimizing epigenomic pathways through foods, food additives, vitamins, cosmetics, chemicals, and environments to best fit or fix one's genome. In a completely different and contrasting model, Illumina's iPhone demonstration focused on reading one's genome information for the purpose of comparison to others'.

The Personal Genome Computer -- a tool for the Genome Based Economy

"The Personal Genome Computer is the catalyst tool for the Genome Based Economy. We look to chip-makers Intel, AMD, Xilinx, and Altera, and to integrators like HP, Dell, DRC, and even IT giants like Google and Microsoft, for next developments in parallel processing to produce HPC desktop- and server-lines as the IT infrastructure of the Genome Based Economy," said Dr. Andras Pellionisz. "As high performance computing, custom algorithms and post-ENCODE genomics align, more personalized medicine and health care, wellness, prevention, and DNA-informed personal and lifestyle choices are realized. With affordable access to and utility of our personalized genome we can experience personalized everything from health care to food to clothing to housing and environmental choices, even to friends....everything suited to one's personal genome."

In the HolGenTech presentation in Boston, Pellionisz outlined the unprecedented computing challenge posed by the oncoming avalanche of affordable full genome sequences, which require in-depth analysis to be useful. Today "Direct-to-Customers" genome testing companies, such as DeCodeMe, 23andME, and Navigenics, rely on Illumina/Affymetrix microarray-technology to interrogate up to 1.6 million SNP-s (single nucleotide polymorphisms, point mutations among the 6.2 billion A,C,T,G letters, or nucleotide bases, of human DNA), though the field of genomics is already beyond SNP-s and awaits the next developments in nano-sequencing technology, which promise affordable and readily available personal genomes by the second half of 2009.
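A microarray report of the kind these companies sell is, at its simplest, a table of genotypes at known SNPs; an additive risk score weights each risk allele a customer carries. The rsIDs, risk alleles, and weights below are made up for illustration; real DTC reports derive weights from published odds ratios per variant:

```python
# Minimal sketch of turning microarray-style SNP genotypes into an
# additive risk score. All identifiers and weights are hypothetical.

# rsID -> (risk_allele, weight)
RISK_MODEL = {
    "rs0000001": ("T", 0.30),
    "rs0000002": ("G", 0.15),
}

def risk_score(genotypes: dict) -> float:
    """Sum weight * (number of risk alleles carried) over the model's SNPs.

    `genotypes` maps rsID -> two-letter genotype string, e.g. "CT".
    """
    score = 0.0
    for rsid, (risk_allele, weight) in RISK_MODEL.items():
        genotype = genotypes.get(rsid, "")
        score += weight * genotype.count(risk_allele)
    return score

# A customer heterozygous at both SNPs (one risk allele each):
assert abs(risk_score({"rs0000001": "CT", "rs0000002": "GA"}) - 0.45) < 1e-9
```

The limitation Pellionisz points to is visible in the structure itself: a flat lookup over point mutations cannot represent copy number variations, repeat motifs, or the recursive regulatory defects that full-sequence analysis would have to search for.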

Landmark Principles in Action

The Consumer Genetics Conference coincides with the second anniversary of the epoch-making release of ENCODE results, when mastermind Francis Collins, the most prominent keynote speaker at the Boston conference, declared, "the scientific community had to re-think long-held fundamentals." In response to Francis Collins' direct call, Dr. Pellionisz published one year ago the landmark paper The Principle of Recursive Genome Function. Emerging from the conference, a new consensus demands advanced computational mechanisms to search for much more complex (mal)formations of full DNA sequences. While the new Illumina microarray extends utility up to 4 million points and includes the ability to spot over 10 thousand "copy number variations," Pellionisz' research points to the problem of algorithmic, recursive genome regulation, where genome regulation derailments require intricate searches for cancer-stopping microRNA-s, silent mutations, repeat motifs and fractal defects.

About HolGenTech

HolGenTech leads the way in accelerating genome computing analysis with software addressing the applications for a Genome Based Economy. The Silicon Valley based company was founded on the basis of HoloGenomics: the synthesis of Genomics and Epigenomics expressed in Informatics. The company is currently engaged in securing Series A or micro-funding for a rapid ignition to develop the personal genome based consumer shopping model.


Suzanne Matick: Media and Investor Relations

Google invests another $2.6 million into 23andMe, a biotech startup founded by Mrs. Sergei Brin

Tech Startups 3.0
Thursday, June 18, 2009

Google has invested another $2.6 million in 23andMe, the startup founded in 2006 by Sergey Brin's wife, Anne Wojcicki.

Mountain View-based 23andMe analyzes people's DNA to detect potential health problems before they occur.

Brin, 35, discovered from a 23andMe scan that he carries a genetic mutation increasing his risk of developing Parkinson's. He used a sliver of his $12 billion fortune to personally invest $10 million in his wife's company. 23andMe provides genetic testing for over 100 traits and diseases as well as DNA ancestry.

23andMe is a privately held personal genomics and biotechnology company based in Mountain View, California that is developing new methods and technologies which can enable consumers to understand their own genetic information. In December 2007, three companies, 23andMe, Navigenics and deCODE, announced the availability of $999 to $2500 tests for genome-wide, select single nucleotide polymorphisms. 23andme dropped their price to $399 in Sep 2008. Coriell, a nonprofit, offers its service for free. Google has invested $3.9M in 23andMe, whose co-founder Anne Wojcicki is married to Google co-founder Sergey Brin. Genentech is also reported to have invested in 23andMe.

[Blog entry]

KBWetters said...

If the money helps consumers better understand and prepare for their future health issues, so what, is that all bad? Is it a bad business decision if he gained insight personally? Aren't we being a little judgmental, perhaps? A large number of people could benefit from this investment in the long run and make the company a success to boot!

6/18/2009 05:08:00 PM


San Francisco Business Times

Google Inc. invested $2.6 million more in 23andMe Inc., a Mountain View personal genetics company started by the wife of Google co-founder Sergey Brin.

The investment this month in a Series B preferred stock financing raises Google’s investment in the company that Anne Wojcicki co-founded to $7 million.

The latest investment was disclosed in a Google filing Thursday with the Securities and Exchange Commission.

Google, also based in Mountain View, continues to hold a minority interest in 23andMe.

Privately held 23andMe charges $399 to tell consumers if they have a genetic predisposition to certain diseases. Results are delivered within four to six weeks.

“We believed 23andMe’s technology had promise the first time we invested and we continue to believe that now,” Google spokeswoman Jane Penner told the Wall Street Journal.

New Enterprise Associates of Menlo Park was among the other investors in the preferred stock financing, according to the SEC filing.

Prior to Google’s latest investment, Google said in the filing, Brin invested $10 million in a 23andMe convertible debt financing. That was converted to preferred stock through 23andMe’s recent fundraising.

Brin, Google’s president of technology, holds about 38 percent of Google’s Class B common stock.

Google said its audit committee reviewed and approved the investment.

According to the Journal, funds from Google’s first $3.9 million investment in 23andMe Series A preferred stock in May 2007 were used to repay a $2.6 million loan that Brin had made to 23andMe.

Google in November 2007 spent about $500,000 to buy additional Series A shares of 23andMe from an undisclosed investor.

Google also said it entered into a lease agreement with 23andMe, but it did not disclose further details beyond saying that the terms and conditions of the lease were reviewed by an independent real estate appraiser. That deal also was reviewed by Google’s audit committee.

The audit committee is made up of Kleiner Perkins Caufield & Byers general partner John Doerr; Ram Shriram, managing partner of Sherpalo, an angel venture investment company; and Pixar CFO Ann Mather.

Team homes in on genetic causes of neuroblastoma [the era of SNiP-s is OVER! - AJP]

Wed Jun 17, 2009 4:37pm EDT

By Julie Steenhuysen

CHICAGO (Reuters) - A missing stretch of DNA on a chromosome involved in nervous system development may help explain why some children are predisposed to a deadly type of tumor called neuroblastoma, researchers reported on Wednesday.

The study is the first to show that repeats or deletions of a genetic sequence -- as opposed to "spelling mistakes" in the four-letter genetic code -- influence cancer risk, said Dr. John Maris of the Children's Hospital of Philadelphia, whose study appears in the journal Nature.

Hundreds of studies show that inherited genetic variations called single-nucleotide polymorphisms, or SNPs, play a role in the development of cancer. These are one-letter changes in the genetic code.

"This is the first paper to really show that copy number variation -- which is just another mechanism of evolution for why you and I are different -- can be involved in predisposition of cancer," Maris said in a telephone interview.

Maris and colleagues also identified an entirely new gene -- neuroblastoma breakpoint family 23 or NBPF23 -- that plays a role in neuroblastoma. It is part of a family that was previously associated with the cancer, he said.

The research adds to a growing understanding of the genetic causes of neuroblastoma, a cancer affecting developing nerve tissue that accounts for 15 percent of all cancer deaths in children.

Last year, Maris identified three genes that affect the risk of aggressive neuroblastoma.

"Only two years ago, we had very little idea of what causes neuroblastoma," Maris said in a statement. "Now we have unlocked a lot of the mystery of why neuroblastoma arises in some children and not others."

His team scanned the genetic code of 846 white children with neuroblastoma and 803 seemingly healthy children. They found that children who were missing a stretch of genetic code on chromosome 1 -- which is especially important for nerve development -- are predisposed to develop the cancer.

"What our paper shows is that if you have copy number variations on chromosome 1, you are more likely to develop neuroblastoma," Maris said.

Maris suspects copy number variation plays a role in how much of the NBPF23 gene is produced.

"It will take future study to understand whether or not it leads to a more aggressive or less aggressive form of the disease," he said.

Maris suspects copy number variation will be found to play a role in other cancers. He hopes the findings will lead to more targeted treatments for neuroblastoma.

["Copy number variations" - stretches of hundreds or thousands of A,C,T,G-s repeated in one's genome a certain times, and a different times in someone else's DNA, are a "mystery" - except for the fractal/chaotic (FractoGene) principle of recursive genome function. Functional recursion from proteins to DNA (for over half a Centry a "forbidden" regulatory mechanism by Crick's "Central Dogma" that claimed that Protein>DNA information "never happens"; 1956-2008) - not only led a year ago on June 20, 2008 to "The Principle of Recursive Genome Function", but also requires the material basis for such recursion to pick up supplementary information from the DNA at every turn. Formerly, those stretches were discarded as barren "Junk DNA" (devoid of information, by mistake of Ohno, 1972). The fractal/chaotic organization of DNA only now turns from "lucid heresy" running against two cardinal but mistaken dogmas into key of understanding derailed recursions (cancers). The field of "SNP-hunting" was a good beginning, but by means of the new Illumina microarray (also interrogating CNV-s in addition to SNP-s) and with the avalanche of affordable full DNA sequencing upon us "full fine-combing" of DNA for all kinds of (mal)formations will be possible, assuming that Personal Genome Computers will take the load of massive computation. - Pellionisz, HolGenTech_at_gmail.com, June 18th, 2009]


Personalized medicine: An interview with Esther Dyson

What Matters
McKinsey & Company
12 June 2009

Esther Dyson speaks about her participation in the Personal Genome Project, an initiative that aims to build and correlate genetic databases and personal risk factors, and 23andMe, her commercial venture that offers consumers the ability to read and understand their DNA. McKinsey’s director of publishing, Rik Kirkland, conducted this interview with Esther Dyson... Watch the video, or read the transcript below.

See interview with Esther Dyson here

Rik Kirkland: We are here today with Esther Dyson, who has a wonderful range of things she’s up to these days. A lot of companies you are investing in through EDventure, which is the firm you’ve created. You have put your genome up at the Personal Genome Project for the world to see, along with your health records. And you’re very excited, among other things, about the future of the potential of information technology and biotechnology to change the face of health care. So, thanks for being with us today. And I’d like to hear a bit about Esther’s excellent adventures in health care.

Esther Dyson: The biggest point, in a sense, is we need to focus not just on health care but on health. And that ranges all the way from helping people to change their own behavior—so that they end up healthier and need less care and also enjoy more health—to going to the bowels of the health care system and helping hospitals to become more efficient and avoid mistakes.

In another direction, there’s the kinds of things that more genetic information will enable us to do in creating better and more targeted drugs. Ryan Phelan, [founder and CEO of] DNA Direct, said to me once, “You’d no more think of getting a drug without knowing your genome than you’d think of getting a blood transfusion without knowing your blood type.”

Rik Kirkland: Shifting from delivering volumes of outputs, care, to actually creating health at the personal level—sort of the common thread of both these things?

Esther Dyson: Right. And everything the Internet tells you about personalization and personal data in comparing yourself to other people. And mass marketing turning into the market of one. That’s what we’re going to see in both health and health care.

Rik Kirkland: Talk a little bit about the genomics side and tell us why you did this and what you hope to get out of that. And of course, you’re also involved in the board and as an investor in 23andMe. So, tell us a little bit about that.

Esther Dyson: The idea behind the Personal Genome Project was created by George Church at Harvard Medical School. It begins with ten people who volunteer to put their entire genome and all their health records online for anybody to see, with our identities visible. And the idea there was, in a sense, to demystify it. And then, ultimately, to [map] hundreds of thousands of people, so that we’d have a lot more data to [use to] explore the genome and medical care and so forth and so on. Most medical research, certainly when it’s published, is with unidentified subjects. And the idea here was for us to be role models, to prove that this information wasn’t secret, or scary, or dangerous.

Rik Kirkland: Part of the barrier to getting electronic records is all the concern about releasing this stuff and all the forms you sign every time you go to the doctor. So, this is just a radical act of transparency, right?

Esther Dyson: Right. And you know, you go to any health insurance company, they don’t want all that detail. They certainly want to know about preexisting conditions, but you don’t need to know your genome to know those. And the genome itself was too much information and it doesn’t really tell you that much.

But if your mother died of a heart attack, or your father has colon polyps, as mine does, it’s pretty clear what the indications are. Genomes themselves give you only—with a few exceptions—percentages. So, it’s not like you put this information up and people are going to stick pins into it. I actually was expecting more medical spam about, you know, “We looked at your genome. You should buy such and such nutraceutical.” What will be exciting is when you have hundreds of thousands of these and you say, “Oh, wow. There are these five genes that seem to interact.” Most things are not a gene. It’s usually a lot of different genes—and then, combined with what you eat and whether you sleep enough and whether you stay warm enough and all these other things—that actually produce a real outcome in a person of being in such and such condition.

Rik Kirkland: So, how far are we away from this kind of personalized medicine that knowing your genome and knowing more about the patterns the genome reveals. How far away is that moment?

Esther Dyson: Well, it’s like everything else. A few things we already know. Different types of breast cancer have different prognoses, different recommended treatments, that kind of thing. I think we’re pretty close to getting there on, sort of, overall things, like how fast you metabolize drugs in general and specific drugs. Originally, type 1 and type 2 diabetes, that was not a genetic discovery, but it was the same kind of discovery. “Oh, it looks like the same condition, but really they are two different causes and very different patterns.” And so forth.

We’re there in places, but we’re very, very far from being there in general. This is something that people should want to do or volunteer to do, not something that should be imposed on them.

Rik Kirkland: You said earlier that the harder thing was not getting your genome done—although that’s expensive right now and will get less so as the process scales—but just getting your medical records together. So, talk a little bit about the information technology possibilities and finally get our arms around this medical records piece.

Esther Dyson: There are different things. One, individuals assembling their own health record. Suddenly, when any doctor treats you, he can know what the other doctors have done, what drugs you were taking, what conditions you might have—your genetic information could be helpful, but it’s just one piece. And so, no doctor, especially in these days of specialists, can know everything that’s done to you. Nor can every doctor know all the medical knowledge that might apply to you and your condition. Everyone else uses electronic memory aids; why shouldn’t doctors?

There’s this sort of macho/priesthood notion in doctors that they should know everything. But there are companies now that are already using drug-interaction databases. And those are very useful in averting tragedies all the time, where patients take 2 or 3 drugs from different doctors and it kills them. More often, you just have somebody who’s taking 18 or 19 different drugs. And you take 10 of them away, and the patient gets better rather than worse. So, that’s the first part, just the patient having the data aggregated in one place.

Then you have, of course, the longitudinal thing—watching how things change over time, seeing the patient’s reaction to drugs. And if you aggregate all that information across lots of patients, lots of genomes, lots of treatments, suddenly you begin to see correlations. And then you can actually figure out how the different forces interact. And that’s when it gets really exciting.

In the meantime, one hopes also that hospitals—and all kinds of care providers—will get more efficient. They won’t have to redo tests. They’ll make better treatment decisions. You’ll be able to reward people for better outcomes rather than simply pay for health care. What you really want to pay for is health, which is the difference between the expected outcome and a better outcome.

Rik Kirkland: As you said, the ultimate game here is better health, personalized health, from all this amassing of data, and then being able to apply it to your particular situation.

Esther Dyson: But there’s another impact, that if you know more about yourself and how things work, on the margin, chances are, you’re going to be more healthy. You’ll be better armed against the doughnut. Or maybe not the extra drink, that’s very seductive.

Rik Kirkland: Don’t underestimate doughnuts. They’re seductive too.

Esther Dyson: The stuff people eat is crazy. And they know it, sort of. It’s important if they know it scientifically, but also if they know it in relation to themselves. Businesses do projections all the time. You continue this spending rate, you’ll run out of cash next February. You continue this behavior rate, you’re probably going to get a heart attack before you’re 60.

That kind of specific prediction is really good in changing behavior. And so are reminders, so are social networks where patients, where healthy people encourage one another to exercise or remember to take their meds or just find comfort from people who are suffering from the same problem you have.

Rik Kirkland: Ten years from now, will we see a really dramatic transformation, do you think, in how health care is delivered? Will it really be far more personalized? Is it a 20-year process?

Esther Dyson: I think you’ll see a lot of changes in five to ten years. But not for everybody. Like everything else, this is going to benefit—let’s face it—the rich and the well educated first. They’re the ones right now who, if they have a problem, they probably have a friend at Mount Sinai [Hospital]. And so, they can get the best care anyway. You shouldn’t need a friend at Mount Sinai to get good cancer care. That’s the real difference. There will still be a lag for people who live in the wrong place or who are at the bottom of the pyramid. When you do things with computers and with IT, they actually scale very well. So, you can’t reproduce the friend at Mount Sinai. But you can reproduce an information system that gives you access to the same information.

Rik Kirkland: That was formerly only there for the friend in Mount Sinai, and now you can get it cheaper.

Esther Dyson: Yes, right.

Rik Kirkland: So, that’s the hope over a 10- to 20-year period?

Esther Dyson: Yes, that’s the hope.

Rik Kirkland: Great. Thank you, Esther.

[Esther Dyson is the mastermind of the "Direct-to-Consumer" (DTC) business model, applied first to personalized medicine. We missed her in Boston! - Pellionisz, HolGenTech_at_gmail.com, June 18th, 2009]


Consumer Genetics Show, Boston, June 9-11, 2009 - As DNA sequencing melts from $50k to zero $, Genome Based Economy opens wide to PDA-s.

The fully attended, jam-packed three-day program of the “first ever” consumer genetics conference witnessed, in the morning program of June 10th, a demo presented in his talk by HolGenTech Startup Founder Andras Pellionisz: an entry into the "Genomic PDA" era. Pellionisz presented HolGenTech's PDA-assisted shopping recommendations, based on general personal conditions, partial DNA information (23andMe, Navigenics, deCODEme SNiP interrogation) and, eventually, search of the shopper's affordable full personal genome. Holding Google's popular PDA (an Android phone), with its built-in bar-code reading capability, pointed at on-line consumer products, he showed how shoppers can get recommendations for products that match their DNA more or less well:

HolGenTech - Consumer Product Recommendation by Barcode Reader PDA, based on hereditary, SNiP and complete "DNA fine combing" analysis. Demo on Google Android by HolGenTech Founder Andras Pellionisz at the "Consumer Genetics Conference", June 10, 9:45 am: "Genome Computers for our Genome Based Economy"

The second, similar announcement came in the penultimate talk of the June 10th late afternoon program, presented by Illumina CEO Jay Flatley. He featured a demo “put together in the past 10 days by a programmer who never programmed an iPhone before” that envisioned human DNA information on Apple’s popular PDA, the iPhone.

Illumina demo - Genome sharing by iPhone PDA

The entries of HolGenTech and Illumina into genomics on PDA-s differ not only in the PDA phone used (Google's versus Apple's), but also in their core philosophy and business model. HolGenTech directly uses personal genome-based information for consumer purchase recommendations; for data-security purposes the PDA does not contain the personal DNA information - it only delivers recommendations arising from the Personal Genome Computer (PGC) "fine combing" of one's DNA, which stays at the customer's secured (encrypted and physically protected) PGC. Illumina's vision, in contrast, was projected as the PDA not only storing the personal DNA, but also sharing/comparing such personal DNA information through the PDA's interfaces - while no consumer guidance was evident from Illumina's demo.
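
The split architecture described above - DNA analysis confined to a secured Personal Genome Computer, with the PDA receiving only verdicts - can be sketched in a few lines. Everything here (the names, the product database, the one-rule scoring) is an illustrative assumption, not HolGenTech's actual design:

```python
# Hypothetical sketch of a split PGC/PDA architecture: the PDA holds no
# DNA; it only forwards a scanned product barcode to the owner's secured
# Personal Genome Computer (PGC), which answers with a recommendation.

# Precomputed on the PGC from the owner's DNA analysis; the raw genome
# never leaves this machine.
GENOME_DERIVED_FLAGS = {"lactose_intolerance_risk": True,
                        "gluten_sensitivity_risk": False}

# Illustrative barcode -> product-attribute lookup (a real product
# database would stand in here).
PRODUCT_DB = {"0123456789012": {"name": "whole milk",
                                "contains": {"lactose"}},
              "0987654321098": {"name": "rice crackers",
                                "contains": set()}}

def pgc_recommend(barcode: str) -> str:
    """Runs on the PGC: map a barcode to a recommendation string.
    Only the verdict, never genomic data, is sent back to the PDA."""
    product = PRODUCT_DB.get(barcode)
    if product is None:
        return "unknown product"
    if ("lactose" in product["contains"]
            and GENOME_DERIVED_FLAGS["lactose_intolerance_risk"]):
        return f"avoid: {product['name']}"
    return f"ok: {product['name']}"

# On the PDA side: scan a barcode, ask the PGC, display the answer.
print(pgc_recommend("0123456789012"))  # -> avoid: whole milk
print(pgc_recommend("0987654321098"))  # -> ok: rice crackers
```

The design choice the sketch illustrates is that the PDA-side code needs only the `pgc_recommend` interface, never the genome-derived flags themselves.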

The program of June 10th was wrapped up by Keynote speaker Francis Collins, whose conclusion was that “Personal genomics is here to stay”. As a surprise, he told the audience that he had obtained analyses from the top three SNP-testing providers, 23andMe, Navigenics and deCODEme, in a manner such that they were not aware they were testing the DNA of Francis Collins. This apparent openness of the DTC services to users who are reluctant to reveal their identity is likely to greatly increase public interest in obtaining such services - perhaps without even waiting those few months until the melting "full sequencing price" replaces the present (microarray-based) limited probing. His comparison of the services gave the impression that the analyses by the three companies are rather coherent, with those providers that check more parameters of a given syndrome naturally reporting a higher likelihood of predisposition. Collins also projected a diagram of the "simply unsustainable" escalation of US health care costs - unless genome-based prevention rescues the system from collapse.

The oversubscribed program, buzzing with “first ever” excitement, did not focus solely on “consumer genetics”. A general theme, the dramatic meltdown of the price of coming full DNA sequencing, was contrasted with debates on the actual value of the “get info” step when it comes time to “use info” in health care, prevention - and in the consumer society of the Genome Based Economy.

All in all, the Boston region - with Harvard, MIT, the Broad Institute, Boston University, Brigham Hospital and affiliated institutes - emerged as perhaps "the predominant PostModern Genome Medicine Center of the Nation" (with Michael F. Murray, M.D., Scott T. Weiss, M.D., and Robert Green, M.D., MPH spearheading the medical genomics initiative).

More Press Coverage of this formative "First Ever" meeting will be added.

Illustration from Report on the Conference by Futurist Melanie Swan

[Reporting from the meeting, Pellionisz, HolGenTech_at_gmail.com, June 12th, 2009]


MicroRNA Replacement Therapy May Stop Cancer In Its Tracks

ScienceDaily (June 12, 2009) — Scientists at Johns Hopkins have discovered a potential strategy for cancer therapy by focusing on what's missing in tumors. A new study suggests that delivering small RNAs, known as microRNAs, to cancer cells could help to stop the disease in its tracks. MicroRNAs control gene expression and are commonly lost in cancerous tumors.

Noticing the conspicuous absence of single-stranded genetic snippets called microRNAs in cancer cells, a team of researchers from Johns Hopkins and Nationwide Children's Hospital delivered these tiny regulators of genes to mice with liver cancer and found that tumor cells rapidly died while healthy cells remained unaffected. Researchers have shown that replacement of a single microRNA in mice with an extremely aggressive form of liver cancer can be enough to halt their disease.

Publishing results of the study June 12 in Cell, the researchers say they have provided one of the first demonstrations that microRNA replacement provides an effective therapy in an animal model of human disease.

"This work suggests that microRNA replacement may be a highly effective and nontoxic treatment strategy for some cancers or even other diseases," says Josh Mendell, M.D., Ph.D., an associate professor in the McKusick-Nathans Institute of Genetic Medicine, Johns Hopkins University School of Medicine....

His team had considered the possibility that the replacement of a single small RNA might have little if any effect, especially in the setting of all the complex changes that drive the aberrant behavior of a cancer cell. But the tumor cells in the mouse were indeed sensitive to the restoration of the microRNA—so much so that they died, rapidly.

"This concept of replacing microRNAs that are expressed in high levels in normal tissues but lost in diseases hasn't been explored before," Mendell says. "Our work raises the possibility of a more general therapeutic approach that is based on restoring microRNAs to diseased tissues."

The Hopkins team was building on precedent-setting research (published January 2008 in Nature Genetics) showing that in a Petri dish, replacing microRNAs in lymphoma cells stopped the formation of tumors when the cells were injected into mice.

The new study involves animals that develop liver tumors closely resembling the human disease. Researchers chose to target the liver because, according to Mendell, it is a large organ whose function is detoxification and it is therefore a relatively accessible target for the delivery of small molecules, compared to other tissues.

Using a "special delivery" virus that can deliver genes to tissues without causing them any disease or harm, the researchers intravenously injected a fluorescent microRNA-containing virus into one group of mice with aggressive liver cancer, and injected a control virus containing no microRNA into another group. The viral delivery system was developed by Mendell's father, Jerry Mendell, M.D., director of the Center for Gene Therapy at The Research Institute at Nationwide Children's Hospital in Columbus, and K. Reed Clark, Ph.D., associate professor and director of the Viral Vector Core Facility at Nationwide Children's Hospital.

After three weeks, six of eight mice treated with the control virus experienced aggressive disease progression, with the majority of their livers replaced by cancerous tissue. In contrast, eight of 10 animals treated with the microRNA were dramatically protected, exhibiting only small tumors or a complete absence of tumors. Liver-to-body weight ratios were significantly lower in the treated mice, further documenting cancer suppression.

"The livers of the mice that received the microRNA virus glowed fluorescent green, showing that the microRNA ended up where it was supposed to go, and the cancer was largely suppressed," Mendell said.

Equally intriguing, he reported, "The tumor cells that received the microRNA were rapidly dying while the normal liver cells were completely spared. These findings, as well as the results of specific tests for liver damage, demonstrated that the microRNA selectively kills the cancer cells without causing any detectable toxic effects on the normal liver or other tissues."

Mendell points out that the microRNA is normally present at high levels in non-diseased tissues, and especially in the liver. Mendell speculates that this is why healthy cells are very tolerant to therapeutic delivery of even higher levels of this microRNA. However, the sensitivity of tumor cells to this microRNA suggests that loss of this molecule is a critical step as normal cells become cancer cells.

"Since we were able to demonstrate such dramatic therapeutic benefit in this extremely aggressive model of human liver cancer, we are hopeful that similar strategies will be effective for patients with this disease," says Mendell.

In addition to Joshua Mendell, authors of the paper are Jerry Mendell, K. Reed Clark, Janaiah Kota and Chrystal L. Montgomery, of The Research Institute at Nationwide Children's Hospital, Columbus, Ohio; and Raghu R. Chivukula, Kathryn A. O'Donnell, Erik A. Wentzel, Hun-Way Hwang, Tsung-Cheng Chang, Perumal Vivekanandan, and Michael Torbenson, all of Johns Hopkins University School of Medicine.

[MicroRNA mining (first in silico, then verifying the candidates in vivo) is one of the greatest promises of PostModern Genomics. It is fully expected that, with both affordable full genomes and suitable genome computers available as technologies, a better understanding of recursive genome regulation will greatly accelerate the cycle: finding such microRNA-s one day, verifying them shortly thereafter, and, using harmless viruses for delivery, quickly translating genomic research into therapy and cure. Of course, microRNA-s represent tremendous IP - and thus the algorithms are fiercely proprietary (just like algos in financial computing). Thus, Genome Computers should be made available for such discovery as generic tools - since e.g. Big Pharma will never make public what algorithms are the bases of PostModern Genetic Therapy and Cure. Pellionisz, HolGenTech_at_gmail.com, June 12th, 2009]


First Ever: Final Program of Consumer Genetics Conference, Boston 9-11 June, 2009

"Schedule" lays out the 3-day program of a "first ever" kind of a conference, organized by John Boyce and Peter Miller. This almost exclusively USA-based event will blow open the floodgates towards a consumer-driven "Genome Based Economy". Other than novelty, what makes this conference formative is, that what is one day a new scientific discovery, the next day becomes a disruptive business application. (Reminds one of Leo Szilard, whose science of nuclear physics was turned into patenting the nuclear reactor). Author of FractoGene and The Principle of Recursive Genome Function will show HolGenTech searching with Personal Genome Computers (with a new architecture necessitated by the computing-demands of mining "fractal defects" associated with diseases), as well as the Principle identifying the HoloGenome as an open system to intrinsic and extrinsic protein > DNA recursion (heretofore "forbidden" by the Central Dogma) - customers of the Genome Based Economy will be enabled by Personal Genome Computer synced with barcode-reader PDA to follow purchase-recommendations based on their personal DNA.

[Pellionisz, HolGenTech_at_gmail.com, June 10th, 2009]


Life After GWAS: For Some Researchers, Focus Shifts to Rare Variants, CNVs [we are in the "beyond SNPs era" - AJP]

June 10, 2009
By Bernadette Toner

SAN FRANCISCO - Over the last several years, genome-wide association studies [of SNiP-s] have become the primary method for identifying variations associated with human disease, but the approach has shortcomings that are leading some in the genomics community to push more aggressively into the post-GWAS era.

At Cambridge Healthtech Institute's Genomic Tools and Technologies Summit held here this week, many speakers noted that even though GWA studies have linked hundreds of common SNPs to disease, these variants account for only a very small portion of disease heritability, which has raised doubts over their clinical value. A number of talks focused on two key alternatives to GWAS: the discovery of rare variants, as opposed to common variants, with a role in disease; and an increasing focus on copy number variants rather than SNPs.

Allen Roses, director of the Deane Drug Discovery Institute at Duke University, noted that GWAS has "largely disappointed its most enthusiastic proponents" because it has not been able to identify genes responsible for complex diseases. This disappointment, he said, is because "GWAS was never meant to substitute for fine genomic sequencing," but rather to identify regions of linkage disequilibrium in the genome that warrant further study.

GWAS data on its own represents a "statistical average" of populations, and is therefore of limited value in treating individual patients, he explained.

Roses encouraged researchers to carry out more "post-GWAS experiments" to home in on genes that may have small effects in the overall population, but interact in complex ways to impact disease risk in certain individuals. As an example, he outlined a project in which his group carried out deep sequencing of the region around the APOE gene to identify other genetic players at work in Alzheimer's disease. His team used sequencing and phylogenetic analysis to determine that variants of APOE - in combination with certain variants of the TOMM40 gene - split into two distinct risk profiles that would not be evident from looking at APOE genotypes alone.

Jay Shendure of the University of Washington described a different approach to identify rare variants based on exome sequencing. A key challenge in the wake of GWAS, he said, is that the loci implicated in these studies are often not connected to functional regions of the genome....

Using a combination of Agilent target capture arrays and Illumina sequencing, Shendure said that his group can sequence the protein-coding regions of the genome for around 5 percent of what it would cost to sequence a whole genome. ...

The study successfully pinpointed MYH3 as the causal gene and also identified 13,000 novel coding variants and 400 novel coding insertions and deletions, Shendure said. ...

Su Yeon Kim, a researcher at the University of California, Berkeley, described another approach to apply next-generation sequencing to overcome the limitations of SNP arrays in association studies. ...

Likewise, Nicholas Schork of the Scripps Research Institute noted that whole-genome sequencing would be the best method for identifying rare variants that collectively explain complex diseases in the population, but he warned that correlating entire genomes with phenotypes in large-scale studies is a computational challenge. ...

While SNP arrays may have their shortcomings, there is plenty of promise for chips that can identify copy number variants and other structural variants in the genome, speakers said.

While some have proposed next-generation sequencing as a promising approach for detecting structural variation, arrays are still the "most cost-effective method" for detecting copy number alterations in cancer samples, said Jonathan Pollack of the Stanford University School of Medicine.

Likewise, Stephen Kingsmore, president and CEO of the National Center for Genome Resources, said that arrays are still the "gold standard" for detecting CNVs. ...

One reason that sequencing is limited in its ability to detect CNVs is the fact that reference genomes used for alignment are inadequate: they do not contain structural information, they have too many gaps, and they are not diploid. Because of this, several speakers suggested that short-read sequencing will not be an appropriate method for CNV analysis until routine de novo assembly is possible for a human genome.
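
For readers unfamiliar with how sequencing-based CNV detection works in principle, the standard read-depth idea can be sketched as below. The window size, thresholds and simulated reads are illustrative assumptions, not any speaker's actual pipeline:

```python
# Read-depth sketch of CNV detection: bin aligned reads into fixed
# windows along the reference, then flag windows whose depth deviates
# strongly from the genome-wide mean (duplications raise depth,
# deletions lower it).
from statistics import mean

def call_cnv_by_depth(read_starts, genome_len, window=100,
                      gain=1.5, loss=0.5):
    """Return a list of (window_start, 'gain'|'loss') calls."""
    n_windows = genome_len // window
    depth = [0] * n_windows
    for pos in read_starts:
        w = pos // window
        if w < n_windows:
            depth[w] += 1
    avg = mean(depth)
    calls = []
    for i, d in enumerate(depth):
        if d > gain * avg:
            calls.append((i * window, "gain"))
        elif d < loss * avg:
            calls.append((i * window, "loss"))
    return calls

# Simulated reads: uniform coverage, plus an extra copy of the region
# at 200-299 (duplication) and no reads at 500-599 (deletion).
reads = [w * 100 + off for w in range(10) for off in range(0, 100, 5)]
reads += [200 + off for off in range(0, 100, 5)]   # duplicated region
reads = [r for r in reads if not 500 <= r < 600]   # deleted region
print(call_cnv_by_depth(reads, 1000))  # -> [(200, 'gain'), (500, 'loss')]
```

The sketch also makes the article's point concrete: read depth reveals *how many* copies are present, but says nothing about *where* the extra copy sits, which is why a good (diploid, gap-free) reference or de novo assembly is needed for full structural resolution.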

The reference genome "is a mess," and the fact that it is haploid "reflects how naive we were with regard to copy number variation," during the Human Genome Project, said James Lupski of Baylor College of Medicine.

...Lupski said that next-generation sequencing is "coming around" when it comes to copy number variation, but noted that when Baylor sequenced the Watson genome with the Roche/454 platform, it used an array to identify copy number variants that sequencing did not detect...

Nevertheless, Lupski said that efforts like the 1000 Genomes Project will likely produce valuable information that will drive improvements in the use of sequencing for CNV detection.

[The field of genome analysis is in a state of crucial transition. With Illumina's new array (with up to 4 million data points, searching not only for SNiP-s but also for over 10 thousand Copy Number Variations), not only the theory but also the hardware is "beyond SNiP-s". I have long maintained that searching for genome glitches will remain in its infancy until a) algorithmic understanding of genome function (genome regulation) advances to a theory such as The Principle of Recursive Genome Function, b) affordable full DNA sequencing provides the entire information, and c) Genome Computers with serial/parallel (hybrid) architecture become robust enough for "full fine-combing" of DNA. There is no question that we are already in the "beyond SNiPs" era; copy number variations, insertions/deletions, silent mutations, microRNA-s and, yes, "fractal defects" according to FractoGene are all on the increasingly wide scope of analysis upon us - Pellionisz, HolGenTech_at_gmail.com, June 10th, 2009]


'Junk' DNA Proves To Be Highly Valuable [Think FractoGene - AJP]

ScienceDaily (June 6, 2009)

What was once thought of as DNA with zero value in plants--dubbed "junk" DNA--may turn out to be key in helping scientists improve the control of gene expression in transgenic crops.

That's according to Agricultural Research Service (ARS) plant pathologist Bret Cooper at the agency's Soybean Genomics and Improvement Laboratory in Beltsville, Md., and collaborators at Johns Hopkins University in Baltimore, Md.

For more than 30 years, scientists have been perplexed by the workings of intergenic DNA, which is located between genes. Scientists have since found that, among other functions, some intergenic DNA plays a physical role in protecting and linking chromosomes. But after subtracting intergenic DNA, there was still leftover or "junk" DNA which seemed to have no purpose.

Cooper and collaborators investigated "junk" DNA in the model plant Arabidopsis thaliana, using a computer program to find short segments of DNA that appeared as molecular patterns. When comparing these patterns to genes, Cooper's team found that 50 percent of the genes had the exact same sequences as the molecular patterns. This discovery showed a sequence pattern link between "junk" and coding DNA. These linked patterns are called pyknons, which Cooper and his team believe might be evidence of something important that drives genome expansion in plants.
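
A toy version of the pattern search described above can be sketched as follows. The published method is probability-based and far more rigorous, so treat the k-mer length, repeat threshold and toy sequences here as assumptions for illustration only:

```python
# Toy pyknon-style scan: enumerate short k-mers that repeat in
# intergenic DNA, then keep those that also occur inside at least
# one coding gene -- the coding/non-coding sequence link the
# article describes.
from collections import Counter

def pyknon_candidates(intergenic, genes, k=6, min_repeats=3):
    """k-mers repeated >= min_repeats times in intergenic DNA that
    also appear in at least one gene sequence."""
    counts = Counter(intergenic[i:i + k]
                     for i in range(len(intergenic) - k + 1))
    repeated = {kmer for kmer, c in counts.items() if c >= min_repeats}
    return {kmer for kmer in repeated
            if any(kmer in g for g in genes)}

# Tiny made-up example: "ACGTAC" repeats in the intergenic stretch
# and also appears inside the first gene; the T-run repeats but is
# absent from the genes, so it is filtered out.
intergenic = "ACGTAC" * 3 + "TTTTTTTT" + "GGGCCC"
genes = ["TTACGTACGG", "CCCGGG"]
print(sorted(pyknon_candidates(intergenic, genes)))  # -> ['ACGTAC']
```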

The researchers found that pyknons are also the same in sequence and size as small segments of RNA that regulate gene expression through a method known as gene silencing. This evidence suggests that these RNA segments are converted back into DNA and are integrated into the intergenic space. Over time, these sequences repeatedly accumulate. Prior to this discovery, pyknons were only known to exist in the human genome. [In reality, "pyknons" - highly repetitive, non-random short sequences - were found by Isidore Rigoutsos in 2006 in the human genome; AJP subsequently analyzed the pyknons of the smallest free-living organism, Mycoplasma genitalium, and found that their frequency distribution follows the Zipf-Mandelbrot fractal parabolic distribution curve - AJP]. Thus, this discovery in plants illustrates that the link between coding DNA and junk DNA crosses higher orders of biology and suggests a universal genetic mechanism at play that is not yet fully understood. [Of course, not "fully understood" - but the pyknons were found to be underlying evidence for fractal iterative recursion - AJP]
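The Zipf-Mandelbrot claim in the bracketed note can be made concrete with a short sketch: ranked pyknon frequencies are fitted to f(r) = C / (r + q)^s by a log-log least-squares fit with a grid search over the offset q. This is a minimal illustration of the distribution, not AJP's actual analysis; the function name, grid range, and synthetic data are all assumptions.

```python
import numpy as np

def zipf_mandelbrot_fit(counts, q_grid=np.linspace(0.0, 20.0, 201)):
    """Fit f(r) ~ C / (r + q)^s to ranked frequencies.

    counts: occurrence counts (e.g. how often each pyknon appears in
    non-coding DNA). Returns (C, q, s, r2) for the best q found by grid
    search, fitting log f = log C - s*log(r + q) by least squares.
    """
    f = np.sort(np.asarray(counts, dtype=float))[::-1]  # descending frequencies
    r = np.arange(1, len(f) + 1)                        # ranks 1..N
    best = None
    for q in q_grid:
        x, y = np.log(r + q), np.log(f)
        s, logC = np.polyfit(x, y, 1)                   # slope, intercept
        resid = y - (s * x + logC)
        r2 = 1.0 - resid.var() / y.var()
        if best is None or r2 > best[3]:
            best = (np.exp(logC), q, -s, r2)
    return best

# Synthetic rank-frequency data drawn from a known Zipf-Mandelbrot curve:
true_C, true_q, true_s = 1000.0, 5.0, 1.2
ranks = np.arange(1, 301)
counts = true_C / (ranks + true_q) ** true_s
C, q, s, r2 = zipf_mandelbrot_fit(counts)
print(round(s, 2), round(q, 1), round(r2, 3))  # 1.2 5.0 1.0
```

A pure Zipf law is the special case q = 0; on real pyknon counts one would expect s near 1 and r2 noticeably below 1.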

The data suggest that scientists might be able to use this information to determine which genes are regulated by gene silencing, and that there may be some application for the improvement of transgenic plants by using the pyknon information.

This research was published online as an advance article on the Molecular BioSystems website, and will be published later this year in a special issue of Computational Systems Biology.


Journal reference:

Feng et al. Coding DNA repeated throughout intergenic regions of the Arabidopsis thaliana genome: evolutionary footprints of RNA silencing. Mol. BioSyst., 2009, DOI: 10.1039/b903031j

Jian Feng, Daniel Q. Naiman and Bret Cooper

Pyknons are non-random sequence patterns significantly repeated throughout non-coding genomic DNA that also appear at least once among coding genes. They are interesting because they portend an unforeseen connection between coding and non-coding DNA. Pyknons have only been discovered in the human genome, so it is unknown whether pyknons have wider biological relevance or are simply a phenomenon of the human genome. To address this, DNA sequence patterns from the Arabidopsis thaliana genome were detected using a probability-based method. 24654 statistically significant sequence patterns, 16 to 24 nucleotides long, repeating 10 or more times in non-coding DNA also appeared in 46% of A. thaliana protein-coding genes. A. thaliana pyknons exhibit features similar to human pyknons, including being distinct sequence patterns, having multiple instances in genes and having remarkable similarity to small RNA sequences with roles in gene silencing. Chromosomal position mapping revealed that genomic pyknon density has concordance with siRNA and transposable element positioning density. Because the A. thaliana and human genomes have approximately the same number of genes but drastically different amounts of non-coding DNA, these data reveal that pyknons represent a biologically important link between coding and non-coding DNA. Because of the association of pyknons with siRNAs and localization to silenced regions of heterochromatin, we postulate that RNA-mediated gene silencing leads to the accumulation of gene sequences in non-coding DNA regions.
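The pattern-matching step described in the abstract (k-mers of 16-24 nucleotides repeated 10 or more times in non-coding DNA that also appear in coding genes) can be sketched with a toy k-mer counter. This is only an illustration of the matching criterion; the published method's probability-based significance test is omitted, and the function name and data are hypothetical.

```python
from collections import Counter

def pyknon_candidates(noncoding_seqs, coding_seqs, k=16, min_repeats=10):
    """Toy sketch of pyknon-style pattern matching: count every k-mer in
    non-coding DNA, keep those repeated at least `min_repeats` times,
    then retain only the ones that also occur in at least one coding
    sequence. (The published method adds a statistical test, omitted here.)
    """
    counts = Counter()
    for seq in noncoding_seqs:
        for i in range(len(seq) - k + 1):
            counts[seq[i:i + k]] += 1
    repeated = {kmer for kmer, n in counts.items() if n >= min_repeats}
    coding = "|".join(coding_seqs)  # '|' never matches inside a k-mer
    return {kmer: counts[kmer] for kmer in repeated if kmer in coding}

# Tiny synthetic example: one 16-mer planted 10 times in "non-coding"
# DNA and once inside a "gene".
motif = "ACGTACGTACGTACGT"
noncoding = ["TT" + motif + "GG"] * 10
genes = ["ATGAAA" + motif + "TAA", "ATGCCCTGA"]
print(pyknon_candidates(noncoding, genes))  # {'ACGTACGTACGTACGT': 10}
```

On genome-scale input, a real implementation would stream sequences and index coding k-mers in a set rather than substring-searching a joined string.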

[Three years after Rigoutsos found "pyknons" (May, 2006), after the International PostGenetics Society at its European Inaugural became the first organization to officially abandon "Junk DNA" as a scientifically valid notion (October 12, 2006), and after the release of ENCODE 8 months later (June 14, 2007), PostModern Genomics is at a crossroads. Genomics is now augmented by Epigenomics, and is therefore integrated into HoloGenomics, which expresses their interaction in terms of Informatics, with a novel axiom, "The Principle of Recursive Genome Function". Scientifically, we must break through towards re-interpreting our classic understanding of genome regulation (the Operon becomes the FractOperon). As for technology, we are at the brink of affordable full human DNA sequencing. Together, the R&D of HoloGenomics has already provided one paradigm-shift, from biochemistry to informatics, and a second one from government-supported R&D to "consumer genomics", where patients, would-be patients, and consumers drive progress. All this amidst a global recession, when US Health Care can only be rescued from financial breakdown if/when genome-based prevention becomes a mainstay. Welcome to a Genome Based (Global) Economy. - Pellionisz_at_junkdna.com, June 6th, 2009]


First Direct-to-Consumer Genomics Conference - Personal Genome Computers for our Genome Based Economy

See "Schedule" for the list of Presentations of the 3-day Program

List of Presentations and Sponsors (as of June 1st, 2009)

23andMe
Bioinformatics LLC
Boston University
Coriell Institute for Medical Research
CNN Television
Delphi Bio
DNA Like Me
DNA Direct
Duke University – Center of Genome Ethics
Feinstein Kean Healthcare
FDA - US Food and Drug Administration
Foley Hoag
Genetics Alliance
Genome Quebec
Genomic Healthcare Strategies
Generation Health
Harvard Medical School
Health Advances
Helicos Biosciences
Highland Capital
Interleukin Genetics
Journal of Biolaw & Business
Life Technologies
Montreal Heart Institute
MSNBC Television
National Society of Genetic Counselors
NBC Television
NHGRI - National Human Genome Research Institute (of NIH)
Northwestern University
OVP Ventures
Patients Like Me
Personalized Medicine Coalition
Physic Ventures
PMC - PubMed Central
Princeton University
Procter & Gamble
Robinson, Bradshaw & Hinson
Rodman & Renshaw
Sorenson Genomics
The Science Advisory Board
The Life Science Executive Exchange

[HolGenTech, introduced in YouTube videos of the Google Tech Talk (October 30, 2008) and the Churchill Club (January 22, 2009), will present the DTC tool and system in "Genome Computing in our Genome Based Economy". The science is based on The Principle of Recursive Genome Function, more closely defined as Fractal Iterative Recursion - Pellionisz_at_junkdna.com, June 1st, 2009]


Genetics-based products stir concerns

By Carolyn Y. Johnson
The Boston Globe
Globe Staff / May 27, 2009

Whether it is a new skin care product that promises to "reactivate" the youth in your genes or tests that offer nutrition advice tailored to your DNA, the age of consumer genetics is here.

Lancome is selling "Genifique," a skin serum developed by identifying genes more active in young skin. Procter & Gamble, the world's largest consumer products company, has been investigating the genetics of everything from dandruff to the common cold. Startups are offering consumers full-genome scans and more targeted genetic tests to customize advice on weight loss or heart health...

"What has happened is some of these companies have cribbed from our sheets about where we hope to be going, and they've turned that into their business plans for where we are now," said Dr. Isaac Kohane, a Harvard Medical School professor. ...

The companies, on the other hand, contend that even if much of the complexity of genetics continues to be worked out in labs, enough is already known that they are providing valuable information to customers.

"It's an early science; it's in flux, it's changing . . . and I think that's the context by which people have to understand consumer genetics - this is something that's in the process of evolution, but that doesn't mean there isn't utility," said Lew Bender, chief executive of Interleukin Genetics, a Waltham company developing genetic tests for consumers.

The disconnect stems partly from ordinary people's expectations of genetics, which have been set by the powerful - but often oversimplified - idea they learned in high school that inherited genes determine traits such as blood type or eye color and that a single errant gene could be the culprit for a disease, such as cystic fibrosis or Huntington's disease. Such clear-cut examples of the power of genetics do not exist in most diseases, or complex phenomena like aging, where a confusing stew of genetic, lifestyle, and environmental factors seems to play a role.

Most of the genes or snippets of DNA that have so far been linked to diseases confer a small, or hard-to-interpret amount of risk for a disease. Dr. W. Gregory Feero, chief of the genomic healthcare branch at the National Human Genome Research Institute, said he is concerned that consumers' initial introduction to genetics could come in tests that find genes associated with a disease...

As a sign of genetics' arrival in the marketplace, the first Consumer Genetics Show will land in Boston in June. With panels of doctors and scientists as well as presentations on emerging technologies, it is a forum for establishing and questioning the field's legitimacy. Sessions include not only intellectual property and investment in such companies, but also, "Is the science ready yet?" and "Personal Genetics - is it Really Here?"

That time is here for Gayle Averyt, a 75-year-old from South Carolina who got a slate of genetic tests a few years ago during a visit to the upscale Berkshires resort Canyon Ranch. Averyt was determined to peer into his genome, but felt ill-equipped to interpret the results himself, so he was pleased that his longtime doctor at the resort went through the data with him.

"My intent was to find a doctor who could interpret the gene," said Averyt, who was told he has 15 times more risk for Alzheimer's disease because of his genes. Under the advice of the doctor, Averyt has changed his lifestyle to try and decrease his risk for the disease.

One of the companies betting that consumers will pay to figure out genetics themselves is Interleukin Genetics, which plans to next month launch a slate of tests that it says will give consumers a sense of their health through their genes. A nutritional test examines genes involved in metabolism of vitamin B or antioxidants, for example....

Interleukin Genetics tests for genes shown to elevate risk for different health conditions. Though he was not familiar with the company's products, Kohane pointed out that finding genes is generally less valuable than talking to your parents....

Procter & Gamble scientists sequenced the genome of the fungus that causes dandruff ... helped the development of a skincare product line called Olay Pro-X.

George Rivera, senior scientific liaison for L'Oreal, Lancome's parent, said his company has been measuring gene activity and proteins in skin for years to find ingredients that could influence genes. Genifique openly flaunts the genetic research that went into the product in its very title...

Dr. Robert Green, a genetics fellow at Harvard Medical School, said the potential is great...

Carolyn Y. Johnson can be reached at cjohnson_at_globe.com.

[Ultimately it is the customers who will drive this field: using their PGC for personalized health prevention now, and later, as the Genome Based Economy unfolds, to guide their consumer choices and preferences according to their own DNA. Pellionisz_at_junkdna.com, June 1st, 2009]


Will Consumers Sustain Direct-to-Consumer Genomics?

Bio-IT World
By Kathie Wrick
May 26, 2009

Welcome to the world of consumer genomics, where there are different rules for building successful businesses than in medical diagnostics. A host of companies are marketing or selling genetic tests directly to consumers. As expected, some companies are far more evidence-based than others in their test and product offerings.

Personal genomics companies first launched in November 2007, including 23andMe, deCODEme, and Navigenics, and have brought the latest in gene chip technologies to the marketplace. Knome will sequence your entire genome for $99,500. But genetic testing has been marketed to consumers in various ways for a long time. Most of the earlier firms built on past capabilities in doing forensic genetics and, as Internet retailing started to grow, began offering paternity and family relationship tests online to consumers.

The science and business have evolved rapidly in the past decade. Certainly, medical genetics will not completely evolve into a consumer business. But thanks to the confluence of two transformational technologies—the Internet, and the delineation and measurement of the human genome—procurement of some genetic tests has migrated from the scientist- or health professional-controlled domains to the world of cyberspace.

Entrepreneurs have many motivations for selling genetic tests directly to consumers. They have watched consumer disenchantment with the U.S. health care system and its institutions grow exponentially for two decades, and consumers are now more proactive than ever in making their own health care decisions. Some firms believe that genomic medicine is coming anyway, and consumers armed with their genetic information can help drive it even faster. Others feel that everyone should be aware of their own genetic information, regardless of what the medical profession thinks. The ability to make expensive purchases securely online became the bridge that made consumer genetics happen, as prices for these tests or services range from hundreds to many thousands of dollars. Though medical genetic testing will grow in its own right, and much of this work and associated business revenues will stay within the traditional clinical laboratory testing services market, the Internet has allowed a mini “distribution revolution” for genetic tests.

The Biggest Variable

There is a big consideration for the technologically savvy genetic testing companies who have elected to sell their services directly to consumers—the consumers themselves. The 2007 startups have the best scientists and latest technologies. Yet it is not clear that personal genomics companies have applied the same level of rigor and resources to understanding their consumer marketplace as they have to the technology developments which helped make genetic testing for consumers affordable. Had these startups solicited funds from investors specializing in consumer products businesses, they might have been told to come back when they had appropriately sized their market and could back up an estimate of sales revenues based on sound consumer research. Consumer goods and services are different businesses altogether than medical diagnostics, pharmaceuticals, or health care. The compelling genetic technology advances applied to direct-to-consumer testing services would likely take a back seat to well-done consumer research that reveals how many consumers of a certain psychographic profile show a strong intent to purchase.

It’s not clear that the current investors in personal genomics companies, who know technology businesses very well, are asking the right questions about what is important in a consumer products and services business. Even the publicly available surveys on consumer attitudes about genetics and genetic testing suggest strongly that consumers may not be the best targets for marketing and selling these tests. That’s not to say there won’t be a good number of consumers who will buy them. Rather, it just might mean that the vast majority of consumers, whose numbers are needed to sustain and grow a business long term, may not be likely to buy.

The secret could lie in identifying the consumer segment that is highly motivated to buy, learning what drives them, and designing product offerings according to exactly what they are looking for - better than the competitors. The genetic test may not ultimately be the end product, but the vehicle for businesses to help consumers plan their lives after learning the results. Even then, without good consumer research, no one knows what the right product is or how big (or small) that product's business might be.